Lucene

Before diving into Lucene, let's first look at how full-text data is queried.

Full-text data queries

Data generally falls into two categories: structured data and unstructured data.

  • Structured data: data with a fixed format or limited length, such as database records or metadata.
  • Unstructured data: also called full-text data; data with no fixed format or fixed length, such as emails or Word documents.

Databases are well suited to precise queries over structured data, but not to fuzzy queries and flexible searches over semi-structured and unstructured data, especially at large data volumes, where they cannot deliver acceptable response times. For example, a SQL LIKE '%keyword%' predicate cannot use an ordinary B-tree index and degenerates into a full table scan.

Ways to query full-text data

  1. Sequential scanning

Sequential scanning finds the files containing a given string by examining documents one by one: for each document, scan from beginning to end; if it contains the string, it is one of the files we are looking for. Move on to the next file and repeat until every file has been scanned.

  2. Full-text search

In full-text search, an indexing program scans every word in a document and builds an index entry for each word, recording how often and where the word occurs. When the user issues a query, the search program consults this prebuilt index and returns the matching results. The process is similar to looking up a word in a dictionary via its key table.

The basic idea of full-text retrieval is to extract some information from unstructured data, reorganize it so that it has a certain structure, and then search that structured data, achieving comparatively fast lookups.

The information extracted from unstructured data and then reorganized is called the index, and this index-first, search-the-index-later process is called full-text search (Full-text Search).

Typical applications include search inside standalone software (for example, search in Word), site search (JD.com, Taobao, job-search sites), and dedicated search engines (Google, Baidu).

Full-text retrieval is usually implemented with an inverted index.

3. Forward index

A forward index is keyed by document ID; for each document, the table records the occurrence count and positions of every keyword it contains. A search must scan the word information of every document in the table until it has found all the documents containing the query keywords.

The format is as follows:

ID of document 1 > word 1: occurrence count, position list; word 2: occurrence count, position list; ...

ID of document 2 > word 1: occurrence count, position list; word 2: occurrence count, position list; ...

When a user searches the home page for the keyword "Huawei mobile phones", with only a forward index we would have to scan every document in the index library to find all documents containing "Huawei mobile phones", score them with the ranking model, and present them to the user in order. Since the number of documents indexed by an internet search engine is astronomical, this index structure cannot return ranked results in real time.

4. Inverted index

An inverted index is a map, used in full-text search, from each word to its locations in a document or group of documents. It is a common data structure in document retrieval systems. With an inverted index, the list of documents containing a word can be fetched quickly from the word itself.

The format is as follows:

keyword 1 > ID of document 1: occurrence count, positions; ID of document 2: occurrence count, positions; ...

keyword 2 > ID of document 1: occurrence count, positions; ID of document 2: occurrence count, positions; ...
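
To make the contrast with the forward index concrete, here is a toy sketch of an inverted index in plain Java. It is illustrative only, not Lucene code; the class and names are made up:

import java.util.*;

public class ToyInvertedIndex {
    // keyword -> (docId -> positions at which the keyword occurs)
    private final Map<String, Map<Integer, List<Integer>>> index = new HashMap<>();

    public void addDocument(int docId, String text) {
        String[] words = text.toLowerCase().split("\\W+");
        for (int pos = 0; pos < words.length; pos++) {
            index.computeIfAbsent(words[pos], w -> new HashMap<>())
                 .computeIfAbsent(docId, d -> new ArrayList<>())
                 .add(pos);
        }
    }

    // One map lookup instead of a scan over every document.
    public Map<Integer, List<Integer>> search(String word) {
        return index.getOrDefault(word.toLowerCase(), Collections.emptyMap());
    }

    public static void main(String[] args) {
        ToyInvertedIndex idx = new ToyInvertedIndex();
        idx.addDocument(1, "Lucene is a search library");
        idx.addDocument(2, "Solr is built on Lucene");
        System.out.println(idx.search("lucene")); // e.g. {1=[0], 2=[4]}
    }
}

Lucene's on-disk index is far more sophisticated (terms dictionary, postings lists, skip lists, compression), but the lookup direction, from word to documents, is exactly this.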

Getting started with Lucene

A brief introduction to Lucene

Lucene's author, Doug Cutting, is a veteran full-text indexing and search expert. He first published Lucene on his personal home page, open-sourced it in 2000, and donated it to Apache in October 2001, where it became a subproject of the Apache Software Foundation. The official website is https://lucene.apache.org/core. Today it is an important choice among open-source full-text retrieval solutions.

Lucene is an excellent, mature, free and open-source full-text indexing toolkit written in pure Java.

Lucene is a high-performance, scalable Information Retrieval (IR) library that adds indexing and search capabilities to your applications.

Lucene is a toolkit aimed at software developers: an easy way to add full-text retrieval to a target system, or the foundation on which to build a complete full-text search engine. Supported and provided by the Apache Software Foundation, Lucene offers a simple yet powerful API for full-text indexing and search, and it has been one of the most popular free Java information-retrieval libraries for years.

Products built on Lucene

As an open-source project, Lucene has drawn a huge response from the open-source community since its inception. Programmers not only use it to build concrete full-text retrieval applications, but also integrate it into system software and web applications; some commercial software even uses Lucene as the core of its internal full-text retrieval subsystem.

Nutch: an Apache top-level open-source project; a system comprising a web crawler and a search engine (based on Lucene), in the same vein as Baidu and Google.

Hadoop itself grew out of the Nutch project.

Solr: a Lucene subproject; an independent, enterprise-grade open-source search platform (a service) built on Lucene. It exposes XML/JSON/HTTP APIs for external access and ships with a web management interface.

Elasticsearch: an enterprise-grade distributed search platform based on Lucene. It offers a RESTful web interface that makes the search platform simple and convenient for programmers to use.

Well-known projects such as OSChina, Eclipse, MyEclipse, and JForum also implement their search features on top of Lucene. Adding its search capability to our own projects can greatly improve the search experience of the systems we develop.

Characteristics of Lucene

  1. Stable, high-performance indexing
  • Indexes over 150 GB of data per hour.
  • Small RAM requirements: as little as 1 MB of heap.
  • Incremental indexing is as fast as batch indexing.
  • The index is roughly 20%~30% the size of the indexed text.
  2. Efficient, accurate, high-performance search
  • Ranked searching: the best results are returned first.
  • Good ordering of search results.
  • Powerful query types: phrase queries, wildcard queries, proximity queries, range queries, and more.
  • Fielded searching (e.g. title, author, contents).
  • Sorting by any field.
  • Merging of results from multiple indexes.
  • Simultaneous updating and searching.
  • Highlighting, joins, and result grouping.
  • Fast.
  • Pluggable ranking models, including the Vector Space Model and Okapi BM25.
  • Configurable storage engine.
  3. Cross-platform
  • Written in pure Java.
  • Implementations in other languages exist (e.g. C, C++, Python).

Lucene module composition

Lucene is a high-performance, scalable full-text search engine toolkit written in Java. It can easily be embedded into all kinds of applications to add full-text indexing and search. Lucene's goal is to bring full-text search capability to small and medium-sized applications.

Lucene in practice

Index creation process

Step 1: collect the raw document data to be indexed

Sources of collected data:

1. Web pages on the internet: use a crawler to fetch the pages and store them locally as HTML files.

2. Data in a database: connect to the database and read the table data directly.

3. Files in a file system: read the file contents through I/O operations.

Step 2: create document objects and analyze them, passing each document through a tokenizer (Tokenizer) to produce a stream of terms (Term)

The raw content is collected in order to index it. Before indexing, each piece of raw content must be turned into a document (Document) made up of fields (Field) that hold the content. The field contents are then analyzed into individual terms (Term). Each Document can contain multiple Fields.

Step 3: build the index, passing the terms to the index component (Indexer) to form the inverted index structure

The terms of all documents are indexed; the goal is to search only the indexed terms and, through them, find the Documents.

Building an index therefore means indexing terms so that documents can be located from words: this index structure is the inverted index.

Step 4: write the index to disk through the index store.

Creating an index in Java code

Add the dependencies:

<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>${lucene-version}</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>${lucene-version}</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>${lucene-version}</version>
</dependency>
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.5</version>
</dependency>
The index-creation code:

public class TestLuceneIndex {

    public static void main(String[] args) throws Exception {
        // 1. Gather the data
        List<Book> bookList = new ArrayList<>();
        Book book1 = new Book();
        book1.setId(1);
        book1.setName("Lucene");
        book1.setPrice(new BigDecimal("100.45"));
        book1.setDesc("Lucene Core is a Java library providing powerful indexing\n" +
                "and search features, as well as spellchecking, hit highlighting and advanced\n" +
                "analysis/tokenization capabilities. The PyLucene sub project provides Python\n" +
                "bindings for Lucene Core");
        bookList.add(book1);

        Book book2 = new Book();
        book2.setId(2);
        book2.setName("Solr");
        book2.setPrice(new BigDecimal("66.45"));
        book2.setDesc("Solr is highly scalable, providing fully fault tolerant\n" +
                "distributed indexing, search and analytics. It exposes Lucene's features through\n" +
                "easy to use JSON/HTTP interfaces or native clients for Java and other languages");
        bookList.add(book2);

        Book book3 = new Book();
        book3.setId(3);
        book3.setName("Hadoop");
        book3.setPrice(new BigDecimal("318.33"));
        book3.setDesc("The Apache Hadoop software library is a framework that\n" +
                "allows for the distributed processing of large data sets across clusters of\n" +
                "computers using simple programming models");
        bookList.add(book3);

        // 2. Create the Document objects
        List<Document> documents = new ArrayList<>();
        bookList.forEach(x -> {
            Document document = new Document();
            document.add(new TextField("id", x.getId().toString(), Field.Store.YES));
            document.add(new TextField("name", x.getName(), Field.Store.YES));
            document.add(new TextField("price", x.getPrice().toString(), Field.Store.YES));
            document.add(new TextField("desc", x.getDesc(), Field.Store.YES));
            documents.add(document);
        });

        // 3. Create the Analyzer that tokenizes the documents
        Analyzer analyzer = new StandardAnalyzer();
        // Create the Directory object declaring the index location
        Directory directory = FSDirectory.open(Paths.get("D://lucene/index"));
        // Create the IndexWriterConfig holding the settings for writing the index
        IndexWriterConfig config = new IndexWriterConfig(analyzer);

        // 4. Create the IndexWriter and add the documents
        IndexWriter indexWriter = new IndexWriter(directory, config);
        documents.forEach(doc -> {
            try {
                indexWriter.addDocument(doc);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        // Release resources
        indexWriter.close();
    }
}

Index search process

  1. The user enters a query string.
  2. Lexical and linguistic analysis of the query yields a series of terms (Term).
  3. Syntax analysis produces a query tree.
  4. The index is read from storage into memory.
  5. The query tree is run against the index, fetching each term's document list; the lists are then combined (intersection, union, difference) to produce the result documents.
  6. The result documents are sorted by relevance to the query.
  7. The results are returned to the user.

Querying the index in Java code

public class TestLuceneSearch {

    public static void main(String[] args) throws IOException, ParseException {
        // 1. Create the search Query
        Analyzer analyzer = new StandardAnalyzer();
        // Create a query parser
        QueryParser queryParser = new QueryParser("id", analyzer);
        Query query = queryParser.parse("desc:data");
        // 2. Create the Directory object declaring the index location
        Directory directory = FSDirectory.open(Paths.get("D:/lucene/index"));
        // 3. Create the index reader
        IndexReader reader = DirectoryReader.open(directory);
        // 4. Create the index searcher
        IndexSearcher searcher = new IndexSearcher(reader);
        // 5. Run the search, returning at most the top 10 hits
        TopDocs topDocs = searcher.search(query, 10);
        ScoreDoc[] scoreDocs = topDocs.scoreDocs;
        // 6. Walk the result set
        Stream.of(scoreDocs).forEach(doc -> {
            // Fetch the document by docId
            Document document = null;
            try {
                document = searcher.doc(doc.doc);
            } catch (IOException e) {
                e.printStackTrace();
            }
            System.out.println(document.get("name"));
            System.out.println(document.get("id"));
        });
        reader.close();
    }
}

Fields

  1. Field attributes

Lucene stores objects in units of Documents, with an object's attribute values held in Fields. A Field is a single field of a document, made up of a name and a value; a document contains multiple Fields, and the Field value is both the content to be indexed and the content to be searched.

A Field has three attributes:

  • Tokenized or not (tokenized)

Whether the Field value is analyzed into separate words. Yes: the value is split into terms, so that the terms can be indexed.

  • Indexed or not

Whether the tokenized terms (or the whole, untokenized Field value) are written to the index. Indexing exists so that the field can be searched.

  • Stored or not

Whether the Field value is stored in the document. A stored Field can be read back from the Document later.
  2. Common Field types

| Field type | Data type | Tokenized | Indexed | Stored | Notes |
| --- | --- | --- | --- | --- | --- |
| StringField(FieldName, FieldValue, Store.YES) | String | N | Y | Y/N | Not tokenized; the whole value is indexed as one term (e.g. ID-card numbers, order numbers). Storage is decided by Store.YES or Store.NO. |
| TextField(FieldName, FieldValue, Store.NO) | Text | Y | Y | Y/N | Tokenized and indexed. Storage is decided by Store.YES or Store.NO. |
| LongPoint(String name, long... point) and the other point types | Numeric | Y | Y | N | Since Lucene 6.0, LongField is replaced by LongPoint, IntField by IntPoint, FloatField by FloatPoint, DoubleField by DoublePoint. Point fields are indexed for numeric search but not stored; pair them with a StoredField to store the value. |
| StoredField(FieldName, FieldValue) | Many types | N | N | Y | Stored only: not tokenized, not indexed. |

A short sketch of choosing among these types follows.
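
As a minimal sketch (the "orderNo" and "stock" fields and their values are made up for illustration):

Document doc = new Document();
// An order number must match only as a whole: indexed but NOT tokenized.
doc.add(new StringField("orderNo", "SN-2023-0001", Field.Store.YES));
// A description should be searchable word by word: tokenized and indexed.
doc.add(new TextField("desc", "Lucene in Action, Second Edition", Field.Store.NO));
// Point fields index numbers for range search but store nothing;
// pair them with a StoredField when the value must be retrievable.
doc.add(new IntPoint("stock", 42));
doc.add(new StoredField("stock", 42));
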
  3. Field code example
public static void main(String[] args) throws IOException {
    // 1. Gather the data
    List<Book> bookList = Book.buildBookData();
    List<Document> documents = new ArrayList<>();
    bookList.forEach(book -> {
        Document document = new Document();
        Field id = new IntPoint("id", book.getId());          // indexed for numeric search only
        Field id_v = new StoredField("id", book.getId());     // stored copy, so the value can be retrieved
        Field name = new TextField("name", book.getName(), Field.Store.YES);
        Field price = new FloatPoint("price", book.getPrice().floatValue());
        Field desc = new TextField("desc", book.getDesc(), Field.Store.NO);
        document.add(id);
        document.add(id_v);
        document.add(name);
        document.add(price);
        document.add(desc);
        documents.add(document);
    });
    StandardAnalyzer analyzer = new StandardAnalyzer();
    Directory directory = FSDirectory.open(Paths.get("D:/lucene/index2"));
    IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
    IndexWriter indexWriter = new IndexWriter(directory, indexWriterConfig);
    documents.forEach(doc -> {
        try {
            indexWriter.addDocument(doc);
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    indexWriter.close();
}

Index maintenance

  1. Adding documents

indexWriter.addDocument(document);

  2. Deleting documents

Delete by Term:

indexWriter.deleteDocuments(new Term("name", "solr"));

Delete everything:

indexWriter.deleteAll();

  3. Updating documents
public static void main(String[] args) throws IOException {
    Analyzer analyzer = new StandardAnalyzer();
    Directory directory = FSDirectory.open(Paths.get("d:/lucene/index2"));
    IndexWriterConfig config = new IndexWriterConfig(analyzer);
    IndexWriter indexWriter = new IndexWriter(directory, config);

    Document document = new Document();
    document.add(new TextField("id", "1002", Field.Store.YES));
    document.add(new TextField("name", "修改后", Field.Store.YES)); // "after modification"
    // updateDocument deletes the documents matching the term, then adds the new document
    indexWriter.updateDocument(new Term("name", "solr"), document);
    indexWriter.close();
}

Analyzers (word segmentation)

Analyzer concepts

Analyzer (word breaker): the collected data is stored in the Fields of Document objects; an analyzer splits each Field's value into individual terms.

Stop words: to save storage space and improve search efficiency, the search program automatically ignores certain words or phrases when indexing or when processing search requests. These are called Stop Words: modal particles, adverbs, prepositions, conjunctions, and the like, such as "的", "啊", "a", "the".

Extension words: words the analyzer does not produce by default but that we want it to recognize as tokens.
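
As an aside, a minimal sketch of a custom stop-word set with Lucene's StandardAnalyzer (the word list is arbitrary; IK configures stop words through dictionary files instead, as shown later):

// Words in this set are dropped at both index time and query time.
CharArraySet stopWords = new CharArraySet(Arrays.asList("the", "of", "a"), true);
Analyzer analyzer = new StandardAnalyzer(stopWords);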

With a small tool we can inspect the result of analysis.

The StandardAnalyzer splits our phrase "修改后" ("after modification") into three single-character tokens: "修", "改", "后". English, by contrast, is split word by word.
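
The "tool" can be as simple as printing the analyzer's token stream. A minimal sketch (the sample text is illustrative):

public static void printTokens(Analyzer analyzer, String text) throws IOException {
    try (TokenStream ts = analyzer.tokenStream("desc", text)) {
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();                   // required before the first incrementToken()
        while (ts.incrementToken()) {
            System.out.println(term); // one token per line: 修 / 改 / 后 / lucene / core
        }
        ts.end();
    }
}
// printTokens(new StandardAnalyzer(), "修改后 Lucene Core");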

Chinese word segmentation

English is word-based, with words separated by spaces and punctuation, so English is comparatively easy for a program to process.

Chinese is character-based: characters form words, and words form sentences. In "我爱吃红薯" ("I love eating sweet potatoes"), a program cannot tell on its own whether "红薯" (sweet potato) is a word or "吃红" is.

To solve this problem, the Chinese analyzer IKAnalyzer emerged.

IKAnalyzer splits "我爱吃红薯" into words that match our semantics, but its output also includes "吃红", which we do not need. That calls for custom configuration of our own.

Extending the Chinese lexicon

To configure extension words and stop words, create an extension-word file and a stop-word file. IK provides this custom-configuration extension point, as its IKAnalyzer.cfg.xml configuration file shows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extended configuration</comment>
    <!-- Users can configure their own extension dictionary here -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- Users can configure their own extended stop-word dictionary here -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>

We create a stopword.dic file and add "吃红" to it. After re-running the analyzer, the token "吃红" is gone. The extension dictionary (ext.dic) works the same way for words you want added.

Note: do not save the extension-word and stop-word files with the Windows built-in Notepad; it writes the files with a BOM, which IK cannot parse.
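
If you prefer to create the dictionary file programmatically, a minimal sketch (the path is an assumption; place the file wherever IKAnalyzer.cfg.xml expects it):

// Plain UTF-8 without a BOM, which is what IK expects; ext.dic works the same way.
Files.write(Paths.get("src/main/resources/stopword.dic"),
        Arrays.asList("吃红"), StandardCharsets.UTF_8);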

Search

There are two ways to create a query:

1) Use one of the Query subclasses Lucene provides.

2) Use QueryParser to parse a query expression.

Query subclasses

  1. TermQuery

TermQuery queries by term. It does not run an analyzer; it searches the Field for the exact word.

public class TestSearch {

    public static void main(String[] args) throws IOException {
        Query query = new TermQuery(new Term("name", "solr"));
        doSearch(query);
    }

    private static void doSearch(Query query) throws IOException {
        Directory directory = FSDirectory.open(Paths.get("D:/lucene/index"));
        IndexReader indexReader = DirectoryReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(indexReader);
        TopDocs topDocs = searcher.search(query, 10);
        System.out.println("Total hits: " + topDocs.totalHits);
        Stream.of(topDocs.scoreDocs).forEach(doc -> {
            // Fetch the document by docId
            Document document = null;
            try {
                document = searcher.doc(doc.doc);
            } catch (IOException e) {
                e.printStackTrace();
            }
            System.out.println(document);
        });
        indexReader.close();
    }
}
  2. BooleanQuery

BooleanQuery implements combined-condition queries.

public static void testBooleanQuery() throws IOException {
    Query query1 = new TermQuery(new Term("name", "lucene"));
    Query query2 = new TermQuery(new Term("desc", "java"));
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    builder.add(query1, BooleanClause.Occur.MUST);
    builder.add(query2, BooleanClause.Occur.SHOULD);
    doSearch(builder.build());
}

The combinations mean the following (a code sketch follows the list):

  • MUST and MUST: an "and" relationship, i.e. the intersection.
  • MUST and MUST_NOT: documents must contain the former and must not contain the latter.
  • MUST_NOT and MUST_NOT: meaningless; no data is returned.
  • SHOULD and MUST: behaves like MUST alone; the SHOULD clause loses its filtering effect.
  • SHOULD and MUST_NOT: equivalent to MUST and MUST_NOT.
  • SHOULD and SHOULD: an "or" relationship, i.e. the union.
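
For example, a minimal sketch of MUST combined with MUST_NOT, reusing the doSearch helper and the fields from the earlier examples:

public static void testMustNotQuery() throws IOException {
    Query include = new TermQuery(new Term("desc", "lucene"));
    Query exclude = new TermQuery(new Term("name", "solr"));
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    builder.add(include, BooleanClause.Occur.MUST);     // desc must contain "lucene"
    builder.add(exclude, BooleanClause.Occur.MUST_NOT); // name must not contain "solr"
    doSearch(builder.build());
}
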
  3. PhraseQuery (phrase query)

PhraseQuery phraseQuery = new PhraseQuery("desc", "lucene");

A phrase query that tolerates a gap between the terms (a slop of 3):

PhraseQuery phraseQuery = new PhraseQuery(3, "desc", "lucene", "java");

This can match text such as:

Lucene Core is a Java library providing

where "lucene" and "java" are separated by up to 3 intervening words.

  4. Span query

A query for two terms that may have other words between them:

public static void testSpanTermQuery() throws IOException {
    SpanTermQuery tq1 = new SpanTermQuery(new Term("desc", "lucene"));
    SpanTermQuery tq2 = new SpanTermQuery(new Term("desc", "java"));
    SpanNearQuery spanNearQuery = new SpanNearQuery(new SpanQuery[]{tq1, tq2}, 3, true);
    doSearch(spanNearQuery);
}
  5. Fuzzy queries

WildcardQuery: wildcard query. * matches zero or more characters, ? matches exactly one character, and \ is the escape character. Wildcard queries can be slow; a pattern must not begin with a wildcard (that would match every term).

public static void testWildcardQuery() throws IOException {
    WildcardQuery wildcardQuery = new WildcardQuery(new Term("name", "so*"));
    doSearch(wildcardQuery);
}

FuzzyQuery: tolerates misspellings in the query term.

FuzzyQuery fuzzyQuery = new FuzzyQuery(new Term("name", "slors"), 2);

Here "solr" is misspelled as "slors" yet is still found. The parameter 2 is the maximum number of edits allowed, and 2 is also the largest value supported.

  6. Numeric queries

Build the query with the factory methods on IntPoint, LongPoint, FloatPoint, or DoublePoint:

public static void testPointQuery() throws IOException {
    Query query = IntPoint.newRangeQuery("id", 1, 4);
    doSearch(query);
}

Searching with QueryParser

  1. Basic queries

Query syntax:

field name + ":" + search keyword. For example: name:java

  2. Range queries

field name + ":" + [min TO max]. For example: size:[A TO C]

Note: QueryParser does not support numeric range searches, only string ranges; a workaround sketch follows.
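
One hedged workaround, assuming the "id" field was indexed as an IntPoint as in the earlier examples: subclass QueryParser and override getRangeQuery so that ranges on that field become point range queries (inclusive/exclusive handling is simplified in this sketch):

QueryParser parser = new QueryParser("desc", new StandardAnalyzer()) {
    @Override
    protected Query getRangeQuery(String field, String part1, String part2,
                                  boolean startInclusive, boolean endInclusive)
            throws ParseException {
        if ("id".equals(field)) { // simplified: bounds are always treated as inclusive
            return IntPoint.newRangeQuery(field, Integer.parseInt(part1), Integer.parseInt(part2));
        }
        return super.getRangeQuery(field, part1, part2, startInclusive, endInclusive);
    }
};
Query query = parser.parse("id:[1 TO 4]"); // now becomes an IntPoint range query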

  3. Combined-condition queries

There are two ways to write them.

Approach 1: use +, -, or no sign.

| Logic | Notation |
| --- | --- |
| Occur.MUST: the condition must be satisfied | AND or + (plus sign) |
| Occur.SHOULD: the condition is optional | OR or nothing (no sign) |
| Occur.MUST_NOT: the condition must not be satisfied | NOT or - (minus sign) |

Examples:

+filename:lucene +content:lucene (both must match)

+filename:lucene content:lucene (filename must match; content is optional)

filename:lucene content:lucene (either may match)

-filename:lucene content:lucene (filename must not match; content is optional)

Approach 2: use AND, OR, NOT.

QueryParser

public static void testQueryParser() throws ParseException, IOException {
    Analyzer analyzer = new StandardAnalyzer();
    QueryParser queryParser = new QueryParser("desc", analyzer);
    Query query = queryParser.parse("desc:java AND name:lucene");
    doSearch(query);
}

MultiFieldQueryParser

A query over multiple Fields. The query below is equivalent to: name:lucene desc:lucene

public static void testSearchMultiFieldQuery() throws IOException, ParseException {
    Analyzer analyzer = new IKAnalyzer();
    String[] fields = {"name", "desc"};
    MultiFieldQueryParser multiFieldQueryParser = new MultiFieldQueryParser(fields, analyzer);
    Query query = multiFieldQueryParser.parse("lucene");
    System.out.println(query);
    doSearch(query);
}

StandardQueryParser

public static void testStandardQuery() throws QueryNodeException, IOException {
    Analyzer analyzer = new StandardAnalyzer();
    StandardQueryParser parser = new StandardQueryParser(analyzer);
    Query query = parser.parse("desc:java AND name:lucene", "desc");
    System.out.println(query);
    doSearch(query);
}

Other queries:

// Wildcard matching: prefer a trailing wildcard; a leading wildcard is inefficient
query = parser.parse("name:L*", "desc");
query = parser.parse("name:L???", "desc");
// Fuzzy matching
query = parser.parse("lucene~", "desc");
// Range query
query = parser.parse("id:[1 TO 100]", "desc");
// Proximity query: ~2 means up to two words between the terms
query = parser.parse("\"lucene java\"~2", "desc");
