
An Analysis of Lucene's Tokenizers

Lucene's Analyzer class centers on this single method:

public abstract class Analyzer {
    public abstract TokenStream tokenStream(String fieldName, Reader reader);
}

-------------------------------------------------------------------

public abstract class TokenStream {
    ......
    public Token next(final Token reusableToken) throws IOException {
        // We don't actually use inputToken, but still add this assert
        assert reusableToken != null;
        return next();
    }
    ......
}

Every concrete analyzer must override this method and return the Tokenizer of its choice (Tokenizer extends TokenStream). The actual tokenization work is done by the next() method that TokenStream provides: every class extending TokenStream overrides next() to define its own tokenization behavior, as the sketch below illustrates.
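To make this concrete, here is a minimal custom Analyzer written against the same Lucene 2.x API used throughout this article (a sketch for illustration only; SimpleWhitespaceAnalyzer is a made-up name). Instead of writing a new next(), it reuses the stock WhitespaceTokenizer and wraps it in a LowerCaseFilter:

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class SimpleWhitespaceAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        // The Tokenizer produces the raw tokens; filters post-process them.
        TokenStream result = new WhitespaceTokenizer(reader);
        result = new LowerCaseFilter(result);
        return result;
    }
}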


Now let's examine StandardAnalyzer, which extends Analyzer. Its tokenStream method is as follows:
public class StandardAnalyzer extends Analyzer {
    ......
    public TokenStream tokenStream(String fieldName, Reader reader) {
        StandardTokenizer tokenStream = new StandardTokenizer(reader, replaceInvalidAcronym); // uses StandardTokenizer
        tokenStream.setMaxTokenLength(maxTokenLength);
        TokenStream result = new StandardFilter(tokenStream);
        result = new LowerCaseFilter(result);
        result = new StopFilter(result, stopSet);
        return result;
    }
}
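Note the decorator pattern at work here: each filter wraps the previous TokenStream, and tokens are pulled through the whole chain one next() call at a time. As a hedged sketch of the same pattern (Lucene 2.x token-reuse API assumed; MinLengthFilter is a hypothetical name), a custom TokenFilter that drops short tokens could look like this:

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public class MinLengthFilter extends TokenFilter {
    private final int minLength;

    public MinLengthFilter(TokenStream input, int minLength) {
        super(input);
        this.minLength = minLength;
    }

    public Token next(final Token reusableToken) throws IOException {
        // Keep pulling tokens from the wrapped stream until one is long enough.
        for (Token t = input.next(reusableToken); t != null; t = input.next(reusableToken)) {
            if (t.termLength() >= minLength) {
                return t;
            }
        }
        return null; // end of stream
    }
}

It would slot into a chain like StandardAnalyzer's the same way the other filters do: result = new MinLengthFilter(result, 3);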
-----------------------------------------------------------------
StandardTokenizer's next() method is as follows:
public class StandardTokenizer extends Tokenizer {
    .......
    public Token next(final Token reusableToken) throws IOException {
        assert reusableToken != null;
        int posIncr = 1;

        while (true) {
            int tokenType = scanner.getNextToken();

            if (tokenType == StandardTokenizerImpl.YYEOF) {
                return null;
            }

            if (scanner.yylength() <= maxTokenLength) {
                reusableToken.clear();
                reusableToken.setPositionIncrement(posIncr);
                scanner.getText(reusableToken);
                final int start = scanner.yychar();
                reusableToken.setStartOffset(start);
                reusableToken.setEndOffset(start + reusableToken.termLength());
                // This 'if' should be removed in the next release. For now, it converts
                // invalid acronyms to HOST. When removed, only the 'else' part should
                // remain.
                if (tokenType == StandardTokenizerImpl.ACRONYM_DEP) {
                    if (replaceInvalidAcronym) {
                        reusableToken.setType(StandardTokenizerImpl.TOKEN_TYPES[StandardTokenizerImpl.HOST]);
                        reusableToken.setTermLength(reusableToken.termLength() - 1); // remove extra '.'
                    } else {
                        reusableToken.setType(StandardTokenizerImpl.TOKEN_TYPES[StandardTokenizerImpl.ACRONYM]);
                    }
                } else {
                    reusableToken.setType(StandardTokenizerImpl.TOKEN_TYPES[tokenType]);
                }
                return reusableToken;
            } else {
                // When we skip a too-long term, we still increment the
                // position increment
                posIncr++;
            }
        }
    }
    ......
}
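The posIncr++ branch deserves attention: when an over-long term is skipped, the gap is recorded in the next token's position increment, so position-sensitive queries still know a term stood there. A quick sketch to observe this (Lucene 2.x API assumed; setMaxTokenLength is the same setter StandardAnalyzer calls above):

StandardTokenizer tok = new StandardTokenizer(new StringReader("aaaaaaa bb cc"));
tok.setMaxTokenLength(5); // "aaaaaaa" exceeds the limit and gets skipped
for (Token t = new Token(); (t = tok.next(t)) != null; ) {
    // "bb" should report a position increment of 2: 1 for itself
    // plus 1 for the skipped over-long token before it.
    System.out.println(t.termText() + " posIncr=" + t.getPositionIncrement());
}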


Test code:


import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class TestAnalyzer {

    final static String TEXT = "hadoop is very good,xina a tooken finished!";

    @SuppressWarnings("deprecation")
    public void testTokenizer() throws IOException {
        Reader reader = new StringReader(TEXT);
        StandardTokenizer tokenizer = new StandardTokenizer(reader);
        // next() and termText() are the deprecated pre-2.4 API, hence
        // the @SuppressWarnings annotation above.
        for (Token t = tokenizer.next(); t != null; t = tokenizer.next()) {
            System.out.println(t.startOffset() + "--" + t.endOffset() + ": " + t.termText());
        }
    }

    public static void main(String[] args) throws IOException {
        TestAnalyzer t = new TestAnalyzer();
        t.testTokenizer();
    }
}

Test usage with the new Lucene 2.4.1 token-reuse API:

TokenStream ts = analyzer.tokenStream("text", new StringReader(txt));
for (Token t = new Token(); (t = ts.next(t)) != null; ) {
    System.out.println(t);
}
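Tying it back to StandardAnalyzer's filter chain, here is a sketch of the full pipeline run on the TEXT string from the test class above (the no-argument StandardAnalyzer constructor of Lucene 2.4.x is assumed):

Analyzer analyzer = new StandardAnalyzer();
TokenStream ts = analyzer.tokenStream("text", new StringReader(TEXT));
for (Token t = new Token(); (t = ts.next(t)) != null; ) {
    // Expect lowercased terms, with stop words such as "is" and "a"
    // removed by the StopFilter at the end of the chain.
    System.out.println(t.termText());
}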
