
How to extract content keywords in Python


This article demonstrates, with a working example, how to extract keywords from content in Python. It is shared here for your reference; the details are as follows:

Below is a very efficient piece of Python code for extracting keywords from content. It only works on English text: Chinese needs word segmentation first, which this code does not handle, but once a segmentation step is added the same approach works just as well for Chinese (see the segmentation sketch after the main listing below).

The code is as follows:

# coding=utf-8
import nltk
from nltk.corpus import brown
# This is a fast and simple noun phrase extractor (based on NLTK)
# Feel free to use it, just keep a link back to this post
# http://thetokenizer.com/2013/05/09/efficient-way-to-extract-the-main-topics-of-a-sentence/
# Created by Shlomi Babluki
# May, 2013
# Requires the Brown corpus and the Punkt tokenizer:
#   nltk.download('brown'); nltk.download('punkt')

# This is our fast part-of-speech tagger
#############################################################################
brown_train = brown.tagged_sents(categories='news')
regexp_tagger = nltk.RegexpTagger(
    [(r'^-?[0-9]+(\.[0-9]+)?$', 'CD'),
     (r'(-|:|;)$', ':'),
     (r'\'*$', 'MD'),
     (r'(The|the|A|a|An|an)$', 'AT'),
     (r'.*able$', 'JJ'),
     (r'^[A-Z].*$', 'NNP'),
     (r'.*ness$', 'NN'),
     (r'.*ly$', 'RB'),
     (r'.*s$', 'NNS'),
     (r'.*ing$', 'VBG'),
     (r'.*ed$', 'VBD'),
     (r'.*', 'NN')
])
unigram_tagger = nltk.UnigramTagger(brown_train, backoff=regexp_tagger)
bigram_tagger = nltk.BigramTagger(brown_train, backoff=unigram_tagger)
#############################################################################
# This is our semi-CFG; extend it according to your own needs
#############################################################################
cfg = {}
cfg["NNP+NNP"] = "NNP"
cfg["NN+NN"] = "NNI"
cfg["NNI+NN"] = "NNI"
cfg["JJ+JJ"] = "JJ"
cfg["JJ+NN"] = "NNI"
#############################################################################
class NPExtractor(object):
    def __init__(self, sentence):
        self.sentence = sentence
    # Split the sentence into single words/tokens
    def tokenize_sentence(self, sentence):
        tokens = nltk.word_tokenize(sentence)
        return tokens
    # Normalize brown corpus' tags ("NN", "NN-PL", "NNS" > "NN")
    def normalize_tags(self, tagged):
        n_tagged = []
        for t in tagged:
            if t[1] == "NP-TL" or t[1] == "NP":
                n_tagged.append((t[0], "NNP"))
                continue
            if t[1].endswith("-TL"):
                n_tagged.append((t[0], t[1][:-3]))
                continue
            if t[1].endswith("S"):
                n_tagged.append((t[0], t[1][:-1]))
                continue
            n_tagged.append((t[0], t[1]))
        return n_tagged
    # Extract the main topics from the sentence
    def extract(self):
        tokens = self.tokenize_sentence(self.sentence)
        tags = self.normalize_tags(bigram_tagger.tag(tokens))
        merge = True
        while merge:
            merge = False
            for x in range(0, len(tags) - 1):
                t1 = tags[x]
                t2 = tags[x + 1]
                key = "%s+%s" % (t1[1], t2[1])
                value = cfg.get(key, '')
                if value:
                    merge = True
                    tags.pop(x)
                    tags.pop(x)
                    match = "%s %s" % (t1[0], t2[0])
                    pos = value
                    tags.insert(x, (match, pos))
                    break
        matches = []
        for t in tags:
            if t[1] == "NNP" or t[1] == "NNI":
            #if t[1] == "NNP" or t[1] == "NNI" or t[1] == "NN":
                matches.append(t[0])
        return matches
# Main method, just run "python np_extractor.py"
def main():
    sentence = "Swayy is a beautiful new dashboard for discovering and curating online content."
    np_extractor = NPExtractor(sentence)
    result = np_extractor.extract()
    # Prints something like:
    # This sentence is about: Swayy, beautiful new dashboard, online content
    print("This sentence is about: %s" % ", ".join(result))
if __name__ == '__main__':
    main()
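
For the Chinese case mentioned earlier, the missing piece is word segmentation. Below is a minimal, illustrative sketch (not part of the listing above) that assumes the third-party jieba library is installed (pip install jieba): jieba.posseg segments and POS-tags the text, and consecutive noun tokens are merged into a single phrase, in the same spirit as the semi-CFG rules above. The code is as follows:

# coding=utf-8
# Illustrative sketch only: jieba is a third-party Chinese segmentation
# library (pip install jieba); the noun merging mimics the semi-CFG idea above.
import jieba.posseg as pseg

def extract_chinese_keywords(text):
    matches = []
    buf = []
    # pseg.cut yields (word, flag) pairs; flags starting with "n" are nouns
    for word, flag in pseg.cut(text):
        if flag.startswith("n"):
            buf.append(word)  # collect consecutive nouns into one phrase
        else:
            if buf:
                matches.append("".join(buf))
                buf = []
    if buf:
        matches.append("".join(buf))
    return matches

if __name__ == '__main__':
    sentence = u"自然语言处理是人工智能领域的一个重要方向。"
    print("This sentence is about: %s" % ", ".join(extract_chinese_keywords(sentence)))

With segmentation and tagging handled this way, the same merge-and-filter idea used by NPExtractor carries over to Chinese text.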

I hope this article helps you with your Python programming.