
Implementing the Naive Bayes Algorithm in Python

程序员文章站 2023-11-22 09:26:40

This code implements a Naive Bayes classifier (the version that assumes conditional independence between features), commonly used for spam filtering, with Laplace smoothing applied.

For the theory behind the Naive Bayes algorithm, see the posts in the theory section of this blog.
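Before diving into the code, the effect of the Laplace smoothing mentioned above can be sketched on toy counts (the word counts below are made up for illustration, not taken from the article's data set): without smoothing, any word never seen in a class gets probability 0 and zeroes out the entire product P(w1|c)·P(w2|c)·…·P(wn|c) for any document containing it.

```python
# A minimal sketch of Laplace (add-one) smoothing; the word counts here are
# made-up numbers for illustration only.
counts = {"stupid": 3, "dog": 1, "food": 0}  # occurrences of each word in one class
total = sum(counts.values())                 # total word occurrences in that class
vocab = len(counts)                          # vocabulary size

# Unsmoothed: "food" gets probability 0, which would zero out the whole
# product of per-word probabilities for any document containing it.
unsmoothed = {w: c / total for w, c in counts.items()}

# Add-one smoothing: add 1 to every count and the vocabulary size to the
# denominator, so every word keeps a small nonzero probability.
smoothed = {w: (c + 1) / (total + vocab) for w, c in counts.items()}

print(unsmoothed["food"], smoothed["food"])  # 0.0 0.14285714285714285
```

The smoothed values still sum to 1 over the vocabulary, so they remain a valid probability distribution.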

#!/usr/bin/python3
# -*- coding: utf-8 -*-
from numpy import array, log, ones, random
def loaddataset():
  postinglist=[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
         ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
         ['my', 'dalmation', 'is', 'so', 'cute', 'i', 'love', 'him'],
         ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
         ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
         ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
  classvec = [0,1,0,1,0,1]  # 1 = abusive post, 0 = normal
  return postinglist,classvec
def createvocablist(dataset):
  vocabset = set([]) #create empty set
  for document in dataset:
    vocabset = vocabset | set(document) #union of the two sets
  return list(vocabset)
 
def setofwords2vec(vocablist, inputset):
  returnvec = [0]*len(vocablist)
  for word in inputset:
    if word in vocablist:
      returnvec[vocablist.index(word)] = 1
    else: print("the word: %s is not in my vocabulary!" % word)
  return returnvec
def trainnb0(trainmatrix,traincategory):  # train the model
  numtraindocs = len(trainmatrix)
  numwords = len(trainmatrix[0])
  pabusive = sum(traincategory)/float(numtraindocs)
  p0num = ones(numwords); p1num = ones(numwords)  # Laplace smoothing: start every word count at 1
  p0denom = 2.0; p1denom = 2.0                    # Laplace smoothing: start every denominator at 2
  for i in range(numtraindocs):
    if traincategory[i] == 1:
      p1num += trainmatrix[i]
      p1denom += sum(trainmatrix[i])
    else:
      p0num += trainmatrix[i]
      p0denom += sum(trainmatrix[i])
  p1vect = log(p1num/p1denom)    # take logs to avoid floating-point underflow when multiplying probabilities
  p0vect = log(p0num/p0denom)
  return p0vect,p1vect,pabusive
 
def classifynb(vec2classify, p0vec, p1vec, pclass1):
  p1 = sum(vec2classify * p1vec) + log(pclass1)
  p0 = sum(vec2classify * p0vec) + log(1.0 - pclass1)
  if p1 > p0:
    return 1
  else:
    return 0
 
def bagofwords2vecmn(vocablist, inputset):
  returnvec = [0] * len(vocablist)
  for word in inputset:
    if word in vocablist:
      returnvec[vocablist.index(word)] += 1
  return returnvec
 
def testingnb():  # sanity-check the trained model on the toy posts
  listoposts, listclasses = loaddataset()
  myvocablist = createvocablist(listoposts)
  trainmat = []
  for postindoc in listoposts:
    trainmat.append(setofwords2vec(myvocablist, postindoc))
  p0v, p1v, pab = trainnb0(array(trainmat), array(listclasses))
  testentry = ['love', 'my', 'dalmation']
  thisdoc = array(setofwords2vec(myvocablist, testentry))
  print(testentry, 'classified as: ', classifynb(thisdoc, p0v, p1v, pab))
  testentry = ['stupid', 'garbage']
  thisdoc = array(setofwords2vec(myvocablist, testentry))
  print(testentry, 'classified as: ', classifynb(thisdoc, p0v, p1v, pab))
 
def textparse(bigstring): # split a long string into a list of lowercase tokens
  import re
  listoftokens = re.split(r'\W+', bigstring)  # split on non-word characters, not r'\w*'
  return [tok.lower() for tok in listoftokens if len(tok) > 2]
 
def spamtest():  # spam-filter test; requires the email/spam and email/ham data files
  doclist = []
  classlist = []
  fulltext = []
  for i in range(1, 26):
    wordlist = textparse(open('email/spam/%d.txt' % i).read())
    doclist.append(wordlist)
    fulltext.extend(wordlist)
    classlist.append(1)
    wordlist = textparse(open('email/ham/%d.txt' % i).read())
    doclist.append(wordlist)
    fulltext.extend(wordlist)
    classlist.append(0)
  vocablist = createvocablist(doclist) 
  trainingset = list(range(50))  # use a list so indices can be deleted below
  testset = []
  for i in range(10):
    randindex = int(random.uniform(0, len(trainingset)))
    testset.append(trainingset[randindex])
    del trainingset[randindex]
  trainmat = []
  trainclasses = []
  for docindex in trainingset: 
    trainmat.append(bagofwords2vecmn(vocablist, doclist[docindex]))
    trainclasses.append(classlist[docindex])
  p0v, p1v, pspam = trainnb0(array(trainmat), array(trainclasses))
  errorcount = 0
  for docindex in testset: 
    wordvector = bagofwords2vecmn(vocablist, doclist[docindex])
    if classifynb(array(wordvector), p0v, p1v, pspam) != classlist[docindex]:
      errorcount += 1
      print("classification error", doclist[docindex])
  print('the error rate is: ', float(errorcount) / len(testset))
 
 
 
listoposts, listclasses = loaddataset()
myvocablist = createvocablist(listoposts)
print(myvocablist, '\n')
# print(setofwords2vec(myvocablist, listoposts[0]), '\n')
trainmat = []
for postindoc in listoposts:
  trainmat.append(setofwords2vec(myvocablist, postindoc))
print(trainmat)
p0v, p1v, pab = trainnb0(array(trainmat), array(listclasses))
print(pab)
print(p0v, '\n', p1v)
testingnb()
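The log() in trainnb0 matters more than it might look: a document's score is a product of many per-word probabilities, each well below 1, and such a product quickly underflows a 64-bit float to exactly 0.0, after which all classes compare equal. Summing log-probabilities instead stays in a safe numeric range. A quick standalone demonstration (the probability value 1e-5 is an arbitrary choice for illustration):

```python
import math

# 100 word probabilities of 1e-5 each: the raw product is 1e-500, far below
# the smallest representable double, so it underflows to exactly 0.0.
# The sum of logs remains a perfectly usable score for comparing classes.
probs = [1e-5] * 100

product = 1.0
for p in probs:
    product *= p

log_score = sum(math.log(p) for p in probs)

print(product)    # 0.0 (underflow)
print(log_score)  # about -1151.3, still fine for comparison
```

Because log is monotonic, comparing log-scores between classes gives the same winner as comparing the raw products would, which is exactly what classifynb relies on.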

That's all for this article. I hope it helps with your studies, and thank you for your continued support.