JGibbLDA Output Files

The five output files:


  • model-final.twords
    The words under each topic together with their probabilities, sorted by probability:
Topic 0th:
    bill 0.005843543826578699
    lai 0.003958529688972668
    seventh 0.0020735155513666352
    immedi 0.0020735155513666352
    anaheim 0.0020735155513666352
    concern 0.0020735155513666352
    month 0.0020735155513666352
    american 0.0020735155513666352
    decision 0.0020735155513666352
    risk 0.0020735155513666352
    demis 0.0020735155513666352
    maintain 0.0020735155513666352
    rose 0.0020735155513666352
Topic 1th:
    photo 0.0040191387559808615
    seri 0.0040191387559808615
    fashion 0.0040191387559808615
    subject 0.0021052631578947372
    left 0.0021052631578947372
    briefli 0.0021052631578947372
    discov 0.0021052631578947372
    hick 0.0021052631578947372
    line 0.0021052631578947372
    lip 0.0021052631578947372
    whom 0.0021052631578947372
    beij 0.0021052631578947372
    man 0.0021052631578947372
    xiaop 0.0021052631578947372
    highli 0.0021052631578947372
    california 0.0021052631578947372
Topic 2th:
    enjoi 0.0039069767441860465
    fan 0.0039069767441860465
    shuttle 0.0039069767441860465
    return 0.002046511627906977
    little 0.002046511627906977
    plai 0.002046511627906977
    songwrit 0.002046511627906977
    sai 0.002046511627906977
    debat 0.002046511627906977
    depart 0.002046511627906977
    disagre 0.002046511627906977
    iii 0.002046511627906977
    jan 0.002046511627906977
    mere 0.002046511627906977
    relat 0.002046511627906977
    stress 0.002046511627906977
  • model-final.phi
    The topic-word distributions, i.e. p(word | topic).
    This is the raw, unsorted data behind the twords file above, saved directly from the array in memory.
  • model-final.theta
    The topic distribution of each document, i.e. p(topic | doc), unsorted (a small parsing sketch follows this list).

  • model-final.tassign
    The topic assignment of every word occurrence, one document per line, e.g.:
    0:35
    Here 0 is the word id and 35 is the topic index, i.e. the 36th topic counting from 0.
0:35 1:30 1:52 2:11 2:11 2:11 2:11 3:52 4:11 5:26 6:52 7:38 8:98 9:93 10:11 11:11 12:33 12:52 12:11 12:11 13:11 14:52 14:52 15:12 16:11 16:11 
7:11 18:11 18:11 19:13 20:52 20:52 21:66 22:93 23:21 24:98 25:81 26:11 27:11 28:7 29:52 30:11 31:11 31:11 31:11 32:11 32:11 33:11 34:57 34:52 34:52 35:11 36:25 37:11 37:11 37:11 38:32 39:11 40:1 40:52 41:52 42:52 42:52 43:52 43:52 44:11 45:14 46:11 47:11 48:38 49:11 50:36 51:52 52:12 
53:65 54:83 55:10 56:27 57:4 58:52 59:52 60:24 60:35 61:63 62:83 14:52 14:52 63:89 63:83 64:5 65:43 66:52 20:52 20:52 67:81 68:41 69:65 70:41 71:41 72:83 
73:88 74:83 75:24 76:83 77:73 78:10 79:24 80:5 81:52 40:52 40:52 82:52 42:52 42:52 43:52 43:52 51:52 
  • model-final.others
    Some of the parameters used:
alpha=0.5
beta=0.1
ntopics=100
ndocs=100
nwords=4815
liter=999
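
To make these files easier to work with, here is a minimal Java sketch that loads model-final.theta into a double[][] array and pulls ntopics out of model-final.others. The parsing is based only on the layouts shown above (whitespace-separated probabilities, key=value lines); the models/casestudy directory is just a placeholder path.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class ThetaReader {
        // Parse model-final.theta: one line per document,
        // each line is a whitespace-separated list of p(topic | doc).
        public static double[][] readTheta(String path) throws IOException {
            List<double[]> rows = new ArrayList<>();
            try (BufferedReader br = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = br.readLine()) != null) {
                    line = line.trim();
                    if (line.isEmpty()) continue;
                    String[] tokens = line.split("\\s+");
                    double[] dist = new double[tokens.length];
                    for (int k = 0; k < tokens.length; k++) {
                        dist[k] = Double.parseDouble(tokens[k]);
                    }
                    rows.add(dist);
                }
            }
            return rows.toArray(new double[0][]);
        }

        // Parse model-final.others: one "key=value" pair per line (alpha, beta, ntopics, ...).
        public static int readNtopics(String path) throws IOException {
            try (BufferedReader br = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = br.readLine()) != null) {
                    String[] kv = line.trim().split("=");
                    if (kv.length == 2 && kv[0].equals("ntopics")) {
                        return Integer.parseInt(kv[1].trim());
                    }
                }
            }
            throw new IOException("ntopics not found in " + path);
        }

        public static void main(String[] args) throws IOException {
            double[][] theta = readTheta("models/casestudy/model-final.theta");
            int ntopics = readNtopics("models/casestudy/model-final.others");
            System.out.println("docs=" + theta.length + ", topics=" + ntopics);
            // Report the most probable topic of the first document.
            int best = 0;
            for (int k = 1; k < theta[0].length; k++) {
                if (theta[0][k] > theta[0][best]) best = k;
            }
            System.out.println("doc 0 -> topic " + best + " (p=" + theta[0][best] + ")");
        }
    }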

Looking at the result files, the training run and the test (inference) run produce the same number and the same types of output files, so there is no need to describe estimation and inference separately.
Besides, the inference step on the test data is itself a kind of training; it just has to consult the results already learned from the training set while sampling, which makes the result more accurate.
The model built for the new test data is called newModel; if the test data is larger than the original training set, this newModel can even replace the originally trained model.
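
For reference, inference on new documents can also be driven directly from Java, roughly as in the JGibbLDA how-to; that is where the newModel above comes from. The class and field names below (LDACmdOption, Inferencer, Model, and the inf/dir/modelName/niters/dfile options) are written from memory and the paths are placeholders, so they should be checked against the actual source.

    import jgibblda.Inferencer;
    import jgibblda.LDACmdOption;
    import jgibblda.Model;

    public class InferNewDocs {
        public static void main(String[] args) {
            // Point the options at the directory holding the trained model
            // (model-final.*) and ask for inference rather than estimation.
            LDACmdOption option = new LDACmdOption();
            option.inf = true;                       // corresponds to -inf
            option.dir = "models/casestudy";         // placeholder model directory
            option.modelName = "model-final";        // trained model to load
            option.niters = 30;                      // sampling iterations on the new data
            option.dfile = "newdocs.dat";            // placeholder file with the new documents

            // The inferencer loads the trained model, then runs Gibbs sampling on the
            // new documents while reusing the trained topic-word statistics.
            Inferencer inferencer = new Inferencer();
            inferencer.init(option);
            Model newModel = inferencer.inference(); // the "newModel" mentioned above;
                                                     // it holds theta/phi for the new documents
        }
    }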


Question:

Is the sampling on the test set driven mainly by the test data itself, or mainly by the original training results? This needs to be pinned down.


Next steps:

1. Continue analysing how the test set is handled, and re-examine how the code accomplishes it.
2. Add files to the source code so the various results can be visualised.
3. Chinese LDA.

posted on 2015-04-23 21:17 cynorr