Getting several word2vec demos running on Windows 10

There are plenty of posts about this online, but they are all nearly identical, and none of them clearly explains one key point: in Python 3.5.2, bytes and str are distinct types, so code downloaded straight from GitHub would not run on my machine. After a few (painful) rounds of quick trial and error I did find a solution, so I'm posting it here (yes, I really am that much of a beginner).
OK, let's start with downloading the corpora.

Platform

  • Win10
  • Anaconda3 for Python 3.5 (the same environment set up in my earlier post)
  • Machine: ASUS G501JW (i7, 8 GB RAM)

Corpus downloads

English wiki dump: https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
Chinese wiki dump: https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2
text8: http://mattmahoney.net/dc/text8.zip

Modifying the code

While the corpora download, open process_wiki.py and look at the few lines that preprocess the wiki dump. The original looks like this:

__author__ = 'huang'

import os
import logging
import sys

from gensim.corpora import WikiCorpus

if __name__=='__main__':

    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)

    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)

    if len(sys.argv) < 3:
        print(globals()['__doc__'] %locals())
        sys.exit(1)
        
    inp, outp = sys.argv[1:3]
    space = ' '
    i = 0

    output = open(outp, 'w')
    wiki = WikiCorpus(inp, lemmatize=False, dictionary={})
    for text in wiki.get_texts():
        output.write(space.join(text) + '\n')
        i = i + 1
        if i % 10000 == 0:
            logger.info('Saved ' + str(i) + ' articles')

    output.close()
    logger.info('Finished ' + str(i) + ' articles')

Right, I only deleted a few blank lines. Once the corpus has finished downloading, put everything in the same folder and try running the following in cmd:

python process_wiki.py enwiki-latest-pages-articles.xml.bz2 wiki.en.text

If it runs, congratulations! On my machine it did not, which is the whole reason this post exists. The problem shows up here:

(C:\Anaconda3) E:\NLP\word2vec-for-wiki-master>python process_wiki.py enwiki-latest-pages-articles.xml.bz2 wiki.en.text
Traceback (most recent call last):
  File "process_wiki.py", line 30, in <module>
    output.write(space.join(text).decode() + '\n')
TypeError: sequence item 0: expected str instance, bytes found

A quick web search confirmed that in Python 3, bytes and str are two different types. The original article mentions this as well, but its fix still left me confused (again: beginner here). Going back to my earlier post and the docs, the cause became clear: join() here returns bytes (or bytearray) because get_texts() yields bytes tokens, while write() on a text-mode file accepts only str. I first tried (and failed with) the following variants:
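To see the type mismatch in isolation, here is a minimal sketch; the bytes tokens stand in for what get_texts() yields in this gensim version:

```python
# In Python 3, str.join() accepts only str items, and bytes.join() only bytes items.
words = [b'anarchism', b'originated']   # stand-ins for WikiCorpus.get_texts() tokens

try:
    ' '.join(words)                     # str separator with bytes items: fails
except TypeError as e:
    print(e)                            # sequence item 0: expected str instance, bytes found

print(b' '.join(words))                 # bytes separator with bytes items works
```

This is exactly the error in the traceback above: the separator and the items have to be the same type.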

Traceback (most recent call last):
  File "process_wiki.py", line 30, in <module>
    output.write(bytes.join(space,text).decode() + '\n')
TypeError: descriptor 'join' requires a 'bytes' object but received a 'str'

(C:\Anaconda3) E:\NLP\word2vec-for-wiki-master>python process_wiki.py enwiki-latest-pages-articles.xml.bz2 wiki.en.text
Traceback (most recent call last):
  File "process_wiki.py", line 30, in <module>
    output.write(bytes.join(space.encode(),text).decode() + '\n')
UnicodeEncodeError: 'gbk' codec can't encode character '\u1f00' in position 1714: illegal multibyte sequence

(C:\Anaconda3) E:\NLP\word2vec-for-wiki-master>python process_wiki.py enwiki-latest-pages-articles.xml.bz2 wiki.en.text
Traceback (most recent call last):
  File "process_wiki.py", line 30, in <module>
    output.write(bytes.join(''.encode(),text).decode() + '\n')
UnicodeEncodeError: 'gbk' codec can't encode character '\u1f00' in position 1474: illegal multibyte sequence

(C:\Anaconda3) E:\NLP\word2vec-for-wiki-master>python process_wiki.py enwiki-latest-pages-articles.xml.bz2 wiki.en.text
Traceback (most recent call last):
  File "process_wiki.py", line 30, in <module>
    output.write(bytes.join(b'',text).decode() + '\n')
UnicodeEncodeError: 'gbk' codec can't encode character '\u1f00' in position 1474: illegal multibyte sequence
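Note that in the last attempts the join itself succeeded; the UnicodeEncodeError comes from the output file instead. On a Chinese-locale Windows, open() without an encoding argument defaults to the gbk codec, which cannot represent characters such as '\u1f00'. A minimal sketch reproducing and sidestepping this (the file path is made up):

```python
import os
import tempfile

text = 'word \u1f00 word'   # U+1F00 is Greek Extended, outside the gbk repertoire

path = os.path.join(tempfile.mkdtemp(), 'out.txt')

# Forcing gbk reproduces what the locale default triggers on Chinese Windows:
try:
    with open(path, 'w', encoding='gbk') as f:
        f.write(text)
except UnicodeEncodeError as e:
    print('gbk failed:', e)

# An explicit utf-8 encoding avoids the error entirely:
with open(path, 'w', encoding='utf-8') as f:
    f.write(text)
print('utf-8 ok')
```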

In the end I rewrote this line:

        space = ' '.encode()

and first tested it with the following:

        data = space.join(text)
        print(data)
        #output.write(str(data) + '\n')

The output appeared in cmd, but print() can emit bytes directly while write() accepts only str, so the final version became:

        data = space.join(text)
        output.write(str(data) + '\n')
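One side effect of this workaround is worth knowing: str() on a bytes object produces its repr, so every line written to the output file carries a b'...' wrapper. A small demonstration, plus a cleaner variant (my sketch, not code from the repo) that decodes instead:

```python
data = b'anarchism originated as a term'

# str() on bytes yields the repr, b-prefix and quotes included:
print(str(data))             # b'anarchism originated as a term'

# Decoding yields the actual text, which is usually what you want in the file:
print(data.decode('utf-8'))  # anarchism originated as a term

# Combined with an explicitly utf-8 output file, the loop body could read:
#     output = open(outp, 'w', encoding='utf-8')
#     ...
#     output.write(b' '.join(text).decode('utf-8') + '\n')
```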

The run then looked like this:

(C:\Anaconda3) E:\NLP\word2vec-for-wiki-master>python process_wiki.py enwiki-latest-pages-articles.xml.bz2 wiki.en.text
2016-07-28 10:48:11,057: INFO: Saved 10000 articles
2016-07-28 10:49:44,660: INFO: Saved 20000 articles
2016-07-28 10:51:04,023: INFO: Saved 30000 articles
2016-07-28 10:52:13,199: INFO: Saved 40000 articles
2016-07-28 10:53:07,548: INFO: Saved 50000 articles
2016-07-28 10:53:45,695: INFO: Saved 60000 articles
2016-07-28 10:54:18,993: INFO: Saved 70000 articles
2016-07-28 10:54:51,188: INFO: Saved 80000 articles
2016-07-28 10:55:50,520: INFO: Saved 90000 articles
·
·
·
·
2016-07-28 15:24:22,182: INFO: Saved 4040000 articles
2016-07-28 15:25:09,770: INFO: Saved 4050000 articles
2016-07-28 15:25:46,915: INFO: Saved 4060000 articles
2016-07-28 15:26:24,892: INFO: Saved 4070000 articles
2016-07-28 15:27:05,343: INFO: Saved 4080000 articles
2016-07-28 15:27:48,280: INFO: Saved 4090000 articles
2016-07-28 15:28:22,146: INFO: finished iterating over Wikipedia corpus of 4099408 documents with 2229304913 positions (total 16753779 articles, 2290359456 positions before pruning articles shorter than 50 words)
2016-07-28 15:28:22,155: INFO: Finished 4099408 articles

Five hours and forty minutes, and over 4.09 million articles saved; quite a lot.
The resulting text file was already too large to open in Notepad. Next I entered

python train_word2vec_model.py wiki.en.text wiki.en.text.model wiki.en.text.vector

to start training the English wiki word vectors:

2016-07-28 15:47:35,297: INFO: running train_word2vec_model.py wiki.en.text wiki.en.text.model wiki.en.text.vector
2016-07-28 15:47:35,302: INFO: collecting all words and their counts
2016-07-28 15:47:35,370: INFO: PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2016-07-28 15:48:05,500: INFO: PROGRESS: at sentence #10000, processed 29336126 words, keeping 434884 word types
2016-07-28 15:48:39,042: INFO: PROGRESS: at sentence #20000, processed 55594275 words, keeping 628122 word types

While that was training, I opened another cmd window, went into the folder, and preprocessed the Chinese wiki corpus:

python process_wiki.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text
E:\NLP\word2vec-for-wiki-master>python process_wiki.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text
2016-07-28 16:28:21,686: INFO: Saved 10000 articles
2016-07-28 16:29:07,536: INFO: Saved 20000 articles

Then came the long wait.
text8 is already plain text, so it can be trained on directly:

python train_word2vec_model.py text8 text8.model text8.vector

The result was:

2016-07-28 20:03:42,295: INFO: PROGRESS: at 99.82% examples, 405001 words/s, in_qsize 12, out_qsize 3
2016-07-28 20:03:42,435: INFO: worker thread finished; awaiting finish of 7 more threads
2016-07-28 20:03:42,445: INFO: worker thread finished; awaiting finish of 6 more threads
2016-07-28 20:03:42,445: INFO: worker thread finished; awaiting finish of 5 more threads
2016-07-28 20:03:42,445: INFO: worker thread finished; awaiting finish of 4 more threads
2016-07-28 20:03:42,465: INFO: worker thread finished; awaiting finish of 3 more threads
2016-07-28 20:03:42,495: INFO: worker thread finished; awaiting finish of 2 more threads
2016-07-28 20:03:42,495: INFO: worker thread finished; awaiting finish of 1 more threads
2016-07-28 20:03:42,505: INFO: worker thread finished; awaiting finish of 0 more threads
2016-07-28 20:03:42,505: INFO: training on 85026035 raw words (62532401 effective words) took 154.3s, 405163 effective words/s
2016-07-28 20:03:42,505: INFO: saving Word2Vec object under text8.model, separately None
2016-07-28 20:03:42,505: INFO: storing numpy array 'syn0' to text8.model.syn0.npy
2016-07-28 20:03:43,506: INFO: not storing attribute syn0norm
2016-07-28 20:03:43,506: INFO: not storing attribute cum_table
2016-07-28 20:03:43,506: INFO: storing numpy array 'syn1neg' to text8.model.syn1neg.npy
2016-07-28 20:03:45,225: INFO: storing 71290x400 projection weights into text8.vector.

Now the model can be tested:

In [1]: import gensim

In [3]: model = gensim.models.Word2Vec.load('text8.model')

In [4]: model.most_similar('man')
Out[4]:
[('woman', 0.6650575399398804),
 ('girl', 0.5865204334259033),
 ('creature', 0.5350353717803955),
 ('boy', 0.510942816734314),
 ('person', 0.5094308257102966),
 ('men', 0.5073959827423096),
 ('evil', 0.48292240500450134),
 ('totoro', 0.47985178232192993),
 ('god', 0.476554274559021),
 ('vanity', 0.47478240728378296)]

In [5]: model.most_similar('girl')
Out[5]:
[('blonde', 0.7728073596954346),
 ('baby', 0.7689986824989319),
 ('kid', 0.7603048086166382),
 ('woman', 0.7313079833984375),
 ('girls', 0.7117128968238831),
 ('boy', 0.6976305246353149),
 ('joey', 0.6945637464523315),
 ('boys', 0.6894382238388062),
 ('bride', 0.685029149055481),
 ('rabbit', 0.6838369369506836)]

Preprocessing the English wiki simply takes too long, so I didn't see its training through. The Chinese wiki corpus, on the other hand, still needs traditional-to-simplified conversion after preprocessing. Both the article I was following and the web at large recommend opencc. After downloading it from GitHub and reading through the install guide, I found the source build wasn't a great fit for my Win10 setup, so in the end I followed Licstar's article and downloaded opencc0.4.2-win32 directly. Going by the install notes, after unzipping you still need to add the directory to PATH before it can be called from cmd. Then enter

opencc --help

to check whether the installation succeeded. It printed

Open Chinese Convert (OpenCC) Command Line Tool
Version 0.4.2

Author: BYVoid <byvoid@byvoid.com>
Bug Report: http://github.com/BYVoid/OpenCC/issues

Usage:
 opencc [Options]

Options:
 -i [file], --input=[file]   Read original text from [file].
 -o [file], --output=[file]  Write converted text to [file].
 -c [file], --config=[file]  Load configuration of conversion from [file].
 -v, --version               Print version and build information.
 -h, --help                  Print this help.

With no input file, reads standard input and writes converted stream to standard output.
Default configuration(zhs2zht.ini) will be loaded if not set.

which means it worked, so I ran

opencc -i wiki.zh.text -o wiki.zh.text.jian -c zht2zhs.ini

About three minutes later the conversion was done. If jieba isn't installed yet, install it first:

pip install jieba

With that installed, run the segmentation script directly:

python separate_words.py wiki.zh.text.jian wiki.zh.text.jian.seq 

Wait for it to finish, then train on the segmented corpus, following the same argument pattern as for the English wiki:

python train_word2vec_model.py wiki.zh.text.jian.seq wiki.zh.text.model wiki.zh.text.vector

Once training finishes, test it the same way as with the text8 model.
See also: https://www.zybuluo.com/Wayne-Z/note/450893
