I rarely keep notes on my learning process, but I am forgetful and have only just picked up Python, so I am writing this down for future reference and reflection.
This post uses Python's jieba segmentation and wordcloud to visualize words: stopwords are removed and word frequencies are computed by hand. Alternatively, jieba's built-in keyword extraction can be used.
Project links: jieba: https://github.com/fxsjy/jieba
wordcloud:https://github.com/amueller/word_cloud
# Import the packages we need
import pandas as pd
import numpy as np
import jieba
import jieba.analyse
from wordcloud import WordCloud
import os
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
%matplotlib inline
# The file I want to process lives on the D: drive and has a dataframe-like structure
os.chdir('D:')
content = pd.read_csv('dataframe.csv', dtype='object')
# Join every value of the keyword column into one string for easier processing
action = ''
for kw in content['keyword']:
    action += kw.strip() + ' '
stopwords = open('stopword.txt').read().strip().splitlines()
seg = jieba.cut(action)
seg = ' '.join(seg).split()      # the segmented words come back as Unicode
words = ''
for word in seg:
    word = word.encode('utf-8')      # the stopword list is utf-8 encoded, so encode the words the same way
    if word not in stopwords:
        words += word.strip() + ' '
words = words.decode('utf-8')
At this point you could already draw the word cloud directly with the generate() function.
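For example, skipping the frequency counting below, a minimal sketch could look like this (wordcloud0 and the bare 'msyh.ttf' path are just placeholders; font_path must point at a Chinese font, such as the msyh.ttf used further down):
wordcloud0 = WordCloud(font_path='msyh.ttf', background_color='white', max_words=200).generate(words)
plt.imshow(wordcloud0)
plt.axis('off')
plt.show()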
# Compute word frequencies
words = words.split()
word_freq = {}
for word in words:
    if word in word_freq:
        word_freq[word] += 1
    else:
        word_freq[word] = 1
# Sort by frequency, converting the dict into a list
sort_word = []
for word, freq in word_freq.items():
    sort_word.append((word, freq))
sorted_word = sorted(sort_word, key=lambda x: x[1], reverse=True)
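As a side note, the counting and sorting above could also be done with collections.Counter; this is just an alternative sketch, the rest of the post keeps the explicit loops:
from collections import Counter
word_freq = Counter(words)               # same counts as the loop above
sorted_word = word_freq.most_common()    # list of (word, freq) pairs, highest frequency first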
## Look at the top 100 most frequent words
for word in sorted_word[:100]:
    print word[0], word[1]
# There turn out to be a lot of single-character words, so keep only words with more than one character
lengther = []
for word in sorted_word:
    if len(word[0]) > 1:
        lengther.append(word)
# Now draw the word cloud
wordcloud1 = WordCloud(font_path='..matplotlib\\mpl-data\\fonts\\ttf\\msyh.ttf',
                       background_color='white', max_words=200,
                       stopwords=stopwords).generate_from_frequencies(dict(lengther))
plt.imshow(wordcloud1)
plt.axis('off')
plt.show()
jieba's built-in keyword extraction (tf-idf, textrank)
tfidf = jieba.analyse.extract_tags(action, topK=200, withWeight=True)
textrank = jieba.analyse.textrank(action, topK=200, withWeight=True)
The plotting part is omitted; it is the same as above.
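For completeness, a sketch of what that would look like for the tf-idf weights, assuming a wordcloud version whose generate_from_frequencies accepts a dict (as used above) and the same font settings:
wordcloud2 = WordCloud(font_path='..matplotlib\\mpl-data\\fonts\\ttf\\msyh.ttf', background_color='white', max_words=200).generate_from_frequencies(dict(tfidf))
plt.imshow(wordcloud2)
plt.axis('off')
plt.show()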
Note: this post is original content; please credit the source when reposting.