Chinese word segmentation, word-frequency statistics, and word cloud generation are common tasks when working with text data. Three modules are used here to implement these features quickly.
Chinese word segmentation and word-frequency statistics
import jieba
from collections import Counter

# 1. Read the text file and segment it into words
with open('demo.txt', mode='r', encoding='gbk') as f:
    report = f.read()
words = jieba.cut(report)

# 2. Keep only words of at least the specified length
report_words = []
for word in words:
    if len(word) >= 4:
        report_words.append(word)
print(report_words)

# 3. Count the 50 most frequent words
result = Counter(report_words).most_common(50)
print(result)
The code above uses the jieba module for word segmentation and the collections module for word-frequency statistics.
jieba is an excellent third-party Chinese lexicon library for Chinese word segmentation. Chinese word segmentation means splitting a sequence of Chinese characters into individual words. jieba lets you do this quickly and efficiently, and supports three segmentation modes: accurate mode, full mode, and search-engine mode.
collections is a module in the Python standard library that provides additional container types as alternatives to Python's built-in containers dict, list, set, and tuple. These container types include namedtuple, deque, Counter, and others.
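A quick sketch of these containers (the values here are arbitrary examples):

```python
from collections import namedtuple, deque, Counter

# namedtuple: a tuple subclass with named fields
Point = namedtuple('Point', ['x', 'y'])
p = Point(x=1, y=2)
print(p.x, p.y)  # 1 2

# deque: a double-ended queue with fast appends/pops on both ends
d = deque([1, 2, 3])
d.appendleft(0)
d.append(4)
print(list(d))  # [0, 1, 2, 3, 4]

# Counter: a dict subclass that counts hashable items
c = Counter('abracadabra')
print(c.most_common(2))  # [('a', 5), ('b', 2)]
```

Counter is the one used above: feeding it a list of words gives word counts, and most_common(n) returns the n highest-frequency entries.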
Simple word cloud
import jieba.posseg as pseg
from collections import Counter
from wordcloud import WordCloud

# 1. Read the text file and segment it into words (with part-of-speech tags)
with open('demo.txt', mode='r', encoding='gbk') as f:
    report = f.read()
words = pseg.cut(report)

# 2. Keep words of at least the specified length whose POS tag is a noun
report_words = []
for word, flag in words:
    if (len(word) >= 4) and ('n' in flag):
        report_words.append(word)
# print(report_words)

# 3. Count the 50 most frequent words
result = Counter(report_words).most_common(50)
# print(result)

# 4. Draw the word cloud
content = dict(result)
# print(content)
wc = WordCloud(font_path='PINGFANG MEDIUM.TTF', background_color='white', width=1000, height=600)
wc.generate_from_frequencies(content)
wc.to_file('词云图1.png')
Here the wordcloud module is used to generate the word cloud; generate_from_frequencies takes a dict mapping each word to its count.
Drawing a word cloud shaped by an image
import jieba.posseg as pseg
from collections import Counter
from PIL import Image
import numpy as np
from wordcloud import WordCloud

# 1. Read the text file and segment it into words (with part-of-speech tags)
with open('demo.txt', mode='r', encoding='gbk') as f:
    report = f.read()
words = pseg.cut(report)

# 2. Keep words of at least the specified length whose POS tag is a noun
report_words = []
for word, flag in words:
    if (len(word) >= 4) and ('n' in flag):
        report_words.append(word)
# print(report_words)

# 3. Count the 300 most frequent words
result = Counter(report_words).most_common(300)
# print(result)

# 4. Draw the word cloud, using the image as a shape mask
mask_pic = Image.open('map.png')
mask_data = np.array(mask_pic)
print(mask_data)
content = dict(result)
wc = WordCloud(font_path='PINGFANG MEDIUM.TTF', background_color='white', mask=mask_data)
wc.generate_from_frequencies(content)
wc.to_file('词云图2.png')
Here a mask parameter is passed to WordCloud, so the words are laid out in the shape of the image.
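The mask works on pixel values: pure-white areas (value 255) are treated as background and left empty, while darker areas are filled with words. A minimal sketch of building such a mask array with NumPy (using a hypothetical square shape instead of loading map.png):

```python
import numpy as np

# Start from an all-white canvas: 255 = background,
# which WordCloud leaves empty
mask_data = np.full((200, 200), 255, dtype=np.uint8)

# Carve out a dark square region; WordCloud would
# place the words inside this area
mask_data[50:150, 50:150] = 0

print(mask_data.shape)  # (200, 200)
```

Passing this array as `mask=mask_data` would produce a square-shaped cloud; in practice you load a silhouette image with a white background, as the code above does.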
Drawing a gradient-colored word cloud based on an image
import jieba.posseg as pseg
from collections import Counter
from PIL import Image
import numpy as np
from wordcloud import WordCloud, ImageColorGenerator

# 1. Read the text file and segment it into words (with part-of-speech tags)
with open('demo.txt', mode='r', encoding='gbk') as f:
    report = f.read()
words = pseg.cut(report)

# 2. Keep words of at least the specified length whose POS tag is a noun
report_words = []
for word, flag in words:
    if (len(word) >= 4) and ('n' in flag):
        report_words.append(word)
# print(report_words)

# 3. Count the 300 most frequent words
result = Counter(report_words).most_common(300)
# print(result)

# 4. Draw the word cloud, then recolor it from the mask image
mask_pic = Image.open('map.png')
mask_data = np.array(mask_pic)
content = dict(result)
wc = WordCloud(font_path='PINGFANG MEDIUM.TTF', background_color='white', mask=mask_data)
wc.generate_from_frequencies(content)
mask_colors = ImageColorGenerator(mask_data)
wc.recolor(color_func=mask_colors)
wc.to_file('词云图3.png')
Here recolor is used with ImageColorGenerator to repaint the words using colors sampled from the mask image.