[Educoder Data Mining Lab] Computing Similarity Between Texts with SMC
Let's dig in!
This is another lab on computing the similarity between two texts, and it is not much different from the previous two levels.
What needs attention is how SMC (the Simple Matching Coefficient) is computed:

$s = \frac{f_{11}+f_{00}}{f_{11}+f_{00}+f_{10}+f_{01}}$

where f11 is the number of vocabulary words that appear in both texts, f00 the number that appear in neither, and f10 / f01 the number that appear only in the first or only in the second text.
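As a quick arithmetic check with made-up counts (not taken from the lab data): if f11 = 3, f00 = 5, f10 = 1 and f01 = 1, the two texts agree on 8 of the 10 vocabulary words, so s = (3 + 5) / (3 + 5 + 1 + 1) = 0.8.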
The code is as follows:
import numpy as np
import jieba
jieba.setLogLevel(jieba.logging.INFO)  # suppress jieba's debug output when it loads the dictionary
def smc_similarity(sentence1: str, sentence2: str) -> float:
    # 1. Tokenize both texts with jieba
    ########## Begin ##########
    seg1 = [word for word in jieba.cut(sentence1)]
    seg2 = [word for word in jieba.cut(sentence2)]
    ########## End ##########
    # 2. Build the vocabulary (union of the two token lists)
    ########## Begin ##########
    word_list = list(set([word for word in seg1 + seg2]))
    ########## End ##########
    # 3. Count occurrences of each vocabulary word in each text
    ########## Begin ##########
    word_counts_1 = np.array([len([word for word in seg1 if word == w]) for w in word_list])
    word_counts_2 = np.array([len([word for word in seg2 if word == w]) for w in word_list])
    ########## End ##########
    # 4. SMC formula: count vocabulary words present in both, neither, or only one text
    ########## Begin ##########
    f00 = np.sum((word_counts_1 == 0) & (word_counts_2 == 0))
    f01 = np.sum((word_counts_1 == 0) & (word_counts_2 != 0))
    f10 = np.sum((word_counts_1 != 0) & (word_counts_2 == 0))
    f11 = np.sum((word_counts_1 != 0) & (word_counts_2 != 0))
    smc = (f00 + f11) / (f01 + f10 + f00 + f11)
    ########## End ##########
    return smc
str1 = "我爱北京天安门"
str2 = "天安门雄伟壮阔让人不得不爱"
sim1 = smc_similarity(str1, str2)
print(sim1)
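One thing worth pointing out about this implementation: because word_list is the union of the two token lists, every vocabulary word occurs in at least one of the texts, so f00 is always 0 and the score reduces to f11 / (f11 + f10 + f01), which is exactly the Jaccard similarity of the two word sets. A minimal equivalent sketch, reusing str1, str2 and jieba from the code above:
tokens1 = set(jieba.cut(str1))
tokens2 = set(jieba.cut(str2))
sim2 = len(tokens1 & tokens2) / len(tokens1 | tokens2)
print(sim2)  # should print the same value as sim1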