Topic: Word Representation
Dataset: Sogou-T, HowNet
- 1889 distinct sememes
- 2.4 average senses for each word
- 1.6 average sememes for each sense
- 42.2% of words have multiple senses.
Methodology:
Sememe-Encoded Word Representation Learning (SE-WRL)
This framework regards each word sense as a combination of its sememes, iteratively performs word sense disambiguation according to contexts, and learns representations of sememes, senses, and words by extending the Skip-gram model of word2vec (Mikolov et al., 2013).
A). Simple Sememe Aggregation Model
For each word, SSA considers all sememes in all senses of the word together, and represents the target word using the average of all its sememe embeddings.
The simple sememe aggregation model performs better on low-frequency words: in the standard Skip-gram model, low-frequency words cannot be trained well, whereas in SSA a low-frequency word is decomposed into its sememes, and those sememe embeddings are trained well through their occurrences in other words.
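A minimal sketch of the SSA averaging step. All names, vectors, and the HowNet-style annotation below are illustrative toy data, not the paper's actual embeddings:

```python
import numpy as np

# Toy sememe embedding table (illustrative, not real HowNet sememes)
rng = np.random.default_rng(0)
dim = 4
sememe_emb = {s: rng.normal(size=dim) for s in ["human", "move", "fast"]}

# Toy HowNet-style annotation: each sense of a word is a list of sememes
word_senses = {"runner": [["human", "move"], ["move", "fast"]]}

def ssa_word_embedding(word):
    """SSA: represent a word as the average of the sememe embeddings
    collected from all of its senses."""
    sememes = [s for sense in word_senses[word] for s in sense]
    return np.mean([sememe_emb[s] for s in sememes], axis=0)

vec = ssa_word_embedding("runner")
```

Note that a sememe such as "move" appearing in two senses is counted twice in the average, which is how shared sememes tie low-frequency words to better-trained ones.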
B). Sememe Attention over Context Model
The SSA model replaces the target word embedding with the aggregated sememe embeddings to encode sememe information into word representation learning. However, each word in the SSA model still has only one representation across different contexts, which cannot deal with the polysemy of most words. It is intuitive that we should construct distinct embeddings for a target word according to its specific context, with the help of the word sense annotations in HowNet.
Each context word gets an attention weight, computed from the relatedness between the target word embedding w and each of the context word's sense vectors, where a sense vector is the average of its constituent sememe vectors.
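The attention step above can be sketched as follows. This is a toy illustration, assuming dot-product relatedness scores normalized by a softmax; the shapes and data are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sac_context_embedding(target_vec, sense_sememe_vecs):
    """Sememe Attention over Context (SAC): a context word's embedding is an
    attention-weighted sum of its sense vectors; each sense vector is the
    average of its sememe embeddings, and the attention score for a sense is
    its relatedness (dot product) with the target word embedding."""
    sense_vecs = np.array([np.mean(s, axis=0) for s in sense_sememe_vecs])
    att = softmax(sense_vecs @ target_vec)  # one weight per sense
    return att @ sense_vecs                 # weighted sum over senses

# Toy context word with two senses, built from 2 and 3 sememe vectors
target = rng.normal(size=dim)
senses = [rng.normal(size=(2, dim)), rng.normal(size=(3, dim))]
ctx_vec = sac_context_embedding(target, senses)
```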
C). Sememe Attention over Target Model
The same process can also be applied to select appropriate senses for the target word, by using the context words as the attention query.
In the Context Model, only the single target word is used to learn the sense attention weights of the context words;
in the Target Model, multiple context words are used jointly to learn the sense attention weights of the target word.
The Target Model can therefore achieve better word sense disambiguation and produce more accurate sense representations.