
Information processing device, learning method, and storage medium

Patent No.
US11176327B2
Publication date
2021-11-16
Applicant
FUJITSU LIMITED (Kawasaki, JP)
Inventor
Yuji Mizobuchi
IPC classes
G06F40/58; G06F40/30; G06F16/00; G06F40/45; G06F40/216; G06F40/284; G06N20/00
Technical field (extracted keywords)
word, learning, language, words, parameter, in, section, target, space, vector
Region: Kawasaki

Abstract

A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including: learning distributed representations of words included in a word space of a first language, using a learner for learning the distributed representations; classifying words included in a word space of a second language different from the first language into words common to words included in the word space of the first language and words not common to words included in the word space of the first language; and replacing distributed representations of the common words included in the word space of the second language with distributed representations of the corresponding words in the first language, and adjusting a parameter of the learner.
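As a concrete illustration, the following is a minimal sketch of this process, assuming gensim 4.x; the toy corpora `corpus_a` and `corpus_b`, the vector size, and the epoch counts are placeholder assumptions, not values from the patent.

```python
# Minimal sketch of the claimed process, assuming gensim 4.x.
# corpus_a / corpus_b are placeholder tokenized corpora, not patent data.
from gensim.models import Word2Vec

corpus_a = [["machine", "learning", "model"], ["vector", "space", "model"]]
corpus_b = [["maschinelles", "learning", "modell"], ["vektor", "space", "modell"]]

# 1. Learn distributed representations of the first-language word space.
model_a = Word2Vec(corpus_a, vector_size=50, min_count=1, sg=1, epochs=50)

# 2. Build the second-language learner and classify its vocabulary into
#    words common to the first-language word space and the remaining words.
model_b = Word2Vec(vector_size=50, min_count=1, sg=1)
model_b.build_vocab(corpus_b)
common = [w for w in model_b.wv.key_to_index if w in model_a.wv.key_to_index]

# 3. Replace the distributed representations of the common words with the
#    corresponding first-language vectors, then adjust the learner's
#    parameters by continuing training on the second-language corpus.
for w in common:
    model_b.wv.vectors[model_b.wv.key_to_index[w]] = model_a.wv[w]
model_b.train(corpus_b, total_examples=model_b.corpus_count, epochs=50)
```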

Description

When the C output vectors differ from the predefined prediction vectors, the Skip-gram model updates the weights serving as its parameters, first the weight W′_{N×V} between the hidden layer and the output layer and then the weight W_{V×N} between the input layer and the hidden layer, in order to learn the differences between the vectors. The parameters are updated by, for example, back propagation.

A word vector h of the hidden layer, obtained by repeatedly executing this learning, is the distributed representation of the given word (input vector x).
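To make the update order concrete, here is a minimal NumPy sketch of a one-word-context Skip-gram training step using the W_{V×N} and W′_{N×V} notation above; the toy vocabulary, dimensions, word pairs, and learning rate are illustrative assumptions.

```python
# Minimal one-word-context Skip-gram update in NumPy, using the
# W_{V×N} / W′_{N×V} notation above; sizes and data are toy assumptions.
import numpy as np

V, N = 10, 4                                   # vocabulary size V, hidden size N
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((V, N))          # W_{V×N}: input layer -> hidden layer
W_prime = 0.1 * rng.standard_normal((N, V))    # W′_{N×V}: hidden layer -> output layer

pairs = [(0, 1), (1, 0), (2, 3), (3, 2)]       # toy (word, context word) index pairs
lr = 0.1
for _ in range(200):
    for center, context in pairs:
        h = W[center]                          # hidden-layer vector h for one-hot x
        u = h @ W_prime                        # scores over the vocabulary
        y = np.exp(u - u.max()); y /= y.sum()  # softmax output vector
        e = y.copy(); e[context] -= 1.0        # difference from the predicted vector
        grad_h = W_prime @ e                   # gradient reaching the hidden layer
        W_prime -= lr * np.outer(h, e)         # back propagation: update W′ first,
        W[center] -= lr * grad_h               # then update W

# After repeated learning, W[i] is the distributed representation of word i.
```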

A known technique learns distributed representations of words in two different tasks and uses the learned representations to learn a vector mapping between the tasks (see, for example, Madhyastha, Pranava Swaroop, et al., "Mapping Unseen Words to Task-Trained Embedding Spaces"). In this technique, to produce a distributed representation of an unknown word in one task, a distributed representation learned in another task is mapped to it via an objective function.

FIG. 9 is a diagram illustrating an example of vector mapping executed between tasks using distributed representations of words. As illustrated in FIG. 9, given a word space of a mapping-source task and a word space of a mapping-destination task, a mapping function is learned from the distributed representations of word pairs shared between the two tasks. When the word that would form a pair with a mapping-source word does not exist in the mapping destination, or when the mapping-destination word is unknown, a distributed representation of the unknown word is produced from the distributed representation of the mapping-source word and the mapping function.
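As a sketch of such a mapping function, the following learns a linear map between two embedding spaces by least squares from paired words and applies it to an unknown word; the linear least-squares form and all sizes are assumptions for illustration, not necessarily the exact objective of Madhyastha et al.

```python
# Minimal sketch of learning a mapping function between two embedding
# spaces from paired words; the linear least-squares form and all sizes
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_src, d_dst, n_pairs = 8, 6, 100

# Distributed representations of word pairs shared between the two tasks:
# X holds the mapping-source vectors, Y the mapping-destination vectors.
X = rng.standard_normal((n_pairs, d_src))
M_true = rng.standard_normal((d_src, d_dst))
Y = X @ M_true + 0.01 * rng.standard_normal((n_pairs, d_dst))

# Learn the mapping M by minimizing ||X M - Y||^2 over the word pairs.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# For a word unknown in the mapping destination, produce its distributed
# representation from its mapping-source vector and the mapping function.
x_source = rng.standard_normal(d_src)
y_unknown = x_source @ M
```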

Claims

1