
Information processing device, learning method, and storage medium

Patent Number
US11176327B2
Publication Date
2021-11-16
Applicant
FUJITSU LIMITED (Kawasaki, JP)
Inventor
Yuji Mizobuchi
IPC Classification
G06F40/58; G06F40/30; G06F16/00; G06F40/45; G06F40/216; G06F40/284; G06N20/00
Technical Field
word, learning, language, words, parameter, in, section, target, space, vector
Region: Kawasaki

Abstract

A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process includes learning distributed representations of words included in a word space of a first language using a learner for learning the distributed representations; classifying words included in a word space of a second language different from the first language into words common to words included in the word space of the first language and words not common to words included in the word space of the first language; and replacing distributed representations of the common words included in the word space of the second language with distributed representations of the words, corresponding to the common words, in the first language and adjusting a parameter of the learner.
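For concreteness, the claimed sequence (learn first-language representations, classify the second-language vocabulary into common and non-common words, then replace the common words' representations) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name transfer_common_word_vectors, the dict-of-vectors embedding format, and matching common words by shared surface form are all assumptions made here for illustration, and the subsequent adjustment of the learner's parameters is indicated only as a comment.

from typing import Dict
import numpy as np

def transfer_common_word_vectors(
    emb_l1: Dict[str, np.ndarray],  # learned first-language representations
    emb_l2: Dict[str, np.ndarray],  # second-language representations
) -> Dict[str, np.ndarray]:
    # Classify second-language words into those common to the first-language
    # word space and those that are not.
    common = set(emb_l1) & set(emb_l2)
    # Replace the representations of the common words with the corresponding
    # first-language representations.
    for w in common:
        emb_l2[w] = emb_l1[w].copy()
    # The learner's parameters would then be re-adjusted (fine-tuned) with
    # these replaced vectors acting as cross-lingual anchors.
    return emb_l2

# Toy usage with illustrative 3-dimensional vectors; "Tokyo" is the common word.
en = {"juice": np.array([0.1, 0.2, 0.3]), "Tokyo": np.array([0.4, 0.5, 0.6])}
ja = {"Tokyo": np.zeros(3), "ジュース": np.ones(3)}
ja = transfer_common_word_vectors(en, ja)
print(ja["Tokyo"])  # now equal to the first-language vector for "Tokyo"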

Description

Next, as illustrated in FIG. 3B, when the actually calculated output vectors y differ from the predefined predicted vectors, the distributed representation learning section 11 updates the weights serving as parameters, in the order of W′_{N×V} and then W_{V×N}, based on the differences between the output vectors y and the predefined predicted vectors. This update of the parameters is referred to as back propagation, for example. The distributed representation learning section 11 then multiplies the updated weight W_{V×N} by the input vector x to generate a word vector h of the hidden layer, and multiplies the updated weight W′_{N×V} by the word vector h to generate output vectors y of the output layer. For example, the distributed representation learning section 11 executes prediction using the updated W′_{N×V} and W_{V×N}. As a result, the distributed representation learning section 11 predicts that the word preceding the given word is “drink” with a probability of 0.1236 and that the word succeeding the given word is “juice” with a probability of 0.1289. These probabilities are slightly higher than the previously predicted probabilities.
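The forward and backward computation described above can be made concrete with a short numpy sketch, assuming a word2vec-style network with vocabulary size V, hidden size N, a softmax output layer, and a cross-entropy error. The sizes, learning rate, and context-word indices below are illustrative assumptions, not values from the patent.

import numpy as np

rng = np.random.default_rng(0)
V, N, lr = 10, 4, 0.1  # vocabulary size, hidden size, learning rate (illustrative)

W = rng.normal(scale=0.1, size=(V, N))        # W_{V×N}: input-to-hidden weights
W_prime = rng.normal(scale=0.1, size=(N, V))  # W′_{N×V}: hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.zeros(V)
x[3] = 1.0        # one-hot input vector for the given word
targets = [2, 7]  # hypothetical indices of the context words ("drink", "juice")

# Forward pass: word vector h of the hidden layer, then output vectors y.
h = W.T @ x                  # shape (N,)
y = softmax(W_prime.T @ h)   # shape (V,)

# Back propagation: accumulate the differences between the output vectors
# and the predicted (one-hot) vectors over both context positions.
e = np.zeros(V)
for t in targets:
    err = y.copy()
    err[t] -= 1.0            # output minus predicted vector
    e += err
grad_h = W_prime @ e              # error propagated back to the hidden layer
W_prime -= lr * np.outer(h, e)    # update W′_{N×V} first ...
W -= lr * np.outer(x, grad_h)     # ... then W_{V×N}, matching the stated order

# Re-running prediction with the updated weights should yield slightly higher
# probabilities for the context words, as in the passage above.
h_new = W.T @ x
y_new = softmax(W_prime.T @ h_new)
print(y[targets], "->", y_new[targets])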

Claims

1