
Information processing device, learning method, and storage medium

Patent number
US11176327B2
Publication date
2021-11-16
Applicant
FUJITSU LIMITED (Kawasaki, JP)
Inventor
Yuji Mizobuchi
IPC classification
G06F40/58; G06F40/30; G06F16/00; G06F40/45; G06F40/216; G06F40/284; G06N20/00
Technical field
word, learning, language, words, parameter, in, section, target, space, vector
Region: Kawasaki

Abstract

A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process includes learning distributed representations of words included in a word space of a first language using a learner for learning the distributed representations; classifying words included in a word space of a second language different from the first language into words common to words included in the word space of the first language and words not common to words included in the word space of the first language; and replacing distributed representations of the common words included in the word space of the second language with distributed representations of the words, corresponding to the common words, in the first language and adjusting a parameter of the learner.
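As a rough illustration of the claimed process, the following Python sketch uses gensim's Word2Vec as the learner. This is a minimal sketch under stated assumptions, not the patent's implementation: the corpora corpus_l1 and corpus_l2 (lists of tokenized sentences) are hypothetical, and treating the intersection of the two vocabularies as the "common words" is an assumed reading, not the patent's definition.

    # A minimal sketch, assuming gensim's Word2Vec API; corpus_l1 and
    # corpus_l2 are hypothetical tokenized corpora for the two languages.
    from gensim.models import Word2Vec

    # Learn distributed representations of words in the first language.
    model_l1 = Word2Vec(sentences=corpus_l1, vector_size=100, sg=1)  # sg=1: Skip-gram
    # Learn an initial word space for the second language.
    model_l2 = Word2Vec(sentences=corpus_l2, vector_size=100, sg=1)

    # Classify second-language words into common / not common
    # (assumption: common = present in both vocabularies).
    common = set(model_l1.wv.key_to_index) & set(model_l2.wv.key_to_index)

    # Replace the common words' distributed representations with the
    # corresponding first-language vectors, then continue training so
    # the learner's parameters adjust around the replaced vectors.
    for word in common:
        model_l2.wv.vectors[model_l2.wv.key_to_index[word]] = model_l1.wv[word]
    model_l2.train(corpus_l2, total_examples=model_l2.corpus_count,
                   epochs=model_l2.epochs)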

Description

In the output layer, V-dimensional output vectors y_c are generated for a number C of panels that are not illustrated. C is the number of predetermined panels, and y_c denotes the output vector corresponding to a word preceding or succeeding the given word. W′_{N×V} is a weight between the hidden layer and the output layer and is expressed by an N×V matrix. As initial states of the elements of W′_{N×V}, random values are given, for example.
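The shapes involved can be written down concretely. The numpy sketch below (the sizes V, N, and C are illustrative, not taken from the patent) generates the C output vectors from an N-dimensional hidden-layer vector h. Note that in the plain Skip-gram model the scores u = h·W′ are shared across panels, so each y_c is the same distribution over the V vocabulary words; the panels differ only in their training targets.

    import numpy as np

    V, N, C = 10000, 100, 4                        # illustrative sizes
    rng = np.random.default_rng(0)
    W_prime = rng.normal(scale=0.01, size=(N, V))  # W'_{N x V}, random initial state

    def output_vectors(h):
        """Map an N-dimensional hidden vector h to C output vectors over V words."""
        u = h @ W_prime              # (N,) @ (N, V) -> (V,) scores
        y = np.exp(u - u.max())      # numerically stable softmax
        y /= y.sum()
        return [y] * C               # y_1, ..., y_C share one distribution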

As illustrated in FIG. 3A, the distributed representation learning section 11 uses the Skip-gram model in the neural network composed of the input layer, the hidden layer, and the output layer to learn a distributed representation of the given word. For example, it is assumed that the input vector x is a one-hot vector in which the element corresponding to the given word "apple" included in the reference language learning corpus 21 is 1 and the other elements are 0. When the distributed representation learning section 11 receives the input vector x corresponding to the given word "apple", the distributed representation learning section 11 multiplies the weight W_{V×N} by the input vector x to generate a word vector h of the hidden layer. Then, the distributed representation learning section 11 multiplies the weight W′_{N×V} by the word vector h to generate output vectors y of the output layer. For example, the distributed representation learning section 11 executes prediction using W_{V×N} in the initial state. As a result, the distributed representation learning section 11 predicts that a word preceding the given word is "drink" with a probability of 0.1230 and that a word succeeding the given word is "juice" with a probability of 0.1277.
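Continuing the numpy sketch above, the one-hot forward pass described here looks like the following. The toy vocabulary indices are hypothetical, and with randomly initialized weights the resulting probabilities are arbitrary, so the 0.1230 and 0.1277 values from the patent's example are not reproduced.

    W = rng.normal(scale=0.01, size=(V, N))       # W_{V x N}, input-to-hidden weights
    vocab = {"apple": 0, "drink": 1, "juice": 2}  # hypothetical word indices

    x = np.zeros(V)
    x[vocab["apple"]] = 1.0   # one-hot vector for the given word "apple"

    h = x @ W                 # word vector of the hidden layer: the "apple" row of W
    y = output_vectors(h)     # C output vectors over the vocabulary

    # Probability that the preceding word is "drink" and that the
    # succeeding word is "juice", under the random initial weights.
    print(y[0][vocab["drink"]], y[-1][vocab["juice"]])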

權(quán)利要求

1