What is claimed is:

1. A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process comprising:
causing a learner to learn distributed representations of words included in a word space of a first language;
classifying words included in a word space of a second language different from the first language into common words common to words included in the word space of the first language and uncommon words not common to words included in the word space of the first language, language resources of the word space of the first language being larger than the language resources of the word space of the second language;
replacing distributed representations of the common words included in the word space of the second language with distributed representations of the words included in the word space of the first language corresponding to the common words;
adjusting a weight for obtaining an output result based on the replaced distributed representations to be used in the learner;
inputting the adjusted weight to the learner; and
causing the learner to learn distributed representations of the uncommon words among the words included in the word space of the second language.

2. The storage medium according to claim 1, the process further comprising:
outputting the distributed representations of the common words and the distributed representations of the uncommon words as a result of the learning.

3. The storage medium according to claim 1,
wherein the learner learns the distributed representations of the words included in the word space of the first language and the distributed representations of the uncommon words included in the word space of the second language using a Skip-gram model of Word2Vec.

4. The storage medium according to claim 3, wherein the replacing includes
replacing a hidden layer of the Skip-gram model with the distributed representations of the words included in the word space of the first language corresponding to the common words.

5. The storage medium according to claim 4,
wherein the adjusting includes adjusting the weight that is a parameter between the hidden layer and an output layer of the Skip-gram model.

6. The storage medium according to claim 1, wherein the classifying includes:
executing morphological analysis on a corpus for learning the second language,
outputting words indicated by results of the morphological analysis,
using the words indicated by the results of the morphological analysis and an alignment dictionary to acquire correspondence relationships between words in the second language and words in the first language, and
classifying the words included in the word space of the second language based on the acquired correspondence relationships.

7. An information processing device comprising:
a memory; and
a processor coupled to the memory and configured to:
cause a learner to learn distributed representations of words included in a word space of a first language;
classify words included in a word space of a second language different from the first language into common words common to words included in the word space of the first language and uncommon words not common to words included in the word space of the first language, language resources of the word space of the first language being larger than the language resources of the word space of the second language;
replace distributed representations of the common words included in the word space of the second language with distributed representations of the words included in the word space of the first language corresponding to the common words;
adjust a weight for obtaining an output result based on the replaced distributed representations to be used in the learner;
input the adjusted weight to the learner; and
cause the learner to learn distributed representations of the uncommon words among the words included in the word space of the second language.

8. The information processing device according to claim 7, wherein the processor is configured to:
output the distributed representations of the common words and the distributed representations of the uncommon words as a result of learning the distributed representations of the uncommon words.

9. The information processing device according to claim 7,
wherein the learner learns the distributed representations of the words included in the word space of the first language and the distributed representations of the uncommon words included in the word space of the second language using a Skip-gram model of Word2Vec.

10. A learning method to be executed by a computer, the learning method comprising:
causing a learner to learn distributed representations of words included in a word space of a first language;
classifying words included in a word space of a second language different from the first language into common words common to words included in the word space of the first language and uncommon words not common to words included in the word space of the first language, language resources of the word space of the first language being larger than the language resources of the word space of the second language;
replacing distributed representations of the common words included in the word space of the second language with distributed representations of the words included in the word space of the first language corresponding to the common words;
adjusting a weight for obtaining an output result based on the replaced distributed representations to be used in the learner;
inputting the adjusted weight to the learner; and
causing the learner to learn distributed representations of the uncommon words among the words included in the word space of the second language.
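The claimed process can be illustrated with a minimal sketch, not the patent's implementation: train skip-gram embeddings for a high-resource first language, copy those vectors into the hidden-layer rows of a second-language model for the common words (keeping them fixed), and then adjust the output-layer weights while learning only the uncommon words. The toy corpora, the alignment dictionary, and all hyperparameters below are hypothetical.

```python
# Minimal numpy skip-gram sketch of the claimed flow (toy data only).
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding dimensionality

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_skipgram(corpus, vocab, W_in=None, frozen_rows=(), epochs=50, lr=0.05):
    """Skip-gram with one negative sample per pair. Rows of the hidden
    layer (input embeddings) listed in `frozen_rows` are never updated."""
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    if W_in is None:
        W_in = rng.normal(0, 0.1, (V, DIM))   # hidden layer (embeddings)
    W_out = rng.normal(0, 0.1, (V, DIM))      # hidden-to-output weights
    # (center, context) pairs with a window of 1
    pairs = [(idx[s[i]], idx[s[j]])
             for s in corpus for i in range(len(s))
             for j in (i - 1, i + 1) if 0 <= j < len(s)]
    for _ in range(epochs):
        for c, o in pairs:
            neg = int(rng.integers(V))
            for tgt, label in ((o, 1.0), (neg, 0.0)):
                g = sigmoid(W_in[c] @ W_out[tgt]) - label
                grad_in = g * W_out[tgt]
                W_out[tgt] -= lr * g * W_in[c]   # adjust output weights
                if c not in frozen_rows:         # common rows stay fixed
                    W_in[c] -= lr * grad_in
    return W_in, W_out

# Step 1: learn distributed representations for the first language.
corpus_a = [["cat", "drinks", "milk"], ["dog", "drinks", "water"]]
vocab_a = ["cat", "dog", "drinks", "milk", "water"]
emb_a, _ = train_skipgram(corpus_a, vocab_a)

# Step 2: classify second-language words into common / uncommon via a
# hypothetical alignment dictionary, and replace the common rows of the
# second-language hidden layer with the first-language vectors.
alignment = {"katze": "cat", "hund": "dog"}       # common words
vocab_b = ["katze", "hund", "trinkt", "milch"]    # "trinkt", "milch" uncommon
W_in_b = rng.normal(0, 0.1, (len(vocab_b), DIM))
frozen = set()
for i, w in enumerate(vocab_b):
    if w in alignment:
        W_in_b[i] = emb_a[vocab_a.index(alignment[w])]
        frozen.add(i)

# Step 3: adjust the weights and learn only the uncommon words.
corpus_b = [["katze", "trinkt", "milch"]]
emb_b, _ = train_skipgram(corpus_b, vocab_b, W_in=W_in_b, frozen_rows=frozen)

# The common rows remain equal to the first-language embeddings,
# so both languages share one coordinate space for those words.
assert np.allclose(emb_b[0], emb_a[vocab_a.index("cat")])
```

Freezing the replaced rows while still updating the output-layer weights mirrors claims 4 and 5: the hidden layer carries the transferred representations, and the adjustable parameter sits between the hidden layer and the output layer.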