In some embodiments, the sentence graph 512 may be represented as (V, ε), where V is a node set and ε is an edge set. Each node in the node set V represents a word of the sentence 402. In some embodiments, each node is represented by a word embedding of the corresponding word. Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) in which words or phrases from a vocabulary are mapped to vectors of real numbers. Word embedding is used by NLP systems as one mechanism for reasoning over natural language sentences. Each edge in the edge set ε is a tuple e_ij = (v_i, v_j, r_ij), where r_ij is a label for a relationship between the words represented by nodes v_i and v_j, and i and j each range from 1 to N, with N representing the number of words in the sentence 402.
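The structure described above can be sketched as a minimal data type: a node list holding one embedding vector per word, and an edge list of (i, j, label) tuples. The words, three-dimensional "embedding" vectors, and dependency-style relation labels below are illustrative placeholders only, not values produced by the system of sentence graph 512.

```python
from dataclasses import dataclass, field

@dataclass
class SentenceGraph:
    # Node set V: one (word, embedding) pair per word in the sentence.
    nodes: list
    # Edge set: tuples e_ij = (i, j, r_ij), where r_ij labels the
    # relationship between the words at node indices i and j.
    edges: list = field(default_factory=list)

    def add_edge(self, i, j, label):
        n = len(self.nodes)
        # Edge endpoints must refer to existing nodes.
        assert 0 <= i < n and 0 <= j < n
        self.edges.append((i, j, label))

# Toy 3-dimensional vectors stand in for real word embeddings.
words = ["the", "cat", "sleeps"]
vectors = [[0.1, 0.0, 0.2], [0.7, 0.3, 0.1], [0.2, 0.9, 0.4]]
g = SentenceGraph(nodes=list(zip(words, vectors)))
g.add_edge(1, 0, "det")    # "cat" -> "the": determiner relation
g.add_edge(2, 1, "nsubj")  # "sleeps" -> "cat": nominal subject
print(len(g.nodes), len(g.edges))  # 3 2
```

Here each edge stores node indices rather than the embedding vectors themselves, so the relation labels remain readable while the vectors stay attached to the nodes in V.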