Modelling word information in Freebase is difficult, because a Freebase topic represents the 'meaning', not the word. Even though a topic has aliases, there are things that can be said about the word "dog" that are not true of the French word "chien", even when both label the same topic.
So how would you relate the WordNet word "water" to the English display name of the Freebase topic /en/water?
One of the things that's prevented anyone from tackling this yet is the issue of polysemy. WordNet handles this by having a separate entity for each sense, which might be very cumbersome in Freebase. Each sense could be a CVT, but if you're linking synonyms, you have to link two CVTs to each other (that is, a sense of one word is only synonymous with a sense of another word), which is not easily done in the client (although it can be done via MQL).
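To make the CVT-to-CVT linking concrete, here is a sketch of what such an MQL write might look like, expressed as the JSON-style query dict MQL uses. The /wordnet/sense type and its properties are hypothetical (no such schema shipped in Freebase); the point is only the shape of a write that connects one sense CVT to another rather than to a plain topic.

```python
import json

# Hypothetical MQL write: assert a synonymy link between one sense of
# "water" and one sense of the French word "eau". Both ends of the
# link are sense CVTs, not topics.
query = {
    "type": "/wordnet/sense",          # hypothetical CVT type
    "word": "water",
    "sense_number": 1,
    "synonym_of": {
        "type": "/wordnet/sense",      # the link lands on another CVT
        "word": "eau",
        "sense_number": 1,
        "connect": "insert",           # MQL write directive
    },
}

print(json.dumps(query, indent=2))
```

This is exactly the step the client UI couldn't express: both endpoints of the synonym link are CVTs, so the write has to be composed by hand in MQL.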
So maybe the only way to map a topic's words correctly is to give each word its own topic, essentially multiplying every topic we have by the number of languages we decide to map.
Talking of senses opens another issue -- whose breakdown do you use? Different dictionaries break down senses differently (lumping vs. splitting is an age-old issue in lexicography). Would there be a need to represent sense breakdowns by multiple authorities?
Right now it is not possible to model the origin of names, etc., in Freebase. There is some schema work using /common/resource in the WordNet import...
Additional research and software regarding NLP: Natural Language Processing