Presented by: Ken Kahn, University of Oxford

Word embedding is a natural language processing technique whereby words are embedded in a high-dimensional space. Word embeddings are used in sentiment analysis, entity detection, recommender systems, paraphrasing, text summarization, question answering, translation, and historical and geographic linguistics. We describe a Snap! library that contains 20,000 word embeddings in 15 languages. Using a block that reports a list of 300 numbers for any of the known words, one can create programs that search for similar words, find words that are the average of other words, explore cultural biases, and solve word analogy problems. These programs can work in a single language, or rely upon the alignment of the word embedding spaces of different languages to perform rough translations. To compute with word embeddings one needs to perform vector arithmetic. This can be accomplished by providing vector arithmetic blocks; more advanced users can instead take advantage of Snap!'s support for higher-order functions, using list-mapping blocks to perform the vector operations.
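The kinds of operations the abstract mentions can be sketched in a few lines of ordinary code. The following is an illustrative Python sketch, not the Snap! library itself: it uses tiny hand-made 4-dimensional vectors in place of the real 300-dimensional embeddings, and the word list and values are invented for the example.

```python
import math

# Toy embedding table; the real library holds ~20,000 words of 300 numbers each.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "man":   [0.1, 0.9, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.9, 0.1],
}

def add(u, v):
    # Element-wise vector addition (what a "map + over lists" block does in Snap!).
    return [a + b for a, b in zip(u, v)]

def subtract(u, v):
    # Element-wise vector subtraction.
    return [a - b for a, b in zip(u, v)]

def cosine_similarity(u, v):
    # Standard similarity measure for embeddings: cosine of the angle between vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

def closest_word(vector, exclude=()):
    # The word whose embedding is most similar to the given vector.
    candidates = [w for w in embeddings if w not in exclude]
    return max(candidates, key=lambda w: cosine_similarity(vector, embeddings[w]))

# Solving the classic analogy "king - man + woman ≈ ?" by vector arithmetic.
target = add(subtract(embeddings["king"], embeddings["man"]), embeddings["woman"])
print(closest_word(target, exclude=("king", "man", "woman")))  # → queen
```

The same arithmetic, applied to the average of several word vectors or to vectors from two aligned embedding spaces, gives the averaging and rough-translation programs the abstract describes.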

20 min
Snap!Con 2019