Co-occurrence based Word Embeddings

In co-occurrence based methods, word embeddings are typically learned from an unlabeled corpus in an unsupervised manner: the model is optimized for an auxiliary prediction task, and the first layer of learned word features is then extracted and used as the embedding. The main assumption behind these methods is that words that co-occur are related to each other (the distributional hypothesis). Under this assumption, the models learn a dense vector representation for each word.
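
To make the co-occurrence idea concrete, here is a minimal sketch that builds a word-word co-occurrence matrix from a toy corpus; the corpus and window size are illustrative assumptions, not part of any particular method:

```python
from collections import Counter
import numpy as np

# Toy corpus; each sentence is a list of tokens (illustrative assumption).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
window = 2  # symmetric context window size (assumed)

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count how often each ordered pair of words appears within the window.
counts = Counter()
for sent in corpus:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(idx[w], idx[sent[j]])] += 1

M = np.zeros((len(vocab), len(vocab)))
for (i, j), c in counts.items():
    M[i, j] = c

# Each row of M is a sparse co-occurrence profile; methods like GloVe
# effectively learn a low-dimensional, dense factorization of such counts.
print(vocab)
print(M)
```

Several widely used methods build dense embeddings on top of this idea: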

  1. Word2Vec Embedding
  2. GloVe Embedding
  3. FastText Embedding
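
As an example of how such a model is trained in practice, the sketch below uses the gensim library's Word2Vec implementation (assuming gensim is installed); the corpus and hyperparameters are illustrative, not prescribed values:

```python
from gensim.models import Word2Vec

# Toy training corpus: a list of tokenized sentences (illustrative).
sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

# sg=1 selects the skip-gram objective; sg=0 would use CBOW.
model = Word2Vec(
    sentences,
    vector_size=50,  # dimensionality of the dense word vectors
    window=2,        # context window size
    min_count=1,     # keep every word in this tiny corpus
    sg=1,
    epochs=50,
)

print(model.wv["cat"])               # the dense vector learned for "cat"
print(model.wv.most_similar("cat"))  # nearest neighbors by cosine similarity
```

FastText can be trained through the same gensim interface (gensim.models.FastText) and additionally learns subword n-gram vectors, which lets it produce embeddings for out-of-vocabulary words.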

Co-occurrence based word embeddings can capture semantic meaning but not contextual meaning: each word is assigned a single vector regardless of the sentence it appears in, so, for example, "bank" gets the same embedding whether it refers to a riverbank or a financial institution.

