Resources
vSTS: Visual Semantic Textual Similarity
The vSTS dataset aims to become a standard benchmark for testing the contribution of visual information when evaluating sentence similarity and the quality of multimodal representations, making it possible to test the complementarity of visual and textual information for improved language understanding.
- website: https://oierldl.github.io/vsts
- github: https://github.com/oierldl/vsts
- paper: Evaluating Multimodal Representations on Visual Semantic Textual Similarity
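As a rough illustration of the kind of evaluation vSTS supports, the sketch below shows the standard STS protocol: a system scores each sentence pair (e.g. by cosine similarity of sentence embeddings, which could come from any textual or multimodal encoder), and the benchmark reports the Pearson correlation against human gold similarity ratings. The embeddings and gold scores here are toy values, not taken from vSTS.

```python
# Sketch of STS-style evaluation (toy data, not the actual vSTS pipeline).
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pearson(xs, ys):
    # Pearson correlation between system scores and gold ratings.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy sentence-pair embeddings (e.g. from some hypothetical encoder).
pairs = [([1.0, 0.2], [0.9, 0.3]),
         ([0.1, 1.0], [1.0, 0.0]),
         ([0.5, 0.5], [0.4, 0.6])]
gold = [4.8, 0.7, 4.2]  # human similarity ratings on a 0-5 scale

pred = [cosine(u, v) for u, v in pairs]
print(f"Pearson r = {pearson(pred, gold):.3f}")
```

Correlation with gold ratings, rather than raw accuracy, is what allows the benchmark to compare textual and multimodal systems on the same footing.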
Older resources
- Sensecorpus: a corpus of web examples for all nouns in WordNet 1.6. The senses can easily be mapped to other WordNet versions here. (A smaller subset, used in our EMNLP 2004 paper, is available here.)
- Topic signatures for all nominal senses in WordNet
- Sense Clustering data for WN 1.6 (RANLP 2003 paper)