1st RepEval@ACL 2016: Berlin, Germany
- Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, RepEval@ACL 2016, Berlin, Germany, August 2016. Association for Computational Linguistics 2016, ISBN 978-1-945626-14-2

- Billy Chiu, Anna Korhonen, Sampo Pyysalo: Intrinsic Evaluation of Word Vectors Fails to Predict Extrinsic Performance. 1-6
- Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, David J. Weir: A critique of word similarity as a method for evaluating distributional semantic models. 7-12
- Tal Linzen: Issues in evaluating semantic spaces using word analogies. 13-18
- Neha Nayak, Gabor Angeli, Christopher D. Manning: Evaluating Word Embeddings Using a Representative Suite of Practical Tasks. 19-23
- Nasrin Mostafazadeh, Lucy Vanderwende, Wen-tau Yih, Pushmeet Kohli, James F. Allen: Story Cloze Evaluator: Vector Space Representation Evaluation by Predicting What Happens Next. 24-29
- Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, Chris Dyer: Problems With Evaluation of Word Embeddings Using Word Similarity Tasks. 30-35
- Anna Gladkova, Aleksandr Drozd: Intrinsic Evaluations of Word Embeddings: What Can We Do Better? 36-42
- José Camacho-Collados, Roberto Navigli: Find the word that does not belong: A Framework for an Intrinsic Evaluation of Word Vector Representations. 43-50
- Alicia Krebs, Denis Paperno: Capturing Discriminative Attributes in a Distributional Space: Task Proposal. 51-54
- Farhana Ferdousi Liza, Marek Grzes: An Improved Crowdsourcing Based Evaluation Technique for Word Embedding Methods. 55-61
- Sahar Ghannay, Yannick Estève, Nathalie Camelin, Paul Deléglise: Evaluation of acoustic word embeddings. 62-66
- Arne Köhn: Evaluating Embeddings using Syntax-based Classification Tasks as a Proxy for Parser Performance. 67-71
- Allyson Ettinger, Tal Linzen: Evaluating vector space models using human semantic priming results. 72-77
- Judit Ács, András Kornai: Evaluating embeddings on dictionary-based similarity. 78-82
- Gábor Borbély, Márton Makrai, Dávid Márk Nemeskey, András Kornai: Evaluating multi-sense embeddings for semantic resolution monolingually and in word translation. 83-89
- Ali Seyed: Subsumption Preservation as a Comparative Measure for Evaluating Sense-Directed Embeddings. 90-93
- Naomi Saphra: Evaluating Informal-Domain Word Representations With UrbanDictionary. 94-98
- Asad Basheer Sayeed, Clayton Greenberg, Vera Demberg: Thematic fit evaluation: an aspect of selectional preferences. 99-105
- Oded Avraham, Yoav Goldberg: Improving Reliability of Word Similarity Evaluation by Redesigning Annotation Task and Performance Measure. 106-110
- Yulia Tsvetkov, Manaal Faruqui, Chris Dyer: Correlation-based Intrinsic Evaluation of Word Vector Representations. 111-115
- Anders Søgaard: Evaluating word embeddings with fMRI and eye-tracking. 116-121
- Iuliana-Elena Parasca, Andreas Lukas Rauter, Jack Roper, Aleksandar Rusinov, Guillaume Bouchard, Sebastian Riedel, Pontus Stenetorp: Defining Words with Words: Beyond the Distributional Hypothesis. 122-126
- Dmitrijs Milajevs, Sascha S. Griffiths: A Proposal for Linguistic Similarity Datasets Based on Commonality Lists. 127-133
- Allyson Ettinger, Ahmed Elgohary, Philip Resnik: Probing for semantic evidence of composition by means of simple classification tasks. 134-139
- Laura Rimell, Eva Maria Vecchi: SLEDDED: A Proposed Dataset of Event Descriptions for Evaluating Phrase Representations. 140-144
- Tal Baumel, Raphael Cohen, Michael Elhadad: Sentence Embedding Evaluation Using Pyramid Annotation. 145-149