
This Page Pertains to the (Now Closed) 2019 Edition of the MRP Shared Task

Motivation & Goals

The 2019 Conference on Computational Natural Language Learning (CoNLL) hosts a shared task (or ‘system bake-off’) on Cross-Framework Meaning Representation Parsing (MRP 2019). The goal of the task is to advance data-driven parsing into graph-structured representations of sentence meaning. All things semantic have received heightened attention in recent years, and despite remarkable advances in vector-based (continuous and distributed) encodings of meaning, ‘classic’ (discrete and hierarchically structured) semantic representations will continue to play an important role in ‘making sense’ of natural language. While parsing has long been dominated by tree-structured target representations, there is growing interest in general graphs as more expressive and arguably more adequate target structures for sentence-level analysis beyond surface syntax, in particular for the representation of semantic structure.

For the first time, this task combines formally and linguistically different approaches to meaning representation in graph form in a uniform training and evaluation setup. Participants are invited to develop parsing systems that support five distinct semantic graph frameworks—which all encode core predicate–argument structure, among other things—in the same implementation. Training and evaluation data will be provided for all five frameworks. Participants are asked to design and train a system that predicts sentence-level meaning representations in all frameworks in parallel. Architectures that utilize complementary knowledge sources (e.g. via parameter sharing) are encouraged (though not required). Learning from multiple flavors of meaning representation in tandem has hardly been explored (with notable exceptions, e.g. the parsers of Peng et al., 2017; 2018; Hershcovich et al., 2018; or Stanovsky & Dagan, 2018).
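
To make the parameter-sharing idea concrete, the sketch below shows one way such an architecture could be organized: a single shared sentence encoder feeding one scoring head per framework. It is purely illustrative and not part of the task infrastructure; the class name, dimensions, and the simple bilinear edge scorer (a stand-in for the biaffine scorers common in graph parsing) are all assumptions.

```python
# Illustrative sketch only: one shared encoder, five framework-specific heads.
import torch
import torch.nn as nn

FRAMEWORKS = ["dm", "psd", "eds", "ucca", "amr"]  # the five MRP frameworks

class MultiFrameworkParser(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Shared BiLSTM encoder: the component through which the frameworks
        # can exchange complementary knowledge via parameter sharing.
        self.encoder = nn.LSTM(dim, dim // 2, bidirectional=True,
                               batch_first=True)
        # One bilinear edge scorer per framework (an illustrative
        # simplification of the biaffine scorers used in graph parsing).
        self.heads = nn.ModuleDict({
            fw: nn.Bilinear(dim, dim, 1) for fw in FRAMEWORKS
        })

    def forward(self, tokens: torch.Tensor, framework: str) -> torch.Tensor:
        states, _ = self.encoder(self.embed(tokens))          # (B, T, dim)
        n = states.size(1)
        # Score every (head, dependent) token pair for this framework.
        src = states.unsqueeze(2).expand(-1, -1, n, -1).contiguous()
        tgt = states.unsqueeze(1).expand(-1, n, -1, -1).contiguous()
        return self.heads[framework](src, tgt).squeeze(-1)    # (B, T, T)

# Usage: the same shared parameters serve all five frameworks in parallel.
model = MultiFrameworkParser(vocab_size=10_000)
batch = torch.randint(0, 10_000, (2, 12))       # two 12-token inputs
scores = model(batch, "ucca")                   # edge scores, shape (2, 12, 12)
```

Because the encoder parameters receive training signal from all five frameworks, annotations in one framework can, in principle, improve parsing quality in another.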

The task seeks to reduce framework-specific ‘balkanization’ in the field of meaning representation parsing. Expected outcomes include (a) a unifying formal model over different semantic graph banks, (b) uniform representations and scoring, (c) systematic contrastive evaluation across frameworks, and (d) increased cross-fertilization via transfer and multi-task learning. We hope to engage the combined community of parser developers for graph-structured output representations, including participants in the six prior framework-specific tasks at the Semantic Evaluation (SemEval) exercises between 2014 and 2019. Owing to the scarcity of semantic annotations across frameworks, the shared task is regrettably limited to parsing English for the time being.

Some semi-formal definitions and a brief review of the five semantic graph frameworks represented in the shared task are available on separate pages.  The task will provide training data across frameworks in a uniform JSON serialization, as well as conversion and scoring software. If the task sounds potentially interesting to you, please follow the instructions for prospective participants.
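
For orientation, reading such a serialization could be as simple as the sketch below. It assumes one graph per line (JSON Lines) with top-level fields such as "id", "framework", "input", "nodes", and "edges"; the file name is hypothetical, and the official format specification is authoritative for the definitive layout.

```python
# Minimal sketch of reading the uniform JSON serialization (assumed layout:
# one graph object per line, with "framework", "nodes", and "edges" fields).
import json
from collections import Counter

def read_graphs(path: str):
    """Yield one semantic graph (as a dict) per non-empty line."""
    with open(path, encoding="utf-8") as stream:
        for line in stream:
            if line.strip():
                yield json.loads(line)

nodes, edges = Counter(), Counter()
for graph in read_graphs("training.mrp"):   # hypothetical file name
    fw = graph["framework"]                 # e.g. "dm", "ucca", "amr", ...
    nodes[fw] += len(graph.get("nodes", []))
    edges[fw] += len(graph.get("edges", []))

print("average edges per node, by framework:")
for fw in nodes:
    print(f"  {fw}: {edges[fw] / max(nodes[fw], 1):.2f}")
```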

Tentative Schedule

March 6, 2019
  First Call for Participation
March 25, 2019
  Specification of Uniform Interchange Format
  Availability of Sample Training Graphs
April 15, 2019
  Second Call for Participation
  Initial Release of Training Data
May 20, 2019
  Update of UCCA Training Data (More and Better Graphs)
  Availability of Morpho-Syntactic Companion Trees
June 3, 2019
  Availability of Evaluation Software
  Closing Date for Additional Data Nominations
June 17, 2019
  Cross-Framework Evaluation Metric
July 8–25, 2019
  Evaluation Period (Held-Out Data)
August 1, 2019
  Official Evaluation Results
August 9, 2019
  Partial Gold-Standard Evaluation Graphs
  All System Submissions and Scores
  Template for System Description Papers
September 2, 2019
  Submission of System Descriptions
September 16, 2019
  Reviewer Feedback Available
September 30, 2019
  Camera-Ready Manuscripts
November 3–4, 2019
  Presentation and Discussion of Results