==================================================================

EMNLP-IJCNLP 2019 Workshop on Machine Reading for Question Answering

Date / Location: Nov 3 or 4, 2019 / Hong Kong

Website: http://mrqa.github.io

==================================================================

 

**OVERVIEW**

Machine Reading for Question Answering (MRQA) is a dedicated workshop for research on machine reading systems that answer questions by understanding context documents. This year, we seek submissions in two tracks: a research track with a call for papers and a new shared task track.

 

 

**CALL FOR PAPERS**

We seek regular paper submissions on various aspects of machine reading systems for question answering, including but not limited to: accuracy, interpretability, speed, scalability, robustness, dataset creation, dataset analysis, and error analysis.

 

Submissions should be at least 4 and at most 8 pages, not including citations. All submissions will be reviewed in a single track, regardless of length. Please format your papers using the standard EMNLP-IJCNLP 2019 style files. Submission is electronic, via the Softconf START system.

 

We accept submissions on work published or submitted elsewhere. Recently published work should clearly indicate the original venue and will be accepted if the organizers think the work will benefit from exposure to the audience of this workshop. Work published elsewhere will not be included in the workshop proceedings. All other submissions will go through a double-blind review process.

 

Call for Papers: https://mrqa.github.io/cfp

Submission link: https://www.softconf.com/emnlp2019/ws-MRQA/

 

Submission deadline: August 19, 2019

Notification of acceptance: September 16, 2019

Camera-ready deadline: September 30, 2019

 

 

**CALL FOR SHARED TASK SUBMISSIONS**

The 2019 MRQA Shared Task focuses on generalization to new test domains. A truly effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to test examples drawn from different distributions. We have released an official training dataset containing examples from existing QA datasets: SQuAD, NewsQA, TriviaQA, SearchQA, HotpotQA, and NaturalQuestions. Submitted models will be allowed to train on this data and will be tested on several out-of-domain QA datasets. Six of these are known: BioASQ, DROP, DuoRC, RACE, RelationExtraction, and TextbookQA; the rest are hidden.

 

Both the train and test datasets share the same format, and this year we focus on extractive question answering: given a question and a context passage, systems must find the segment of text, or span, in the passage that best answers the question. This format allows us to leverage many existing datasets, and its simplicity lets us focus on out-of-domain generalization rather than other important but orthogonal challenges.
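As a concrete (and purely illustrative) sketch of what extractive QA looks like, the Python snippet below pairs a question with a context passage and represents the answer as a character span of that passage. The field names and offsets here are hypothetical and are not the official data schema; please consult the shared task instructions for the exact format.

```python
# Illustrative extractive QA example; field names are hypothetical,
# not the official shared task schema.
example = {
    "context": "EMNLP-IJCNLP 2019 and its workshops will take place in Hong Kong.",
    "question": "Where will EMNLP-IJCNLP 2019 take place?",
}

# An extractive system answers by selecting a span of the context,
# shown here as (start, end) character offsets with an exclusive end.
start = example["context"].index("Hong Kong")   # 55 in this passage
end = start + len("Hong Kong")                  # 64
answer = example["context"][start:end]
print(answer)  # -> "Hong Kong"
```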

 

Each participant will submit a single QA system trained on the provided training data. No other question answering data may be used for training. We will then privately evaluate each system on the hidden test data. Please visit our website for more details, including released training and development datasets, a baseline model, and instructions on how to participate.
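For illustration only, here is a minimal sketch of how such a private evaluation might score one prediction, assuming a system's output is a mapping from question IDs to answer strings compared against reference answers after light normalization. The function names, normalization steps, and output format are assumptions on our part; the authoritative submission and evaluation procedure is given in the instructions linked below.

```python
def normalize(text: str) -> str:
    # Simplified normalization (lowercase, collapse whitespace); real
    # evaluation scripts typically do more, e.g. strip punctuation/articles.
    return " ".join(text.lower().split())

def exact_match(prediction: str, reference: str) -> bool:
    # Score a single predicted answer string against a single reference.
    return normalize(prediction) == normalize(reference)

# Hypothetical system output: question ID -> predicted answer string.
predictions = {"q1": "Hong Kong"}
references = {"q1": "hong kong"}
print(exact_match(predictions["q1"], references["q1"]))  # -> True
```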

 

Please register as soon as possible (link below) to receive important announcements and help us estimate the number of participants beforehand.

 

Call for Submissions: https://mrqa.github.io/shared

Instructions: https://github.com/mrqa/MRQA-Shared-Task-2019

Registration form: https://forms.gle/wBy5Ph3WWgGPw9dY7

 

Model submission deadline: July 29, 2019

Test results announced: August 12, 2019

Description paper submission deadline: August 30, 2019

Notification of acceptance: September 16, 2019

Camera-ready deadline: September 30, 2019