We have announced our Shared Task AWARD WINNERS!
The DialDoc Shared Task focuses on building goal-oriented information-seeking dialogue systems. In particular, the goal is to teach a dialogue system to identify the most relevant knowledge in the associated document for generating agent responses in natural language. It includes two subtasks for building goal-oriented document-grounded dialogue systems. The first subtask is to predict the grounding in the given document for the next agent response; the second subtask is to generate the agent response in natural language given the context.
Please join our Google Group for important notifications.
Our training data is derived from the Doc2Dial dataset. The dataset contains goal-oriented conversations between an end user and an assistive agent. Each turn is annotated with a dialogue scene, which includes the role, the dialogue act, and the grounding in a document (or a label indicating the turn is irrelevant to the domain documents). The documents come from different domains, such as va and studentaid.
- Subtask 1 - knowledge identification
Goal: identify the grounding knowledge, in the form of a document span, for the next agent turn.
Input: dialogue history, current user utterance, and the associated document.
Output: a text span.
Evaluation: Exact Match and F1 metrics.
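Exact Match and F1 for span prediction are typically computed SQuAD-style, comparing the predicted span against the reference span at the token level. A minimal sketch of that scoring, assuming simple whitespace tokenization (the official evaluation script may additionally normalize punctuation and casing differently):

```python
from collections import Counter

def normalize(text):
    # Lowercase and split on whitespace; the official script
    # may apply further normalization (e.g. stripping punctuation).
    return text.lower().split()

def exact_match(prediction, reference):
    # 1.0 if the normalized spans are identical, else 0.0.
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    # Token-overlap F1 between the predicted and reference spans.
    pred_tokens = normalize(prediction)
    ref_tokens = normalize(reference)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level scores are then the averages of these per-example values over the test set.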
- Subtask 2 - text generation
Goal: generate the next agent response in natural language.
Input: dialogue history and the associated document.
Output: agent utterance.
Evaluation: SacreBLEU and human evaluation.
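For submissions, the SacreBLEU package should be used, since its standardized tokenization and smoothing make scores comparable across systems. To illustrate what the metric measures, here is a rough pure-Python sketch of unsmoothed corpus BLEU (clipped n-gram precisions up to 4-grams, geometric mean, brevity penalty); it is not a substitute for the official scorer:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in the token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    # Accumulate clipped n-gram matches and totals over the whole corpus.
    matches = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            matches[n - 1] += sum((ngrams(h, n) & ngrams(r, n)).values())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # the real metric uses smoothing here
    log_precision = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty: penalize hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_precision)
```

A perfect hypothesis set scores 100; any missing 4-gram overlap drives the unsmoothed score down sharply, which is why human evaluation complements it.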
Awards
For each subtask, we would like to reward the top three participating teams with prizes of $1000, $500, and $200, respectively. While we cannot recognize every team, we earnestly thank everyone who submitted to the leaderboard!
The following are the award recipients:
Subtask 1
- 3rd Prize: RWTH (code)
Call for Participation
To qualify for the competition prizes, each participating team must make a paper submission and complete at least one of the two subtasks.
All accepted submissions will be presented at the workshop.
Please submit a paper describing your models and systems to the technical system track. You may describe the methods for all the tasks you participated in within a single paper submission.
The paper should be up to four pages including references. The format should conform to ACL submission information.
All submissions will be peer-reviewed by at least two reviewers. The reviewing process will be double-blind at the level of the reviewers. Authors are responsible for anonymizing their submissions.
- Leaderboard Submission Final Date: April 26th, 2021 (AoE)
- Workshop Paper Due Date: May 1, 2021 (AoE) → May 6, 2021 (AoE)
- Notification of Acceptance: May 30, 2021 (AoE) → June 1, 2021 (AoE)
- Camera-ready papers due: June 7, 2021 (AoE)
- Workshop Dates: August 5, 2021
Feng (IBM Research)