About

DialDoc Workshop

Welcome to the 2nd DialDoc Workshop, co-located with ACL 2022!

The DialDoc Workshop focuses on document-grounded dialogue and conversational question answering. A vast amount of document content is created every day by human writers to communicate with human readers and share knowledge, ranging from encyclopedias to social benefits. Making this document content accessible to users via conversational systems, and scaling it to various domains, is a meaningful yet challenging task. Significant individual research threads show promise in handling the heterogeneous knowledge embedded in documents for building conversational systems, including (1) unstructured content, such as text passages; (2) semi-structured content, such as tables or lists; and (3) multi-modal content, such as images and videos along with text descriptions. The purpose of this workshop is to invite researchers and practitioners to bring their individual perspectives on document-grounded dialogue and conversational question answering to advance the field in a joint effort.

Topics of interest include, but are not limited to:

  • document-grounded dialogue and conversational machine reading, such as CMUDoG, DREAM, ShARC and Doc2Dial;
  • open domain dialogue and conversational QA, such as ORConvQA, TopiOCQA, Abg-CoQA, QReCC and MultiDoc2Dial;
  • conversational search among domain documents, such as MANtIS;
  • parsing semi-structured document content for dialogue and conversational QA, table reading, such as HybridQA and TAT-QA;
  • summarization for dialogue, query-based summarization, such as AQUAMUSE;
  • evaluations for document-grounded dialogue;
  • interpretability and faithfulness in dialogue modeling;
  • knowledge-grounded multimodal dialogue and question answering.

Calls

Call for Papers

We welcome submissions of original work as long or short papers, as well as non-archival papers. All accepted papers will be presented at the workshop.

We will have the Best Paper Award and Best Student Paper Award, which will be announced during the workshop.


Submission Instructions

Formatting Guidelines:
We accept long (eight pages plus unlimited references) and short (four pages plus unlimited references) papers, which should conform to ARR CFP guidelines.

Non-Archival Submissions:
The accepted papers can opt to be non-archival.


Review Process

All submissions will be peer-reviewed by at least two reviewers. The reviewing process will be two-way anonymized. Authors are responsible for anonymizing the submissions.


Important Dates

  • Workshop Paper Due Date: February 28, 2022 (AoE)
  • Notification of Acceptance: March 26, 2022 (AoE)
  • Camera-ready papers due: April 10, 2022 (AoE)
  • Workshop Dates: May 26, 2022

Special Theme

This workshop focuses on scaling up document-grounded dialogue systems, especially for low-resource domains, e.g., applications in low-resource languages or emerging unforeseen situations such as the COVID-19 pandemic. We seek submissions that tackle the challenge from different aspects, including but not limited to:

  • effective adaptation of pre-trained models;
  • data augmentation and data generation for dialogue models;
  • domain adaptation and meta-learning for knowledge-grounded dialogue.

We also maintain a list of resources on COVID-19 datasets in different languages. Please contact us if you would like to suggest a related dataset.

Data Competition

Shared Task

About

The shared task centers on building open-book goal-oriented dialogue systems, where an agent could provide an answer or ask follow-up questions for clarification or verification. The main goal is to generate grounded agent responses in natural language based on the dialogue context and domain knowledge in the documents.

Please star our leaderboard if you are interested in the task! Please see the Call for Participants section for more details.


Rewards

We would like to offer the top participating teams the following prizes for each of the two leaderboards.

  • 1st place: $1000
  • 2nd place: $500
  • 3rd place: $200

Resources

The training and test data are based on the MultiDoc2Dial dataset. Please check out the MultiDoc2Dial resources for running baselines.


Important Dates

The final date for leaderboard submission is April 3, 2022.

Join our Google Group for important updates! If you have any questions, ask in our Google Group or email us.

Task and Data

The task is to generate a grounded agent response given a dialogue query and domain documents. Specifically,

  • Input: latest user turn, dialogue history and all domain documents.
  • Output: agent response in natural language.
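As a concrete illustration of the task interface, a system can be viewed as a function from the latest user turn, the dialogue history, and a document collection to an agent response grounded in one of the documents. The function name and the trivial retrieval-by-word-overlap heuristic below are illustrative assumptions, not part of any official baseline:

```python
# Illustrative sketch of the shared task interface (NOT the official baseline).
# A system maps (latest user turn, dialogue history, domain documents)
# to a natural-language agent response grounded in the documents.

def respond(user_turn, history, documents):
    """Toy grounded responder: pick the document with the most word
    overlap with the dialogue context, and return its first sentence
    as a stand-in grounded agent response."""
    query = set(user_turn.lower().split())
    for turn in history:
        query |= set(turn.lower().split())
    best_doc = max(documents, key=lambda d: len(query & set(d.lower().split())))
    return best_doc.split(".")[0].strip() + "."

docs = [
    "You must earn at least 40 Social Security credits to qualify for benefits.",
    "SSDI pays benefits to you and certain family members if you are insured.",
]
print(respond("How many credits do I need?", [], docs))
```

A real system would replace both stages with learned components, e.g., a dense retriever over document passages and a generator conditioned on the retrieved grounding.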

The training data includes the training and validation splits from MultiDoc2Dial dataset. Given the low-resource nature of the domains in MultiDoc2Dial, we also encourage the teams to explore the approaches such as data augmentation for the tasks in two ways: (1) utilizing existing public datasets; (2) utilizing automatically generated synthetic data without involving any additional human labeled data on MultiDoc2Dial.

Evaluations

The test data is derived from the MultiDoc2Dial dataset. There are two tasks based on different settings of the test data:

  • MultiDoc2Dial-seen-domain (MDD-SEEN): all dialogues in the test data are grounded in the documents from the same domains as the training data.
  • MultiDoc2Dial-unseen-domain (MDD-UNSEEN): all dialogues in the test data are grounded in the documents from an unseen domain.

The results will be evaluated based on both automatic metrics and human evaluations. For automatic evaluation, check out the evaluation script.

Figure 1: a sample dialogue grounded in multiple documents, including "Social Security Credits", "The Basics about Disability Benefits", and "Access Your Benefit Information Online". The dialogue turns:

  • U1: I need help with SSDI. I heard that it could benefit my relatives too. I am in my 50s.
  • A2: Yes, SSDI pays benefits to you and family members if you are insured.
  • A3: Do you know if you are "insured"?
  • U4: Could you tell me more about it?
  • A5: We measure it in "work credits".
  • U6: How many credits do I need to get the benefit?
  • A7: Since you are over 31 years old, you must have at least 20 credits in the 10-year period …
  • U8: OK. My wife is currently unemployed. I want to know what benefit she gets from me.
  • A9: The qualifying member could receive up to 50% of your benefit.
  • U10: Hmm, I haven't checked my SSA online account for a long time. Could you remind me where to start?
  • A11: Do you plan to apply online? If so, you can follow the link to fill out an application form.

Please check out our Shared Task challenge.

The challenge includes leaderboards for the two task settings, each with two phases: a Dev (TestDev) phase and a Test phase.

You can find more details regarding the phases and submission once your team is registered.

Shared Task Paper Submission Guidelines

We welcome technical paper submissions based on the Shared Task. To qualify for the competition prizes, a participating team will need to complete at least one of the two task settings and make a paper submission.

The paper should be up to four pages with unlimited references. The format should conform to ARR CFP guidelines.


Review Process

All submissions will be peer-reviewed by at least two reviewers. The reviewing process will be two-way anonymized. Authors are responsible for anonymizing the submissions.


Important Dates

  • Leaderboard Submission Final Date: April 3, 2022
  • Technical Paper Due Date: March 27, 2022 (AoE)
  • Notification of Acceptance: April 3, 2022
  • Camera-ready papers due: April 10, 2022 (AoE)
  • Workshop Dates: May 26, 2022

Talks

Invited Speakers

Jeff Dalton

University of Glasgow

Michel Galley

Microsoft Research

Mari Ostendorf

University of Washington

Siva Reddy

MILA/MCQLL/CIFAR

Zhou Yu

Columbia University

Organization

Workshop Organizers

Song Feng

IBM Research

Chengguang Tang

Alibaba DAMO Academy

Zeqiu (Ellen) Wu

University of Washington

Caixia Yuan

Beijing University of Posts and Telecommunications

Program Committee

  • Amanda Buddemeyer (University of Pittsburgh)
  • Asli Celikyilmaz (Microsoft Research)
  • Bowen Yu (Chinese Academy of Sciences)
  • Chen Henry Wu (Carnegie Mellon University)
  • Chulaka Gunasekara (IBM Research AI)
  • Chun Gan (JD Research)
  • Cunxiang Wang (Westlake University)
  • Danish Contractor (IBM Research)
  • Dian Yu (Tencent)
  • Diane Litman (University of Pittsburgh)
  • Ehud Reiter (University of Aberdeen)
  • Elizabeth Clark (University of Washington)
  • Fanghua Ye (University College London)
  • Tao Feng (Monash University)
  • Guanyi Chen (Utrecht University)
  • Hanjie Chen (University of Virginia)
  • Hao Zhou (Tencent)
  • Haotian Cui (Toronto University)
  • Haochen Liu (Michigan State University)
  • Houyu Zhang (Amazon)
  • Ioannis Konstas (Heriot-Watt University)
  • Jingjing Xu (Peking University)
  • Jia-Chen Gu (USTC)
  • Jinfeng Xiao (UIUC)
  • Jian Wang (The Hong Kong Polytechnic University)
  • Jingyang Li (Alibaba DAMO Academy)
  • Jiwei Li (SHANNON.AI)
  • Jun Xu (Baidu)
  • Kai Song (ByteDance)
  • Ke Shi (Tencent)
  • Kun Qian (Columbia University)
  • Kaixuan Zhang (Northwestern University)
  • Libo Qin (MLNLP)
  • Michael Johnston (Interactions)
  • Meng Qu (MILA)
  • Minjoon Seo (KAIST)
  • Pei Ke (Tsinghua University)
  • Peng Qi (Stanford University)
  • Ravneet Singh (University of Pittsburgh)
  • Ryuichi Takanobu (Tsinghua University)
  • Rongxing Zhu (The University of Melbourne)
  • Seokhwan Kim (Amazon Alexa AI)
  • Shehzaad Dhuliawala (Microsoft Research Montreal)
  • Srinivas Bangalore (Interactions)
  • Zejiang Shen (AllenAI)
  • Vaibhav Adlakha (MCGill and MILA)
  • Wanjun Zhong (MSRA)
  • Xi Chen (Tencent)
  • Yifan Gao (The Chinese University of Hong Kong)
  • Yekun Chai (Baidu)
  • Yinhe Zheng (Alibaba DAMO Academy)
  • Yiwei Jiang (Ghent University)
  • Yajing Sun (Chinese Academy of Sciences)
  • Yunqi Qiu (Chinese Academy of Sciences)
  • Yosi Mass (IBM Research)
  • Yutao Zhu (University of Montreal)
  • Zheng Zhang (Tsinghua University)
  • Zhenyu Zhang (Chinese Academy of Sciences)
  • Zhenzhong Lan (Westlake University)
  • Zhixing Tian (JD)

Contact us

Join and post at our Google Group!
Email the organizers at dialdoc2022@googlegroups.com.