About

DialDoc Workshop

Welcome to the 2nd DialDoc Workshop, co-located with ACL 2022!

The DialDoc Workshop focuses on document-grounded dialogue and conversational question answering. A vast amount of document content is created every day by human writers to share knowledge with human readers, ranging from encyclopedias to descriptions of social benefits. Making this content accessible to users via conversational systems, and scaling it to various domains, is a meaningful yet challenging task. Several significant research threads show promise in handling the heterogeneous knowledge embedded in documents for building conversational systems, including (1) unstructured content, such as text passages; (2) semi-structured content, such as tables or lists; and (3) multi-modal content, such as images and videos along with text descriptions. The purpose of this workshop is to invite researchers and practitioners to bring their individual perspectives on document-grounded dialogue and conversational question answering and to advance the field in a joint effort.

Topics of interest include, but are not limited to:

  • document-grounded dialogue and conversational machine reading, such as CMUDoG, DREAM, ShARC and Doc2Dial;
  • open domain dialogue and conversational QA, such as ORConvQA, TopiOCQA, Abg-CoQA, QReCC and MultiDoc2Dial;
  • conversational search among domain documents, such as MANtIS;
  • parsing semi-structured document content for dialogue and conversational QA, table reading, such as HybridQA and TAT-QA;
  • summarization for dialogue, query-based summarization, such as AQUAMUSE;
  • knowledge-grounded dialogue generation;
  • evaluations for document-grounded dialogue;
  • interpretability and faithfulness in dialogue modeling;
  • knowledge-grounded multimodal dialogue and question answering.

Workshop Schedule

Program

A PDF version of all the papers can be found here.

May 26, 2022 - Dublin local time (GMT+1)

  • 09:00 - 09:05 | Opening Remarks
  • 09:05 - 09:40 | Invited Talk I by Siva Reddy - How Grounded is Document-grounded Conversational AI?
  • 09:40 - 10:25 | Oral Presentation I
    • Conversation- and Tree-Structure Losses for Dialogue Disentanglement
      Tianda Li, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu
    • Construction of Hierarchical Structured Knowledge-based Recommendation Dialogue Dataset and Dialogue System
      Takashi Kodama, Ribeka Tanaka, Sadao Kurohashi
    • Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters
      Yan Xu, Etsuko Ishii, Samuel Cahyawijaya, Zihan Liu, Genta Indra Winata, Andrea Madotto, Dan SU, Pascale Fung
  • 10:25 - 10:40 | Coffee Break
  • 10:40 - 11:15 | Invited Talk II by Jeff Dalton - Knowledge-Grounded Conversation Search and Understanding: Current Progress and Future Directions
  • 11:15 - 11:55 | Paper Lightning Talk
    1. TRUE: Re-evaluating Factual Consistency Evaluation
      Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, Yossi Matias
    2. Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System
      Yuya Nakano, Seiya Kawano, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura
    3. Parameter-Efficient Abstractive Question Answering over Tables or Text
      Vaishali Pal, Evangelos Kanoulas, Maarten de Rijke
    4. Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval
      Yosi Mass, Doron Cohen, Asaf Yehudai, David Konopnicki
    5. Graph-combined Coreference Resolution Methods on Conversational Machine Reading Comprehension with Pre-trained Language Model
      Zhaodong Wang, Kazunori Komatani
    6. MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization
      Xiachong Feng, Xiaocheng Feng, Bing Qin
    7. UniDS: A Unified Dialogue System for Chit-Chat and Task-oriented Dialogues
      Xinyan Zhao, Bin He, Yasheng Wang, Yitong Li, Fei Mi, Yajiao LIU, Xin Jiang, Qun Liu, Huanhuan Chen
    8. G4: Grounding-guided Goal-oriented Dialogues Generation with Multiple Documents
      Shiwei Zhang, Yiyang Du, Guanzhong Liu, Zhao Yan, Yunbo Cao
  • 11:55 - 12:55 | Poster Session
  • 12:55 - 14:00 | Lunch Break
  • 14:00 - 14:35 | Invited Talk III by Zhou Yu - Will Big Models Solve Dialog Systems?
  • 14:35 - 15:05 | Oral Presentation II
    • Low-Resource Adaptation of Open-Domain Generative Chatbots
      Greyson Gerhard-Young, Raviteja Anantha, Srinivas Chappidi, Bjorn Hoffmeister
    • Task2Dial: A Novel Task and Dataset for Commonsense-enhanced Task-based Dialogue Grounded in Documents
      Carl Strathearn, Dimitra Gkatzia
  • 15:05 - 15:20 | Coffee Break
  • 15:20 - 15:35 | Shared Task Results
    • presentation by Team CPII-NLP
  • 15:35 - 15:50 | Shared Task Prizes and Best Paper Awards presented by Luis Lastras, Director, IBM Research
  • 15:50 - 16:25 | Invited Talk IV by Michel Galley - Interactive Document Generation
  • 16:25 - 17:00 | Invited Talk V by Mari Ostendorf - Understanding Conversation Context is Central to Conversational AI
  • 17:00 - 17:05 | Closing Remarks

Calls

Call for Papers

We welcome submissions of original work as long or short papers, as well as non-archival papers. We also accept paper submissions via ARR. All accepted papers will be presented at the workshop.

We will present a Best Paper Award and a Best Student Paper Award, which will be announced during the workshop.


Submission Instructions

Formatting Guidelines:
We accept long (eight pages plus unlimited references) and short (four pages plus unlimited references) papers, which should conform to ARR CFP guidelines.

Non-Archival Submissions:
Accepted papers can opt to be non-archival.

Submit your paper:

  • To submit a paper directly to our workshop, use the "ACL 2022 Workshop DialDoc Submission" button on the submission site.
  • To submit a paper with ARR reviews, use the "ACL 2022 Workshop DialDoc Commitment Submission" button on the submission site.



Review Process

All submissions will be peer-reviewed by at least two reviewers. The reviewing process will be two-way anonymized. Authors are responsible for anonymizing the submissions.


Important Dates

  • For regular workshop paper submissions (regular/non-archival long/short tracks)
    • Paper Due Date: March 7, 2022 (AoE) (extended from February 28, 2022)
    • Notification of Acceptance: March 26, 2022
  • For paper submissions with ARR reviews
    • Paper Due Date: March 24, 2022 (AoE) (extended from February 28, 2022)
    • Notification of Acceptance: March 26, 2022
  • For technical paper submissions (Shared Task track)
    • Paper Due Date: March 27, 2022 (AoE)
    • Notification of Acceptance: April 3, 2022
  • For all
    • Camera-ready Paper Due Date: April 10, 2022 (AoE)
    • Workshop Date: May 26, 2022

Special Theme

This workshop's special theme focuses on scaling up document-grounded dialogue systems, especially for low-resource domains, e.g., applications in low-resource languages or emerging, unforeseen situations such as the COVID-19 pandemic. We seek submissions that tackle this challenge from different angles, including but not limited to:

  • effective adaptation of pre-trained models;
  • data augmentation and data generation for dialogue models;
  • domain adaptation and meta-learning for knowledge-grounded dialogue.

We also maintain a list of resources on COVID-19 datasets in different languages. Please contact us if you would like to suggest a related dataset.

Data Competition

Shared Task

Congratulations to the winning teams!

Rewards - Task on SEEN

1st Prize CPII-NLP
2nd Prize zsw_dyy_lgz
3rd Prize UGent-T2K

Rewards - Task on UNSEEN

1st Prize CMU_QA
2nd Prize CPII-NLP
3rd Prize UGent-T2K



About

The shared task centers on building open-book goal-oriented dialogue systems, where an agent could provide an answer or ask follow-up questions for clarification or verification. The main goal is to generate grounded agent responses in natural language based on the dialogue context and domain knowledge in the documents.

Please star our leaderboard if you are interested in the task! Please see the Call for Participants section for more details.


Rewards

We offer the following awards to top participating teams for each task. The prizes are sponsored by IBM Research AI.

  • 1st place: $1000
  • 2nd place: $500
  • 3rd place: $200

Resources

The training and test data are based on the MultiDoc2Dial dataset. Please check out the baseline code for running the baselines.


Important Dates

The final date for leaderboard submission is April 3, 2022.

The due date for technical paper submission is March 27, 2022.

Join our Google Group for important updates! If you have any questions, ask in our Google Group or email us.

Task and Data

The task is to generate a grounded agent response given the dialogue query and the domain documents. Specifically,

  • Input: the latest user turn, the dialogue history, and all domain documents.
  • Output: an agent response in natural language.

The training data includes the training and validation splits from the MultiDoc2Dial dataset. Given the low-resource nature of the domains in MultiDoc2Dial, we also encourage teams to explore approaches such as data augmentation for the tasks in two ways: (1) utilizing existing public datasets; (2) utilizing automatically generated synthetic data, without involving any additional human-labeled data on MultiDoc2Dial (see the sketch below).
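To make the input and output format above concrete, here is a minimal, purely hypothetical sketch in Python of a single task instance and a simple dialogue-context query. The field names and structure are illustrative assumptions, not the actual MultiDoc2Dial schema, and the example content is adapted from Figure 1.

    # Hypothetical sketch of one shared-task instance; the field names below are
    # illustrative assumptions and may not match the actual MultiDoc2Dial files.
    example = {
        "dial_id": "dial_0001",
        "history": [
            {"speaker": "user", "utterance": "I need help with SSDI. I heard that it could benefit my relatives too."},
            {"speaker": "agent", "utterance": "Yes, SSDI pays benefits to you and family members if you are insured."},
        ],
        "latest_user_turn": "How many credits do I need to get the benefit?",
        "documents": {
            "Social Security Credits": "You must earn at least 40 Social Security credits to qualify ...",
            "The Basics about Disability Benefits": "The SSDI program pays benefits to you and certain family members if you are insured ...",
        },
        # Target output: an agent response in natural language, grounded in the documents.
        "response": "Since you are over 31 years old, you must have at least 20 credits in the 10-year period ...",
    }

    # One common way to form a retrieval or generation query is to concatenate
    # the latest user turn with the preceding dialogue history.
    turns = [t["utterance"] for t in example["history"]] + [example["latest_user_turn"]]
    query = " [SEP] ".join(turns)
    print(query)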

Evaluations

The test data is derived from the MultiDoc2Dial dataset. There are two tasks based on different settings of the test data:

  • MultiDoc2Dial-seen-domain (MDD-SEEN): all dialogues in the test data are grounded in the documents from the same domains as the training data.
  • MultiDoc2Dial-unseen-domain (MDD-UNSEEN): all dialogues in the test data are grounded in the documents from an unseen domain.

The results will be evaluated based on both automatic metrics and human evaluation. For automatic evaluation, please check out the evaluation script.
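The official metrics and tokenization are defined by the evaluation script; as a rough, unofficial illustration only, the sketch below computes a simple token-overlap F1 between a generated response and a reference response (the actual script may apply different normalization and report additional metrics).

    from collections import Counter

    def token_f1(prediction: str, reference: str) -> float:
        """Token-overlap F1 between a generated response and a reference response."""
        pred_tokens = prediction.lower().split()
        ref_tokens = reference.lower().split()
        if not pred_tokens or not ref_tokens:
            return float(pred_tokens == ref_tokens)
        common = Counter(pred_tokens) & Counter(ref_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)

    print(token_f1("you need 40 work credits", "you must earn 40 social security credits"))  # prints 0.5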

Figure 1: A sample dialogue grounded in multiple documents about Social Security benefits (e.g., "Social Security Credits", "The Basics about Disability Benefits", and "Access Your Benefit Information Online"), where the agent's responses are grounded in passages drawn from several of these documents.
Please check out the leaderboard for our Shared Task challenge.

The challenge includes leaderboards for the two task settings, each with two phases: a Dev (TestDev) phase and a Test phase.

You can find more details regarding the phases and submission once your team is registered.

Shared Task Paper Submission Guidelines

We welcome technical paper submissions based on the Shared Task. To qualify for the competition prizes, a participating team will need to complete at least one of the two task settings and make a paper submission.

The paper should be up to four pages with unlimited references. The format should conform to ARR CFP guidelines.

Review Process

All submissions will be peer-reviewed by at least two reviewers. The reviewing process will be double-blind. Authors are responsible for anonymizing the submissions.


Important Dates

  • Leaderboard Submission Final Date: April 3, 2022
  • Technical Paper Due Date: March 27, 2022 (AoE)
  • Notification of Acceptance: April 3, 2022
  • Camera-ready Paper Due: April 10, 2022 (AoE)
  • Workshop Date: May 26, 2022

Talks

Invited Speakers

Jeff Dalton

University of Glasgow

Michel Galley

Microsoft Research

Mari Ostendorf

University of Washington

Siva Reddy

MILA/MCQLL/CIFAR

Zhou Yu

Columbia University

Organization

Workshop Organizers

Song Feng

Amazon - AWS AI Labs

Chengguang Tang

Tencent AI Lab

Hui Wan

IBM Research

Zeqiu (Ellen) Wu

University of Washington

Caixia Yuan

Beijing University of Posts and Telecommunications

Program Committee

  • Amanda Buddemeyer (University of Pittsburgh)
  • Asli Celikyilmaz (Meta AI Research)
  • Bowen Yu (Chinese Academy of Sciences)
  • Chen Henry Wu (Carnegie Mellon University)
  • Chulaka Gunasekara (IBM Research AI)
  • Chun Gan (JD Research)
  • Cunxiang Wang (Westlake University)
  • Danish Contractor (IBM Research)
  • Dian Yu (Tencent)
  • Diane Litman (University of Pittsburgh)
  • Ehud Reiter (University of Aberdeen)
  • Elizabeth Clark (University of Washington)
  • Fanghua Ye (University College London)
  • Tao Feng (Monash University)
  • Guanyi Chen (Utrecht University)
  • Hanjie Chen (University of Virginia)
  • Hao Zhou (Tencent)
  • Haotian Cui (Toronto University)
  • Haochen Liu (Michigan State University)
  • Houyu Zhang (Amazon)
  • Ioannis Konstas (Heriot-Watt University)
  • Jingjing Xu (Peking University)
  • Jia-Chen Gu (USTC)
  • Jinfeng Xiao (UIUC)
  • Jian Wang (The Hong Kong Polytechnic University)
  • Jingyang Li (Alibaba DAMO Academy)
  • Jiwei Li (SHANNON.AI)
  • Jun Xu (Baidu)
  • Kai Song (ByteDance)
  • Ke Shi (Tencent)
  • Kun Qian (Columbia University)
  • Kaixuan Zhang (Northwestern University)
  • Libo Qin (MLNLP)
  • Michael Johnston (Interactions)
  • Meng Qu (MILA)
  • Minjoon Seo (KAIST)
  • Pei Ke (Tsinghua University)
  • Peng Qi (Stanford University)
  • Ravneet Singh (University of Pittsburgh)
  • Ryuichi Takanobu (Tsinghua University)
  • Rongxing Zhu (The University of Melbourne)
  • Seokhwan Kim (Amazon Alexa AI)
  • Shehzaad Dhuliawala (Microsoft Research Montreal)
  • Srinivas Bangalore (Interactions)
  • Zejiang Shen (AllenAI)
  • Vaibhav Adlakha (McGill and MILA)
  • Wanjun Zhong (MSRA)
  • Xi Chen (Tencent)
  • Yifan Gao (The Chinese University of Hong Kong)
  • Yekun Chai (Baidu)
  • Yinhe Zheng (Alibaba DAMO Academy)
  • Yiwei Jiang (Ghent University)
  • Yajing Sun (Chinese Academy of Sciences)
  • Yunqi Qiu (Chinese Academy of Sciences)
  • Yosi Mass (IBM Research)
  • Yutao Zhu (University of Montreal)
  • Zheng Zhang (Tsinghua University)
  • Zhenyu Zhang (Chinese Academy of Sciences)
  • Zhenzhong Lan (Westlake University)
  • Zhixing Tian (JD)

Contact us

Join and post in our Google Group!
Email the organizers at dialdoc2022@googlegroups.com.