Accepted Special Sessions
- Deep Learning for Speech Synthesis
Authors submitting to a special session should submit their papers via this link: https://www2.securecms.com/SLT2018/Papers/Submission.asp?Mode=SpecialSLT
Placeholder paper deadline: Monday 16 July 2018, 23:59 (EST)
Full paper submission deadline: Monday 23 July 2018, 23:59 (EST)
- Panel Session on Dialogue Models and Systems: from Research Labs to the Cloud to Your Living Room
- Industry Special Session: Future of Spoken Language Technologies
- Microsoft Dialogue Challenge: Building End-to-End Task-Completion Dialogue Systems
- 12/18/2018 – 12/21/2018: SLT Workshop
– Dec. 18, 1:00 PM – 2:00 PM: Invited talks (1 hr). Speakers: Dilek Hakkani-Tur (Amazon) and Gokhan Tur (Uber)
– Dec. 18, 2:00 PM – 2:45 PM: Oral presentations (45 min)
– Dec. 18, 2:45 PM – 4:15 PM: Coffee/poster/demo session (1.5 hr)
– Dec. 18, 4:15 PM – 5:00 PM: Panel discussion (45 min). Panelists: Alex Acero (Apple), Jianfeng Gao (Microsoft), Dilek Hakkani-Tur (Amazon), and Gokhan Tur (Uber)
- 11/25/2018: Paper acceptance announcement.
- 11/18/2018: Paper submission deadline (see the Call for Papers).
- 11/11/2018: Results announcement (including human evaluation).
- 10/25/2018: System submission (https://msrprograms.cloudapp.net/MDC2018)
- 08/03/2018: Movie domain is up; see cmd.md for instructions.
- 07/28/2018: Restaurant and taxi domains: data and simulators are up; see cmd.md for instructions.
- 07/16/2018: Registration is now open.
- 07/06/2018: Task description is up.
This special session introduces a Dialogue Challenge for building end-to-end task-completion dialogue systems, with the goal of encouraging the dialogue research community to collaborate and to benchmark on standard datasets in a unified experimental environment. We will release human-annotated conversational data in three domains (movie-ticket booking, restaurant reservation, and taxi booking), along with an experiment platform that includes a built-in user simulator for each domain, for training and evaluation. The submitted systems will be evaluated both in the simulated setting and by human judges.
Please check this description for more details about the task.
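To make the agent/user-simulator setup concrete, here is a minimal sketch of the kind of turn-taking loop such a platform runs. All class and function names below are hypothetical illustrations, not the challenge platform's actual API, and the rule-based agent and simulator are deliberately simplistic stand-ins for learned components.

```python
class RuleUserSimulator:
    """Toy user simulator for movie-ticket booking: reveals one goal slot per turn.
    (Hypothetical; real simulators model richer user behavior and errors.)"""
    def __init__(self, goal):
        self.goal = dict(goal)       # slots the user wants, e.g. {"movie": ..., "tickets": ...}
        self.pending = list(goal)    # slots not yet communicated to the agent

    def respond(self, agent_action):
        if agent_action == "request" and self.pending:
            slot = self.pending.pop(0)
            return ("inform", slot, self.goal[slot])
        if agent_action == "book" and not self.pending:
            return ("success", None, None)   # task completed
        return ("noop", None, None)

class RuleAgent:
    """Toy agent: requests slots until all are filled, then tries to book."""
    def __init__(self, required_slots):
        self.required = set(required_slots)
        self.filled = {}

    def act(self):
        return "request" if self.required - set(self.filled) else "book"

    def observe(self, user_action):
        intent, slot, value = user_action
        if intent == "inform":
            self.filled[slot] = value

def run_episode(agent, user, max_turns=10):
    """Alternate agent and simulator turns; return True if the task completes."""
    for _ in range(max_turns):
        reply = user.respond(agent.act())
        if reply[0] == "success":
            return True
        agent.observe(reply)
    return False

goal = {"movie": "Inside Out", "date": "tomorrow", "tickets": 2}
print(run_episode(RuleAgent(goal.keys()), RuleUserSimulator(goal)))  # → True
```

Training against a simulator lets a dialogue policy be improved over many cheap episodes before the final system is judged by humans, which is the two-stage evaluation the challenge describes.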