Workshop on Shared Tasks and Comparative Evaluation in Natural Language Generation

Accepted Position Papers

The accepted position papers appear together in this volume and are listed individually below.

Anja Belz
Putting development and evaluation of core technology first
Donna Byron, Alexander Koller, Jon Oberlander, Laura Stoia and Kristina Striegnitz
Generating Instructions in Virtual Environments (GIVE): A Challenge and an Evaluation Testbed for NLG
Barbara Di Eugenio
Shared Tasks and Comparative Evaluation for NLG: to go ahead, or not to go ahead?
Albert Gatt, Ielka van der Sluis and Kees van Deemter
Corpus-based evaluation of Referring Expression Generation
Nancy L. Green
Position Statement for Workshop on STEC in NLG
Kathleen F. McCoy
To Share a Task or Not: Some Ramblings from a Mad (i.e., crazy) NLGer
David McDonald
Flexibility counts more than precision
Chris Mellish and Donia Scott
NLG Evaluation: Let's open up the box
Cécile Paris, Nathalie Colineau and Ross Wilkinson
NLG Systems Evaluation: a framework to measure impact on and cost for all stakeholders
Ehud Reiter
NLG Shared Tasks: Let's try it and see what happens
Vasile Rus, Zhiqiang Cai and Arthur C. Graesser
Evaluation in Natural Language Generation: The Question Generation Task
Donia Scott and Johanna Moore
An NLG evaluation competition? Eight Reasons to be Cautious
Amanda Stent
Pragmatic Influences on Sentence Planning and Surface Realization: Implications for Evaluation
Jette Viethen
Automatic Evaluation of Referring Expression Generation Is Possible
Marilyn Walker
Share and Share Alike: Resources for Language Generation