Tuesday, 5 September 2017

A Comparison of Nuggets and Clusters for Evaluating Timeline Summaries


There is growing interest in systems that generate timeline summaries by filtering high-volume streams of documents to retain only those that are relevant to a particular event or topic. Continued advances in algorithms and techniques for this task depend on standardized and reproducible evaluation methodologies for comparing systems. However, timeline summary evaluation is still in its infancy, with competing methodologies currently being explored in international evaluation forums such as TREC. One area of active exploration is how to explicitly represent the units of information that should appear in a 'good' summary. Currently, there are two main approaches, one based on identifying nuggets in an external 'ground truth', and the other based on clustering system outputs. In this paper, by building test collections that have both nugget and cluster annotations, we are able to compare these two approaches. Specifically, we address questions related to evaluation effort, differences in the final evaluation products, and correlations between scores and rankings generated by both approaches. We summarize advantages and disadvantages of nuggets and clusters to offer recommendations for future system evaluations.
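As a rough illustration of the two evaluation paradigms, the sketch below computes a recall-style score both ways: against a set of ground-truth nuggets, and against clusters of pooled system updates. This is a hedged toy example; the data structures, function names, and scoring details are illustrative assumptions, not the paper's actual methodology.

```python
# Hedged sketch: two simplified ways to score a timeline summary.
# Nugget-based: count ground-truth nuggets matched by the summary.
# Cluster-based: count distinct clusters of pooled updates covered.
# All identifiers and structures here are illustrative assumptions.

def nugget_recall(matched_nuggets, all_nuggets):
    """Fraction of ground-truth nuggets the summary covers."""
    return len(set(matched_nuggets) & set(all_nuggets)) / len(all_nuggets)

def cluster_recall(summary_updates, clusters):
    """Fraction of update clusters hit by at least one summary update."""
    covered = {cid for cid, members in clusters.items()
               if any(u in members for u in summary_updates)}
    return len(covered) / len(clusters)

# Toy data: three nuggets, of which the summary matches two.
nuggets = {"earthquake struck", "magnitude 7.0", "aid dispatched"}
matched = {"earthquake struck", "aid dispatched"}
print(nugget_recall(matched, nuggets))  # 2 of 3 nuggets covered

# Toy data: three clusters of updates judged to convey the same information.
clusters = {
    "c1": {"u1", "u2"},
    "c2": {"u3"},
    "c3": {"u4", "u5"},
}
print(cluster_recall({"u1", "u4"}, clusters))  # covers c1 and c3
```

Note the structural difference the abstract highlights: nugget recall requires an external ground truth built in advance, while cluster recall is defined over the pooled outputs of the participating systems themselves.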

Gaurav Baruah, Richard McCreadie and Jimmy Lin.
A Comparison of Nuggets and Clusters for Evaluating Timeline Summaries
In Proceedings of CIKM, 2017.

PDF
Dataset

