Parsimonious Language Models for a Terabyte of Text


Hiemstra, Djoerd and Kamps, Jaap and Kaptein, Rianne and Li, Rongmei (2008) Parsimonious Language Models for a Terabyte of Text. In: 16th Text Retrieval Conference, TREC 2007, 5-9 November 2007, Gaithersburg, Maryland, USA (p. 64).

Abstract: The aims of this paper are twofold. Our first aim is to compare results of the earlier Terabyte tracks to the Million Query track. We submitted a number of runs using different document representations (such as full-text, title-fields, or incoming anchor-texts) to increase pool diversity. The initial results show broad agreement in system rankings over various measures on topic sets judged at both the Terabyte and Million Query tracks, with runs using the full-text index giving superior results on all measures, but also some noteworthy upsets. Our second aim is to explore the use of parsimonious language models for retrieval on terabyte-scale collections. These models are smaller and thus more efficient than standard language models when used at indexing time, and they may also improve retrieval performance. We have conducted initial experiments using parsimonious models in combination with pseudo-relevance feedback, for both the Terabyte and Million Query track topic sets, and obtained promising initial results.
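The parsimonious models referred to in the abstract concentrate probability mass on terms that distinguish a document from the background collection, which is what makes the resulting models smaller at indexing time. A minimal sketch of the standard EM-based estimation (following the general approach of Hiemstra, Robertson and Zaragoza; the function name, the mixing weight lam, and the pruning threshold below are illustrative assumptions, not values from the paper):

```python
from collections import Counter

def parsimonious_lm(doc_terms, corpus_probs, lam=0.1, threshold=1e-3, iters=20):
    """Estimate a parsimonious P(t|D): terms the background corpus model
    already explains well are suppressed and eventually pruned."""
    tf = Counter(doc_terms)
    total = sum(tf.values())
    p_doc = {t: c / total for t, c in tf.items()}  # start from the MLE
    for _ in range(iters):
        # E-step: expected term counts attributed to the document model
        e = {}
        for t, c in tf.items():
            pd = p_doc.get(t, 0.0)
            denom = lam * pd + (1 - lam) * corpus_probs.get(t, 1e-9)
            e[t] = c * (lam * pd) / denom if denom > 0 else 0.0
        # M-step: renormalise, pruning terms that fall below the threshold
        norm = sum(e.values())
        if norm == 0:
            break
        p_doc = {t: v / norm for t, v in e.items() if v / norm >= threshold}
        s = sum(p_doc.values())
        p_doc = {t: v / s for t, v in p_doc.items()}
    return p_doc

# Common words ("the") are absorbed by the corpus model; topical words survive.
doc = ["the", "the", "the", "retrieval", "retrieval", "model"]
corpus = {"the": 0.5, "retrieval": 0.001, "model": 0.01}
model = parsimonious_lm(doc, corpus)
```

Because low-probability terms are dropped each iteration, the stored model contains far fewer entries than the full maximum-likelihood model, which is the efficiency gain the abstract alludes to for terabyte-scale indexing.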
Item Type: Conference or Workshop Item
Electrical Engineering, Mathematics and Computer Science (EEMCS)



Metis ID: 250975