TREC Knowledge Base Acceleration


TREC KBA 2014: Technical Details

Knowledge Base Acceleration (KBA) is an open evaluation in NIST's Text Retrieval Conference (TREC). KBA addresses this fundamental question:

Given a rich dossier on a subject,
filter a stream of documents to
accelerate users filling in knowledge gaps.

External Information: teams may use external information that entered the world before the stream hour being processed. Teams must describe any such external data in their run submission descriptions.

Submissions to KBA are gzipped text files in the format below. The first line must be a comment containing a JSON string in the filter-run.json schema. Each assertion consists of eleven fields separated by whitespace.

Be sure your file name ends with ".gz".

Comment lines must start with '#'. All comment lines after the first line of the file are ignored.
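
For concreteness, here is a minimal sketch, in Python, of how a system might emit a run file in this format. This is not the toy KBA system referenced below: the header dictionary is deliberately abbreviated relative to the filter-run.json schema (see the full example below for all required fields), and the assertion values are hypothetical.

import gzip
import json

# Illustrative header: a real run must include every field required by
# the filter-run.json schema (topic_set_id, corpus_id, poc_email, etc.).
header = {
    "$schema": "http://trec-kba.org/schemas/v1.1/filter-run.json",
    "team_id": "CompInsights",
    "system_id": "toy_1",
    "task_id": "kba-ccr-2014",
    "run_type": "automatic",
}

def to_thousandths(conf_float):
    # Map a float in (0.0, 1.0] to an integer in (0, 1000].
    return max(1, min(1000, int(round(conf_float * 1000))))

# One hypothetical CCR assertion; for CCR the slot name is 'NULL' and
# the byte range is '0-0'.  Column 10 is ignored in 2014.
assertion = ("CompInsights", "toy_1",
             "1317995861-4c6376217ea27bb954f96164c7cdc8ab",
             "http://en.wikipedia.org/wiki/Bill_Coen",
             str(to_thousandths(0.87)),  # confidence in integer thousandths
             "2",                        # relevance rating: vital
             "1",                        # document mentions the entity
             "2011-10-07-14",            # date-hour of the chunk directory
             "NULL", "NULL", "0-0")

with gzip.open("toy_1.gz", "wt") as out:  # file name must end in ".gz"
    out.write("#%s\n" % json.dumps(header))
    out.write("\t".join(assertion) + "\n")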

In order to attend the TREC conference, you must submit a run. Before sending your runs to NIST, verify their format using this script: http://trec.nist.gov/act_part/scripts/13.scripts/check_kba.pl and then upload them to https://ir.nist.gov/trecsubmit/kba.html

The example run submission below was generated by this toy KBA system, written in Python, which produces both SSF and CCR example output. NB: this system will be updated before queries are released for KBA 2014.

#{"run_type": "automatic", "poc_email": "trec-kba@googlegroups.com", "team_id": "CompInsights", "topic_set_id": "kba-2014-ccr-and-ssf", "corpus_id": "kba-streamcorpus-2014-v0_3_0", "$schema": "http://trec-kba.org/schemas/v1.1/filter-run.json", "team_name": "Computable Insights", "system_description_short": "relevance=2, exact name match, longest sentence slot fills", "system_description": "Entity title strings are used as surface form names, then any document containing one of the surface form names is ranked vital with confidence proportional to length of surface form name, and the longest sentence containing the longest surface form name is treated as a slot fill for all slot types for the given entity type.", "task_id": "kba-ccr-2014", "poc_name": "TREC KBA Organizers", "run_info": {"num_entities": 170, "num_stream_hours": 8951}, "system_id": "toy_1"}
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/The_Ritz_Apartment_(Ocala,_Florida)	1000	2	1	2011-10-07-14	Affiliate	19ed38ac70555a9f1bbf26feb79764bf	1057-1263
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/The_Ritz_Apartment_(Ocala,_Florida)	1000	2	1	2011-10-07-14	Contact_Meet_Entity	19ed38ac70555a9f1bbf26feb79764bf	1057-1263
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/Appleton_Museum_of_Art	1000	2	1	2011-10-07-14	Affiliate	19ed38ac70555a9f1bbf26feb79764bf	1057-1263
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/Appleton_Museum_of_Art	1000	2	1	2011-10-07-14	Contact_Meet_Entity	19ed38ac70555a9f1bbf26feb79764bf	1057-1263
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/Bill_Coen	1000	2	1	2011-10-07-14	Affiliate	fc9c67b4ca0bdeaf2cac34c3d6edb192	0-303
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/Bill_Coen	1000	2	1	2011-10-07-14	AssociateOf	fc9c67b4ca0bdeaf2cac34c3d6edb192	0-303
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/Bill_Coen	1000	2	1	2011-10-07-14	Contact_Meet_PlaceTime	fc9c67b4ca0bdeaf2cac34c3d6edb192	0-303
CompInsights	toy_1	1317995861-4c6376217ea27bb954f96164c7cdc8ab	http://en.wikipedia.org/wiki/Bill_Coen	1000	2	1	2011-10-07-14	AwardsWon	fc9c67b4ca0bdeaf2cac34c3d6edb192	0-303
snip...
#{
#    "$schema": "http://trec-kba.org/schemas/v1.1/filter-run.json", 
#    "corpus_id": "kba-streamcorpus-2014-v0_3_0", 
#    "poc_email": "trec-kba@googlegroups.com", 
#    "poc_name": "TREC KBA Organizers", 
#    "run_info": {
#        "elapsed_time": 4.623950004577637, 
#        "num_entities": 170, 
#        "num_entity_doc_compares": 170000, 
#        "num_filter_results": 16458, 
#        "num_stream_hours": 3
#    }, 
#    "run_type": "automatic", 
#    "system_description": "Entity title strings are used as surface form names, then any document containing one of the surface form names is ranked vital with confidence proportional to length of surface form name, and the longest sentence containing the longest surface form name is treated as a slot fill for all slot types for the given entity type.", 
#    "system_description_short": "relevance=2, exact name match, longest sentence slot fills", 
#    "system_id": "toy_1", 
#    "task_id": "kba-ssf-2014", 
#    "team_id": "CompInsights", 
#    "team_name": "Computable Insights", 
#    "topic_set_id": "kba-2014-ccr-and-ssf"
#}

where:

  1. first column: your team_id
  2. second column: your system_id. This provides a unique identifier for the submission when combined with your team_id.
  3. third column: official document identifier of the retrieved document, which is always the stream_id in kba-streamcorpus-2014-v0_3_0.
  4. fourth column: unique identifier for the topic, which for kba-ccr-2014 and kba-ssf-2014 is the target_id of the entity provided in the topic set file.
  5. fifth column: confidence score, an integer greater than 0 and less than or equal to 1000; you can think of these as floating point numbers between zero and one presented as integer thousandths (see the conversion in the sketch above and the parsing sketch following this list).
    confidence ∈ (0, 1000] and confidence ∈ 𝐙
  6. sixth column: relevance rating level, an integer in [-1, 0, 1, 2] corresponding to the relevance judgments ['garbage', 'neutral', 'useful', 'vital']. SSF run submissions may provide this field; however, the scoring tool will only consider lines that have "2" (vital) in this field. CCR run submissions must provide this field, and the scoring tool considers both 'useful' and 'vital'.
  7. seventh column: contains-mention, an integer in [0, 1], a boolean indicating whether or not the document contains a mention of the entity.
  8. eighth column: date-hour string corresponding to the name of the directory containing the chunk file that contains the document, e.g. '2012-04-04-04'.
  9. ninth column: slot name from the TAC KBP slot ontology; used in SSF. Runs for CCR should use 'NULL' in this field. This field must contain a string from the list below. Optionally, it may contain a second string separated from the first by a colon ":", where the second string is a system-selected name for a sub-type or variant of the target slot. The second string is not used in scoring and is provided solely to allow systems to output more information about the algorithm's perspective on the slot. This field must not contain any spaces.
  10. tenth column: (ignored in 2014) slot value equivalence class name generated by the system. This was used in SSF 2013 and is not used in 2014.
  11. eleventh column: inclusive byte range, e.g. "23-27" specifies five bytes. Byte numbering is zero-based. Used in SSF. Runs for CCR should use '0-0' in this field.
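
To make these constraints concrete, here is a minimal parsing-and-validation sketch in Python. The field names are descriptive labels for the columns above, not identifiers from any official KBA tool, and the official check_kba.pl script remains the authoritative validator.

def parse_assertion(line):
    """Split one non-comment run-file line into its eleven fields and
    sanity-check the numeric columns against the constraints above."""
    fields = line.rstrip("\n").split()
    assert len(fields) == 11, "expected eleven whitespace-separated fields"
    (team_id, system_id, stream_id, target_id, confidence,
     relevance, contains_mention, date_hour, slot_name,
     equiv_class, byte_range) = fields

    confidence = int(confidence)
    assert 0 < confidence <= 1000           # confidence ∈ (0, 1000]
    assert int(relevance) in (-1, 0, 1, 2)  # garbage/neutral/useful/vital
    assert int(contains_mention) in (0, 1)

    # Byte ranges are zero-based and inclusive: "23-27" covers five bytes.
    start, end = map(int, byte_range.split("-"))
    num_bytes = end - start + 1

    return {
        "team_id": team_id, "system_id": system_id,
        "stream_id": stream_id, "target_id": target_id,
        "confidence": confidence, "relevance": int(relevance),
        "contains_mention": int(contains_mention),
        "date_hour": date_hour, "slot_name": slot_name,
        "equiv_class": equiv_class,
        "byte_range": (start, end), "num_bytes": num_bytes,
    }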

automatic versus manual: As is standard for TREC tasks, you should design and implement your system without studying the particular topics and training data. The purpose of the training data is to allow you to train your system automatically, not to manually tune/tweak/patch it for these particular topics. After you generate an automatic run, it can be quite fruitful to manually examine the training data and conceive of improvements for manual runs.

We want to hear about your insights; please consider describing your work in a poster at the TREC conference and/or a technical report in the TREC proceedings.