
GenAI: Text-to-Text (T2T)

Evaluating generators and discriminators for AI-generated text vs human-written text.

Overview

NIST GenAI T2T is an evaluation series that supports research in the generative AI text-to-text modality. Which generative AI models are capable of producing synthetic content that can deceive the best discriminators as well as humans? The performance of generative AI models can be measured by (a) humans and (b) discriminative AI models. To evaluate the "best" generative AI models, we need the most competent humans and discriminators; the most proficient discriminators are those with the highest accuracy in detecting content from the "best" generative AI models. It is therefore crucial to evaluate both generative AI models (generators) and discriminative AI models (discriminators).


What

The Text-to-Text Generators (T2T-G) task is to automatically generate high-quality summaries given a statement of information needed ("topic") and a set of source documents to summarize. For more details, please see the generator data specification.

The Text-to-Text Discriminators (T2T-D) task is to detect if a target output summary has been generated using a Generative AI system or a Human. For more details, please see the discriminator evaluation plan.


Who

We welcome and encourage teams from academia, industry, and other research labs to contribute to Generative AI research through the GenAI platform. The platform is designed to support various modalities and technologies, including both "Generators" and "Discriminators".
Generators will supplement the evaluation test material with their own AI-generated content for the given task (e.g., automatic summarization of documents). These participants will use cutting-edge tools and techniques to create synthetic content. By incorporating this data into our test material, our test sets will evolve in step with technology advancements. In the GenAI pilot, generators do “well” when their synthetic content is not detected by humans or AI discriminators.
Discriminators are automatic algorithms that identify whether a piece of media (text, audio, image, video, code) originated from generative AI or a human. In the GenAI pilot, discriminators do “well” when they correctly classify the test material as AI- or human-produced.


How

To take part in the GenAI evaluations, you need to register on this website and complete the data usage agreement and the data transfer agreement to download/upload the data. NIST will make all necessary data resources available to the generator and discriminator participants. Each team will receive access to data resources upon completion of all required data agreement forms, according to the published schedule of data release dates for each task. Once your system is functional, you will be able to upload your data (generators) or system outputs (discriminators) to the challenge website and see your results displayed on the leaderboard.


Task Coordinator

If you have any questions, please email the NIST GenAI team.

Schedule

Date              | Generators (G)                                            | Discriminators (D)
------------------|-----------------------------------------------------------|---------------------------------------------------------------------
April 15, 2024    | Data Specification available                              | Evaluation Plan available
May 1, 2024       | Registration period opens                                 | Registration period opens
June 3, 2024      | NIST source article data available                        | Test set-1: NIST pilot set-1 available
July 5, 2024      | Registration closes                                       | Registration closes
August 2, 2024    | Round-1 data submission deadline                          | System output submission deadline on test set-1 (Leaderboard)
September 2, 2024 | G-Scorer results for Round-1 data available (Leaderboard) | Test set-2: NIST pilot set-2 + G-participant round-1 data available
October 18, 2024  | Round-2 data submission deadline                          | System output submission deadline on test set-2 (Leaderboard)
November 4, 2024  | G-Scorer results for Round-2 data available (Leaderboard) | Test set-3: NIST pilot set-3 + G-participant round-2 data available
December 13, 2024 |                                                           | System output submission deadline on test set-3 (Leaderboard)
January 2025      | Close                                                     | Close
February 2025     | Results release for both G and D
March 2025        | GenAI pilot evaluation workshop

GenAI T2T Evaluation Rules (Updated: 5/15/2024)

  • Participation in the GenAI evaluation program is voluntary and open to all who find the task of interest and are willing and able to abide by the rules of the evaluation. To fully participate, a registered site must:
    • become familiar with and abide by all evaluation rules;
    • develop/enhance an algorithm that can process the required evaluation datasets;
    • submit the necessary files to NIST for scoring; and
    • attend the evaluation workshop (if one occurs) and openly discuss the algorithm and related research with other evaluation participants and the evaluation coordinators.
  • Participants are free to publish results for their own system but must NOT publicly compare their results with other participants (ranking, score differences, etc.) without explicit written consent from the other participants and NIST.
  • While participants may report their own results, participants may NOT make advertising claims about their standing in the evaluation, regardless of rank, winning the evaluation, or claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113(d)) shall be respected: NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST or to reports or results furnished by NIST in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results.
  • At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts unaltered and with appropriate reference to their source.
  • The challenge participant agrees NOT to use publicly available NIST-released data to train their systems or tune parameters; however, they may use other publicly available data that complies with applicable laws and regulations to train their models.
  • The challenge participant agrees NOT to examine the test data manually or through other human means (including analyzing the media or training their model on the test data) or to draw conclusions from it, from the start of the evaluation period through the end of the leaderboard evaluation.
  • All machine learning or statistical analysis algorithms must complete training, model selection, and tuning prior to running on the test data. This rule does NOT preclude online learning/adaptation during test data processing so long as the adaptation information is NOT reused for subsequent runs of the evaluation collection.
  • Participants agree to make at least one valid submission for each task they participate in; doing so is required to download the next round of datasets.
  • Participants agree to have one or more representatives at the post-evaluation workshop to present a meaningful description of their system(s); doing so is required for inclusion in future evaluations.

T2T Discriminators Overview

The primary goal of the GenAI pilot is to understand system behavior in detecting AI-generated vs. human-generated content.

The T2T-D task is a detection task focused on determining whether a target output was generated by a generative AI system or by a human. Specifically, it consists of detecting whether a target text summary was generated by a large language model (LLM) such as ChatGPT.

For each T2T-D trial, consisting of a single summary, the T2T-D detection system must render a confidence score, where a higher number indicates a higher likelihood that the target text summary was generated by an LLM-based model. The primary metric for measuring detection performance will be the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC), along with the Equal Error Rate (EER), the True Positive Rate (TPR) at a given False Positive Rate (FPR), and the Bayes risk for varying error-cost tradeoff values.
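To illustrate these metrics, below is a minimal sketch (not the official GenAI Scorer) that computes AUC, EER, and TPR at a fixed FPR from detection scores using scikit-learn, assuming binary ground-truth labels where 1 means AI-generated:

# Minimal metric sketch (not the official GenAI Scorer).
# Assumed label convention: 1 = AI-generated, 0 = human-written.
import numpy as np
from sklearn.metrics import roc_curve, auc

labels = np.array([1, 1, 0, 0, 1, 0])              # hypothetical ground truth
scores = np.array([0.9, 0.7, 0.4, 0.2, 0.6, 0.3])  # system confidence scores

fpr, tpr, _ = roc_curve(labels, scores)
print("AUC:", auc(fpr, tpr))

# EER: the operating point where FPR equals the false negative rate (1 - TPR)
i = np.nanargmin(np.abs(fpr - (1 - tpr)))
print("EER:", (fpr[i] + (1 - tpr[i])) / 2)

# TPR at a fixed FPR (here 5%): the highest TPR with FPR <= 0.05
print("TPR@FPR=0.05:", tpr[fpr <= 0.05].max())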

The GenAI pilot challenge provides data, including dry-run sets and test sets, created by both G-participants and the NIST GenAI team. This allows D-participants to develop and run a system on their own hardware platform. Discriminator participants can then submit their system outputs to a web-based leaderboard, where scores and results are displayed.

The data from G-participants will only be accessible to D-participants once the G-participants submit their data packages to NIST and the NIST GenAI team approves the data. However, NIST will provide pilot data generated by the NIST GenAI team for D-participants to start developing their systems. NIST reports performance measures for D-participant system outputs, displayed through a leaderboard, using either NIST pilot data or the evolving G-participant data.

Please refer to the discriminator evaluation plan for the details.

Data resources will be available for download once the registration is open and the data release has been announced. NIST will also release GenAI Scorer and Format Validator scripts.

T2T Discriminator Instructions

System Input File

For a given task, a system’s input is the task index file, called <modality_id>_<dataset_id>_<task_id>_index.csv. Each row of the index file specifies a test trial. Taking the corresponding media (texts or images) as input, systems perform the detection task.

The following format constitutes the index file for the D-participant system input:

genai24_T2T-D_detection_index.csv
DatasetID (string) The ID of the dataset release (e.g., GenAI24-PL-set1)
TaskID (string) The globally unique ID of the task; tasks could be summarization, generation, translation, or question-answering (e.g., Detection)
FileID (string) The globally unique ID of the text summary trials (e.g., xxx_000011.txt)

Example of the CSV file with delimiter “|”.

DatasetID    | TaskID      | FileID         
PG24-PL-set1 | Detection   | xxx_000011.txt
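
For reference, a minimal Python sketch for reading this index file with the standard library (cells are stripped in case the delivered file is whitespace-padded like the example above, which is an assumption):

# Minimal sketch: reading the pipe-delimited index file.
import csv

with open("genai24_T2T-D_detection_index.csv", newline="") as f:
    rows = [[cell.strip() for cell in row] for row in csv.reader(f, delimiter="|")]

header, trials = rows[0], rows[1:]
for dataset_id, task_id, file_id in trials:
    pass  # load the summary text for file_id and run the detector on it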

System Output File

The system output file must be a CSV file with the separator “|”. The filename for the output file must be a user-defined string that identifies the submission, with no spaces or special characters other than ‘_-.’ (e.g., `genai24_t2t_d_sys_model-01.csv`).

The system output CSV file for the T2T-D detection task must follow the format below:

genai24_t2t_d_sys_model-01.csv
DatasetID (string) The ID of the dataset release, e.g., GenAI24-PL-set1
TaskID (string) The globally unique ID of the task (e.g., Detection)
DiscriminatorID (string) The globally unique ID of Discriminator (D) participants, e.g. D-participant_001
ModelVersion (string) The system model version on D-participant submission (e.g., MySystem_GPT4.0)
FileID (string) The globally unique ID of the text summary trials (e.g., xxx_000011.txt)
ConfidenceScore (float) in the range [0,1], the larger, the more confidence that the output is AI generated

Example of the CSV file with delimiter “|”.

DatasetID    | TaskID    | DiscriminatorID  | ModelVersion | FileID         |  ConfidenceScore
PG24-PL-set1 | Detection | D-participant_01 | MySys_GPT4.0 | xxx_000011.txt |  0.7
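
A minimal sketch for producing this output file from the index (the detector function, DiscriminatorID, and ModelVersion values are placeholders):

# Minimal sketch: writing the system output CSV in the format above.
import csv

def detect(file_id: str) -> float:
    """Hypothetical detector: confidence that the summary is AI-generated."""
    return 0.5  # placeholder score

with open("genai24_T2T-D_detection_index.csv", newline="") as f:
    trials = [[c.strip() for c in row] for row in csv.reader(f, delimiter="|")][1:]

with open("genai24_t2t_d_sys_model-01.csv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="|")
    writer.writerow(["DatasetID", "TaskID", "DiscriminatorID",
                     "ModelVersion", "FileID", "ConfidenceScore"])
    for dataset_id, task_id, file_id in trials:
        writer.writerow([dataset_id, task_id, "D-participant_01",
                         "MySys_GPT4.0", file_id, f"{detect(file_id):.4f}"])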

Validation

The FileID column in the system output [submission-file-name].csv must be consistent with the FileID column in the <modality_id>_<dataset_id>_<task_id>_index.csv file. The row order may change, but the number of files and the file names in the system output must match the index file.

To validate your system output locally, D-participants may use the command line below.

$ python validate.py -t detection -x genai24_T2T-D_summarization_index.csv -s genai24_T2T-D_detection_sysout.csv
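
As a supplementary quick check (not a replacement for the official validator), the following minimal sketch compares the FileID sets of the two files:

# Minimal sketch: check that the output covers exactly the FileIDs in the index.
import csv

def file_ids(path: str, column: str) -> set:
    with open(path, newline="") as f:
        rows = [[c.strip() for c in r] for r in csv.reader(f, delimiter="|")]
    col = rows[0].index(column)
    return {r[col] for r in rows[1:]}

index_ids = file_ids("genai24_T2T-D_detection_index.csv", "FileID")
output_ids = file_ids("genai24_t2t_d_sys_model-01.csv", "FileID")
assert index_ids == output_ids, index_ids ^ output_ids  # report any mismatch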

Submission

System output submission to NIST for subsequent scoring must be made through the web platform using the submission instructions described above. To prepare your submission, first create a .tar.gz (or .tgz) file of your system output CSV file via the UNIX command ‘tar zcvf [submission_name].tgz [submission_file_name].csv’, then upload the tar file under a new or existing ‘System’ label. This system label is a longitudinal tracking mechanism that allows you to track improvements to your specific technology over time.
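
Equivalently, a minimal Python sketch for packaging the output (the filenames are placeholders):

# Minimal sketch: package the system output CSV as a .tgz for upload,
# equivalent to the `tar zcvf` command above.
import tarfile

with tarfile.open("genai24_t2t_d_submission.tgz", "w:gz") as tar:
    tar.add("genai24_t2t_d_sys_model-01.csv")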

Please ensure timely submission of your files to allow us sufficient time to address any transmission errors before the due date. Note that submissions received after the stated due dates for any reason will be marked late and may not be scored. Please refer to the published schedule for the details.

Please note that submitting your system outputs constitutes your agreement to the Rules of Behavior.


T2T-D Pilot

No results to display. Coming soon.

T2T Generators Overview

The primary goal of the GenAI pilot is to understand system behavior in detecting AI-generated vs. human-generated content.

The T2T-G task for the generative AI models is: given a topic and a set of about 25 relevant documents, create from the documents a brief, well-organized, fluent summary that answers the need for information expressed in the topic statement. Participants should assume that the target audience of the summary is a supervisory information analyst who needs the summary to inform decision-making.

  • All processing of documents and generation of summaries must be automatic.
  • The summary can be no longer than 250 words (whitespace-delimited tokens).
  • Summaries over the size limit will be truncated (see the sketch after this list).
  • No bonus will be given for creating a shorter summary.
  • No specific formatting other than linear is allowed.
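
A minimal sketch of the word-limit rule, assuming the limit counts whitespace-delimited tokens as stated above:

# Minimal sketch: enforcing the 250-word (whitespace-delimited token) limit.
def truncate_summary(text: str, limit: int = 250) -> str:
    tokens = text.split()
    return " ".join(tokens[:limit])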

There will be about 45 topics in the test data for generator teams. The resulting set of summaries from all generator teams will serve as the testing data for discriminator teams, who will work on detecting whether the written content is human-generated or AI-generated.

The summary output will be evaluated by determining how easy or difficult it is to discriminate AI-generated summaries from human-generated summaries; that is, the goal of generators is to output summaries that are indistinguishable from human-generated ones.

For more information and details about the task specifics for generator teams, please refer to the generator data specification.

Data Generation Instructions

NIST human assessors developed topics of interest. Each assessor created a topic and chose a set of 25 relevant documents. The testing dataset documents will come from a corpus comprising a set of newswire articles. NIST will distribute a subset of topics and relevant documents.

Only T2T generator participants who have completed and submitted all required data agreement forms will be allowed access. As the example below shows, each topic includes an id (num), a title, and the required topic statement (narr). The “docs” tag lists the relevant source documents to be used when generating the required summaries. Please check the published schedule for testing data release dates.

Example of topic:
<topic>
  <num> D0701A </num>
  <title> North Medical Center  </title>
  
  <narr>
  Describe the activities of John Smith and the North Medical Center. 
  </narr>
  
  <docs>
  19980304.0061
  19980715.0137
  19990227.0073
  </docs>
</topic>
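
For illustration, a minimal sketch for parsing a topic file with the Python standard library (assuming each topic is stored as a standalone XML document as in the example; the filename is a placeholder):

# Minimal sketch: parsing a topic file in the format shown above.
import xml.etree.ElementTree as ET

topic = ET.parse("topic_D0701A.xml").getroot()  # placeholder filename
num = topic.findtext("num").strip()
title = topic.findtext("title").strip()
narr = topic.findtext("narr").strip()
doc_ids = topic.findtext("docs").split()        # one document ID per line
print(num, title, narr, doc_ids)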

Submission Guidelines

  • Each team may submit up to 5 runs for a data generation package. Each run should include one summary per topic.
  • Each run should contain summaries for all topics (a run cannot skip a topic or submit summaries for only a subset of the topics). Please refer to the generator data specification for summary generation instructions.
  • Summary content should be free from offensive text or inappropriate remarks. NIST has the right to exclude any summary or whole runs if the content proves to be inappropriate for the general public.
  • Each run should include high-level metadata to characterize the generator system, as specified by the run format below and the DTD file. As explained in the DTD file, teams need to provide some required information/parameters, such as:
    • trainingData: Name of training dataset or collection of different datasets or source data
    • teamName: The name of the team as registered on the NIST GenAI website
    • priority: The priority of the submitted run (the lower number, the higher the priority). For any required manual review of submissions, NIST may need to limit effort to only the highest priority runs.
    • trained: A boolean (T or F) to indicate if the run was the output of a trained system by the team specifically for this task (T) or the output of an already existing system that the team used to generate the outputs (F)
    • desc: A high-level description of the system that generated this run
    • link: A link to the model used to generate the run (e.g. GitHub, etc)
    • topic: The topic id (the “num” field in the topic XML file)
    • elapsedTime: The processing time of the model (with hardware specs) to generate the summary after the topic and documents were given to it.
Example run:
<!DOCTYPE GeneratorResults SYSTEM "GeneratorResult.dtd"> 
<GeneratorResults teamName="ExampleTeam">
  <GeneratorRunResult trainingData="OpenAI" version="1.0"
      priority="1" trained="T" 
      desc="Short description about generation approach." 
      link="https://hyperlink_to_document_source (if available)">

    <GeneratorTopicResult topic="1" elapsedTime="5">
    this is a 250-word summary of topic 1
    </GeneratorTopicResult>

    <GeneratorTopicResult topic="2" elapsedTime="5">
    this is a 250-word summary of topic 2
    </GeneratorTopicResult>

    <!-- ... -->
    <GeneratorTopicResult topic="40" elapsedTime="5">
    this is a 250-word summary of topic 40
    </GeneratorTopicResult>

  </GeneratorRunResult>
</GeneratorResults>
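
A minimal sketch for assembling a run file in this format with the Python standard library (attribute values are placeholders; writing the DOCTYPE line by hand is an assumption, so verify the result against the official DTD):

# Minimal sketch: building a run file in the format above.
# All attribute values below are placeholders.
import xml.etree.ElementTree as ET

results = ET.Element("GeneratorResults", teamName="ExampleTeam")
run = ET.SubElement(results, "GeneratorRunResult", trainingData="OpenAI",
                    version="1.0", priority="1", trained="T",
                    desc="Short description about generation approach.",
                    link="https://hyperlink_to_document_source")
for topic_id, summary in [("1", "summary for topic 1"),
                          ("2", "summary for topic 2")]:
    node = ET.SubElement(run, "GeneratorTopicResult",
                         topic=topic_id, elapsedTime="5")
    node.text = summary

with open("run.xml", "wb") as f:
    f.write(b'<!DOCTYPE GeneratorResults SYSTEM "GeneratorResult.dtd">\n')
    f.write(ET.tostring(results))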

Generator Data submission validation

  • NIST will provide, prior to the submission dates, a validator script for participants to validate the format of their output XML files as well as content specific to the task guidelines (e.g., topic ids, empty required attributes). All generator teams should validate their runs before submitting them to NIST. Example of an available DTD validator (via the command line): xmllint --valid simple_sample.xml
  • Submission notes: according to the published schedule, the submission page (form) will be open and available (via the GenAI website) for teams to submit their data outputs. Please make sure to follow the schedule and submit on time, as extending the submission dates may not be possible.
  • Upon submission, NIST will validate the data outputs uploaded and report any errors to the submitter.
  • Please note that submitting your data outputs constitutes your agreement to the Rules of Behavior.

T2T-G Pilot

No results to display