---
pretty_name: MMLU – 5-Options RL-Ready
dataset_name: openmed-community/mmlu-5-options-rl-ready
license: mit
language:
- en
tags:
- MMLU
- evaluation
- DPO
- RL
- SFT
task_categories:
- multiple-choice
- question-answering
- reinforcement-learning
dataset_info:
  features:
  - name: question
    dtype: string
  - name: subject
    dtype: string
  - name: choices
    list: string
  - name: answer
    dtype: int64
  - name: task
    dtype: string
  - name: output
    dtype: string
  - name: options
    dtype: string
  - name: letter
    dtype: string
  - name: incorrect_letters
    list: string
  - name: incorrect_answers
    list: string
  - name: single_incorrect_answer
    dtype: string
  - name: system_prompt
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 369405907
    num_examples: 97842
  - name: test
    num_bytes: 7551070
    num_examples: 2000
  download_size: 229394171
  dataset_size: 376956977
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# MMLU – 5-Options RL-Ready

A standardized, RL-friendly remix of MMLU with explicit negatives and a unified five-option presentation string for each question. It is well suited to DPO and other RL setups while remaining a drop-in replacement for classic multiple-choice evaluation.
## What's inside

- **Splits & size:** ~97.8k train + 2k test ≈ ~99.8k examples total.
- **Schema (core fields):**
  - `question`: `str`
  - `choices`: `list[str]` — canonical options (typically 4, as in original MMLU)
  - `answer`: `int` — 0-based index of the correct choice
  - `task`: `str` — subject/task label (~55 values)
  - `output`: `str` — correct option text
  - `options`: `str` — single markdown-style block with (1)…(5) enumerated choices for unified five-option prompts
  - `letter`: `str` — correct letter tag
  - `incorrect_letters`: `list[str]`
  - `incorrect_answers`: `list[str]`
  - `single_incorrect_answer`: `str` — one negative for pairwise preferences
  - `system_prompt`: `str` — single default value
  - `input`: `str` — ready-to-use user message text
> **Note:** The dataset provides both the original structured `choices` array (as in MMLU) and a five-option `options` string for standardized, list-variant prompting in RL pipelines.
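As a quick sanity check on these fields, a record's `choices[answer]` should equal its `output`, and `letter` should never appear in `incorrect_letters`. A minimal sketch using a hand-built record shaped like the schema above (illustrative values, not an actual row; no download required):

```python
# Field-consistency check on a record shaped like this dataset's schema.
record = {
    "question": "What is 2 + 2?",
    "choices": ["3", "4", "5", "22"],
    "answer": 1,                      # 0-based index into `choices`
    "output": "4",                    # text of the correct choice
    "letter": "(2)",                  # correct tag within the (1)...(5) options string
    "incorrect_letters": ["(1)", "(3)", "(4)", "(5)"],
}

def check_record(rec: dict) -> None:
    # The 0-based `answer` index must point at the `output` text.
    assert rec["choices"][rec["answer"]] == rec["output"]
    # The correct letter tag must not be listed among the incorrect ones.
    assert rec["letter"] not in rec["incorrect_letters"]
    # The correct letter plus the incorrect letters cover all five option slots.
    assert len(rec["incorrect_letters"]) + 1 == 5

check_record(record)
print("record is consistent")  # prints only if all checks pass
```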
## Why it's RL-ready

- **Explicit negatives:** `incorrect_answers` + `single_incorrect_answer` enable DPO, pairwise preferences, and contrastive training without extra preprocessing.
- **Unified prompts:** `system_prompt` + `input` and the five-option `options` string make it simple to build consistent chat-style prompts across frameworks.
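Because the negatives are precomputed, preference pairs fall out directly. A sketch of turning one record into DPO-style `(prompt, chosen, rejected)` triples; the field names follow the schema, while the concrete values are illustrative:

```python
# Build DPO-style preference pairs from one record of this dataset.
record = {
    "system_prompt": "You are a helpful tutor.",
    "input": "Choose the correct answer from the options below.\n\n"
             "What is 2 + 2?\n(1) 3\n(2) 4\n(3) 5\n(4) 22\n(5) 44",
    "output": "4",
    "single_incorrect_answer": "5",
    "incorrect_answers": ["3", "5", "22", "44"],
}

def to_dpo_pair(rec: dict) -> dict:
    # One (prompt, chosen, rejected) triple using the single provided negative.
    return {
        "prompt": rec["system_prompt"] + "\n\n" + rec["input"],
        "chosen": rec["output"],
        "rejected": rec["single_incorrect_answer"],
    }

def to_all_pairs(rec: dict) -> list[dict]:
    # One triple per negative, for denser pairwise supervision.
    prompt = rec["system_prompt"] + "\n\n" + rec["input"]
    return [
        {"prompt": prompt, "chosen": rec["output"], "rejected": neg}
        for neg in rec["incorrect_answers"]
    ]

pair = to_dpo_pair(record)
print(pair["chosen"], "beats", pair["rejected"])  # 4 beats 5
print(len(to_all_pairs(record)))                  # 4
```

The same triples plug into preference-learning trainers that expect `prompt`/`chosen`/`rejected` columns.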
## Example record

```json
{
  "question": "Which statement best describes the critics' reaction to the Segway?",
  "choices": ["Nothing but an electrical device.", "A disappointing engineering mistake.", "An expensive and disappointing invention.", "Disappointing, but still a successful device."],
  "answer": 3,
  "task": "miscellaneous",
  "output": "Disappointing, but still a successful device.",
  "options": "(1) ... (2) ... (3) ... (4) ... (5) ...",
  "letter": "(3)",
  "incorrect_letters": ["(1)", "(2)", "(4)", "(5)"],
  "incorrect_answers": ["...", "...", "...", "..."],
  "single_incorrect_answer": "...",
  "system_prompt": "You are a helpful tutor.",
  "input": "Choose the correct answer from the options below.\n\n<question + (1)…(5) options>"
}
```
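Because `system_prompt` and `input` arrive already split, a chat-formatted SFT sample is just a message list. A sketch reusing (abbreviated) values from the example record; the assistant-turn format — letter tag plus option text — is one reasonable convention, not something the dataset prescribes:

```python
# Convert one record into chat messages for SFT.
record = {
    "system_prompt": "You are a helpful tutor.",
    "input": "Choose the correct answer from the options below.\n\n"
             "<question + (1)...(5) options>",
    "output": "Disappointing, but still a successful device.",
    "letter": "(3)",
}

def to_chat_sample(rec: dict) -> list[dict]:
    return [
        {"role": "system", "content": rec["system_prompt"]},
        {"role": "user", "content": rec["input"]},
        # Target: the letter tag followed by the option text.
        {"role": "assistant", "content": f'{rec["letter"]} {rec["output"]}'},
    ]

messages = to_chat_sample(record)
print(messages[-1]["content"])  # (3) Disappointing, but still a successful device.
```

Any chat-template utility from your training framework can consume this message list directly.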
## Intended uses

- Evaluation of general reasoning on MMLU tasks with standardized five-option prompts.
- SFT with chat-style formatting.
- DPO / RL using explicit positive vs. negative pairs from `single_incorrect_answer` or the full `incorrect_answers` list.
## Source & attribution

Derived from the original MMLU dataset by Hendrycks et al. (CAIS), published on the Hub as `cais/mmlu`. Please cite the original work when using this derivative.