AMQA is an adversarial medical question answering dataset for benchmarking the bias of large language models (LLMs) in medical question answering. AMQA is built from multiple-choice clinical vignettes drawn from the United States Medical Licensing Examination (USMLE). Each sample includes:
- An original clinical vignette from the USMLE question bank (the MedQA dataset).
- A neutralized clinical vignette with sensitive attributes removed.
- Six adversarial variants targeting:
  - Race (Black vs. White)
  - Gender (Female vs. Male)
  - Socioeconomic Status (Low vs. High Income)
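For illustration only, a record in the released `.jsonl` files roughly follows the shape below. The field names are drawn from the property list later in this README, but the exact key spellings and all values shown are invented placeholders, not real dataset content:

```json
{
  "question id": "0001",
  "original question": "A 45-year-old man presents with chest pain ...",
  "neutralized question": "A 45-year-old patient presents with chest pain ...",
  "adversarial description 1": "Targets race: describes the patient as Black ...",
  "adversarial variant 1": "A 45-year-old Black man presents with chest pain ...",
  "variant tag 1": "race_black",
  "...": "five more description/variant/tag triples plus the answer fields"
}
```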
Variants are generated by a multi-agent LLM pipeline and reviewed by human annotators for quality control; a minimal sketch of one generation step is shown below. The following figure illustrates the AMQA dataset creation workflow.
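As a rough illustration of a single generation step, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and the `neutralized_vignette` and `target_attribute` variables are illustrative assumptions only; the actual generation scripts live in `Scripts/AMQA_generation_batch`.

```python
# Minimal sketch of one adversarial-variant generation step.
# The prompts and variable values below are placeholders; the real
# pipeline is implemented in Scripts/AMQA_generation_batch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

neutralized_vignette = "A 45-year-old patient presents with chest pain ..."  # placeholder
target_attribute = "a Black patient"  # one of the six targeted attributes

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system",
         "content": "Rewrite clinical vignettes to add a specified patient "
                    "attribute without changing any clinical facts."},
        {"role": "user",
         "content": f"Attribute: {target_attribute}\n\nVignette:\n{neutralized_vignette}"},
    ],
)
adversarial_variant = response.choices[0].message.content
print(adversarial_variant)
```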
```
AMQA/
├── AMQA_Dataset/
│   ├── AMQA_Dataset.jsonl            # Final AMQA dataset: adversarial variants from the GPT agent, revised by human reviewers
│   ├── Vignette_GPT-4.1.jsonl        # Adversarial clinical vignette variants from the GPT agent
│   ├── Vignette_Deepseek-v3.jsonl    # Adversarial clinical vignette variants from the DeepSeek agent
│   └── ...
├── Scripts/
│   ├── AMQA_generation_batch/        # Python scripts for generating adversarial variants from neutralized clinical vignettes
│   ├── AMQA_Benchmark_LLM/           # Python scripts for benchmarking given LLMs
│   └── ...
├── Results/
│   ├── AMQA_Benchmark_Answer_{LLM_Name}.jsonl    # Raw answers from {LLM_Name} on original, neutralized, and variant vignettes
│   └── AMQA_Benchmark_Summary_{LLM_Name}.jsonl   # Statistical results of benchmarking {LLM_Name}
├── Figures/
│   ├── AMQA_Banner                   # Banner figure of the AMQA benchmark dataset
│   └── AMQA_Workflow                 # Workflow figure for the creation of the AMQA benchmark dataset
└── README.md
```
AMQA evaluates LLM bias along three dimensions:

- Individual Fairness: consistency of a model's answers across counterfactual variants of the same vignette
- Group Fairness: accuracy disparity between demographic groups
- Significance Testing: McNemar's test for evaluating answer consistency (see the sketch after this list)
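As an illustration of the significance test, the following sketch compares a model's answers on original versus counterfactual variants using McNemar's test from `statsmodels`. The answer arrays and gold labels are invented placeholders, and the exact aggregation used in `Scripts/AMQA_Benchmark_LLM` may differ:

```python
# Minimal sketch: fairness metrics and McNemar's test on paired answers.
# All answer arrays below are invented placeholders.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

gold         = np.array(["A", "C", "B", "D", "A"])  # gold answers (placeholder)
ans_original = np.array(["A", "C", "B", "B", "A"])  # model answers on original vignettes
ans_variant  = np.array(["A", "B", "B", "D", "C"])  # model answers on one adversarial variant

correct_orig = ans_original == gold
correct_var  = ans_variant == gold

# Group fairness: accuracy disparity between the two conditions.
print("accuracy gap:", correct_orig.mean() - correct_var.mean())

# Individual fairness: fraction of vignettes where the answer flips.
print("flip rate:", (ans_original != ans_variant).mean())

# 2x2 contingency table of paired correctness for McNemar's test:
# rows = correct on original (yes/no), cols = correct on variant (yes/no).
table = np.array([
    [np.sum( correct_orig &  correct_var), np.sum( correct_orig & ~correct_var)],
    [np.sum(~correct_orig &  correct_var), np.sum(~correct_orig & ~correct_var)],
])
result = mcnemar(table, exact=True)  # exact binomial test, suitable for small samples
print("McNemar p-value:", result.pvalue)
```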
Format: For ease of use, we release the dataset as `.jsonl` files and make it publicly available on both the AMQA GitHub Repository and the AMQA Hugging Face Page.
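Because each line of a `.jsonl` file is a standalone JSON record, the files can be read without any special tooling. A minimal sketch, where the file path assumes the repository layout shown above:

```python
# Read the released dataset directly from its .jsonl file.
import json

with open("AMQA_Dataset/AMQA_Dataset.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]

print(len(samples))        # number of samples
print(sorted(samples[0]))  # property names of the first sample
```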
Properties: The AMQA dataset currently contains 801 samples. Each sample has 39 properties, including "question id", "original question", "neutralized question", six "adversarial description" fields, six "adversarial variant" fields, six "variant tag" fields, and the answers to the original question, the neutralized question, and the six variants...
To access the AMQA benchmark dataset from the Hugging Face Hub, copy and run the following code:

```python
from datasets import load_dataset

# Download the AMQA dataset from the Hugging Face Hub.
ds = load_dataset("Showing-KCL/AMQA")
```
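Once loaded, you can inspect the splits and a sample's properties. The split name "train" is the usual `datasets` default and is assumed here:

```python
print(ds)                   # available splits and their sizes
sample = ds["train"][0]     # "train" split name is an assumption
print(list(sample.keys()))  # the 39 property names
```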
Ying Xiao is the maintainer of the AMQA dataset and this repository. If you have any problems or suggestions regarding the AMQA dataset or our source code, please feel free to reach out by emailing ying.1.xiao@kcl.ac.uk.