---
task_categories:
- image-text-to-text
license: other
language:
- ru
tags:
- mathematics
- education
- vlm
- assessment
- russian
- handwritten
---
# EGE Math Solutions Assessment Benchmark

## Dataset Description
This dataset contains student solutions to Russian Unified State Exam (EGE) mathematics problems, with reference scores for benchmarking automated evaluation systems.
🖼️ All images are properly embedded and accessible!
The dataset includes three types of images for each solution:
- Student solutions with correct answers shown (152 images)
- Student solutions without answers for blind evaluation (152 images)
- True/reference solutions for each problem (144 images)
## Dataset Statistics
- Total examples: 122
- Total images: 448
- Task types: 7
- Score range: 0-4 points
## Task Types

| Task Type                | Count |
|--------------------------|-------|
| Financial mathematics    | 15    |
| Logarithmic inequalities | 19    |
| Number theory problem    | 16    |
| Planimetric problem      | 17    |
| Problem with parameters  | 16    |
| Stereometric problem     | 18    |
| Trigonometric equations  | 21    |
## Score Distribution

| Score | Count | Percentage |
|-------|-------|------------|
| 0     | 28    | 23.0%      |
| 1     | 40    | 32.8%      |
| 2     | 35    | 28.7%      |
| 3     | 11    | 9.0%       |
| 4     | 8     | 6.6%       |
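The counts above can be recomputed from the released data. The sketch below assumes the single `train` split used in the Usage section further down; adjust the split name if your local copy differs.

```python
# Minimal sketch: recompute the task-type and score distributions above.
# Assumes a single 'train' split, as shown in the Usage section below.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset('Karifannaa/EGE_Math_Solutions_Assessment_Benchmark')
train = dataset['train']

print(f"Total examples: {len(train)}")

for task_type, count in sorted(Counter(train['task_type']).items()):
    print(f"{task_type}: {count}")

for score, count in sorted(Counter(train['score']).items()):
    print(f"Score {score}: {count} ({100 * count / len(train):.1f}%)")
```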
## Dataset Structure
Each example contains:
- `solution_id`: Unique identifier for the solution
- `task_id`: Task type ID (13-19)
- `example_id`: Specific example identifier
- `task_type`: Description of the task type in English
- `score`: Reference score (0-4)
- `parts_count`: Number of parts in the solution
- `images_with_answer`: List of PIL Images containing the student solution with the correct answer
- `images_without_answer`: List of PIL Images containing only the student solution
- `images_with_true_solution`: List of PIL Images containing the task with the true solution
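As a quick illustration of how these fields can be used, the sketch below selects subsets of the benchmark with `datasets.Dataset.filter`. It assumes the `train` split from the Usage section; `task_id` 13 is used purely as an example value from the documented 13-19 range, and the exact type of the stored values should be checked against the data.

```python
# Illustrative sketch: select subsets using the fields listed above.
# Assumes the 'train' split shown in the Usage section; task_id 13 is
# just an example value from the documented 13-19 range.
from datasets import load_dataset

dataset = load_dataset('Karifannaa/EGE_Math_Solutions_Assessment_Benchmark')
train = dataset['train']

# All solutions to a single task type
task_13 = train.filter(lambda ex: ex['task_id'] == 13)

# All solutions that received the top score of the 0-4 range
top_scored = train.filter(lambda ex: ex['score'] == 4)

print(len(task_13), len(top_scored))
```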
## Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset('Karifannaa/EGE_Math_Solutions_Assessment_Benchmark')

# Access an example
example = dataset['train'][0]
print(f"Solution ID: {example['solution_id']}")
print(f"Task Type: {example['task_type']}")
print(f"Score: {example['score']}")

# View images (all images are PIL Image objects)
print(f"Images with answer: {len(example['images_with_answer'])}")
print(f"Images without answer: {len(example['images_without_answer'])}")
print(f"Images with true solution: {len(example['images_with_true_solution'])}")

# Display an image
if example['images_with_answer']:
    img = example['images_with_answer'][0]
    img.show()  # Works directly - the images are embedded in the dataset
```
## Image Access
All images are stored as PIL Image objects and can be directly accessed:
```python
# Get the first example
example = dataset['train'][0]

# Access the different types of images
student_solution_with_answer = example['images_with_answer'][0]
student_solution_without_answer = example['images_without_answer'][0]
true_solution = example['images_with_true_solution'][0]

# Images are PIL Image objects with standard methods
print(f"Image size: {student_solution_with_answer.size}")
print(f"Image mode: {student_solution_with_answer.mode}")

# Save an image to disk
student_solution_with_answer.save("solution.png")
```
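Because the benchmark targets VLM-based assessment, a common follow-up step is packaging images for a vision-language model. The sketch below only shows a generic way to base64-encode the PIL images as PNGs, a format many VLM APIs accept; the `payload` layout is made up for illustration and does not reproduce the evaluation protocol from the CHECK-MAT paper.

```python
# Hypothetical sketch: package one solution's images for a VLM grader.
# Only the base64/PNG encoding is standard; the payload keys are
# illustrative, not an API defined by this benchmark.
import base64
import io

from datasets import load_dataset

dataset = load_dataset('Karifannaa/EGE_Math_Solutions_Assessment_Benchmark')
example = dataset['train'][0]

def to_base64_png(pil_image):
    """Encode a PIL image as a base64 PNG string."""
    buffer = io.BytesIO()
    pil_image.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

payload = {
    # Answer-free pages for blind grading
    "student_pages": [to_base64_png(img) for img in example['images_without_answer']],
    # Reference solution pages as grading context
    "reference_pages": [to_base64_png(img) for img in example['images_with_true_solution']],
    "max_score": 4,
}
```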
## License
This dataset is provided for research and educational purposes.
## Citation
If you use this work in your research, please consider citing it.
Plain Text:
Khrulev, R. (2025). CHECK-MAT: Checking Hand-Written Mathematical Answers for the Russian Unified State Exam. arXiv preprint arXiv:2507.22958. https://arxiv.org/abs/2507.22958
BibTeX:

```bibtex
@misc{khrulev2025checkmatcheckinghandwrittenmathematical,
  title={CHECK-MAT: Checking Hand-Written Mathematical Answers for the Russian Unified State Exam},
  author={Ruslan Khrulev},
  year={2025},
  eprint={2507.22958},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.22958},
}
```