Dataset Summary
This dataset is an extension of the one found here, adding the problem data for each submission, such as the statement, title, and difficulty.
It aims to facilitate the creation of sophisticated, multi-turn, coding-focused dialogue datasets for training reasoning Large Language Models (LLMs), particularly via Supervised Fine-Tuning (SFT) and Knowledge Distillation.
It also serves as a robust foundation for problem-solving tasks with LLMs.
The dataset includes both accepted and failed solutions from Codeforces contests.
It covers contests up to Codeforces Round 1004 (Feb 11, 2025).
In total, it features 10,385 unique problems, 1,879 contests, and 11,949,042 submissions across more than 60 programming languages (including the different compilation environments for each language).
(The dataset is not accessible to the public; please do not contact me regarding access.)
Languages
The problem descriptions are in English.
The dataset includes submissions in various programming languages.
Here is a table with the 10 most used languages.
| Lang |
| --- |
| C++17 (GCC 7-32) |
| C++14 (GCC 6-32) |
| GNU C++11 |
| GNU C++ |
| Python 3 |
| C++17 (GCC 9-64) |
| Java 8 |
| PyPy 3 |
| MS C++ |
| C++20 (GCC 11-64) |
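
As a rough illustration of how the `language` column (described under Annotations below) can be used, here is a minimal sketch that computes a language distribution with pandas. The DataFrame contents are a tiny hypothetical sample, not the real data; in practice you would load the dataset files into `df` first.

```python
import pandas as pd

# Hypothetical sample; replace with the real dataset rows loaded into `df`.
df = pd.DataFrame({
    "language": [
        "C++17 (GCC 7-32)", "Python 3", "C++17 (GCC 7-32)",
        "Java 8", "PyPy 3", "C++17 (GCC 7-32)",
    ],
})

# Count submissions per language and show the most used ones.
language_counts = df["language"].value_counts()
print(language_counts.head(10))
```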
Verdict
The verdict indicates the outcome of each submission.
It contains either the result or a score based on the number of subtasks solved for the given problem.
Here is a table showing some of the possible results.
| Result |
| --- |
| OK |
| WRONG_ANSWER |
| TIME_LIMIT_EXCEEDED |
| RUNTIME_ERROR |
| COMPILATION_ERROR |
| CHALLENGED |
| MEMORY_LIMIT_EXCEEDED |
| SKIPPED |
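
Since both accepted and failed solutions are included, a common first step is splitting rows by verdict. The sketch below is a minimal example assuming the rows live in a pandas DataFrame with the `verdict` column described under Annotations; the sample data is hypothetical.

```python
import pandas as pd

# Hypothetical sample; replace with the real dataset rows.
df = pd.DataFrame({
    "verdict": ["OK", "WRONG_ANSWER", "OK", "TIME_LIMIT_EXCEEDED"],
    "code": ["...", "...", "...", "..."],
})

accepted = df[df["verdict"] == "OK"]   # accepted solutions
failed = df[df["verdict"] != "OK"]     # every other outcome
print(len(accepted), "accepted /", len(failed), "failed")
```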
Data Splits
There are no splits; the entire dataset is provided as a single training split.
Dataset Creation
August 2025
Source Data
The source of the dataset is Codeforces contests.
Annotations
The dataset includes the following columns:
- contest_number: contest number (e.g. 1024).
- letter: letter assigned to the problem within its contest (e.g. A).
- statement: Markdown statement of the problem.
- difficulty: difficulty rating of the problem (e.g. 1200).
- title: title of the problem.
- language: programming language used for the solution (see previous table for details).
- verdict: outcome of the submission (see previous table for details).
- passed_test_count: number of tests passed by the submission.
- execution_time: time taken by the submission in milliseconds.
- execution_memory: memory used by the submission in kilobytes.
- creation_time: Unix timestamp of when the submission was created.
- code: code of the submission.
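
As one possible way to use these columns for SFT-style data construction (as mentioned in the summary), the following sketch pairs the problem `statement` with the `code` of accepted submissions. The column names match the list above; the example row and the prompt format are hypothetical.

```python
import pandas as pd

# Hypothetical single-row sample following the column schema above.
df = pd.DataFrame([{
    "contest_number": 1024,
    "letter": "A",
    "title": "Example Problem",
    "statement": "Given an integer n, print n + 1.",
    "difficulty": 800,
    "language": "Python 3",
    "verdict": "OK",
    "code": "n = int(input())\nprint(n + 1)",
}])

def to_sft_pair(row: pd.Series) -> dict:
    """Build a single prompt/completion pair from one accepted submission."""
    prompt = (
        f"Solve the following problem in {row['language']}:\n\n"
        f"{row['title']}\n\n{row['statement']}"
    )
    return {"prompt": prompt, "completion": row["code"]}

# Keep only accepted submissions before building training pairs.
pairs = [to_sft_pair(r) for _, r in df[df["verdict"] == "OK"].iterrows()]
print(pairs[0]["prompt"])
```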
If you have any suggestions, comments, or proposals, please contact me here.