Datasets: Upload 6 files

- .gitattributes +1 -0
- README.md +182 -8
- create_metadata_table.py +138 -0
- custom_tokens_vocab.txt +0 -0
- github_python_metadata.csv +0 -0
- mega_licensed_corpus_redacted.txt +3 -0
- python_files.txt +0 -0
.gitattributes
CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+mega_licensed_corpus_redacted.txt filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,10 +1,184 @@
---
annotations_creators:
- author
license:
- gpl-3.0
multilinguality:
- monolingual
pretty_name: GitHub-Python
dataset_name: github-python
dataset_type: code
tags:
- code
- python
- code-generation
size_categories:
- 100K<n<1M
task_categories:
- text-generation
task_ids:
- code-completion
---

# GitHub-Python

A **767 MB** corpus of permissively licensed Python code drawn from public GitHub repositories.
The dataset was created to support training and evaluation of **code-completion / generation** models.

## Dataset at a glance

|                  | Value                                         |
| ---------------- | --------------------------------------------- |
| Files            | 53,017 `.py` files                            |
| Repositories     | 16,447                                        |
| Owners           | 12,515                                        |
| Corpus size      | 767 MB (`mega_licensed_corpus_redacted.txt`)  |
| Vocabulary       | 443,431 tokens (`custom_tokens_vocab.txt`)    |
| Time period      | Commits ≥ 2015-01-01                          |
| License coverage | MIT, Apache-2.0, BSD, ISC, Unlicense          |
| Removed secrets  | ✅ – all hard-coded secrets/API keys redacted |

Numbers were obtained from the final redacted corpus and companion metadata.

---

## Dataset structure

```
huggingface_dataset/
├─ mega_licensed_corpus_redacted.txt   # concatenated code corpus
├─ python_files.txt                    # list of raw file URLs (one per line)
└─ custom_tokens_vocab.txt             # `<token>\t<id>` vocabulary file
```

### File separator

Individual files are concatenated with the sentinel line:

```
# <FILESEP>
```

Anything following the sentinel until the next sentinel (or EOF) is the source
code of one file.
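
To recover individual files from the corpus, split on the sentinel; a minimal
sketch (assumes the corpus fits in memory):

```python
# Split the concatenated corpus back into per-file source strings.
with open("mega_licensed_corpus_redacted.txt", encoding="utf-8") as f:
    corpus = f.read()

files = [chunk.strip("\n") for chunk in corpus.split("# <FILESEP>") if chunk.strip()]
print(f"Recovered {len(files)} files")
```
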
---

## Collection methodology

1. **Repository discovery**

   - Queried the GitHub REST API for projects with **≥ 10 stars**
     (earlier iterations used 100+, later expanded for coverage); a sketch of
     this step appears after the list.
   - Only repositories with primary language _Python_ and a last commit ≥ 2015 were kept.

2. **File filtering**

   - Retain files whose **size ∈ [1 KB, 100 KB]**.
   - Exclude common build/packaging scripts (`setup.py`, `__init__.py`, etc.).

3. **License compliance**

   - Allowed: MIT, Apache-2.0, BSD-2/3-Clause, ISC, Unlicense.
   - GPL, LGPL, AGPL, and proprietary licenses were **excluded**.

4. **Deduplication**

   - Files are keyed by unique SHA hashes; duplicates are skipped.

5. **Formatting & cleaning**

   - Formatted with _autopep8_ to normalise whitespace.
   - A custom script removed trailing whitespace and normalised newlines.

6. **Secret redaction**

   - A `truffleHog` pass plus custom regexes removed >150 active credentials.
   - The redacted corpus is stored as `mega_licensed_corpus_redacted.txt`.
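
As referenced in step 1, a minimal sketch of repository discovery via GitHub's
search API (illustrative parameters, not the exact query used to build the
dataset):

```python
import requests

# Search for Python repositories with >= 10 stars pushed since 2015.
# Pagination and rate-limit handling are omitted for brevity.
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "language:python stars:>=10 pushed:>=2015-01-01", "per_page": 100},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
for repo in resp.json()["items"]:
    print(repo["full_name"], repo["stargazers_count"])
```
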
---

## Custom tokenisation

The accompanying `custom_tokens_vocab.txt` implements a **Python-aware
sub-token scheme**:

1. Strip doc-strings & comments.
2. Split on:
   - camel-case boundaries (`CamelCase` → `Camel`, `Case`)
   - underscores and spaces
   - indentation & newlines (preserved as a `<newline>` token)
3. Rare tokens (frequency < 10) were dropped, yielding the 443k-token vocabulary.

Example:

```python
def helloWorld(value):
    return value + 1
```

tokenises to:

```
def hello world ( value ) <newline> return value + 1 <newline>
```

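For illustration, a rough Python approximation of these splitting rules (not
the exact script used to build the vocabulary; note it also emits punctuation
such as the trailing colon):

```python
import re

def subtokenize(code: str) -> list[str]:
    # Split camel-case runs, drop underscores, keep punctuation,
    # lower-case everything, and preserve line breaks as <newline>.
    tokens = []
    for line in code.splitlines():
        parts = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])|[^\sA-Za-z0-9_]", line)
        tokens.extend(p.lower() for p in parts)
        tokens.append("<newline>")
    return tokens

print(" ".join(subtokenize("def helloWorld(value):\n    return value + 1")))
# def hello world ( value ) : <newline> return value + 1 <newline>
```
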
---

## Usage

```python
from datasets import load_dataset

ds = load_dataset("jblitzar/github-python", split="train")

print(ds[0]["code"][:300])  # raw source code
```

If you prefer token-level examples (for instance, to keep memory usage down),
map a tokenizer over the dataset:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import WhitespaceSplit

# Build a word-level tokenizer from the `<token>\t<id>` vocabulary file
# (assumes the vocabulary includes an `<unk>` entry).
with open("custom_tokens_vocab.txt", encoding="utf-8") as f:
    vocab = {t: int(i) for t, i in (line.rstrip("\n").split("\t") for line in f if line.strip())}

tok = Tokenizer(WordLevel(vocab, unk_token="<unk>"))
tok.pre_tokenizer = WhitespaceSplit()

def encode(ex):
    ex["input_ids"] = tok.encode(ex["code"]).ids
    return ex

ds = ds.map(encode, remove_columns=["code"])
```

---

## Ethical considerations & limitations

- **Licenses respected** – only permissively licensed code is included; retain
  NOTICE files when redistributing derivative works.
- **Secrets removed** – automated and manual audits were performed, yet users
  **must not assume zero secrets**; re-audit before public deployments.
- **Code quality** – projects vary in style and correctness; models trained on
  this corpus may replicate bugs or vulnerable patterns.

---

## Citation

If you use this dataset, please cite:

```
@misc{github-python-2024,
  author       = {JBlitzar},
  title        = {GitHub-Python: A Permissively Licensed Corpus of Python Code},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/jblitzar/github-python}},
  note         = {Version 1.0}
}
```

---

## License

Dataset card and aggregation scripts: **GPLv3**.
Each code snippet remains under its **original repository license** (MIT,
Apache-2.0, BSD, ISC, etc.). Users must comply with upstream notices when
redistributing code or derivatives.
create_metadata_table.py
ADDED
@@ -0,0 +1,138 @@
#!/usr/bin/env python3
"""
Create a metadata table from GitHub Python file URLs.

This script processes the file URLs from python_files.txt and creates a tabular
CSV file with repository metadata including owner, name, file path, and URLs.
"""

import os
import csv
import pandas as pd
from collections import Counter
from urllib.parse import urlparse
from tqdm import tqdm


def parse_github_url(url):
    """
    Parse a GitHub URL to extract repository owner, name, and file path.

    Handles both raw.githubusercontent.com and github.com URLs.

    Args:
        url (str): GitHub URL

    Returns:
        dict: Dictionary with repo_owner, repo_name, file_path, repo_url
    """
    url = url.strip()

    # Initialize default values
    result = {
        "repo_owner": "unknown",
        "repo_name": "unknown",
        "file_path": "",
        "file_url": url,
        "repo_url": ""
    }

    try:
        # Parse URL to get components
        parsed = urlparse(url)
        path_parts = parsed.path.strip('/').split('/')

        # Handle raw.githubusercontent.com URLs
        # Format: https://raw.githubusercontent.com/owner/repo/branch/path/to/file.py
        if 'raw.githubusercontent.com' in url:
            if len(path_parts) >= 3:
                result["repo_owner"] = path_parts[0]
                result["repo_name"] = path_parts[1]
                # Skip branch (path_parts[2]) and get the rest as file path
                result["file_path"] = '/'.join(path_parts[3:])
                result["repo_url"] = f"https://github.com/{path_parts[0]}/{path_parts[1]}"

        # Handle github.com URLs
        # Format: https://github.com/owner/repo/blob/branch/path/to/file.py
        elif 'github.com' in url:
            if len(path_parts) >= 4 and path_parts[2] == 'blob':
                result["repo_owner"] = path_parts[0]
                result["repo_name"] = path_parts[1]
                # Skip 'blob' and branch, get the rest as file path
                result["file_path"] = '/'.join(path_parts[4:])
                result["repo_url"] = f"https://github.com/{path_parts[0]}/{path_parts[1]}"

        return result

    except Exception as e:
        print(f"Error parsing URL {url}: {e}")
        return result


def process_file_urls(input_file, output_file):
    """
    Process GitHub file URLs and create a metadata CSV file.

    Args:
        input_file (str): Path to the file containing GitHub URLs
        output_file (str): Path to the output CSV file
    """
    print(f"Processing URLs from {input_file}...")

    # Read file URLs
    with open(input_file, 'r', encoding='utf-8') as f:
        urls = [line.strip() for line in f if line.strip()]

    # Parse each URL
    metadata = []
    for url in tqdm(urls, desc="Parsing URLs"):
        metadata.append(parse_github_url(url))

    # Convert to DataFrame
    df = pd.DataFrame(metadata)

    # Save to CSV
    # Use minimal quoting to remain compatible with the standard csv module
    df.to_csv(output_file, index=False, quoting=csv.QUOTE_MINIMAL)
    print(f"Metadata saved to {output_file}")

    # Print statistics
    unique_repos = df[['repo_owner', 'repo_name']].drop_duplicates()
    unique_owners = df['repo_owner'].nunique()

    print("\n=== Dataset Statistics ===")
    print(f"Total files: {len(df)}")
    print(f"Unique repositories: {len(unique_repos)}")
    print(f"Unique repository owners: {unique_owners}")

    # Top repositories by file count
    repo_counts = Counter(zip(df['repo_owner'], df['repo_name']))
    print("\nTop 10 repositories by file count:")
    for (owner, repo), count in repo_counts.most_common(10):
        print(f"  {owner}/{repo}: {count} files")

    # File extensions
    extensions = Counter([os.path.splitext(path)[1] for path in df['file_path'] if path])
    print("\nFile extensions:")
    for ext, count in extensions.most_common(5):
        print(f"  {ext or 'No extension'}: {count} files")

    # Repository owners with the most files
    owner_repo_counts = Counter(df['repo_owner'])
    print("\nTop 5 repository owners:")
    for owner, count in owner_repo_counts.most_common(5):
        print(f"  {owner}: {count} files")


if __name__ == "__main__":
    input_file = "python_files.txt"
    output_file = "github_python_metadata.csv"

    # Check if input file exists
    if not os.path.exists(input_file):
        print(f"Error: Input file {input_file} not found.")
        print("Please make sure the file exists in the current directory.")
        exit(1)

    process_file_urls(input_file, output_file)
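
For a quick sanity check of the generated table (an illustrative sketch; the
column names follow the `result` dictionary above):

```python
import pandas as pd

# Inspect the metadata table written by create_metadata_table.py.
df = pd.read_csv("github_python_metadata.csv")
print(df.columns.tolist())  # repo_owner, repo_name, file_path, file_url, repo_url
print(df.head())
```
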
custom_tokens_vocab.txt
ADDED
The diff for this file is too large to render.
github_python_metadata.csv
ADDED
The diff for this file is too large to render.
mega_licensed_corpus_redacted.txt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25872f695fefeb4106ddd7006c798767be925a67abadf5d22da240541cad6717
size 767255913
python_files.txt
ADDED
The diff for this file is too large to render.