---
license: apache-2.0
task_categories:
- image-to-text
language:
- en
pretty_name: dataset-BotDetect-CAPTCHA-Generator
size_categories:
- 1M<n<10M
---
# 📦 CAPTCHA Datasets (style0–style59)
This repository contains CAPTCHA datasets for training CRNN+CTC models. Each archive `dataset_*.tar.gz` contains the same 60 styles (generated with BotDetect Captcha), structured as folders `style0` through `style59`. Each style folder holds N images, where N is given by the archive name.
## 🗃️ Available Archives
- `dataset_500.tar.gz` → 500 images per style (≈ 30,000 total)
- `dataset_1000.tar.gz` → 1,000 images per style (≈ 60,000 total)
- `dataset_5000.tar.gz` → 5,000 images per style (≈ 300,000 total)
- `dataset_10000.tar.gz` → 10,000 images per style (≈ 600,000 total)
- `dataset_20000.tar.gz` → 20,000 images per style (≈ 1,200,000 total)
- `dataset_1000_rand.tar.gz` → randomized variant with 1,000 images per style
Naming convention: `dataset_{N}.tar.gz` means each `styleX` folder holds exactly N PNG images.
## 📁 Directory Layout
```
/path/to/dataset
├── style0/
│   ├── A1B2C.png
│   ├── 9Z7QK.png
│   └── ...
├── style1/
│   ├── K9NO2.png
│   └── ...
├── ...
└── style59/
```
- Filename = ground-truth label (5 uppercase alphanumeric chars), e.g. `K9NO2.png`.
- Image size = 50×250 pixels (H=50, W=250), grayscale PNG.
- Label rule = regex `^[A-Z0-9]{5}$` (exactly 5 chars, uppercase letters and digits).
## 🧰 Extraction
```bash
# example: extract into /workspace/dataset_1000
mkdir -p /workspace/dataset_1000
tar -xvzf dataset_1000.tar.gz -C /workspace/dataset_1000
```
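If you prefer to stay in Python, the standard-library `tarfile` module does the same job. A minimal sketch (the `filter="data"` argument assumes Python ≥ 3.12):

```python
import tarfile
from pathlib import Path

dest = Path("/workspace/dataset_1000")
dest.mkdir(parents=True, exist_ok=True)

with tarfile.open("dataset_1000.tar.gz", "r:gz") as tar:
    # filter="data" rejects path traversal and unusual members (Python >= 3.12)
    tar.extractall(dest, filter="data")
```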
## ✅ Quick File Counts
```bash
# total PNG files (depth 2 to only count inside style folders)
find /workspace/dataset_1000 -maxdepth 2 -type f -name '*.png' | wc -l

# per-style counts without a for-loop (prints "count styleX")
find /workspace/dataset_1000 -mindepth 2 -maxdepth 2 -type f -name '*.png' \
  | awk -F/ '{print $(NF-1)}' | sort | uniq -c | sort -k2
```
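The same counts in Python, for cross-checking (a small sketch using only the standard library):

```python
from collections import Counter
from pathlib import Path

root = Path("/workspace/dataset_1000")

# count PNGs per immediate styleX parent folder
counts = Counter(p.parent.name for p in root.glob("style*/*.png"))
print("total:", sum(counts.values()))
for style in sorted(counts, key=lambda s: int(s.removeprefix("style"))):
    print(counts[style], style)
```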
Expected totals:
- `dataset_500` → 500 × 60 = 30,000 files
- `dataset_1000` → 60,000 files
- `dataset_5000` → 300,000 files
- `dataset_10000` → 600,000 files
- `dataset_20000` → 1,200,000 files
## 🧪 Label Validation
```bash
# list filenames that violate the strict 5-char uppercase/digit rule
find /workspace/dataset_1000 -type f -name '*.png' \
  | awk -F/ '{print $NF}' | sed 's/\.png$//' \
  | grep -vE '^[A-Z0-9]{5}$' | head
```
CSV report via Python (pandas):
```python
import os
from glob import glob

import pandas as pd

root = "/workspace/dataset_1000"

# collect (style, filepath, label) for every PNG
rows = []
for s in range(60):
    for p in glob(os.path.join(root, f"style{s}", "*.png")):
        rows.append({"style": f"style{s}", "filepath": p,
                     "label": os.path.basename(p)[:-4]})

df = pd.DataFrame(rows)
bad = df[~df["label"].str.match(r"^[A-Z0-9]{5}$")]
print("Invalid labels:", len(bad))
if len(bad):
    bad.to_csv("invalid_labels.csv", index=False)
```
## 🧩 Example: Load to DataFrame
```python
import os
from glob import glob

import pandas as pd

def load_dataset(root_dir):
    """Walk style0..style59 and return (filepath, label, style) rows."""
    data = []
    for style_id in range(60):
        folder = os.path.join(root_dir, f"style{style_id}")
        for path in glob(os.path.join(folder, "*.png")):
            label = os.path.splitext(os.path.basename(path))[0]
            data.append((path, label, f"style{style_id}"))
    df = pd.DataFrame(data, columns=["filepath", "label", "style"])
    # enforce strict label rule
    df = df[df["label"].str.match(r"^[A-Z0-9]{5}$")].reset_index(drop=True)
    return df

df = load_dataset("/workspace/dataset_1000")
print(df.head(), len(df))
```
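Since the images target CRNN+CTC training, here is a minimal PyTorch `Dataset` sketch built on the DataFrame above. PyTorch and Pillow are assumptions of this example, not dependencies pinned by this repository; index 0 is reserved for the CTC blank, matching the `torch.nn.CTCLoss` default.

```python
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"         # 36 classes
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARSET)}  # 0 = CTC blank

class CaptchaDataset(Dataset):
    def __init__(self, df):
        # expects the (filepath, label, style) DataFrame from load_dataset()
        self.df = df.reset_index(drop=True)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, i):
        row = self.df.iloc[i]
        img = Image.open(row["filepath"]).convert("L")   # grayscale, H=50, W=250
        x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
        x = x.unsqueeze(0)                               # shape (1, 50, 250)
        y = torch.tensor([CHAR_TO_IDX[c] for c in row["label"]])  # shape (5,)
        return x, y

ds = CaptchaDataset(load_dataset("/workspace/dataset_1000"))
```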
## 🔀 Merge Datasets (no loop)
Add new files without overwriting existing ones (the directory `--include` rules must precede the catch-all `--exclude='*'`, since rsync filters are first-match-wins):
```bash
rsync -av \
  --ignore-existing \
  --include='style[0-9]/' \
  --include='style[1-5][0-9]/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```
Overwrite only if source is newer:
```bash
rsync -av --update \
  --include='style[0-9]/' \
  --include='style[1-5][0-9]/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```
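An equivalent merge in Python, if rsync is unavailable (a sketch mirroring `--ignore-existing` with `pathlib`/`shutil`):

```python
import shutil
from pathlib import Path

src = Path("/workspace/dataset_10000")
dst = Path("/workspace/dataset_20000")

for p in src.glob("style*/*.png"):
    target = dst / p.parent.name / p.name
    if not target.exists():  # skip files already present, like --ignore-existing
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)
```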
## 🔐 Checksums
Optional: keep SHA-256 checksums for integrity checks.
```bash
sha256sum dataset_1000.tar.gz > dataset_1000.tar.gz.sha256
sha256sum -c dataset_1000.tar.gz.sha256
```
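The same check can be done from Python with `hashlib`, e.g. to verify an archive before extraction (a small sketch; the expected digest is read from the `.sha256` file written above):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

expected = open("dataset_1000.tar.gz.sha256").read().split()[0]
assert sha256_of("dataset_1000.tar.gz") == expected, "checksum mismatch"
```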
## 📌 Notes
- All images are prepared for CRNN+CTC models with input (H, W) = (50, 250), grayscale.
- Character set: digits 0–9 and uppercase letters A–Z (36 classes).
- Each style emulates a distinct visual variant (font/noise/warp) from BotDetect.
## 📫 Contact
For questions, dataset issues, or custom subsets, please open an issue in this repository.