Datasets: Upload folder using huggingface_hub

Files changed:

- README.md (+176 −0)
- dataset_10000.tar.gz (+3 −0)
- dataset_1000_rand.tar.gz (+3 −0)
- dataset_20000.tar.gz (+3 −0)
- dataset_500.tar.gz (+3 −0)
- dataset_5000.tar.gz (+3 −0)
README.md
ADDED
📦 CAPTCHA Datasets (style0–style59)
====================================

This repository contains CAPTCHA datasets for training CRNN+CTC models. Each archive `dataset_*.tar.gz` includes **60 styles** (from [BotDetect Captcha](https://captcha.com/)), structured as folders `style0` through `style59`. Each style contains _N_ images depending on the archive name.

* * *
🗂️ Available Archives
----------------------

* `dataset_500.tar.gz` → 500 images per style (30,000 total)
* `dataset_1000.tar.gz` → 1,000 images per style (60,000 total)
* `dataset_5000.tar.gz` → 5,000 images per style (300,000 total)
* `dataset_10000.tar.gz` → 10,000 images per style (600,000 total)
* `dataset_20000.tar.gz` → 20,000 images per style (1,200,000 total)
* `dataset_1000_rand.tar.gz` → randomized variant with 1,000 images per style

**Naming convention:** `dataset_{N}.tar.gz` means each `styleX` folder holds exactly `N` PNG images.

* * *
📁 Directory Layout
-------------------

```
/path/to/dataset
├── style0/
│   ├── A1B2C.png
│   ├── 9Z7QK.png
│   └── ...
├── style1/
│   ├── K9NO2.png
│   └── ...
├── ...
└── style59/
```

* **Filename** = ground-truth label (5 uppercase alphanumeric chars), e.g. `K9NO2.png`.
* **Image size** = `50×250` pixels (H=50, W=250), grayscale PNG.
* **Label rule** = regex `^[A-Z0-9]{5}$` (exactly 5 chars, uppercase letters & digits).

* * *
🧰 Extraction
-------------

```bash
# example: extract into /workspace/dataset_1000
mkdir -p /workspace/dataset_1000
tar -xvzf dataset_1000.tar.gz -C /workspace/dataset_1000
```

* * *
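If `tar` isn't available, the same extraction can be sketched with Python's standard `tarfile` module; the destination path below is just the example path used above:

```python
import tarfile
from pathlib import Path

def extract_archive(archive_path, dest_dir):
    """Extract a dataset_*.tar.gz archive into dest_dir, creating it if needed."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        # On Python >= 3.12, consider filter="data" to reject unsafe member paths.
        tar.extractall(dest)
    return dest

# extract_archive("dataset_1000.tar.gz", "/workspace/dataset_1000")
```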
✅ Quick File Counts
-------------------

```bash
# total PNG files (depth 2 to only count inside style folders)
find /workspace/dataset_1000 -maxdepth 2 -type f -name '*.png' | wc -l

# per-style counts without a for-loop (prints "count styleX");
# $(NF-1) is the parent directory of each file, i.e. the styleX folder
find /workspace/dataset_1000 -mindepth 2 -maxdepth 2 -type f -name '*.png' \
  | awk -F/ '{print $(NF-1)}' | sort | uniq -c | sort -k2
```

Expected totals:

* `dataset_500` → 500 × 60 = 30,000 files
* `dataset_1000` → 60,000 files
* `dataset_5000` → 300,000 files
* `dataset_10000` → 600,000 files
* `dataset_20000` → 1,200,000 files

* * *
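The same per-style tally can be done in Python; a small sketch using only the standard library (the root path is the example path from above):

```python
from collections import Counter
from pathlib import Path

def count_per_style(root_dir):
    """Return a {style_name: png_count} dict for a dataset root."""
    counts = Counter(p.parent.name for p in Path(root_dir).glob("style*/*.png"))
    return dict(sorted(counts.items()))

# count_per_style("/workspace/dataset_1000")
```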
🧪 Label Validation
-------------------

```bash
# list filenames that violate the strict 5-char uppercase/digit rule
find /workspace/dataset_1000 -type f -name '*.png' \
  | awk -F/ '{print $NF}' | sed 's/\.png$//' \
  | grep -vE '^[A-Z0-9]{5}$' | head
```

CSV report via Python (pandas):

```python
import os
from glob import glob

import pandas as pd

root = "/workspace/dataset_1000"
rows = []
for s in range(60):
    for p in glob(os.path.join(root, f"style{s}", "*.png")):
        rows.append({"style": f"style{s}", "filepath": p, "label": os.path.basename(p)[:-4]})

df = pd.DataFrame(rows)
# na=False so missing labels are flagged as invalid instead of passing silently
bad = df[~df["label"].str.match(r"^[A-Z0-9]{5}$", na=False)]
print("Invalid labels:", len(bad))
if len(bad):
    bad.to_csv("invalid_labels.csv", index=False)
```

* * *
🧩 Example: Load to DataFrame
-----------------------------

```python
import os
from glob import glob

import pandas as pd

def load_dataset(root_dir):
    data = []
    for style_id in range(60):
        folder = os.path.join(root_dir, f"style{style_id}")
        for path in glob(os.path.join(folder, "*.png")):
            label = os.path.splitext(os.path.basename(path))[0]
            data.append((path, label, f"style{style_id}"))
    df = pd.DataFrame(data, columns=["filepath", "label", "style"])
    # enforce strict label rule
    df = df[df["label"].str.match(r"^[A-Z0-9]{5}$")]
    return df

df = load_dataset("/workspace/dataset_1000")
print(df.head(), len(df))
```

* * *
🔁 Merge Datasets (no loop)
---------------------------

**Add new files without overwriting existing ones**:

```bash
# note: style folders use 1- and 2-digit names (style0 … style59),
# so match them with 'style*' rather than 'style[0-5][0-9]'
rsync -av \
  --ignore-existing \
  --include='style*/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```

**Overwrite only if the source is newer**:

```bash
rsync -av --update \
  --include='style*/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```

* * *
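If `rsync` is unavailable, the ignore-existing merge can be approximated in Python with `shutil`; this is a sketch, not part of the dataset tooling:

```python
import shutil
from pathlib import Path

def merge_ignore_existing(src_root, dst_root):
    """Copy style*/ PNGs from src_root into dst_root, skipping files that already exist."""
    copied = 0
    for src in Path(src_root).glob("style*/*.png"):
        dst = Path(dst_root) / src.parent.name / src.name
        if dst.exists():
            continue  # mirrors rsync --ignore-existing
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        copied += 1
    return copied
```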
🔐 Checksums
------------

Optional: keep SHA-256 sums for integrity checks.

```bash
sha256sum dataset_1000.tar.gz > dataset_1000.tar.gz.sha256
sha256sum -c dataset_1000.tar.gz.sha256
```

* * *
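The same check can be done from Python with the standard `hashlib` module, streaming the archive in chunks so multi-GB files never have to fit in memory:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("dataset_1000.tar.gz") should match the recorded .sha256 value
```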
📝 Notes
--------

* All images are prepared for CRNN+CTC models with input `(H, W) = (50, 250)`, grayscale.
* Character distribution: digits 0–9 and uppercase letters A–Z.
* Each style emulates a distinct visual variant (font/noise/warp) from BotDetect.

* * *
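For CTC training the 36-character vocabulary must be mapped to integer classes. A minimal sketch of one common convention; reserving index 0 for the CTC blank is an assumption here, so adapt it to your model:

```python
import string

CHARSET = string.digits + string.ascii_uppercase  # 36 classes: 0-9 then A-Z
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARSET)}  # index 0 = CTC blank
IDX_TO_CHAR = {i: c for c, i in CHAR_TO_IDX.items()}

def encode_label(label):
    """Map a 5-char label such as 'K9NO2' to a list of class indices."""
    return [CHAR_TO_IDX[c] for c in label]

def decode_indices(indices):
    """Inverse mapping; skips the blank index 0."""
    return "".join(IDX_TO_CHAR[i] for i in indices if i != 0)
```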
📬 Contact
----------

For questions, dataset issues, or custom subsets, please open an issue in this repository.
dataset_10000.tar.gz
ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:a31756f1dafe5e4e452c4aeac3c5eacb7cfdc2e0ab83b3e9ae875b592f51952c
size 2763340664
```
dataset_1000_rand.tar.gz
ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:d85f575201223ff63733b792cbd4d07261646a985158b24a64ac4f778a059532
size 279998763
```
dataset_20000.tar.gz
ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:ec6386886ce93369d64e2fab8d4817309e76673563a35046b03faa99f59f1447
size 6094405204
```
dataset_500.tar.gz
ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:0f7ec06ce234ef771256a57b16a76e30a1d766ff52efeed82865ea5b3a198a3b
size 138296823
```
dataset_5000.tar.gz
ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:7799971f94d936b93f5432294cdb0cd582f789d2ffe4e2bb45e8910ffec92e95
size 1400945469
```