# ZENO Benchmark Datasets
This directory contains diverse, real-world JSON datasets collected for benchmarking ZENO's compression performance against JSON and other formats.
## Collection Summary

- **Last Updated:** November 5, 2025
- **Total Datasets:** 65
- **Total Size:** 4.17 MB (4,373,804 bytes)
## Dataset Categories

### 1. GitHub API (30 datasets, 1.24 MB)
Real GitHub API responses covering various endpoint types:
Repository Information (10 datasets):
- Major OSS projects: Kubernetes, React, VS Code, TensorFlow, Rust, Python, Go, Node.js, Django, Vue.js
- Contains: stars, forks, issues, language stats, license info, etc.
- Size range: 6-7 KB per file
User Profiles (5 datasets):
- Famous developers: Linus Torvalds, Guido van Rossum, DHH, Kyle Simpson, TJ Holowaychuk
- Contains: follower counts, repos, bio, location, etc.
- Size range: 1-1.5 KB per file
Issues (3 datasets):
- Open issues from Rust, VS Code, React
- 10 issues per dataset with full metadata
- Size range: 54-70 KB per file
Pull Requests (2 datasets):
- Active PRs from Kubernetes, TensorFlow
- Includes commits, reviews, labels, assignees
- Size range: 198-283 KB per file
Contributors (2 datasets):
- Top 20 contributors from React, Node.js
- Contains: contributions count, commit stats
- Size: ~21 KB per file
Commits (2 datasets):
- Recent 15 commits from Rust, Python
- Full commit metadata with author, message, stats
- Size: 77-78 KB per file
Releases (2 datasets):
- Release history from Rust, Node.js
- Includes: version tags, release notes, assets
- Size: 107-217 KB per file
Organizations (4 datasets):
- Microsoft, Google, Facebook, Apache
- Org metadata: repos, members, location
- Size: 1-1.3 KB per file
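For context, here is a minimal sketch of how responses like these can be collected. It is illustrative rather than the project's actual collection script: the repository list, output paths, and sidecar fields (which follow the Metadata Format described below) are assumptions.

```python
# Hypothetical collection sketch -- not the actual script used for this corpus.
# Repo list, paths, and sidecar fields are illustrative assumptions.
import datetime
import json
import pathlib

import requests  # third-party: pip install requests

REPOS = ["kubernetes/kubernetes", "facebook/react", "microsoft/vscode"]

out_dir = pathlib.Path("github")
out_dir.mkdir(exist_ok=True)

for repo in REPOS:
    # Public endpoint; unauthenticated calls are rate-limited by GitHub.
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=30)
    resp.raise_for_status()

    name = f"repo_{repo.split('/')[1]}"
    data_path = out_dir / f"{name}.json"
    data_path.write_text(json.dumps(resp.json(), indent=2))

    # Sidecar in the Metadata Format described below.
    meta = {
        "id": f"github_{name}",
        "source": "github",
        "collected_date": datetime.datetime.now().isoformat(),
        "size_bytes": data_path.stat().st_size,
        "description": f"Repository information for {repo}",
        "url": f"https://api.github.com/repos/{repo}",
    }
    (out_dir / f"{name}.metadata.json").write_text(json.dumps(meta, indent=2))
```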
### 2. JSONPlaceholder (10 datasets, 235 KB)
Fake REST API data for testing:
- posts (27 KB): 100 blog posts
- comments (157 KB): 500 comments
- albums (9 KB): 100 photo albums
- photos (10 KB): Metadata for 50 photos
- todos (24 KB): 200 todo items
- users (5 KB): 10 user profiles
- user_posts (2 KB): Posts by a specific user
- user_albums (1 KB): Albums by a specific user
- post_comments (1 KB): Comments on a specific post
- posts_single (292 bytes): Single post detail
### 3. Public APIs (10 datasets, 2.34 MB)
Data from various public REST APIs:
Geographic Data:
- countries_usa (21 KB): USA country information
- countries_region_europe (263 KB): All European countries
- countries_language_spanish (122 KB): Spanish-speaking countries
Cryptocurrency Data:
- crypto_coins_list (1.6 MB): Complete cryptocurrency list (15,000+ coins)
- crypto_bitcoin (140 KB): Detailed Bitcoin data
- crypto_ethereum (140 KB): Detailed Ethereum data
- crypto_markets (47 KB): Top 50 crypto markets
Other APIs:
- dog_breeds (4 KB): Dog breed catalog
- breweries (25 KB): 50 brewery records
- nested_posts_with_comments (10 KB): Posts with embedded comments
### 4. Synthetic Datasets (15 datasets, 463 KB)
Carefully crafted datasets representing common real-world patterns:
E-commerce & Business:
- ecommerce_catalog (31 KB): 100 products with varied attributes
- user_profiles (34 KB): 80 user accounts with preferences
- npm_packages (40 KB): 60 package.json configurations
Logging & Events:
- server_logs (48 KB): 200 structured log entries
- event_stream (41 KB): 150 event records
- api_responses (51 KB): 50 API response samples
Time-series & Sensor Data:
- sensor_timeseries (32 KB): 150 sensor readings
- geographic_data (12 KB): 60 city records with coordinates
Compression Test Cases:
- numeric_sequences (1.9 KB): Linear, Fibonacci, powers, primes (delta compression test; see the sketch at the end of this section)
- repeated_values (11 KB): High repetition data (sparse mode test)
- wide_table (21 KB): 50 records × 20 fields (column mode test)
- database_records_sparse (13 KB): Records with sparse fields
Complex Structures:
- nested_structures (22 KB): Deeply nested objects
- mixed_types (23 KB): All JSON types mixed
- large_text_fields (86 KB): Articles with large text content
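To make the compression test cases concrete (see `numeric_sequences` above), here is a minimal sketch of how such a file could be generated. The top-level field names are illustrative assumptions; the real file's layout may differ.

```python
# Hypothetical generator for a numeric_sequences-style test file.
# Top-level field names are illustrative; the real file may differ.
import json

n = 50
sequences = {
    "linear": list(range(0, 10 * n, 10)),  # constant delta of 10
    "fibonacci": [0, 1],
    "powers_of_two": [2**i for i in range(n)],
    "primes": [],
}

# Fibonacci: each delta equals the previous value -- still predictable.
while len(sequences["fibonacci"]) < n:
    sequences["fibonacci"].append(sequences["fibonacci"][-1] + sequences["fibonacci"][-2])

# Trial division for the first n primes; deltas are small and positive.
candidate = 2
while len(sequences["primes"]) < n:
    if all(candidate % p for p in sequences["primes"]):
        sequences["primes"].append(candidate)
    candidate += 1

with open("numeric_sequences.json", "w") as f:
    json.dump(sequences, f)
```

Each of these sequences has small or highly regular deltas, which is exactly the structure a delta encoder can exploit.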
## Size Distribution
| Size Range | Count | Example |
|---|---|---|
| < 10 KB | 28 | User profiles, organizations, small configs |
| 10-50 KB | 24 | Repository info, logs, synthetic data |
| 50-100 KB | 7 | Issues, commits, large text fields |
| 100-300 KB | 7 | Releases, crypto data, country lists |
| > 300 KB | 1 | Crypto coins list (1.6 MB) |
## Diversity Characteristics
The dataset collection includes:
- Multiple domains: Development tools, social media, e-commerce, finance, geography
- Various structures: Flat objects, nested hierarchies, arrays, mixed types
- Different patterns:
- Repetitive data (sparse mode candidates)
- Numeric sequences (delta encoding candidates)
- Wide tables (column mode candidates; illustrated after this list)
- String-heavy content (dictionary candidates)
- Size variety: 292 bytes to 1.6 MB
- Real-world APIs: GitHub, CoinGecko, REST Countries, etc.
- Representative synthetic data: Common use cases like logs, configs, time-series
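The wide-table pattern deserves a quick illustration: row-oriented JSON repeats every key in every record, while a columnar layout stores each key once. The sketch below shows that generic transposition; it is not ZENO's actual column mode, just the idea it builds on.

```python
# Generic row-to-column transposition. This illustrates why wide tables
# favor a columnar layout; it is NOT ZENO's actual column mode.
import json

rows = [
    {"id": 1, "name": "alpha", "score": 9.5},
    {"id": 2, "name": "beta", "score": 7.2},
    {"id": 3, "name": "gamma", "score": 8.8},
]

# Each key is stored once, with values in parallel arrays.
columns = {key: [row[key] for row in rows] for key in rows[0]}

print(len(json.dumps(rows)))     # row form repeats every key per record
print(len(json.dumps(columns)))  # column form amortizes keys across rows
```

With 50 records and 20 fields, as in `wide_table`, the savings from not repeating keys grow accordingly.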
## File Structure

Each dataset consists of two files:

```
source/
  dataset_name.json            # The actual JSON data
  dataset_name.metadata.json   # Collection metadata
```
## Metadata Format

```json
{
  "id": "source_dataset_name",
  "source": "github|jsonplaceholder|public_apis|synthetic",
  "collected_date": "2025-11-05T22:51:39.293685",
  "size_bytes": 12345,
  "description": "Brief description of the dataset",
  "url": "Original URL if applicable"
}
```
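Given this layout, each dataset can be paired with its sidecar and sanity-checked against the recorded size. A minimal sketch, assuming the four source directories sit in the current working directory:

```python
# Walk the four source directories and sanity-check each data/metadata pair.
import json
import pathlib

for source in ("github", "jsonplaceholder", "public_apis", "synthetic"):
    for meta_path in pathlib.Path(source).glob("*.metadata.json"):
        data_path = meta_path.with_name(meta_path.name.replace(".metadata.json", ".json"))
        meta = json.loads(meta_path.read_text())

        # size_bytes in the sidecar should match the data file on disk.
        actual = data_path.stat().st_size
        if actual != meta["size_bytes"]:
            print(f"{meta['id']}: recorded {meta['size_bytes']} bytes, found {actual}")
```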
## Usage in Benchmarks
These datasets are used to:
- Measure compression ratios: ZENO vs JSON vs MessagePack vs Protobuf
- Test encoding modes: Verify column/row/sparse/delta selection
- Benchmark performance: Encoding/decoding speed across dataset types
- Validate correctness: Round-trip testing (JSON → ZENO → JSON)
- Estimate token efficiency: approximate the LLM token reduction
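Since ZENO's API is not part of this README, the skeleton below uses gzip as a stand-in codec to show the shape of such a benchmark: compression ratio, round-trip validation, and a crude token estimate (the common ~4 characters per token rule of thumb). Swap in the real encoder and decoder where indicated.

```python
# Benchmark skeleton with gzip standing in for ZENO, whose API is not
# documented in this README. Swap encode/decode for the real codec.
import gzip
import json
import pathlib

def encode(raw: bytes) -> bytes:
    return gzip.compress(raw)      # stand-in for ZENO encoding

def decode(blob: bytes) -> bytes:
    return gzip.decompress(blob)   # stand-in for ZENO decoding

for path in sorted(pathlib.Path("synthetic").glob("*.json")):
    if path.name.endswith(".metadata.json"):
        continue  # skip sidecar files
    raw = path.read_bytes()
    blob = encode(raw)

    # Round-trip validation: decoded JSON must equal the original.
    assert json.loads(decode(blob)) == json.loads(raw)

    ratio = len(raw) / len(blob)
    tokens = len(raw) // 4  # crude ~4 chars/token heuristic
    print(f"{path.name}: {len(raw)} -> {len(blob)} bytes ({ratio:.2f}x), ~{tokens} raw tokens")
```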
## Expanding the Dataset
To scale to 1000+ datasets:
- GitHub API: Expand to more repos, longer histories
- Package Registries: npm, PyPI, crates.io package metadata
- Social Media: Twitter/X, Reddit public APIs
- Open Data: Government datasets, scientific data
- Logs: Real production logs (anonymized)
- Configurations: Real-world config files from OSS projects
- Database Dumps: Sample SQL→JSON exports
## License & Attribution
- GitHub API data: Public API, subject to GitHub's terms
- JSONPlaceholder: Fake data, free to use
- Public APIs: Various licenses, check individual sources
- Synthetic data: Generated for this project, no restrictions
All data is for benchmarking and research purposes only.