---
language:
- en
task_categories:
- text-classification
- token-classification
tags:
- json
- validation
- semantic-errors
- synthetic-data
size_categories:
- n<1K
---
# JSON SemVal Synthetic v1
Synthetic JSON+Schema corruptions for semantic validation training.
## Dataset Description
This dataset contains synthetically generated JSON payloads with controlled semantic errors for training ML models to detect and fix JSON validation issues.
### Dataset Structure
Each example in the dataset contains:
- `schema`: JSON Schema definition
- `clean_json`: Valid JSON payload conforming to the schema
- `corrupt_json`: Corrupted JSON with semantic errors
- `error_type`: Type of error introduced (e.g., type_mismatch, format_violation, enum_violation)
- `jsonpath`: JSONPath to the location of the error
- `fix_action`: Suggested fix action (e.g., cast_number, parse_date_iso, map_enum)
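For illustration, a single record might look like the sketch below (values are hypothetical, and storing `schema`/`clean_json`/`corrupt_json` as JSON strings is an assumption about the serialization):
```python
# Hypothetical record, for illustration only; not drawn from the dataset.
example = {
    "schema": '{"type": "object", "properties": {"age": {"type": "integer"}}, "required": ["age"]}',
    "clean_json": '{"age": 42}',
    "corrupt_json": '{"age": "42"}',
    "error_type": "type_mismatch",
    "jsonpath": "$.age",
    "fix_action": "cast_number",
}
```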
### Error Types
- `type_mismatch` - Wrong data type (e.g., string instead of integer)
- `format_violation` - Invalid format (e.g., bad date format)
- `enum_violation` - Invalid enum value
- `missing_required` - Missing required field
- `extra_property` - Unexpected additional property
- `range_violation` - Value outside allowed range
- `pattern_violation` - String doesn't match regex pattern
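These error types mirror standard JSON Schema keywords, so a generic validator can surface the underlying violation. A minimal sketch using the third-party `jsonschema` library (an assumption; the dataset does not prescribe a validator):
```python
import json
from jsonschema import Draft7Validator  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {"age": {"type": "integer"}},
    "required": ["age"],
}
corrupt = json.loads('{"age": "42"}')  # type_mismatch: string instead of integer

validator = Draft7Validator(schema)
for error in validator.iter_errors(corrupt):
    # absolute_path lists the keys leading to the offending value, e.g. ['age']
    print(list(error.absolute_path), "-", error.message)
```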
### Data Splits
- **Train**: 30 examples
- **Test**: 10 examples
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("thearnabsarkar/json-semval-synth-v1")
# Access train split
train_data = dataset["train"]
# Example usage
for example in train_data:
    print(f"Error type: {example['error_type']}")
    print(f"JSONPath: {example['jsonpath']}")
    print(f"Fix action: {example['fix_action']}")
```
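The splits are regular `datasets.Dataset` objects, so the usual operations apply; for example, keeping only one error type (continuing from the snippet above):
```python
# Filter the train split down to type_mismatch examples
type_mismatch = train_data.filter(lambda ex: ex["error_type"] == "type_mismatch")
print(len(type_mismatch))
```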
### Dataset Creation
This dataset was generated using the JSON Semantic Validator's synthetic data generation pipeline, which:
1. Generates diverse JSON schemas
2. Creates valid JSON payloads
3. Introduces controlled corruptions
4. Labels each corruption with error type and location
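The generation code is not bundled with the dataset; the sketch below only illustrates step 3 with a hypothetical helper (`corrupt_type_mismatch` is not part of any released pipeline):
```python
import copy

def corrupt_type_mismatch(payload: dict, key: str) -> dict:
    """Illustrative only: stringify a numeric field to create a type_mismatch."""
    corrupted = copy.deepcopy(payload)
    corrupted[key] = str(corrupted[key])
    return corrupted

clean = {"age": 42, "name": "Ada"}
print(corrupt_type_mismatch(clean, "age"))  # {'age': '42', 'name': 'Ada'}
```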
### License
MIT
### Citation
If you use this dataset, please cite:
```bibtex
@misc{json-semval-synth-v1,
  author    = {Arnab Sarkar},
  title     = {JSON SemVal Synthetic v1},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/thearnabsarkar/json-semval-synth-v1}
}
```
### Related Resources
- **Model**: [thearnabsarkar/json-semval-minilm-v1](https://huggingface.co/thearnabsarkar/json-semval-minilm-v1) (coming soon)
- **Space**: [thearnabsarkar/json-semantic-validator](https://huggingface.co/spaces/thearnabsarkar/json-semantic-validator) (coming soon)
- **Code**: [GitHub Repository](https://github.com/thearnabsarkar/json-semantic-validator) (if applicable)