# DLSCA Test Dataset
A Hugging Face dataset for Deep Learning Side-Channel Analysis (DLSCA) with streaming support for large trace files using the zarr format.
## Features
- **Streaming Support**: Large trace data is converted to zarr format with chunking for efficient streaming access
- **Caching**: Uses Hugging Face cache instead of fsspec cache for better integration
- **Zip Compression**: Zarr chunks are stored in zip files to minimize file count
- **Memory Efficient**: Only loads required chunks, not the entire dataset
## Dataset Structure
- **Labels**: 1,000 examples with 4 labels each (int32)
- **Traces**: 1,000 examples with 20,971 features each (int8)
- **Index**: Sequential index for each example
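A minimal sketch of how this schema might be declared with `datasets.Features` (hypothetical; the actual `_info()` in `test.py` may differ, and the `index` dtype is assumed):
```python
import datasets

# Hypothetical feature schema matching the structure above:
# a sequential index, 4 int32 labels, and 20,971 int8 trace values per example.
FEATURES = datasets.Features(
    {
        "index": datasets.Value("int64"),  # dtype assumed
        "labels": datasets.Sequence(datasets.Value("int32"), length=4),
        "traces": datasets.Sequence(datasets.Value("int8"), length=20971),
    }
)
```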
## Usage
### Local Development
```python
from test import TestDataset
# Load dataset locally
dataset = TestDataset()
dataset.download_and_prepare()
train_ds = dataset.as_dataset(split="train")
# Access examples
example = train_ds[0]
print(f"Labels: {example['labels']}")
print(f"Traces length: {len(example['traces'])}")
```
### Streaming Usage (for large datasets)
```python
from test import TestDownloadManager, TestDataset
# Initialize streaming dataset
dl_manager = TestDownloadManager()
traces_path = "data/traces.npy"
zarr_zip_path = dl_manager.download_zarr_chunks(traces_path, chunk_size=100)
# Access zarr data efficiently
dataset = TestDataset()
zarr_array = dataset._load_zarr_from_zip(zarr_zip_path)
# Access specific chunks
chunk_data = zarr_array[0:100] # First chunk
```
### Chunk Selection
```python
import numpy as np

# Continuing from the streaming example above: zarr_array holds the traces
labels = np.load("data/labels.npy")

# Select a specific range of examples for training
selected_range = slice(200, 300)
selected_traces = zarr_array[selected_range]
selected_labels = labels[selected_range]
```
## Implementation Details
### Custom DownloadManager
The `TestDownloadManager` extends `datasets.DownloadManager` to:
- Convert numpy arrays to zarr format with chunking
- Store zarr data in zip files for compression
- Use Hugging Face cache directory
- Support streaming access patterns
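A minimal sketch of the conversion step described above, assuming a `download_zarr_chunks` method like the one used in the streaming example; the cache path and method body are illustrative, not the actual implementation in `test.py`:
```python
import os

import numpy as np
import zarr  # zarr<3 (v2 API)
from datasets import DownloadManager, config


class TestDownloadManager(DownloadManager):
    """Sketch: convert a .npy trace file into a chunked, zip-backed zarr store."""

    def download_zarr_chunks(self, traces_path: str, chunk_size: int = 100) -> str:
        # Store the converted data inside the Hugging Face cache directory.
        zip_path = os.path.join(config.HF_DATASETS_CACHE, "traces_zarr.zip")

        data = np.load(traces_path)  # e.g. shape (1000, 20971), dtype int8

        # Write the array into a zarr v2 store backed by a single zip file,
        # chunked along the example axis so a slice touches only a few chunks.
        store = zarr.ZipStore(zip_path, mode="w")
        z = zarr.open(store, mode="w", shape=data.shape,
                      chunks=(chunk_size, data.shape[1]), dtype=data.dtype)
        z[:] = data
        store.close()
        return zip_path
```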
### Custom Dataset Builder
The `TestDataset` extends `datasets.GeneratorBasedBuilder` to:
- Handle both local numpy files and remote zarr chunks
- Provide efficient chunk-based data access
- Maintain compatibility with Hugging Face datasets API
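A hypothetical outline of such a builder, reusing the schema and paths from the sections above and assuming the custom download manager is wired in; the method bodies are illustrative, not the actual `test.py`:
```python
import numpy as np
import zarr
import datasets


class TestDataset(datasets.GeneratorBasedBuilder):
    """Sketch of a builder that streams traces from a zip-backed zarr store."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "index": datasets.Value("int64"),
                    "labels": datasets.Sequence(datasets.Value("int32"), length=4),
                    "traces": datasets.Sequence(datasets.Value("int8"), length=20971),
                }
            )
        )

    def _load_zarr_from_zip(self, zarr_zip_path):
        # Open the zip-backed zarr store read-only; chunks are decompressed
        # lazily as they are sliced.
        return zarr.open(zarr.ZipStore(zarr_zip_path, mode="r"), mode="r")

    def _split_generators(self, dl_manager):
        # Assumes dl_manager is a TestDownloadManager (see the sketch above).
        zarr_zip_path = dl_manager.download_zarr_chunks("data/traces.npy", chunk_size=100)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"zarr_zip_path": zarr_zip_path,
                            "labels_path": "data/labels.npy"},
            )
        ]

    def _generate_examples(self, zarr_zip_path, labels_path):
        traces = self._load_zarr_from_zip(zarr_zip_path)
        labels = np.load(labels_path)
        for idx in range(traces.shape[0]):
            yield idx, {"index": idx, "labels": labels[idx], "traces": traces[idx]}
```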
### Zarr Configuration
- **Format**: Zarr v2 (for better fsspec compatibility)
- **Chunks**: (100, 20971) - 100 examples per chunk
- **Compression**: ZIP format for the zarr store
- **Storage**: Hugging Face cache directory
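For reference, the chunk layout can be verified directly on the zip-backed store (the path below is hypothetical):
```python
import zarr

# Open the store read-only and inspect its layout.
z = zarr.open(zarr.ZipStore("traces_zarr.zip", mode="r"), mode="r")
print(z.shape)   # (1000, 20971)
print(z.chunks)  # (100, 20971)
print(z.dtype)   # int8
```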
## Performance
The zarr-based approach provides:
- **Memory efficiency**: Only loads required chunks
- **Streaming capability**: Can work with datasets larger than RAM
- **Compression**: Zip storage reduces file size
- **Cache optimization**: Leverages Hugging Face caching mechanism
## Requirements
```
datasets
zarr<3
fsspec
numpy
zipfile36
```
## File Structure
```
test/
├── data/
│   ├── labels.npy    # Label data (small, kept as numpy)
│   └── traces.npy    # Trace data (large, converted to zarr)
├── test.py           # Main dataset implementation
├── example_usage.py  # Usage examples
├── requirements.txt  # Dependencies
└── README.md         # This file
```
## Notes
- The original `traces.npy` is only ~20 MB; it is small but sufficient to demonstrate the zarr chunking approach
- For even larger datasets (GB/TB), this approach scales well
- The zarr v2 format is used for better compatibility with fsspec
- Chunk size can be adjusted based on memory constraints and access patterns
## Future Enhancements
- Support for multiple splits (train/test/validation)
- Dynamic chunk size based on available memory
- Compression algorithms for zarr chunks
- Metadata caching for faster dataset initialization