# DLSCA Test Dataset

A Hugging Face dataset for Deep Learning Side Channel Analysis (DLSCA) with streaming support for large trace files using the zarr format.
## Features

- **Streaming Support**: Large trace data is converted to zarr format with chunking for efficient streaming access
- **Caching**: Uses the Hugging Face cache directory instead of an fsspec cache for tighter integration with the `datasets` library
- **Zip Compression**: Zarr chunks are stored in zip files to minimize file count
- **Memory Efficient**: Only the required chunks are loaded, never the entire dataset
## Dataset Structure

- **Labels**: 1000 examples with 4 labels each (int32)
- **Traces**: 1000 examples with 20,971 features each (int8)
- **Index**: Sequential index for each example (a schema sketch follows this list)
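For orientation, this structure corresponds to a feature schema along the following lines; the field names mirror this README, but the snippet is a sketch rather than the code in `test.py`:

```python
import datasets

# Sketch of the schema described above: 4 int32 labels and
# 20,971 int8 trace samples per example, plus a sequential index.
features = datasets.Features(
    {
        "index": datasets.Value("int64"),
        "labels": datasets.Sequence(datasets.Value("int32"), length=4),
        "traces": datasets.Sequence(datasets.Value("int8"), length=20971),
    }
)
```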
## Usage

### Local Development

```python
from test import TestDataset

# Load the dataset locally
dataset = TestDataset()
dataset.download_and_prepare()
train_dataset = dataset.as_dataset(split="train")

# Access examples
example = train_dataset[0]
print(f"Labels: {example['labels']}")
print(f"Traces length: {len(example['traces'])}")
```
### Streaming Usage (for large datasets)

```python
from test import TestDownloadManager, TestDataset

# Initialize the streaming download manager and convert traces to zarr
dl_manager = TestDownloadManager()
traces_path = "data/traces.npy"
zarr_zip_path = dl_manager.download_zarr_chunks(traces_path, chunk_size=100)

# Open the zarr store for efficient chunked access
dataset = TestDataset()
zarr_array = dataset._load_zarr_from_zip(zarr_zip_path)

# Access specific chunks
chunk_data = zarr_array[0:100]  # First chunk
```
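Because each slice touches only the chunks it overlaps, iterating over the whole array one chunk at a time keeps memory bounded; a minimal loop, assuming the `chunk_size=100` used above:

```python
# Process the traces one zarr chunk at a time; only the current
# 100-row chunk is decompressed into memory at any point.
chunk_size = 100
for start in range(0, zarr_array.shape[0], chunk_size):
    chunk = zarr_array[start:start + chunk_size]
    # ... e.g. run preprocessing or feed a training step ...
```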
### Chunk Selection

```python
import numpy as np

# Labels are small enough to load as a plain numpy array
labels = np.load("data/labels.npy")

# Select specific ranges for training
selected_range = slice(200, 300)
selected_traces = zarr_array[selected_range]
selected_labels = labels[selected_range]
```
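Since the store is chunked in blocks of 100 examples, ranges aligned with chunk boundaries (such as `slice(200, 300)` above) decompress exactly one chunk, while a misaligned range like `slice(150, 250)` touches two; aligning selections with the chunk size keeps I/O to a minimum.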
## Implementation Details

### Custom DownloadManager

The `TestDownloadManager` extends `datasets.DownloadManager` to (a conversion sketch follows the list):

- Convert numpy arrays to zarr format with chunking
- Store zarr data in zip files for compression
- Use the Hugging Face cache directory
- Support streaming access patterns
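Concretely, the numpy-to-zarr conversion step might look roughly like the following; the helper name `numpy_to_zarr_zip` is an illustrative assumption, not the actual implementation:

```python
import numpy as np
import zarr  # zarr<3, per requirements.txt

def numpy_to_zarr_zip(npy_path, zip_path, chunk_size=100):
    """Hypothetical helper: convert a .npy file into a chunked zarr array
    stored inside a single zip file (one zip entry per chunk)."""
    data = np.load(npy_path)
    store = zarr.ZipStore(zip_path, mode="w")
    # Chunk along the example axis only, keeping each trace row intact.
    # Per-chunk compressors are listed as a future enhancement in this
    # README, so the zip container provides the only compression here.
    zarr.array(data, chunks=(chunk_size, data.shape[1]),
               compressor=None, store=store)
    store.close()  # finalizes the zip central directory
    return zip_path
```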
### Custom Dataset Builder

The `TestDataset` extends `datasets.GeneratorBasedBuilder` to (a generator sketch follows the list):

- Handle both local numpy files and remote zarr chunks
- Provide efficient chunk-based data access
- Maintain compatibility with the Hugging Face datasets API
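A plausible shape for the example generator, written here as a standalone function; the signature and field names are assumptions based on the structure described above, not the code in `test.py`:

```python
import numpy as np
import zarr

def generate_examples(zarr_zip_path, labels_path):
    """Sketch of TestDataset._generate_examples: stream examples chunk by
    chunk from the zipped zarr store instead of loading all traces at once."""
    labels = np.load(labels_path)
    store = zarr.ZipStore(zarr_zip_path, mode="r")
    traces = zarr.open_array(store, mode="r")
    rows_per_chunk = traces.chunks[0]  # 100 examples per chunk here
    for start in range(0, traces.shape[0], rows_per_chunk):
        block = traces[start:start + rows_per_chunk]  # one chunk in memory
        for offset, row in enumerate(block):
            idx = start + offset
            yield idx, {"index": idx, "labels": labels[idx], "traces": row}
    store.close()
```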
### Zarr Configuration

- **Format**: Zarr v2 (for better fsspec compatibility)
- **Chunks**: (100, 20971), i.e. 100 examples per chunk
- **Compression**: ZIP format for the zarr store
- **Storage**: Hugging Face cache directory (a loading sketch follows this list)
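With that configuration, opening the store for reading takes only a few lines; this is a plausible shape for `_load_zarr_from_zip`, not necessarily the exact code in `test.py`:

```python
import zarr

def load_zarr_from_zip(zarr_zip_path):
    """Sketch of TestDataset._load_zarr_from_zip: open the zipped zarr
    store read-only; chunks are decompressed lazily on slicing."""
    store = zarr.ZipStore(zarr_zip_path, mode="r")
    return zarr.open_array(store, mode="r")
```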
## Performance

The zarr-based approach provides:

- **Memory efficiency**: Only the required chunks are loaded
- **Streaming capability**: Works with datasets larger than RAM
- **Compression**: Zip storage reduces file size
- **Cache optimization**: Leverages the Hugging Face caching mechanism
## Requirements

```
datasets
zarr<3
fsspec
numpy
zipfile36
```
## File Structure

```
test/
├── data/
│   ├── labels.npy       # Label data (small, kept as numpy)
│   └── traces.npy       # Trace data (large, converted to zarr)
├── test.py              # Main dataset implementation
├── example_usage.py     # Usage examples
├── requirements.txt     # Dependencies
└── README.md            # This file
```
## Notes

- The original `traces.npy` is ~20 MB, which is enough to demonstrate the zarr chunking approach
- For much larger datasets (GB/TB), the same approach scales well
- The zarr v2 format is used for better compatibility with fsspec
- Chunk size can be adjusted based on memory constraints and access patterns (see the sizing sketch below)
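On the last point, a chunk size can be derived from a per-chunk memory budget; the 8 MiB figure below is an arbitrary illustration:

```python
import numpy as np

# Size chunks so one decompressed chunk of int8 traces stays
# under a chosen memory budget (hypothetical 8 MiB here).
n_features = 20971
bytes_per_sample = np.dtype("int8").itemsize  # 1 byte
memory_budget = 8 * 1024 * 1024

chunk_size = memory_budget // (n_features * bytes_per_sample)
print(chunk_size)  # -> 400 examples per chunk under this budget
```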
## Future Enhancements

- Support for multiple splits (train/test/validation)
- Dynamic chunk size based on available memory
- Per-chunk compression algorithms for zarr
- Metadata caching for faster dataset initialization