ZUNA Core ML: Enumerated Apple Profiles
This repository contains Core ML conversions of ZUNA for Apple-native inference on iPhone, visionOS, and macOS, organized under enumerated profile folders.
ZUNA is a 380M-parameter masked diffusion autoencoder for scalp EEG reconstruction and superresolution. Given a subset of channels and their 3D electrode coordinates, the model can:
- Denoise observed EEG channels
- Reconstruct dropped or missing channels
- Predict signals at novel scalp positions from physical coordinates
The base model was trained on a large harmonized public EEG corpus (approximately 2 million channel-hours, spanning many datasets), and this Core ML release preserves that pretrained behavior for Apple deployment.
Model Overview
The base model follows the same high-level inference pattern as upstream ZUNA:
- Inputs are EEG windows of 5 seconds @ 256 Hz (`seq_len=1280`).
- Signals are tokenized with `num_fine_time_pts=32`, so each channel produces `1280 / 32 = 40` coarse time tokens. `tok_idx` encodes `{x, y, z, tc}` (electrode position + coarse time index).
- Inference performs:
- Encoder forward pass (once)
- Decoder denoising loop (N diffusion steps)
- Token-to-signal reconstruction
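A minimal host-side sketch of this flow in Swift, assuming the three-model split described under "Model split" below and placeholder feature names (`tokens`, `tok_idx`, `context`, `z`, `t`, `z_next`); the real names and the diffusion time schedule should be taken from each package's `modelDescription` and `coreml_export_metadata.json`, not from this example:

```swift
import CoreML

/// Sketch of the three-phase inference pattern: one encoder pass, N decoder
/// steps, then token-to-signal reconstruction. All feature names below are
/// assumptions; read the shipped model descriptions for the real contract.
func runZuna(encoder: MLModel,
             decoderStepUpdate: MLModel,
             tokens: MLMultiArray,      // tokenized EEG, e.g. [1, numTokens, 32]
             tokIdx: MLMultiArray,      // {x, y, z, tc} per token
             steps: Int = 20) throws -> MLMultiArray? {
    // 1) Encoder forward pass (once).
    let encIn = try MLDictionaryFeatureProvider(dictionary: [
        "tokens": MLFeatureValue(multiArray: tokens),   // assumed input name
        "tok_idx": MLFeatureValue(multiArray: tokIdx),  // assumed input name
    ])
    let encOut = try encoder.prediction(from: encIn)
    guard let context = encOut.featureValue(for: "context")?.multiArrayValue else { return nil }

    // 2) Decoder denoising loop (N diffusion steps), starting from noise.
    var z = try MLMultiArray(shape: tokens.shape, dataType: .float32)
    for i in 0..<z.count { z[i] = NSNumber(value: Float.random(in: -1...1)) }
    for step in 0..<steps {
        let t = try MLMultiArray(shape: [1], dataType: .float32)
        t[0] = NSNumber(value: 1.0 - Float(step) / Float(steps))   // assumed time parameterization
        let decIn = try MLDictionaryFeatureProvider(dictionary: [
            "z": MLFeatureValue(multiArray: z),
            "context": MLFeatureValue(multiArray: context),
            "t": MLFeatureValue(multiArray: t),
        ])
        let decOut = try decoderStepUpdate.prediction(from: decIn)
        // ZunaDecoderStepUpdate already applies the Euler update in-graph.
        guard let zNext = decOut.featureValue(for: "z_next")?.multiArrayValue else { return nil }
        z = zNext
    }

    // 3) Token-to-signal reconstruction: assumed to be a host-side reshape
    //    and denormalization of the final token tensor `z`.
    return z
}
```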
As reported in the original ZUNA paper, the base architecture is a ~380M-parameter position-aware diffusion autoencoder trained on a large harmonized public EEG corpus.
This release preserves the base model tensor contract and publishes profile-specific shapes for deterministic Apple deployment.
Preprocessing Contract
For best parity with upstream behavior, keep the same preprocessing assumptions used by ZUNA:
- EEG montage must include 3D channel positions
- Sampling rate: 256 Hz
- Epoch length: 5 seconds (`1280` samples)
- Token chunk size: `32` (`40` coarse tokens per channel)
- Normalization aligned with upstream inference (`data_norm=10.0`)
These assumptions are what the released pretrained weights were optimized for.
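As a concrete illustration of this contract, a plain-Swift sketch (no Core ML dependency) that scales one 5-second epoch by `data_norm` and chunks it into coarse tokens; the exact normalization and token layout should be verified against the upstream ZUNA code:

```swift
/// Chunk one 5 s @ 256 Hz channel (1280 samples) into 40 coarse tokens of 32
/// fine time points each, after dividing by the upstream data_norm constant.
/// This is a sketch of the contract above, not the upstream implementation.
func tokenizeChannel(_ samples: [Float],
                     numFineTimePts: Int = 32,
                     dataNorm: Float = 10.0) -> [[Float]] {
    precondition(samples.count == 1280, "expected a 5 s epoch at 256 Hz")
    precondition(samples.count % numFineTimePts == 0)
    let normalized = samples.map { $0 / dataNorm }   // assumed divide-by-data_norm scaling
    return stride(from: 0, to: normalized.count, by: numFineTimePts).map {
        Array(normalized[$0 ..< $0 + numFineTimePts]) // 40 tokens x 32 samples
    }
}

// Usage: `tokenizeChannel(epoch).count == 40` for a 1280-sample epoch.
```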
Getting Started
Profile artifacts are organized as:
- `profiles/16ch/fp16/...`
- `profiles/32ch/fp16/...`
- `profiles/64ch/fp16/...`
- `profiles/64ch/fp32/...`
Each profile contains:
- `ZunaEncoder.mlpackage`
- `ZunaDecoderStep.mlpackage`
- `ZunaDecoderStepUpdate.mlpackage`
- `coreml_export_metadata.json`
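A minimal loading sketch, assuming the `.mlpackage` files are shipped uncompiled and loaded at runtime (packages added to an Xcode target are compiled at build time instead, in which case the compile step is unnecessary):

```swift
import CoreML

/// Compile and load one model from a profile folder, e.g.
/// profiles/64ch/fp16/ZunaEncoder.mlpackage.
func loadModel(packageURL: URL, computeUnits: MLComputeUnits = .all) throws -> MLModel {
    // An .mlpackage must be compiled to .mlmodelc before runtime loading;
    // newer OS versions also offer an async compileModel(at:) variant.
    let compiledURL = try MLModel.compileModel(at: packageURL)
    let config = MLModelConfiguration()
    config.computeUnits = computeUnits
    return try MLModel(contentsOf: compiledURL, configuration: config)
}
```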
Model split
- `ZunaEncoder`: Encodes tokenized EEG context
- `ZunaDecoderStep`: One denoising step in the diffusion loop
- `ZunaDecoderStepUpdate`: Decoder step + Euler update (`z_next = z - dt * v_c`)
Use `DecoderStepUpdate` when you want a minimal host-side loop and fewer host tensor ops.
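For comparison, a sketch of the Euler update a host would apply itself when using plain `ZunaDecoderStep`, assuming that model returns the velocity `v_c` as a multiarray:

```swift
import CoreML

/// Apply z_next = z - dt * v_c on the host when using the plain ZunaDecoderStep
/// model (ZunaDecoderStepUpdate folds this update into the Core ML graph).
func eulerUpdate(z: MLMultiArray, vC: MLMultiArray, dt: Float) throws -> MLMultiArray {
    precondition(z.count == vC.count, "z and v_c must have the same shape")
    let zNext = try MLMultiArray(shape: z.shape, dataType: .float32)
    for i in 0..<z.count {
        zNext[i] = NSNumber(value: z[i].floatValue - dt * vC[i].floatValue)
    }
    return zNext
}
```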
Available profiles
| Profile | Channels | Precision | Token Count | Encoder/Decoder Tensor Shape | final-z rel_l2 vs PyTorch |
|---|---|---|---|---|---|
| 16ch-fp16 | 16 | fp16 | 640 | [1, 640, 32] | 0.006580 |
| 32ch-fp16 | 32 | fp16 | 1280 | [1, 1280, 32] | 0.005629 |
| 64ch-fp16 | 64 | fp16 | 2560 | [1, 2560, 32] | 0.004366 |
| 64ch-fp32 | 64 | fp32 | 2560 | [1, 2560, 32] | 0.000002 |
See `profiles/index.json` for machine-readable profile discovery.
Validation
All published profiles are checked against the original PyTorch weights using a 20-step diffusion parity run.
| Profile | MAE | RMSE | max_abs | rel_l2 | Threshold | Gate |
|---|---|---|---|---|---|---|
| 16ch-fp16 | 0.004843 | 0.006150 | 0.057458 | 0.006580 | 0.010000 | PASS |
| 32ch-fp16 | 0.004189 | 0.005253 | 0.020710 | 0.005629 | 0.010000 | PASS |
| 64ch-fp16 | 0.003265 | 0.004077 | 0.018174 | 0.004366 | 0.010000 | PASS |
| 64ch-fp32 | 0.000001 | 0.000001 | 0.000011 | 0.000002 | 0.005000 | PASS |
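For reference, these gate metrics can be recomputed from flattened final-z tensors, assuming the standard definitions (rel_l2 as the L2 norm of the difference divided by the L2 norm of the PyTorch reference); a minimal sketch:

```swift
/// Parity metrics between a Core ML output and the PyTorch reference,
/// both flattened to plain Float arrays: MAE, RMSE, max_abs, relative L2.
struct ParityReport {
    let mae: Float, rmse: Float, maxAbs: Float, relL2: Float
    func passes(threshold: Float) -> Bool { relL2 <= threshold }
}

func parity(_ coreml: [Float], _ reference: [Float]) -> ParityReport {
    precondition(coreml.count == reference.count && !coreml.isEmpty)
    var sumAbs: Float = 0, sumSq: Float = 0, maxAbs: Float = 0, refSq: Float = 0
    for (a, b) in zip(coreml, reference) {
        let d = a - b
        sumAbs += abs(d)
        sumSq += d * d
        maxAbs = max(maxAbs, abs(d))
        refSq += b * b
    }
    let n = Float(coreml.count)
    return ParityReport(mae: sumAbs / n,
                        rmse: (sumSq / n).squareRoot(),
                        maxAbs: maxAbs,
                        relL2: sumSq.squareRoot() / max(refSq.squareRoot(), .leastNormalMagnitude))
}
```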
Parity Visualization
Waveform Overlay + Residual
Representative sample from 64ch-fp16 (channel 0), final-z step-loop output.
Runtime Notes
- `16ch/fp16` and `32ch/fp16` are best for quick mobile validation.
- `64ch/fp16` is the practical high-capacity default on Apple GPU.
- `64ch/fp32` is a high-fidelity reference profile and quality fallback.
- Throughput and latency depend strongly on the number of diffusion steps: fewer steps are useful for rapid iteration, while more steps improve reconstruction quality.
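Because the denoising loop dominates runtime, a simple way to budget step counts per profile is to time one decoder step in isolation; a minimal sketch, where the `runStep` closure is assumed to wrap a single `prediction(from:)` call on the decoder model:

```swift
import Foundation

/// Rough latency probe: end-to-end runtime scales roughly linearly with the
/// number of decoder steps, so time a fixed-step loop and report per-step cost.
func secondsPerStep(steps: Int, runStep: () throws -> Void) rethrows -> Double {
    let start = CFAbsoluteTimeGetCurrent()
    for _ in 0..<steps { try runStep() }
    return (CFAbsoluteTimeGetCurrent() - start) / Double(steps)
}
```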
Upstream Resources
- Original model card: https://huggingface.co/Zyphra/ZUNA
- Original repository and tutorials: https://github.com/Zyphra/zuna
- Technical paper page: https://www.zyphra.com/zuna-technical-paper
Citation
Please cite and credit the original ZUNA model and Zyphra resources:
- Base model: https://huggingface.co/Zyphra/ZUNA
- Repository: https://github.com/Zyphra/zuna
- Paper: ZUNA: Flexible EEG Superresolution with Position-Aware Diffusion Autoencoders
- Technical page: https://www.zyphra.com/zuna-technical-paper
Disclaimer
This conversion is for research and engineering use only. It is not validated for medical diagnosis, treatment, or clinical decision-making.
Use at your own risk and follow the base model's usage and licensing terms.