Add metadata and improve model card (#1)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md

---
license: apache-2.0
pipeline_tag: other
datasets:
- RS2002/RPMC-L2
tags:
- stage-lighting
- generative-task
- music-to-light
---

# Skip-BART

Skip-BART is an end-to-end generative model designed for **Automatic Stage Lighting Control (ASLC)**. Unlike traditional rule-based methods, Skip-BART conceptualizes lighting control as a generative task, learning directly from professional lighting engineers to predict vivid, human-like lighting sequences synchronized with music.

This model was presented in the paper [Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?](https://huggingface.co/papers/2506.01482).

- **Repository**: [https://github.com/RS2002/Skip-BART](https://github.com/RS2002/Skip-BART)
- **Dataset**: [RS2002/RPMC-L2](https://huggingface.co/datasets/RS2002/RPMC-L2)

## Model Details

- **Model Type**: Transformer-based model (BART architecture) with skip connections.
- **Task**: Stage lighting sequence generation (predicting light hue and intensity).
- **Architecture**: BART-based structure enhanced with a novel skip-connection mechanism to strengthen the relationship between musical frames and lighting states.
- **Input Format**: Encoder input `(batch_size, length, 512)` for audio features; decoder input `(batch_size, length, 2)` for lighting parameters.
- **Output Format**: Hidden states representing lighting control parameters (dimension 1024).

## Training Data

The model was trained on the **RPMC-L2** dataset, a self-collected dataset containing music and corresponding stage lighting data synchronized within a frame grid.
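As a minimal sketch of fetching the dataset files from the Hub: `snapshot_download` is the standard `huggingface_hub` call for this, but the layout of the downloaded files is not documented here, so consult the dataset card before relying on it.

```python
# Sketch: download the RPMC-L2 dataset files from the Hub.
# The repo_id matches the dataset link above; the internal file layout
# depends on the dataset repo itself and is not assumed here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="RS2002/RPMC-L2", repo_type="dataset")
print(local_dir)  # local path containing the dataset files
```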

## Usage
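First, clone the model repository:

```bash
git clone https://huggingface.co/RS2002/Skip-BART
```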

### Example Code

The following snippet demonstrates how to load the model and perform a forward pass (requires `model.py` from the official repository).

```python
import torch
from model import Skip_BART
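
# NOTE: the intermediate setup lines are not shown in this revision; the
# reconstruction below is a sketch. `decoder_attention_mask = torch.zeros((2, 1024))`
# comes from the surrounding context; the Skip_BART() constructor call and the
# dummy inputs are assumptions matching the documented shapes (batch_size=2, length=1024).
model = Skip_BART()
x_encoder = torch.rand((2, 1024, 512))           # audio features
x_decoder = torch.rand((2, 1024, 2))             # lighting parameters (hue, intensity)
encoder_attention_mask = torch.zeros((2, 1024))  # assumed analogous to the decoder mask
decoder_attention_mask = torch.zeros((2, 1024))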

# Forward pass
output = model(x_encoder, x_decoder, encoder_attention_mask, decoder_attention_mask)
print(output.size())  # Output: [2, 1024, 1024]
```

## Citation

```bibtex
@article{zhao2025automatic,
  title={Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?},
  author={Zhao, Zijian and Jin, Dian and Zhou, Zijing and Zhang, Xiaoyu},
  journal={arXiv preprint arXiv:2506.01482},
  year={2025}
}
```

## Contact

Zijian Zhao: zzhaock@connect.ust.hk