Update model card: Add `text-generation` pipeline tag, `transformers` library, and sample usage

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +37 -17
README.md CHANGED
@@ -1,15 +1,16 @@
  ---
- license: mit
  datasets:
  - monology/pile-uncopyrighted
  language:
  - en
- library_name: CALM
+ library_name: transformers
+ license: mit
+ metrics:
+ - BrierLM
  tags:
  - large language models
  - language modeling
- metrics:
- - BrierLM
+ pipeline_tag: text-generation
  ---

  # Continuous Autoregressive Language Models
@@ -17,7 +18,7 @@ metrics:
  [![Paper](https://img.shields.io/badge/Paper_📃-green)](https://arxiv.org/abs/2510.27688)
  [![GitHub](https://img.shields.io/badge/GitHub_🧑‍💻-blue)](https://github.com/shaochenze/calm)
  [![HuggingFace](https://img.shields.io/badge/HuggingFace_🤗-orange)](https://huggingface.co/collections/cccczshao/calm)
- [![Blog](https://img.shields.io/badge/Blog_✍️-yellowgreen)](https://shaochenze.github.io/blog/2025/CALM/)
+ [![Project Page](https://img.shields.io/badge/Project_Page_✍️-yellowgreen)](https://shaochenze.github.io/blog/2025/CALM/)

  ## Model Description

@@ -25,24 +26,43 @@ Modern Large Language Models (LLMs) are constrained by a fundamental bottleneck:

  This is achieved through a two-stage process:

- 1. **A high-fidelity autoencoder** learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
- 2. **A continuous-domain language model** then performs autoregressive prediction in this vector space.
+ 1. **A high-fidelity autoencoder** learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
+ 2. **A continuous-domain language model** then performs autoregressive prediction in this vector space.

  ### Key Features

- * 🚀 **Ultra-Efficient by Design:** Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
- * 💡 **A New Scaling Axis:** Introduces a new scaling dimension for LLMs—semantic bandwidth (K). Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
- * 🛠️ **A Comprehensive Likelihood-Free Toolkit:** Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
-
-     * **A Robust Autoencoder** to learn high-fidelity continuous representations of token chunks.
-     * **Energy-Based Training**, a principled and likelihood-free method for generative modeling.
-     * **BrierLM**, a new metric for calibrated, likelihood-free evaluation of language models.
-     * **Temperature Sampling** for controlled, high-quality text generation using only a black-box sampler.
+ * 🚀 **Ultra-Efficient by Design:** Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
+ * 💡 **A New Scaling Axis:** Introduces a new scaling dimension for LLMs—**semantic bandwidth (K)**. Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
+ * 🛠️ **A Comprehensive Likelihood-Free Toolkit:** Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
+     * **A Robust Autoencoder** to learn high-fidelity continuous representations of token chunks.
+     * **Energy-Based Training**, a principled and likelihood-free method for generative modeling.
+     * **BrierLM**, a new metric for calibrated, likelihood-free evaluation of language models.
+     * **Temperature Sampling** for controlled, high-quality text generation using only a black-box sampler.

  ## How to use

- See our [GitHub README](https://github.com/shaochenze/calm), where we provide scripts for training and evaluation.
+ We provide scripts for training and evaluation in our [GitHub README](https://github.com/shaochenze/calm).
+
+ ### Sample Usage (Text Generation)
+
+ You can explore the core implementation of **CALM** in the GitHub repository. We've made it easy to use CALM by including our custom code in the 🤗 [Hugging Face model zoo](https://huggingface.co/collections/cccczshao/calm). Simply set `trust_remote_code=True` when loading the models through the Transformers library.
+
+ ```python
+ from transformers import pipeline, AutoTokenizer
+ import torch
+
+ model_name = "cccczshao/CALM-M"  # Example model from the collection
+ pipe = pipeline(
+     "text-generation",
+     model_name,
+     tokenizer=AutoTokenizer.from_pretrained(model_name),
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+ print(pipe("The key to life is", max_new_tokens=20, do_sample=True)[0]["generated_text"])
+ ```

  ## Contact

- If you have any questions, feel free to submit an issue or contact `chenzeshao@tencent.com`.
+ If you have any questions, feel free to submit an issue or contact `chenzeshao@tencent.com`.
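
For readers who want a concrete picture of the two-stage design described in the updated card (an autoencoder that packs K tokens into one continuous vector, plus an autoregressive model over those vectors), the minimal PyTorch sketch below illustrates the data flow only. It is not the CALM implementation: all module names, dimensions, and the plain linear prediction head are illustrative assumptions (CALM trains a likelihood-free, energy-based generative head instead); see the GitHub repository for the actual code.

```python
# Illustrative sketch only -- NOT the official CALM code.
# Stage 1: an autoencoder maps a chunk of K token ids to one continuous vector and back.
# Stage 2: an autoregressive model predicts the next chunk vector from previous vectors.
import torch
import torch.nn as nn

K, VOCAB, D_TOK, D_VEC = 4, 32000, 256, 128  # hypothetical sizes

class ChunkAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_TOK)
        self.encoder = nn.Linear(K * D_TOK, D_VEC)   # K token embeddings -> 1 vector
        self.decoder = nn.Linear(D_VEC, K * VOCAB)   # 1 vector -> K sets of token logits

    def encode(self, tokens):                        # tokens: (B, K)
        return self.encoder(self.embed(tokens).flatten(1))   # (B, D_VEC)

    def decode(self, z):                             # z: (B, D_VEC)
        return self.decoder(z).view(-1, K, VOCAB)    # (B, K, VOCAB) logits

class VectorARModel(nn.Module):
    """Autoregressive model over chunk vectors; a linear head stands in for
    CALM's energy-based generative head."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(D_VEC, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_VEC, D_VEC)

    def forward(self, z_seq):                        # z_seq: (B, T, D_VEC)
        mask = nn.Transformer.generate_square_subsequent_mask(z_seq.size(1))
        return self.head(self.backbone(z_seq, mask=mask))   # next-vector predictions

# One autoregressive step now covers K tokens instead of 1.
tokens = torch.randint(0, VOCAB, (2, 3 * K))          # a batch of 3 chunks per sequence
ae, lm = ChunkAutoencoder(), VectorARModel()
z = torch.stack([ae.encode(c) for c in tokens.split(K, dim=1)], dim=1)  # (2, 3, D_VEC)
z_next = lm(z)[:, -1]                                  # predicted vector for chunk 4
print(ae.decode(z_next).argmax(-1))                    # decode it back to K token ids
```

The point of the sketch is the shape bookkeeping: each autoregressive step consumes and emits one vector standing for K tokens, which is where the factor-of-K reduction in generation steps comes from.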