cccczshao committed
Commit eb12163 · verified · 1 Parent(s): 038f3cf

Update README.md

Files changed (1)
  1. README.md +13 -32

README.md CHANGED
@@ -12,13 +12,13 @@ tags:
 - language modeling
 pipeline_tag: text-generation
 ---
-
 # Continuous Autoregressive Language Models
 
 [![Paper](https://img.shields.io/badge/Paper_📃-green)](https://arxiv.org/abs/2510.27688)
 [![GitHub](https://img.shields.io/badge/GitHub_🧑‍💻-blue)](https://github.com/shaochenze/calm)
 [![HuggingFace](https://img.shields.io/badge/HuggingFace_🤗-orange)](https://huggingface.co/collections/cccczshao/calm)
-[![Project Page](https://img.shields.io/badge/Project_Page_✍️-yellowgreen)](https://shaochenze.github.io/blog/2025/CALM/)
+[![Blog](https://img.shields.io/badge/Blog_✍️-yellowgreen)](https://shaochenze.github.io/blog/2025/CALM/)
+
 
 ## Model Description
 
@@ -26,42 +26,23 @@ Modern Large Language Models (LLMs) are constrained by a fundamental bottleneck:
 
 This is achieved through a two-stage process:
 
-1. **A high-fidelity autoencoder** learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
-2. **A continuous-domain language model** then performs autoregressive prediction in this vector space.
+1. **A high-fidelity autoencoder** learns to compress K tokens into a single vector and reconstruct them with near-perfect accuracy.
+2. **A continuous-domain language model** then performs autoregressive prediction in this vector space.
 
 ### Key Features
 
-* 🚀 **Ultra-Efficient by Design:** Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
-* 💡 **A New Scaling Axis:** Introduces a new scaling dimension for LLMs: **semantic bandwidth (K)**. Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
-* 🛠️ **A Comprehensive Likelihood-Free Toolkit:** Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
-  * **A Robust Autoencoder** to learn high-fidelity continuous representations of token chunks.
-  * **Energy-Based Training**, a principled and likelihood-free method for generative modeling.
-  * **BrierLM**, a new metric for calibrated, likelihood-free evaluation of language models.
-  * **Temperature Sampling** for controlled, high-quality text generation using only a black-box sampler.
+* 🚀 **Ultra-Efficient by Design:** Dramatically improves training and inference efficiency by reducing the number of autoregressive steps by a factor of K.
+* 💡 **A New Scaling Axis:** Introduces a new scaling dimension for LLMs: semantic bandwidth (K). Instead of just scaling parameters and data, you can now scale the amount of information processed in a single step.
+* 🛠️ **A Comprehensive Likelihood-Free Toolkit:** Operating in a continuous domain requires new tools. This repository provides the full suite of algorithms that make CALM possible:
+
+  * **A Robust Autoencoder** to learn high-fidelity continuous representations of token chunks.
+  * **Energy-Based Training**, a principled and likelihood-free method for generative modeling.
+  * **BrierLM**, a new metric for calibrated, likelihood-free evaluation of language models.
+  * **Temperature Sampling** for controlled, high-quality text generation using only a black-box sampler.
 
 ## How to use
 
-We provide scripts for training and evaluation in our [GitHub README](https://github.com/shaochenze/calm).
-
-### Sample Usage (Text Generation)
-
-You can explore the core implementation of **CALM** in the GitHub repository. We've made it easy to use CALM by including our custom code in the 🤗[Hugging Face model zoo](https://huggingface.co/collections/cccczshao/calm). Simply set `trust_remote_code=True` when loading the models through the Transformers library.
-
-```python
-from transformers import pipeline, AutoTokenizer
-import torch
-
-model_name = "cccczshao/CALM-M"  # Example model from the collection
-pipe = pipeline(
-    "text-generation",
-    model_name,
-    tokenizer=AutoTokenizer.from_pretrained(model_name),
-    torch_dtype=torch.bfloat16,
-    device_map="auto",
-    trust_remote_code=True,
-)
-print(pipe("The key to life is", max_new_tokens=20, do_sample=True)[0]["generated_text"])
-```
+See our [GitHub README](https://github.com/shaochenze/calm), where we provide scripts for training and evaluation.
 
  ## Contact
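
The two-stage recipe in the Model Description above is compact enough to sketch in code. The following is a minimal PyTorch illustration, not the released implementation: every class name, layer choice, and dimension is invented for clarity, and the real CALM models replace the deterministic prediction head with an energy-based generative head (see the paper and repository).

```python
# Minimal sketch of the two-stage CALM recipe (illustrative only).
import torch
import torch.nn as nn

K, VOCAB, D_TOK, D_VEC = 4, 32000, 256, 128   # toy sizes, not CALM's

class ChunkAutoencoder(nn.Module):
    """Stage 1: compress a chunk of K tokens into one continuous vector
    and reconstruct the K tokens from it."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_TOK)
        self.enc = nn.Linear(K * D_TOK, D_VEC)
        self.dec = nn.Linear(D_VEC, K * VOCAB)

    def encode(self, tokens):                  # tokens: (batch, K)
        return self.enc(self.embed(tokens).flatten(1))   # (batch, D_VEC)

    def decode_logits(self, z):                # z: (batch, D_VEC)
        return self.dec(z).view(-1, K, VOCAB)  # per-token logits

class ContinuousARModel(nn.Module):
    """Stage 2: autoregression over vectors instead of tokens."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(D_VEC, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_VEC, D_VEC)    # CALM uses a generative head

    def forward(self, z_seq):                  # z_seq: (batch, steps, D_VEC)
        steps = z_seq.size(1)
        causal = torch.triu(torch.full((steps, steps), float("-inf")), 1)
        return self.head(self.backbone(z_seq, mask=causal))

# A 16-token sequence becomes 16 / K = 4 autoregressive steps.
tokens = torch.randint(VOCAB, (2, 16))
ae, lm = ChunkAutoencoder(), ContinuousARModel()
z = torch.stack([ae.encode(tokens[:, i:i + K]) for i in range(0, 16, K)], 1)
z_pred = lm(z)                                           # (2, 4, D_VEC)
logits = ae.decode_logits(z_pred.flatten(0, 1))          # (8, K, VOCAB)
print(z.shape, z_pred.shape, logits.shape)
```

The point of the sketch is the shape arithmetic: the language model takes 4 steps where a token-level model would take 16, which is exactly the K-fold reduction the Key Features list advertises.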
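
Among the toolkit items, BrierLM is the one whose premise is easiest to demonstrate: Brier-style scores, unlike perplexity, can be estimated from samples alone. The snippet below shows only that classical identity, not BrierLM's actual definition (which is in the paper). For a predictive distribution p over tokens and a reference token y, the Brier score is sum_i p_i^2 - 2*p_y + 1, and for two independent draws x1, x2 ~ p we have E[1{x1 == x2}] = sum_i p_i^2 and E[1{x1 == y}] = p_y, so a black-box sampler is all that is needed. The `sampler` below is a hypothetical stand-in for any such generator.

```python
# Sample-only Brier estimate: the identity behind likelihood-free
# evaluation, NOT the BrierLM metric itself.
import random

def brier_estimate(sampler, y, n_pairs=20000):
    """Unbiased Monte Carlo estimate of sum_i p_i^2 - 2*p_y + 1 using
    only a black-box sampler() that returns one token id per call."""
    total = 0.0
    for _ in range(n_pairs):
        x1, x2 = sampler(), sampler()          # two independent draws
        total += (x1 == x2) - 2.0 * (x1 == y) + 1.0
    return total / n_pairs

# Sanity check on a known distribution p = [0.7, 0.2, 0.1], target y = 0:
# exact score = (0.49 + 0.04 + 0.01) - 2 * 0.7 + 1 = 0.14.
p = [0.7, 0.2, 0.1]
draw = lambda: random.choices(range(len(p)), weights=p)[0]
print(brier_estimate(draw, y=0))               # fluctuates around 0.14
```

Lower is better: a sampler that always emits the reference token scores 0, so the score stays meaningful even when log-probabilities are unavailable.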
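
The "Temperature Sampling ... using only a black-box sampler" bullet also rests on a demonstrable fact, sketched here under one simplifying assumption: that the inverse temperature is an integer n. Drawing n independent samples and accepting only a unanimous draw returns an exact sample from p(x)^n / Z, i.e. temperature T = 1/n. To be clear, this folklore rejection scheme is only an existence proof; CALM's own algorithm (described in the paper) is more general and more sample-efficient.

```python
# Exact temperature sampling at T = 1/n from a black-box sampler:
# accept n i.i.d. draws only when they all agree, otherwise retry.
# Then P(output = x) = p(x)**n / sum_y p(y)**n, with no probabilities read.
import random
from collections import Counter

def sharpened_sample(sampler, n):
    """Return one sample distributed as p**n (temperature 1/n)."""
    while True:
        draws = [sampler() for _ in range(n)]
        if all(d == draws[0] for d in draws):  # unanimous -> accept
            return draws[0]

# Demo: base distribution p = [0.6, 0.3, 0.1]; at T = 1/2 the sharpened
# distribution is proportional to [0.36, 0.09, 0.01].
p = [0.6, 0.3, 0.1]
base = lambda: random.choices(range(len(p)), weights=p)[0]
print(Counter(sharpened_sample(base, n=2) for _ in range(5000)))
# expect roughly 5000 * [0.78, 0.20, 0.02] for tokens 0, 1, 2
```

The expected number of rounds is 1 / sum_y p(y)^n, which is why a practical method must be cleverer than this; the sketch only shows that controlled sampling is possible without ever reading a probability.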