---
license: apache-2.0
model_id: chronos-2
tags:
  - time series
  - forecasting
  - foundation models
  - pretrained models
  - safetensors
paper:
  - https://arxiv.org/abs/2510.15821
datasets:
  - autogluon/chronos_datasets
  - Salesforce/GiftEvalPretrain
leaderboards:
  - Salesforce/GIFT-Eval
  - autogluon/fev-leaderboard
pipeline_tag: time-series-forecasting
library_name: chronos-forecasting

---

# Chronos-2
**Chronos-2** is a 120M-parameter, encoder-only time series foundation model for zero-shot forecasting.
It supports **univariate**, **multivariate**, and **covariate-informed** tasks within a single architecture.
Inspired by the T5 encoder, Chronos-2 produces multi-step-ahead quantile forecasts and uses a group attention mechanism for efficient in-context learning across related series and covariates.
Trained on a combination of real-world and large-scale synthetic datasets, it achieves **state-of-the-art zero-shot accuracy** among public models on [**fev-bench**](https://huggingface.co/spaces/autogluon/fev-leaderboard), [**GIFT-Eval**](https://huggingface.co/spaces/Salesforce/GIFT-Eval), and [**Chronos Benchmark II**](https://arxiv.org/abs/2403.07815).
Chronos-2 is also **highly efficient**, delivering over 300 time series forecasts per second on a single A10G GPU and supporting both **GPU and CPU inference**.
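
Quantile forecasts such as those produced by Chronos-2 are typically evaluated with the pinball (quantile) loss. The NumPy sketch below is purely illustrative and not part of the Chronos API:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: penalizes under-prediction with weight q
    and over-prediction with weight (1 - q)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy example: actual values vs. a forecast of the 0.9 quantile
y_true = np.array([10.0, 12.0, 11.0])
y_pred = np.array([11.0, 13.0, 10.0])
loss = pinball_loss(y_true, y_pred, q=0.9)
```

Averaging this loss over several quantile levels (e.g. the 0.1, 0.5, and 0.9 quantiles used later in this card) yields the weighted quantile loss commonly reported on forecasting leaderboards.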

## Links
- πŸš€ [Deploy Chronos-2 on Amazon SageMaker](https://github.com/amazon-science/chronos-forecasting/blob/main/notebooks/deploy-chronos-to-amazon-sagemaker.ipynb)
- πŸ“„ [Technical report](https://arxiv.org/abs/2510.15821v1)
- πŸ’» [GitHub](https://github.com/amazon-science/chronos-forecasting)
- πŸ“˜ [Example notebook](https://github.com/amazon-science/chronos-forecasting/blob/main/notebooks/chronos-2-quickstart.ipynb)
- πŸ“° [Amazon Science Blog](https://www.amazon.science/blog/introducing-chronos-2-from-univariate-to-universal-forecasting)


## Overview

| Capability | Chronos-2 | Chronos-Bolt | Chronos |
|------------|-----------|--------------|----------|
| Univariate Forecasting | βœ… | βœ… | βœ… |
| Cross-learning across items | βœ… | ❌ | ❌ |
| Multivariate Forecasting | βœ… | ❌ | ❌ |
| Past-only (real/categorical) covariates | βœ… | ❌ | ❌ |
| Known future (real/categorical) covariates | βœ… | 🧩 | 🧩 |
| Max. Context Length | 8192 | 2048 | 512 |
| Max. Prediction Length | 1024 | 64 | 64 |

🧩 Chronos & Chronos-Bolt do not natively support future covariates, but they can be combined with external covariate regressors (see [AutoGluon tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html#incorporating-the-covariates)). This approach, however, only models per-timestep covariate effects, not effects across time. In contrast, Chronos-2 supports all covariate types natively.
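
The external covariate-regressor workaround can be sketched as a two-stage residual approach: regress the target on the covariates per timestep, then forecast the residuals with a univariate model. The NumPy sketch below is illustrative only (toy data, plain least squares) and is not the AutoGluon implementation:

```python
import numpy as np

# Toy data: target driven by a known covariate plus a slow trend
rng = np.random.default_rng(0)
n = 100
covariate = rng.normal(size=n)
trend = np.linspace(0.0, 5.0, n)
target = 2.0 * covariate + trend

# Stage 1: per-timestep regression of the target on the covariate
X = np.column_stack([covariate, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

# Stage 2: the residuals (here, mostly the trend) would be forecast by a
# univariate model like Chronos-Bolt; known future covariate effects are
# then added back via `coef`
residuals = target - X @ coef
```

Because stage 1 sees each timestep independently, lagged or cross-time covariate effects are lost, which is the limitation noted above.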


## Usage

### Local usage

For experimentation and local inference, you can use the [inference package](https://github.com/amazon-science/chronos-forecasting).

Install the package:
```shell
pip install "chronos-forecasting>=2.0"
```

Make zero-shot predictions using the `pandas` API:

```python
import pandas as pd  # requires: pip install 'pandas[pyarrow]'
from chronos import Chronos2Pipeline

pipeline = Chronos2Pipeline.from_pretrained("amazon/chronos-2", device_map="cuda")

# Load historical target values and past values of covariates
context_df = pd.read_parquet("https://autogluon.s3.amazonaws.com/datasets/timeseries/electricity_price/train.parquet")

# (Optional) Load future values of covariates
test_df = pd.read_parquet("https://autogluon.s3.amazonaws.com/datasets/timeseries/electricity_price/test.parquet")
future_df = test_df.drop(columns="target")

# Generate predictions with covariates
pred_df = pipeline.predict_df(
    context_df,
    future_df=future_df,
    prediction_length=24,  # Number of steps to forecast
    quantile_levels=[0.1, 0.5, 0.9],  # Quantiles for probabilistic forecast
    id_column="id",  # Column identifying different time series
    timestamp_column="timestamp",  # Column with datetime information
    target="target",  # Column(s) with time series values to predict
)
```
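
If you are not loading a prepared dataset, `context_df` is simply a long-format frame with one row per (series, timestamp) pair. A minimal hand-built example, with column names matching the `predict_df` arguments above (the values are made up):

```python
import pandas as pd

# Two short daily series in long format
context_df = pd.DataFrame({
    "id": ["A"] * 4 + ["B"] * 4,
    "timestamp": list(pd.date_range("2024-01-01", periods=4, freq="D")) * 2,
    "target": [1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0],
})
```

Additional columns beyond `id`, `timestamp`, and the target are treated as covariates.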

### Deploying a Chronos-2 endpoint to SageMaker

For production use, we recommend deploying Chronos-2 endpoints to Amazon SageMaker.

First, update the SageMaker SDK to make sure that all the latest models are available.

```shell
pip install -U sagemaker
```

Deploy an inference endpoint to SageMaker.

```python
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(
    model_id="pytorch-forecasting-chronos-2",
    instance_type="ml.g5.2xlarge",
)
predictor = model.deploy()
```

Now you can send time series data to the endpoint in JSON format.

```python
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")

payload = {
    "inputs": [
        {"target": df["#Passengers"].tolist()}
    ],
    "parameters": {
        "prediction_length": 12,
    }
}
forecast = predictor.predict(payload)["predictions"]
```

For more details about the endpoint API, check out the [example notebook](https://github.com/amazon-science/chronos-forecasting/blob/main/notebooks/deploy-chronos-to-amazon-sagemaker.ipynb).


## Training data
More details about the training data are available in the [technical report](https://arxiv.org/abs/2510.15821).

- Subset of [Chronos Datasets](https://huggingface.co/datasets/autogluon/chronos_datasets) (excluding test portion of datasets that overlap with GIFT-Eval)
- Subset of [GIFT-Eval Pretrain](https://huggingface.co/datasets/Salesforce/GiftEvalPretrain)
- Synthetic univariate and multivariate data
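
To give a flavor of what synthetic pretraining data looks like, here is a toy generator combining trend, seasonality, and noise. This is illustrative only; the actual generators used for Chronos-2 are described in the technical report:

```python
import numpy as np

def synthetic_series(n=256, period=24, seed=0):
    """Toy synthetic series: linear trend + sinusoidal seasonality + noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    trend = 0.05 * t
    seasonality = np.sin(2 * np.pi * t / period)
    noise = rng.normal(scale=0.2, size=n)
    return trend + seasonality + noise

series = synthetic_series()
```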


## Citation

If you find Chronos-2 useful for your research, please consider citing the associated paper:

```
@article{ansari2025chronos2,
  title        = {Chronos-2: From Univariate to Universal Forecasting},
  author       = {Abdul Fatir Ansari and Oleksandr Shchur and Jaris KΓΌken and Andreas Auer and Boran Han and Pedro Mercado and Syama Sundar Rangapuram and Huibin Shen and Lorenzo Stella and Xiyuan Zhang and Mononito Goswami and Shubham Kapoor and Danielle C. Maddix and Pablo Guerron and Tony Hu and Junming Yin and Nick Erickson and Prateek Mutalik Desai and Hao Wang and Huzefa Rangwala and George Karypis and Yuyang Wang and Michael Bohlke-Schneider},
  year         = {2025},
  url          = {https://arxiv.org/abs/2510.15821}
}
```