File size: 23,426 Bytes
c259108
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
659e7d8
c259108
 
 
 
 
 
 
 
 
659e7d8
c259108
 
 
 
 
 
 
 
 
659e7d8
c259108
 
 
 
 
 
 
 
 
659e7d8
c259108
 
 
 
 
 
 
 
 
659e7d8
c259108
 
 
 
 
 
 
 
 
37118f4
c259108
 
 
 
 
 
 
 
 
37118f4
c259108
37118f4
c259108
 
 
 
 
37118f4
 
 
 
 
c259108
37118f4
c259108
 
37118f4
 
 
 
 
 
 
 
 
 
c259108
 
37118f4
c259108
 
37118f4
c259108
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
37118f4
c259108
 
37118f4
c259108
37118f4
c259108
37118f4
 
c259108
 
 
37118f4
c259108
37118f4
 
 
 
c259108
 
37118f4
 
c259108
 
 
 
 
 
37118f4
 
 
c259108
37118f4
c259108
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
37118f4
c259108
 
 
37118f4
 
c259108
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
659e7d8
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
c259108
 
 
 
37118f4
 
 
 
 
 
 
 
c259108
37118f4
c259108
37118f4
 
 
 
 
 
 
 
 
 
c259108
 
37118f4
 
c259108
 
37118f4
 
 
 
c259108
 
 
 
 
 
 
 
 
 
 
659e7d8
 
 
 
 
 
 
 
 
c259108
 
 
 
659e7d8
 
 
 
c259108
 
659e7d8
 
 
 
 
 
 
 
 
 
 
 
 
 
37118f4
 
 
659e7d8
37118f4
c259108
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
37118f4
c259108
37118f4
c259108
37118f4
 
c259108
 
 
37118f4
 
 
c259108
37118f4
c259108
37118f4
c259108
37118f4
 
 
 
 
 
 
 
 
c259108
 
 
37118f4
 
 
 
c259108
 
 
 
 
 
 
 
 
 
 
 
37118f4
c259108
37118f4
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
c259108
 
 
 
 
 
37118f4
c259108
 
 
 
 
 
37118f4
 
c259108
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
---
language:
- en
license: apache-2.0
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- text-generation
- causal-lm
- transformers
- nanohammer
- holographic-embeddings
- state-space
- efficient-attention
- long-context
pipeline_tag: text-generation
model-index:
- name: NanoHammer-1.5B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (ARC-Challenge)
      type: arc_challenge
    metrics:
    - type: acc_norm
      value: 35.67
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (ARC-Easy)
      type: arc_easy
    metrics:
    - type: acc
      value: 65.66
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag
      type: hellaswag
    metrics:
    - type: acc_norm
      value: 57.24
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PIQA
      type: piqa
    metrics:
    - type: acc
      value: 72.80
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: WinoGrande
      type: winogrande
    metrics:
    - type: acc
      value: 59.91
      name: accuracy
---

<div align="center">

# πŸ”¨ NanoHammer-1.5B-Instruct

**Explicit Causal Modeling with Holographic Integral State Compression**

*A hybrid architecture combining Transformer attention with global causal state accumulation*

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Model Size](https://img.shields.io/badge/Parameters-1.5B-green.svg)]()
[![Context Length](https://img.shields.io/badge/Context-131K-orange.svg)]()

</div>

---

## 🌟 Key Innovation: Global Causal Context per Token

NanoHammer introduces a hybrid architecture that augments standard Transformer layers with an **explicit causal state mechanism**. Unlike traditional attention where each token only sees raw previous tokens, NanoHammer provides **every token with access to a compressed global causal summary** of the entire preceding sequence.

### 🎯 Core Advantages

| Feature | Traditional Attention | NanoHammer |
|---------|---------------------|------------|
| **Causal Modeling** | Implicit (learned from raw tokens) | **Explicit (accumulated state)** |
| **Per-Token Global Context** | Must attend to all O(n) previous tokens | **Direct access via state token** |
| **Incremental Decode Cost** | KV cache lookup O(n) | **State update O(1)** |
| **Causal Summary Size** | KV cache grows O(nΒ·dΒ·L) | **Fixed 512d per layer** |
| **Information Flow** | Token-to-token only | **Token β†’ State β†’ Token** |

### πŸ”¬ How It Works

```
Traditional Transformer:          NanoHammer Architecture:

Token₁ ──────────────┐            Token₁ ──→ State₁ ──┐
Tokenβ‚‚ ──────────┐   β”‚            Tokenβ‚‚ ──→ Stateβ‚‚ ──┼──→ [Stateβ‚œ] prepended
Token₃ ────┐     β”‚   β”‚            Token₃ ──→ State₃ ───    to attention input
   ...     β–Ό     β–Ό   β–Ό               ...              β”‚
Tokenβ‚œ β†’ Attend(T₁..Tβ‚œβ‚‹β‚)         Tokenβ‚œ ──→ Stateβ‚œ β”€β”€β”˜
         (sees raw tokens)                 ↓
                                  Each token attends to:
                                  [Global State] + [Local Tokens]
```

The state token **S(t)** acts as a **causal information accumulator**:
- **Holographic encoding**: Position-aware via complex-domain rotations (e^(iΞΈ))
- **Fixed-point iteration**: Multi-head Euler method for stable state evolution
- **Global context injection**: Every token can attend to compressed history, not just raw tokens

---

## πŸš€ Quick Start

### Installation

```bash
pip install transformers torch
```

### Basic Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model
model_path = "NoesisLab/NanoHammer-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Generate response
prompt = "Explain the concept of causality in physics."
messages = [{"role": "user", "content": prompt}]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
    top_p=0.9,
)

response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
```

### Multi-turn Conversation

```python
messages = [
    {"role": "user", "content": "What is a holographic state?"},
    {"role": "assistant", "content": "A holographic state is a compressed representation that encodes global information..."},
    {"role": "user", "content": "How does it differ from traditional attention?"}
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# ... generate as above
```

---

## πŸ—οΈ Architecture Details

### Hybrid Decoder Layer Flow

Each NanoHammer decoder layer maintains **two parallel streams** that merge for attention:

```
Input: Hidden (B, T, 2048) + State (B, T, 512)
    ↓
[1] State Update Cell (parallel to hidden stream)
    β€’ Multi-head fixed-point iteration: S_{t+1} = S_t + Ξ±Β·f(S_t)
    β€’ 16 heads Γ— 32 dim = 512 total
    β€’ O(1) computation per token position
    ↓
[2] State Token Projection
    β€’ Project state_hidden_size (512) β†’ hidden_size (2048)
    β€’ Creates T state tokens encoding causal history up to each position
    ↓
[3] Sequence Concatenation
    β€’ Concat: [State₁..Stateβ‚œ] + [Hidden₁..Hiddenβ‚œ]
    β€’ Sequence length: T β†’ 2T
    β€’ Custom causal mask ensures proper causality
    ↓
[4] Llama Self-Attention
    β€’ Standard Llama attention over 2T tokens
    β€’ Each hidden token can attend to its corresponding state token
    β€’ GQA: 32 query heads, 8 KV heads
    ↓
[5] Llama MLP
    β€’ SwiGLU activation
    β€’ 2048 β†’ 8192 β†’ 2048
    ↓
[6] Extract Hidden Tokens
    β€’ Remove state tokens from output
    β€’ Return T hidden tokens
    ↓
Output: Hidden (B, T, 2048) + Updated State (B, T, 512)
```

### Core Components

#### 1️⃣ **HolographicRotaryEmbedding**
```python
# Complex-domain rotational encoding
x_i * e^(i*ΞΈ_k)  where ΞΈ_k = position_id / (10000^(2k/d))
```
- Encodes **absolute positions** in complex space
- Enables **inverse rotation** for relative coordinate transformations
- Maintains **temporal coherence** across state updates

#### 2️⃣ **StateUpdateCell**
```python
# Multi-head Euler iteration
for head in range(num_state_heads):
    S_new[head] = S[head] + step_size[head] * MLP(LayerNorm(S[head]))
```
- **16 independent state heads** (512-dim total)
- **Learnable step sizes** per head for adaptive evolution
- **Pre-norm + MLP + Post-norm** architecture for stability

#### 3️⃣ **StateTokenProjection**
```python
# Project state to hidden dimension for attention participation
state_token = Linear(state_hidden_size=512 β†’ hidden_size=2048)
```
- **Dimensional expansion**: 512 β†’ 2048
- **Per-position projection**: Each position gets its own state token
- **Enables attention**: State tokens participate in standard Llama attention

### Model Specifications

| Parameter | Value |
|-----------|-------|
| **Total Parameters** | ~1.5B |
| **Hidden Size** | 2048 |
| **Intermediate Size** | 8192 |
| **Num Layers** | 16 |
| **Attention Heads** | 32 (query) / 8 (KV, GQA) |
| **State Heads** | 16 |
| **State Hidden Size** | 512 |
| **Vocab Size** | 128,256 |
| **Max Position Embeddings** | 131,072 |
| **RoPE Theta** | 500,000 |

---

## 🧠 O(1) Incremental Inference: The Core Logic

This is the heart of how NanoHammer achieves O(1) state recurrence. In traditional Transformers, generating the $t$-th token typically requires looking back at all $t-1$ previous tokens via the KV Cache. In NanoHammer, we compress "history" into a fixed-dimensional state vector $S$.

The essence of `_forward_incremental` is that it's not "reviewing" historyβ€”it's **updating the current state snapshot**.

### Algorithm: NanoHammer Incremental Inference (O(1) State Recurrence)

**Inputs:**
- $x_t$: Current token's hidden state
- $S_t$: Cumulative integral state entering this layer
- $S_{prev\_out}$: Previous timestep's output state from this layer (this is keyβ€”represents the fully evolved history at $t-1$)
- $Cache_{KV}$: Historical Key-Value cache

**Outputs:**
- $y_t$: Current layer's output hidden state
- $S_{updated}$: Updated state (passed to next timestep as $S_{prev\_out}$)

```python
def forward_incremental(x_t, S_t, S_prev_out, Cache_KV):
    """
    NanoHammer's O(1) State Recurrence Step
    Complexity: Regardless of sequence length, state S has fixed dimensions,
                so computation remains constant.
    """

    # 1. State Evolution (The Euler Step)
    # Physics: Evolve the system state forward one step based on current input S_t
    # S_{updated} = S_t + alpha * f(S_t)
    S_updated = StateUpdateCell(S_t)

    # 2. Holographic Inverse Rotation
    # Physics: Project previous "absolute state" S_prev_out into current timestep t's
    #          "relative coordinate system"
    # This step decompresses position information encoded in S
    # R^{-1}(S, t) = S * e^{-i * theta * t}
    S_relative = InverseHolographicRoPE(S_prev_out, position_id=t)

    # 3. State Materialization
    # Project abstract physics state vector into Transformer-readable token space
    Token_State = Project(S_relative)

    # 4. Dual-Token Query Construction
    # We don't just query x_t; we query [Global State, Current Input]
    # Query = [Token_State, x_t]
    Q_pair = Concat([Token_State, x_t])

    # 5. Hybrid Attention
    # Token_State handles "recalling" global history (Long-term Memory)
    # x_t handles "attending to" local details (Local Context)
    # Note: While attention still occurs, deeper layers gradually ignore Cache_KV,
    #       relying primarily on Token_State
    y_pair = LlamaAttention(
        query=Q_pair,
        key_value=Cache_KV + Current_KV
    )

    # 6. Extract Output
    # We only need the output corresponding to x_t; Token_State's output is discarded
    # (it only serves as guidance)
    y_t = y_pair[1]

    return y_t, S_updated
```

### Key Insight

The state update (`StateUpdateCell`) is **O(1)** regardless of sequence length because:
1. State dimension is fixed at 512
2. The Euler step operates only on the current state, not on historical tokens
3. Position information is encoded holographically, not through explicit sequence traversal

This contrasts with standard KV-cache attention where attending to history costs O(T).

---

## ⚑ Performance Characteristics

### Computational Complexity

| Phase | Operation | Complexity | Description |
|-------|-----------|-----------|-------------|
| **Prefill** | State Updates | O(T) | T tokens Γ— O(1) per update |
| **Prefill** | Self-Attention | O(TΒ²) | Standard quadratic attention |
| **Prefill** | **Total** | **O(TΒ²)** | Dominated by attention |
| **Decode** | State Update | **O(1)** | Single fixed-size iteration |
| **Decode** | Attention (with KV cache) | O(T) | Attend to T cached tokens |
| **Decode** | **Total per token** | **O(T)** | Same as standard Transformer |

### What NanoHammer Actually Provides

**NOT claiming**:
- ~~O(1) total inference~~ (still O(TΒ²) prefill, O(T) decode)
- ~~Linear attention replacement~~ (uses standard quadratic attention)

**Actually provides**:
- **Global causal context per token**: Each token directly attends to a compressed state summarizing ALL prior tokens, not just what fits in attention window
- **O(1) incremental state update**: During decode, updating the causal state costs O(1), independent of sequence length
- **Fixed-size causal summary**: The state is always 512d regardless of sequence length

### Memory Characteristics

```
KV Cache:        O(T Γ— d Γ— L)    [grows with sequence]
Causal State:    O(d_s Γ— L)      [512 Γ— 16 = 8KB, constant]
```

The state provides a **complementary** compressed representation:
- KV cache: exact token representations for attention
- Causal state: accumulated global context summary
- Both are used together, not as replacements

---

## πŸ“Š Benchmark Results

NanoHammer has been evaluated on standard language understanding benchmarks using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) framework (0-shot evaluation).

### Common Sense Reasoning & Knowledge

| Task | Version | Metric | Value | Stderr |
|------|---------|--------|-------|--------|
| **ARC-Challenge** | 1 | acc | 32.42% | Β±1.37% |
| | | acc_norm | **35.67%** | Β±1.40% |
| **ARC-Easy** | 1 | acc | **65.66%** | Β±0.97% |
| | | acc_norm | 62.67% | Β±0.99% |
| **HellaSwag** | 1 | acc | 43.54% | Β±0.49% |
| | | acc_norm | **57.24%** | Β±0.49% |
| **PIQA** | 1 | acc | **72.80%** | Β±1.04% |
| | | acc_norm | 72.47% | Β±1.04% |
| **WinoGrande** | 1 | acc | **59.91%** | Β±1.38% |

### Performance Summary

```
Average Accuracy (normalized): 57.59%
- Strong performance on physical reasoning (PIQA: 72.80%)
- Competitive commonsense reasoning (HellaSwag: 57.24%, WinoGrande: 59.91%)
- Solid performance on knowledge tasks (ARC-Easy: 65.66%, ARC-Challenge: 35.67%)
```

### Comparison with Similar-Scale Models (OpenLLM Leaderboard)

| Metric | NanoHammer (1.5B, 16K Data) | Llama 3.2 1B (Instruct) | Qwen 2.5 1.5B (Instruct) | TinyLlama 1.1B (3T Tokens) |
|--------|----------------------------|-------------------------|--------------------------|---------------------------|
| **WinoGrande** | **59.91%** πŸ† | 59.70% | ~60.2% | 59.1% |
| **PIQA** | 72.80% βš”οΈ | 74.40% | ~75.0% | 73.3% |
| **ARC-Challenge** | 35.67% | 38.10% | ~40.5% | 30.1% |
| **HellaSwag** | 57.24% | 60.80% | ~65.0% | 59.2% |
| **ARC-Easy** | 65.66% | 68.50% | ~70.0% | 55.2% |

> πŸ† **WinoGrande**: Outperforms Llama 3.2 1B with only 16K training samples!
> βš”οΈ **PIQA**: Competitive physical reasoning, close to fully-trained baselines
> πŸ“Š **Data Efficiency**: Achieves comparable results with **16K samples** vs **3T tokens** (TinyLlama)

**Observations:**
- Performance is comparable to other 1-2B parameter models
- The causal state mechanism does not degrade standard benchmark performance
- Strong physical reasoning (PIQA: 72.80%) suggests the state captures useful semantic information
- Note: These benchmarks don't specifically test long-range causal reasoning where the architecture may have advantages

### Evaluation Details

**Setup:**
- Evaluation framework: `lm-evaluation-harness`
- Shot configuration: 0-shot (no few-shot examples)
- Temperature: Greedy decoding
- Batch size: Auto

**Reproducing Results:**
```bash
# Install lm-eval
pip install lm-eval

# Run evaluation
lm_eval --model hf \
    --model_args pretrained=NoesisLab/NanoHammer-1.5B-Instruct,trust_remote_code=True \
    --tasks arc_challenge,arc_easy,hellaswag,piqa,winogrande \
    --batch_size auto \
    --output_path results/
```

---

## πŸŽ“ Training

### Base Model & Weight Transfer

NanoHammer initializes from **Llama-3.2-1B-Instruct** via selective weight transfer:

**Frozen Components** (from Llama):
- Token embeddings (`embed_tokens`)
- Language modeling head (`lm_head`)
- Self-attention layers (`self_attn`)
- MLP layers (`mlp`)
- All RMS layer norms

**Trainable Components** (NanoHammer-specific):
- `token_to_state`: Projects input tokens β†’ state space
- `holographic_rope`: Position encoding for state
- `state_cell`: State update mechanism (per layer)
- `state_projection`: State β†’ hidden projection (per layer)

### Training Configuration

- **Dataset**: High-quality instruction-following data
- **Precision**: BF16 mixed precision
- **Optimization**: AdamW with cosine LR schedule
- **Gradient Checkpointing**: Enabled for memory efficiency
- **Batch Size**: Scaled with gradient accumulation
- **Max Sequence Length**: 2048 tokens (extendable to 131K via RoPE)

---

## πŸ” Why NanoHammer?

### The Problem: Raw Token Attention

Traditional Transformers compute attention over raw token representations:
```
Tokenβ‚œ attends to β†’ [Token₁, Tokenβ‚‚, ..., Tokenβ‚œβ‚‹β‚]
                     (all raw, uncompressed representations)
```

**Limitations**:
- Each token must "re-derive" global context from scratch via attention
- No explicit mechanism for causal information accumulation
- Long-range dependencies require attending through many intermediate tokens

### The Solution: Explicit Causal State

NanoHammer adds a **parallel causal state stream**:
```
              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚     Causal State Stream         β”‚
              β”‚  S₁ β†’ Sβ‚‚ β†’ S₃ β†’ ... β†’ Sβ‚œ        β”‚
              β”‚  (accumulated causal summary)    β”‚
              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                            β”‚
Tokenβ‚œ attends to β†’ [Sβ‚œ] + [Token₁, ..., Tokenβ‚œβ‚‹β‚]
                     ↑
            Global context in ONE token
```

**Benefits**:
- **Direct global access**: Sβ‚œ summarizes all causal information up to t
- **Explicit accumulation**: State evolves via learnable fixed-point iteration
- **Complementary to attention**: Doesn't replace attention, augments it
- **Interpretable**: State can be analyzed as a compressed causal representation

---

## πŸ“Š Model Architecture Diagram

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Input: "What is the capital of France?"                β”‚
β”‚  Tokens: [What, is, the, capital, of, France, ?]       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                 β”‚
                 β–Ό
         Token Embeddings (B, T, 2048)
                 β”‚
                 β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                 β”‚                          β–Ό
                 β”‚                 Token-to-State Projection
                 β”‚                 (2048 β†’ 512, init state)
                 β”‚                          β”‚
                 β”‚                          β–Ό
                 β”‚                 Holographic RoPE
                 β”‚                 (position encoding in state space)
                 β”‚                          β”‚
         ╔═══════▼══════════════════════════▼════════╗
         β•‘              Layer 1-16                   β•‘
         ╠═══════════════════════════════════════════╣
         β•‘                                           β•‘
         β•‘  Hidden (B,T,2048)    State (B,T,512)     β•‘
         β•‘       β”‚                    β”‚              β•‘
         β•‘       β”‚              β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”        β•‘
         β•‘       β”‚              β”‚  State    β”‚        β•‘
         β•‘       β”‚              β”‚  Update   β”‚ O(1)   β•‘
         β•‘       β”‚              β”‚  Cell     β”‚ per    β•‘
         β•‘       β”‚              β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜ token  β•‘
         β•‘       β”‚                    β”‚              β•‘
         β•‘       β”‚              β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”        β•‘
         β•‘       β”‚              β”‚  Project  β”‚        β•‘
         β•‘       β”‚              β”‚  512β†’2048 β”‚        β•‘
         β•‘       β”‚              β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜        β•‘
         β•‘       β”‚                    β”‚              β•‘
         β•‘       β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜              β•‘
         β•‘               β–Ό                           β•‘
         β•‘      [State Tokens] + [Hidden Tokens]     β•‘
         β•‘           (B, 2T, 2048)                   β•‘
         β•‘               β”‚                           β•‘
         β•‘         β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”                     β•‘
         β•‘         β”‚   Llama   β”‚                     β•‘
         β•‘         β”‚ Attention β”‚  O(TΒ²)              β•‘
         β•‘         β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜                     β•‘
         β•‘               β”‚                           β•‘
         β•‘         β”Œβ”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”                     β•‘
         β•‘         β”‚   Llama   β”‚                     β•‘
         β•‘         β”‚    MLP    β”‚                     β•‘
         β•‘         β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜                     β•‘
         β•‘               β”‚                           β•‘
         β•‘      Extract hidden tokens (B, T, 2048)   β•‘
         β•‘               β”‚                           β•‘
         β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β–Όβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
                 β”‚
         β”Œβ”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚   Final Norm   β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                 β”‚
         β”Œβ”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚     LM Head    β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                 β”‚
                 β–Ό
    Output: "Paris" (logits over 128K vocab)
```

**Key insight**: The state tokens (carrying global causal context) are **prepended** to the sequence, so every token can attend to them. This doubles the attention sequence length to 2T but provides direct global context access.

---

## πŸ“š Citation

If you use NanoHammer in your research, please cite:

```bibtex
@misc{nanohammer2025,
  title={NanoHammer: Explicit Causal Modeling with Holographic Integral State Compression},
  author={NoesisLab},
  year={2025},
  howpublished={\url{https://huggingface.co/NoesisLab/NanoHammer-1.5B-Instruct}},
}
```

---

## πŸ“ License

This model is released under the **Apache 2.0** license, inheriting from the base Llama-3.2-1B-Instruct model.

---

## πŸ™ Acknowledgments

- **Base Model**: Meta's Llama-3.2-1B-Instruct
- **Inspiration**: State-space models, holographic memory, and causal inference theory
- **Framework**: HuggingFace Transformers

---

## πŸ”— Links

- **Model Card**: [NoesisLab/NanoHammer-1.5B-Instruct](https://huggingface.co/NoesisLab/NanoHammer-1.5B-Instruct)
- **Paper**: Coming soon

---

<div align="center">

**Built with ❀️ by NoesisLab**

*Advancing causal modeling in large language models*

</div>