# Modded NanoGPT Model

This is a GPT-2-style model trained with modifications from modded-nanogpt.
## Model Config
- Layers: 12
- Heads: 6
- Embedding dimension: 768
- Vocab size: 50304
- Squared MLP: False
- Bilinear: True
- Gated: True
- Expansion factor: 4
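For reference, these hyperparameters correspond roughly to the config object sketched below. The field names (`n_layer`, `n_head`, `n_embd`, `vocab_size`) are assumptions based on nanoGPT-style configs, and the MLP-variant flags are left as comments with hypothetical names; the authoritative keys are whatever this repo's `config.json` contains, which the Usage section loads directly.

```python
from train_gpt2 import GPTConfig

# Field names here are assumed (nanoGPT-style); consult config.json for the exact keys.
config = GPTConfig(
    n_layer=12,        # Layers
    n_head=6,          # Heads
    n_embd=768,        # Embedding dimension
    vocab_size=50304,  # GPT-2 vocab (50257) padded to a multiple of 128
    # MLP variant flags from the card (exact names hypothetical):
    # squared_mlp=False, bilinear=True, gated=True, expansion_factor=4
)
```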
## Training
- Training step: 6484
## Usage
```python
from huggingface_hub import hf_hub_download
import json

import torch

from train_gpt2 import GPT, GPTConfig

# Download the config
config_path = hf_hub_download(repo_id="Elriggs/gpt2-swiglu-squared-attn", filename="config.json")
with open(config_path) as f:
    config_dict = json.load(f)

# Remove fields that GPTConfig does not accept
config_dict.pop('step', None)

# Create the model
config = GPTConfig(**config_dict)
model = GPT(config)

# Download and load the weights
weights_path = hf_hub_download(repo_id="Elriggs/gpt2-swiglu-squared-attn", filename="pytorch_model.bin")
state_dict = torch.load(weights_path, map_location='cpu')
model.load_state_dict(state_dict)
model.eval()
```
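After loading, a quick greedy-generation sanity check looks like the sketch below. It assumes the model follows the usual nanoGPT convention of `model(idx)` returning logits of shape `(batch, seq, vocab)` (or a tuple whose first element is the logits) and uses the GPT-2 BPE tokenizer via `tiktoken`; the exact forward signature in this repo's `train_gpt2.py` may differ, so adjust accordingly.

```python
import tiktoken

# Assumption: model(idx) returns logits, or a tuple whose first element is the logits.
enc = tiktoken.get_encoding("gpt2")
idx = torch.tensor([enc.encode("The capital of France is")], dtype=torch.long)

with torch.no_grad():
    for _ in range(30):  # generate 30 tokens greedily
        out = model(idx)
        logits = out[0] if isinstance(out, tuple) else out
        # Vocab is padded to 50304; restrict to the 50257 real GPT-2 tokens before decoding.
        next_token = logits[:, -1, :50257].argmax(dim=-1, keepdim=True)
        idx = torch.cat([idx, next_token], dim=1)

print(enc.decode(idx[0].tolist()))
```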