# Chess Policy Network
A neural network-based chess engine that uses a policy network to predict the best moves.
## Model Details
- Architecture: Convolutional ResNet with policy head
- Input: 12-channel board representation (one 8×8 plane per piece type and color; a toy encoding sketch follows this list)
- Output: logits over the 4672 move classes
- Total Parameters: 5,827,456
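The 12 input channels correspond to the six piece types for each color. As an illustration only, a toy encoding along those lines is sketched below using python-chess; the repo's own `encode_board` in `training.utils` may order the planes or handle details such as side to move differently.

```python
import chess
import numpy as np
import torch

def encode_board_12ch(board: chess.Board) -> torch.Tensor:
    """Toy 12-plane encoding: one 8x8 plane per (color, piece type).

    Channels 0-5: white pawn..king, channels 6-11: black pawn..king.
    Illustration only; training.utils.encode_board may differ.
    """
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for square, piece in board.piece_map().items():
        channel = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        rank, file = divmod(square, 8)
        planes[channel, rank, file] = 1.0
    return torch.from_numpy(planes)

# Example: encode the starting position -> tensor of shape (12, 8, 8)
start_planes = encode_board_12ch(chess.Board())
```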
## Architecture Configuration

```yaml
input_channels: 12
num_res_blocks: 8
filters: 128
policy_channels: 32
num_move_classes: 4672
dropout: 0.1
activation: relu
```
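Read literally, these hyperparameters describe a convolutional stem, a stack of residual blocks, and a small policy head. The sketch below is one plausible way they could fit together in PyTorch; the actual `PolicyNetwork` in `training.models` (and therefore the exact parameter count) may be organized differently.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convs with batch norm, dropout, and a skip connection."""
    def __init__(self, filters: int, dropout: float):
        super().__init__()
        self.conv1 = nn.Conv2d(filters, filters, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(filters)
        self.conv2 = nn.Conv2d(filters, filters, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(filters)
        self.drop = nn.Dropout2d(dropout)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.drop(self.bn2(self.conv2(out)))
        return torch.relu(out + x)

class PolicyNetSketch(nn.Module):
    """Hypothetical stand-in for training.models.PolicyNetwork."""
    def __init__(self, input_channels=12, num_res_blocks=8, filters=128,
                 policy_channels=32, num_move_classes=4672, dropout=0.1):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(input_channels, filters, 3, padding=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(),
        )
        self.blocks = nn.Sequential(
            *[ResBlock(filters, dropout) for _ in range(num_res_blocks)]
        )
        self.policy_head = nn.Sequential(
            nn.Conv2d(filters, policy_channels, 1, bias=False),
            nn.BatchNorm2d(policy_channels),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(policy_channels * 8 * 8, num_move_classes),
        )

    def forward(self, x):  # x: (batch, 12, 8, 8)
        return self.policy_head(self.blocks(self.stem(x)))

# Sanity check: one dummy position -> logits of shape (1, 4672)
logits = PolicyNetSketch()(torch.zeros(1, 12, 8, 8))
```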
## Training Data
- Trained on high-ELO chess games (minimum rating of 2200)
- Supervised learning on master games
- Cross-entropy loss with AdamW optimizer
- Cosine annealing learning rate schedule (a minimal training-loop sketch follows)
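As a concrete reading of the loss, optimizer, and schedule bullets above, here is a minimal supervised training loop. The stand-in model, random tensors, batch size, learning rate, weight decay, and epoch count are all illustrative and are not the values used to train this release.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-ins; the real pipeline trains the repo's PolicyNetwork
# on encoded master-game positions and their played moves.
model = nn.Sequential(nn.Flatten(), nn.Linear(12 * 8 * 8, 4672))
boards = torch.randn(1024, 12, 8, 8)            # encoded positions
targets = torch.randint(0, 4672, (1024,))       # indices into the 4672 move classes
loader = DataLoader(TensorDataset(boards, targets), batch_size=256, shuffle=True)

num_epochs = 10                                  # illustrative value
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(num_epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)            # cross-entropy over move classes
        loss.backward()
        optimizer.step()
    scheduler.step()                             # cosine annealing, stepped once per epoch
```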
## Usage
```python
import torch
import chess  # python-chess; encode_board is assumed to accept a chess.Board
from huggingface_hub import hf_hub_download
from training.models import PolicyNetwork
from training.utils import encode_board, filter_policy_to_legal  # filter_policy_to_legal assumed to live here

# Download the weights and build the network
model_path = hf_hub_download("rzhang-7/chesshacks-model", "pytorch_model.bin")
# Hyperparameters from "Architecture Configuration" above; from_config is assumed to accept a dict
config = {"input_channels": 12, "num_res_blocks": 8, "filters": 128,
          "policy_channels": 32, "num_move_classes": 4672,
          "dropout": 0.1, "activation": "relu"}
model = PolicyNetwork.from_config(config)
model.load(model_path)
model.eval()

# Get move predictions for a position
board = chess.Board()
board_tensor = encode_board(board)
with torch.no_grad():
    logits = model(board_tensor.unsqueeze(0))

# Keep only the moves that are legal in this position
move_probs = filter_policy_to_legal(logits[0].numpy(), board)
```
## Performance

Evaluate with:

```bash
python training/scripts/evaluate.py --model-path path/to/model.pt --data-dir training/data
```
## License
MIT
## Citation
If you use this model, please cite:
```bibtex
@misc{chess_policy_net,
  title={Chess Policy Network},
  author={Chess Hacks},
  year={2024},
  url={https://huggingface.co/rzhang-7/chesshacks-model}
}
```
## Disclaimers
- This model is trained on historical chess games and may reflect biases in those games
- The model is provided as-is without guarantees
- For competitive chess, consider using dedicated engines like Stockfish or Leela Chess Zero