---
license: mit
language:
- en
tags:
- gan
- mnist
- 7gen
- pytorch
library_name: torch
model_type: image-generator
---

![7Gen Model](https://img.shields.io/badge/7Gen-MNIST_Generator-blue?style=for-the-badge)
![Python](https://img.shields.io/badge/python-3.8+-blue.svg?style=for-the-badge&logo=python)
![PyTorch](https://img.shields.io/badge/PyTorch-1.12+-red.svg?style=for-the-badge&logo=pytorch)
![License](https://img.shields.io/badge/license-MIT-green.svg?style=for-the-badge)

# 7Gen - Advanced MNIST Digit Generation System

**State-of-the-art Conditional GAN for MNIST digit synthesis with self-attention mechanisms.**

---

## πŸš€ Features

- 🎯 **Conditional Generation**: Generate specific digits (0–9) on demand (see the usage sketch below this list).
- πŸ–ΌοΈ **High Quality Output**: Sharp and realistic handwritten digit samples.
- ⚑ **Fast Inference**: Real-time generation on GPU.
- πŸ”Œ **Easy Integration**: Minimal setup, PyTorch-native implementation.
- πŸš€ **GPU Acceleration**: Full CUDA support.
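
For example, conditional sampling can look like the minimal sketch below. The `model` module path, `Generator` class name, constructor signature, and checkpoint filename are illustrative assumptions, not a published API of this repository.

```python
import torch

# Hypothetical usage: module path, class name, constructor signature,
# and checkpoint filename are assumptions, not a published API.
from model import Generator

device = "cuda" if torch.cuda.is_available() else "cpu"
G = Generator(latent_dim=100, num_classes=10).to(device).eval()
G.load_state_dict(torch.load("7gen_generator.pt", map_location=device))

with torch.no_grad():
    z = torch.randn(16, 100, device=device)       # batch of noise vectors
    labels = torch.full((16,), 7, device=device)  # request the digit 7
    images = G(z, labels)                         # -> (16, 1, 28, 28)
```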

---

## πŸ” Model Details

- **Architecture**: Conditional GAN with self-attention (a simplified sketch follows this list)  
- **Parameters**: 2.5M  
- **Input**: 100-dimensional noise vector + class label  
- **Output**: 28x28 grayscale images  
- **Training Data**: MNIST dataset (60,000 images)  
- **Training Time**: ~2 hours on NVIDIA RTX 3050 Ti  
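
The sketch below shows how the inputs and outputs above typically fit together: the class label is embedded and concatenated with the noise vector before the fully connected stack. Layer widths follow the training configuration further down; the exact 7Gen module layout is an assumption, and the self-attention block is omitted for brevity.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Illustrative sketch: matches the stated inputs/outputs and the layer
    widths from the configuration below, but is not necessarily the exact
    7Gen architecture (the self-attention block is omitted here)."""
    def __init__(self, latent_dim=100, num_classes=10):
        super().__init__()
        # Embed the class label and concatenate it with the noise vector,
        # the standard way to condition a GAN on a label.
        self.label_emb = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 28 * 28), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x).view(-1, 1, 28, 28)
```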

---

## πŸ§ͺ Performance Metrics

| Metric           | Score |
|------------------|-------|
| **FID Score**    | 12.3  |
| **Inception Score** | 8.7   |

- **Training Epochs**: 100  
- **Batch Size**: 64  
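
To sanity-check numbers like the FID above, a reproduction sketch using `torchmetrics` (installed with the `image` extra) might look as follows; `real_batch` and `fake_batch` are stand-ins for an MNIST batch and generator output, not data from this repository.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Hedged reproduction sketch: FID compares Inception features of real vs.
# generated images, so the 1-channel 28x28 digits must become 3-channel
# floats in [0, 1] first.
fid = FrechetInceptionDistance(feature=2048, normalize=True)

real_batch = torch.rand(64, 1, 28, 28) * 2 - 1  # stand-in, values in [-1, 1]
fake_batch = torch.rand(64, 1, 28, 28) * 2 - 1  # stand-in for G(z, labels)

def to_rgb(x):
    # Rescale the Tanh range to [0, 1] and repeat the single channel.
    return ((x + 1) / 2).repeat(1, 3, 1, 1)

fid.update(to_rgb(real_batch), real=True)
fid.update(to_rgb(fake_batch), real=False)
print(float(fid.compute()))
```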

---

## βš™οΈ Training Configuration

```yaml
model:
  latent_dim: 100
  num_classes: 10
  generator_layers: [256, 512, 1024]
  discriminator_layers: [512, 256]

training:
  batch_size: 64
  learning_rate: 0.0002
  epochs: 100
  optimizer: Adam
  beta1: 0.5
  beta2: 0.999
```
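
For orientation, this configuration maps onto a standard conditional-GAN training loop roughly as sketched below, reusing the illustrative `ConditionalGenerator` from the Model Details section together with a placeholder discriminator; neither is this repository's actual training code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Placeholder discriminator using the [512, 256] widths from the config;
# an assumption, not the repository's module.
class ConditionalDiscriminator(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, 28 * 28)
        self.net = nn.Sequential(
            nn.Linear(28 * 28 * 2, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit; the loss applies the sigmoid
        )

    def forward(self, img, labels):
        x = torch.cat([img.flatten(1), self.label_emb(labels)], dim=1)
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
G = ConditionalGenerator().to(device)   # sketch from the Model Details section
D = ConditionalDiscriminator().to(device)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

tf = transforms.Compose([transforms.ToTensor(),
                         transforms.Normalize((0.5,), (0.5,))])  # match Tanh
loader = DataLoader(datasets.MNIST("data", train=True, download=True,
                                   transform=tf),
                    batch_size=64, shuffle=True)

for epoch in range(100):
    for real, labels in loader:
        real, labels = real.to(device), labels.to(device)
        n = real.size(0)
        z = torch.randn(n, 100, device=device)
        fake = G(z, labels)
        ones = torch.ones(n, 1, device=device)
        zeros = torch.zeros(n, 1, device=device)

        # Discriminator step: push real toward 1, detached fakes toward 0.
        d_loss = bce(D(real, labels), ones) + bce(D(fake.detach(), labels), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: make the discriminator score fakes as real.
        g_loss = bce(D(fake, labels), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```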