| | --- |
| | license: apache-2.0 |
| | tags: |
| | - tensorflow |
| | - optimizer |
| | - deep-learning |
| | - machine-learning |
| | - adaptive-learning-rate |
| | library_name: tensorflow |
| | --- |
| | |
| | # NEAT Optimizer |
| |
|
| | **NEAT (Noise-Enhanced Adaptive Training)** is a novel optimization algorithm for deep learning that combines adaptive learning rates with controlled noise injection to improve convergence and generalization. |
| |
|
## Overview

The NEAT optimizer enhances traditional adaptive optimization methods by intelligently injecting noise into the gradient updates (a conceptual sketch follows the list below). This approach helps:

- Escape local minima more effectively
- Improve generalization performance
- Achieve faster and more stable convergence
- Reduce overfitting on training data

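The exact update rule is defined by the implementation, but conceptually it can be viewed as an Adam-style step applied to a gradient perturbed by decaying Gaussian noise. The following is a minimal illustrative sketch, not the library's actual code; `m`, `v`, and `step` follow standard Adam notation:

```python
import numpy as np

def neat_step(param, grad, m, v, step, lr=1e-3, noise_scale=0.01,
              beta_1=0.9, beta_2=0.999, epsilon=1e-7, noise_decay=0.99):
    """One conceptual NEAT update: Adam-style moments on a noisy gradient.

    Illustrative sketch only; the packaged optimizer may differ in detail.
    """
    # Inject zero-mean Gaussian noise whose scale decays over training
    sigma = noise_scale * noise_decay ** step
    noisy_grad = grad + np.random.normal(0.0, sigma, grad.shape)

    # Standard Adam moment updates with bias correction
    m = beta_1 * m + (1 - beta_1) * noisy_grad
    v = beta_2 * v + (1 - beta_2) * noisy_grad ** 2
    m_hat = m / (1 - beta_1 ** (step + 1))
    v_hat = v / (1 - beta_2 ** (step + 1))

    param = param - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return param, m, v
```
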
## Installation

### From PyPI (recommended)
```bash
pip install neat-optimizer
```

### From Source
```bash
git clone https://github.com/yourusername/neat-optimizer.git
cd neat-optimizer
pip install -e .
```

## Quick Start

```python
import tensorflow as tf
from neat_optimizer import NEATOptimizer

# Load and normalize example data (MNIST; the test split serves as validation here)
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

# Create your model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Use NEAT optimizer
optimizer = NEATOptimizer(
    learning_rate=0.001,
    noise_scale=0.01,
    beta_1=0.9,
    beta_2=0.999
)

# Compile and train
model.compile(
    optimizer=optimizer,
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))
```

## Key Features

- **Adaptive Learning Rates**: Automatically adjusts the learning rate for each parameter
- **Noise Injection**: Controlled stochastic perturbations for better exploration
- **TensorFlow Integration**: Drop-in replacement for standard TensorFlow optimizers (see the custom-loop sketch below)
- **Hyperparameter Flexibility**: Customizable noise schedules and adaptation rates

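Because `NEATOptimizer` follows the standard `tf.keras.optimizers` interface, it should also work in custom training loops. A brief sketch, assuming the `model` and `optimizer` defined in the Quick Start above:

```python
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        predictions = model(x_batch, training=True)
        loss = loss_fn(y_batch, predictions)
    # NEAT is applied exactly like any built-in Keras optimizer
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```
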
## Parameters

- `learning_rate` (float, default=0.001): Initial learning rate
- `noise_scale` (float, default=0.01): Initial scale of the injected noise
- `beta_1` (float, default=0.9): Exponential decay rate for the first-moment estimates
- `beta_2` (float, default=0.999): Exponential decay rate for the second-moment estimates
- `epsilon` (float, default=1e-7): Small constant for numerical stability
- `noise_decay` (float, default=0.99): Decay rate applied to the noise scale over time (see the schedule sketch below)

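Assuming `noise_decay` is applied multiplicatively once per step (an assumption consistent with the parameter description above), the effective noise scale shrinks geometrically:

```python
noise_scale, noise_decay = 0.01, 0.99

# Effective noise scale after t steps: noise_scale * noise_decay**t (assumed schedule)
for t in [0, 100, 500, 1000]:
    print(f"step {t:>4}: noise scale = {noise_scale * noise_decay**t:.6f}")
# step    0: noise scale = 0.010000
# step  100: noise scale = 0.003660
# step  500: noise scale = 0.000066
# step 1000: noise scale = 0.000000
```
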
## Requirements

- Python >= 3.7
- TensorFlow >= 2.4.0
- NumPy >= 1.19.0

## Citation

If you use the NEAT optimizer in your research, please cite:

```bibtex
@software{neat_optimizer,
  title={NEAT: Noise-Enhanced Adaptive Training Optimizer},
  author={Your Name},
  year={2025},
  url={https://github.com/yourusername/neat-optimizer}
}
```

## References

- Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Neelakantan, A., et al. (2015). Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807.

## License

This project is licensed under the Apache License 2.0; see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a pull request.

## Support

For issues, questions, or feature requests, please open an issue on GitHub.