# Not-trained-Neural-Networks-Notes
A comprehensive collection of notes, implementations, and examples of neural networks that don't rely on traditional gradient-based training methods.
## Contents
- Algebraic Neural Networks
- Uncomputable Neural Networks
- Theory and Mathematical Foundations
- Implementations
- Examples and Use Cases
## Algebraic Neural Networks
Algebraic Neural Networks (ANNs) represent a paradigm shift from traditional neural networks by utilizing algebraic structures and operations instead of gradient-based optimization. These networks leverage:
- Algebraic Group Theory: Using group operations for network transformations
- Polynomial Algebras: Networks based on polynomial computations
- Geometric Algebra: Incorporating geometric algebraic structures
- Fixed Algebraic Transformations: Pre-defined algebraic operations
### Key Features
- No Training Required: Networks are constructed using algebraic principles
- Deterministic Behavior: Outputs are fully determined by algebraic rules
- Mathematical Rigor: Based on well-established algebraic foundations
- Interpretability: Clear mathematical interpretation of operations
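The core idea can be sketched as a layer whose transformation is a fixed, pre-defined polynomial rather than a learned weight matrix. The class name and coefficients below are illustrative, not the repo's actual API:

```python
import numpy as np

class PolynomialLayer:
    """A layer applying a fixed, pre-defined polynomial elementwise.

    The coefficients are chosen algebraically up front; nothing is
    learned, so the layer's behavior is fully deterministic.
    """

    def __init__(self, coefficients):
        # coefficients[i] is the coefficient of x**i
        self.coefficients = np.asarray(coefficients, dtype=float)

    def forward(self, x):
        # Horner's method: evaluate c0 + c1*x + c2*x**2 + ...
        result = np.zeros_like(x, dtype=float)
        for c in self.coefficients[::-1]:
            result = result * x + c
        return result

layer = PolynomialLayer([1.0, 0.0, 2.0])   # p(x) = 1 + 2x^2
print(layer.forward(np.array([0.0, 1.0, 2.0])))  # [1. 3. 9.]
```

Because the polynomial is fixed, every property of the layer (range, symmetry, fixed points) can be read directly off its coefficients.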
## Uncomputable Neural Networks
Uncomputable Neural Networks extend the paradigm of non-trained networks by incorporating theoretical concepts from computability theory. These networks explore computational boundaries by simulating uncomputable functions and operations:
- Halting Oracle Layers: Simulate access to halting oracles for program termination decisions
- Kolmogorov Complexity Layers: Approximate uncomputable complexity measures using compression heuristics
- Busy Beaver Layers: Utilize the uncomputable Busy Beaver function values and approximations
- Non-Recursive Layers: Operate on computably enumerable but non-computable sets
### Key Features
- Theoretical Foundations: Based on computability theory and hypercomputation concepts
- Bounded Approximations: Practical implementations of theoretically uncomputable functions
- Deterministic Simulation: Consistent behavior through fixed-seed randomness and heuristics
- Educational Value: Demonstrates limits and possibilities of computation
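As one concrete instance of a bounded approximation, Kolmogorov complexity (which is uncomputable exactly) can be upper-bounded by compressed length. This is a generic compression heuristic, not necessarily the one used in the repo's implementation:

```python
import zlib

def kolmogorov_estimate(data: bytes) -> int:
    """Upper-bound the (uncomputable) Kolmogorov complexity of `data`
    by its zlib-compressed length: a short description of the data
    exists whenever it compresses well."""
    return len(zlib.compress(data, level=9))

low = kolmogorov_estimate(b"a" * 1000)             # highly regular input
high = kolmogorov_estimate(bytes(range(256)) * 4)  # less regular input
print(low < high)  # True: the repetitive string compresses far better
```

The estimate is only an upper bound (true Kolmogorov complexity may be much smaller), which is exactly the "bounded approximation" trade-off described above.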
## Getting Started

```shell
git clone https://github.com/ewdlop/Not-trained-Neural-Networks-Notes.git
cd Not-trained-Neural-Networks-Notes

# Install dependencies
pip install numpy matplotlib

# Quick demo
python demo.py

# Run main implementation
python algebraic_neural_network.py

# Run comprehensive tests
python test_comprehensive.py
```
### Quick Demo

```shell
python demo.py
```
This runs a simple demonstration showing how algebraic neural networks process data without any training.
## Examples

```shell
# Polynomial-based networks
python examples/polynomial_network.py

# Group theory networks
python examples/group_theory_network.py

# Geometric algebra networks
python examples/geometric_algebra_network.py

# Uncomputable neural networks
python examples/uncomputable_networks.py
```
## Structure

```
.
├── README.md                        # This file
├── demo.py                          # Quick demonstration script
├── algebraic_neural_network.py      # Main implementation
├── test_comprehensive.py            # Test suite
├── theory/                          # Theoretical background
│   ├── algebraic_foundations.md     # Mathematical foundations
│   ├── uncomputable_networks.md     # Uncomputable neural networks theory
│   └── examples.md                  # Worked examples
└── examples/                        # Practical examples
    ├── polynomial_network.py        # Polynomial-based network
    ├── group_theory_network.py      # Group theory implementation
    ├── geometric_algebra_network.py # Geometric algebra network
    └── uncomputable_networks.py     # Uncomputable neural networks
```
## Testing

Run the comprehensive test suite to verify all components:

```shell
python test_comprehensive.py
```
This tests:
- Basic functionality of all layer types (algebraic and uncomputable)
- Network composition and data flow
- Deterministic behavior (same input → same output)
- Mathematical properties of algebraic operations
- Uncomputable layer approximations and bounds
- Edge cases and boundary conditions
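The determinism property in particular is straightforward to check: evaluate a layer repeatedly on the same input and require identical outputs. A minimal sketch (the helper name and stand-in layer are illustrative, not taken from the test suite):

```python
import numpy as np

def check_deterministic(layer_fn, x, runs=5):
    """Verify same input -> same output across repeated evaluations."""
    reference = layer_fn(x)
    return all(np.array_equal(layer_fn(x), reference) for _ in range(runs))

# Stand-in for any fixed algebraic layer: an elementwise square
square = lambda x: x ** 2
print(check_deterministic(square, np.arange(4)))  # True
```

Since non-trained networks contain no learned state or stochastic updates, any failure of this check points to hidden randomness (e.g. an unseeded heuristic) rather than to training noise.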