
Quantitative Mapping of Computational Boundaries


A Statistical Field Theory Approach to Phase Transitions in NP-Hard Problems

Author: Zixi Li (Oz Lee)
Affiliation: Noesis Lab (Independent Research Group)
Contact: lizx93@mail2.sysu.edu.cn


Overview

Classical computability theory tells us that computational boundaries exist (halting problem, P vs NP), but it doesn't answer: where exactly are these boundaries?

This paper presents the first quantitative mapping of computational phase transitions through Monte Carlo experiments on 22,000 constraint satisfaction instances. We discover universal laws governing the solvability boundary and extend the framework to problems stated in natural language, using purely semantic (embedding-based) NLP features.

Key Question

For a problem of size $L$ with constraint density $d$, what is the probability $\mu(L,d)$ of finding a solution?

Traditional answer: "NP-hard ⇒ exponentially hard" (asymptotic)

Our answer: $\mu(L,d) = \frac{1}{2}(1 - \text{erf}((d - d_c(L))/\sigma))$ where $d_c(L) = -0.0809\ln(L) + 0.501$ (exact formula)


Main Contributions

1. Three Universal Laws

We discover three fundamental laws governing computational boundaries:

Logarithmic Scaling Law:

d_c(L) = -0.0809 ln(L) + 0.501

with MSE ∼ 10⁻³² (machine precision!)

Universal Phase Transition Kernel:

K(x) = 1/2 (1 - erf(x/σ))

with universal constant σ = 0.1007

Self-Constraint Theory:

C = 1 - λ_min/λ_max

Constraint strength emerges from the eigenvalue spectrum of the word-embedding covariance; no heuristic rules are needed!

2. Complete Prediction Formula

Combining all discoveries:

μ(L,d) = 1/2 (1 - erf((d - d_c(L))/0.1007))
where d_c(L) = -0.0809 ln(L) + 0.501

This formula predicts solvability probability for any problem instance.
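As a quick sanity check, the formula is easy to evaluate directly. A minimal Python sketch (standard library only), with the constants copied from the laws above:

```python
import math

def d_c(L: float) -> float:
    """Critical constraint density from the logarithmic scaling law."""
    return -0.0809 * math.log(L) + 0.501

def mu(L: float, d: float, sigma: float = 0.1007) -> float:
    """Predicted solvability probability mu(L, d)."""
    return 0.5 * (1.0 - math.erf((d - d_c(L)) / sigma))

# Example: a size-64 instance at constraint density d = 0.2
print(f"d_c(64) = {d_c(64):.3f}, mu(64, 0.2) = {mu(64, 0.2):.3f}")
# d_c(64) ≈ 0.165, mu ≈ 0.31: just past the transition midpoint
```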

3. Natural Language Extension

We extend the framework to arbitrary problems described in natural language:

μ(I,C) = 1/2 (1 - erf((C - C_c(I))/σ))
where:
  I = information complexity (from text)
  C = self-constraint strength (from embeddings)
  C_c(I) = -0.0809 I + 0.501

Methodology: The Pea Experiment

We propose a Monte Carlo boundary mapping approach inspired by area estimation:

  1. Throw peas randomly across parameter space (L, d)
  2. For each point, sample N problem instances
  3. Run solver and record success/failure
  4. Estimate μ(L,d) = successes/N
  5. Map the entire solvability landscape

Total experiments: 22,000 samples
Problem sizes: L ∈ {8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256}
Constraint densities: d ∈ [0.005, 0.4]
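A runnable sketch of this loop is below. The real experiments run a backtracking solver on OpenXOR instances (see Technical Details); here `solve` is a stand-in that draws outcomes from the fitted law, purely so the scaffolding executes:

```python
import math
import random

def d_c(L):
    return -0.0809 * math.log(L) + 0.501

def solve(L, d):
    """Stand-in for the paper's backtracking solver on one random instance.
    Replace with a real solver; here we draw from the fitted law."""
    p = 0.5 * (1.0 - math.erf((d - d_c(L)) / 0.1007))
    return random.random() < p

def estimate_mu(L, d, n_samples=100):
    """Steps 2-4: sample N instances, record success/failure, average."""
    return sum(solve(L, d) for _ in range(n_samples)) / n_samples

# Steps 1 and 5: throw "peas" across (L, d) and map the landscape
sizes = [8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256]
landscape = {(L, round(d, 3)): estimate_mu(L, d)
             for L in sizes
             for d in (random.uniform(0.005, 0.4) for _ in range(20))}
```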


Key Results

Phase Transition Discovery

[Figure: phase diagram (phase_diagram.png)]

Sharp transitions observed with:

  • Transition width Δd ≈ 0.1
  • Low density (d < 0.05): μ = 0.996 ± 0.012
  • High density (d > 0.3): μ = 0.278 ± 0.102
  • Transition amplitude: Δμ ≈ 0.72

Universal Kernel Collapse

[Figure: universal kernel collapse (universal_kernel_analysis.png)]

All phase transition curves collapse onto a single kernel when aligned:

  • Standard deviation after alignment: 0.029
  • Reconstruction MSE = 0.0057
  • Best fit: Error function (cumulative Gaussian)
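The collapse can be reproduced by pooling (d − d_c(L), μ) points from all sizes and fitting the kernel. A sketch with synthetic stand-in data (the paper pools the 22,000 real samples):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def kernel(x, sigma):
    """Universal phase-transition kernel K(x) = 1/2 (1 - erf(x / sigma))."""
    return 0.5 * (1.0 - erf(x / sigma))

# x: pooled distances d - d_c(L); y: measured mu values (synthetic here)
rng = np.random.default_rng(0)
x = np.linspace(-0.3, 0.3, 61)
y = kernel(x, 0.1007) + rng.normal(0.0, 0.02, x.shape)

(sigma_hat,), _ = curve_fit(kernel, x, y, p0=[0.1])
print(f"fitted sigma = {sigma_hat:.4f}")  # close to the universal 0.1007
```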

Natural Language Predictions

| Problem | I | C_self | C_c | μ | Prediction |
|---|---|---|---|---|---|
| Sort array of numbers | 1.54 | 0.09 | 0.38 | 1.00 | ✓ Trivial |
| Hamiltonian cycle in graph | 1.82 | 0.24 | 0.35 | 0.94 | ✓ Easy |
| Sudoku with 40 givens | 2.03 | 0.35 | 0.34 | 0.41 | ✓ Hard |
| TSP + 5 required edges | 2.53 | 0.39 | 0.30 | 0.10 | ✓ Intractable |

Predictions match human intuition without running any solver!
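A sketch of the text-level predictor, applying the same kernel with (I, C) in place of (ln L, d); the Sudoku row above serves as a check (small rounding differences are expected):

```python
import math

def critical_constraint(I: float) -> float:
    """C_c(I): the same logarithmic law, with I in place of ln(L)."""
    return -0.0809 * I + 0.501

def mu_text(I: float, C: float, sigma: float = 0.1007) -> float:
    """Solvability prediction from information complexity I and
    self-constraint strength C, both extracted from the problem text."""
    return 0.5 * (1.0 - math.erf((C - critical_constraint(I)) / sigma))

# Sudoku with 40 givens: I = 2.03, C = 0.35  ->  mu ≈ 0.43 (table: 0.41)
print(f"{mu_text(2.03, 0.35):.2f}")
```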


Theoretical Impact

Connections Across Disciplines

This work reveals deep connections between:

  • Computation: Phase transitions in solvability
  • Information Theory: Shannon entropy and constraint budgets
  • Statistical Physics: Landau phase transition theory
  • Geometry: Spectral properties of embedding spaces

Paradigm Shift

| Traditional Complexity | Our Approach |
|---|---|
| Constructive proofs | Monte Carlo sampling |
| Asymptotic bounds | Exact μ values |
| Discrete classes (P, NP) | Continuous phase diagram |
| O(·) notation | Machine-precision MSE |

Philosophical Implications

Computability is:

  • Not binary but probabilistic (μ ∈ [0,1])
  • Not qualitative but quantitative (exact formulas)
  • Not symbolic but geometric (embedding properties)

Repository Contents

.
├── computational_boundary_paper.pdf       # Full paper
├── computational_boundary_paper.tex       # LaTeX source
├── README.md                              # This file
├── phase_diagram.png                      # Phase transition visualization
├── universal_kernel_analysis.png          # Universal kernel collapse
├── critical_boundary_mu50.png             # Critical boundary curve
├── multi_threshold_boundaries.png         # Multiple threshold analysis
├── tsp_phase_diagram.png                  # TSP cross-validation
└── solvability_predictor_guide.png        # Prediction framework

Citation

If you use this work in your research, please cite:

@misc{oz_lee_2025,
    author       = { Oz Lee },
    title        = { Quantitative_Mapping_of_Computational_Boundaries (Revision 9dcb0f8) },
    year         = 2025,
    url          = { https://huggingface.co/datasets/OzTianlu/Quantitative_Mapping_of_Computational_Boundaries },
    doi          = { 10.57967/hf/7067 },
    publisher    = { Hugging Face }
}

Key Findings Summary

1. Logarithmic Scaling (Machine Precision)

Comparison of different scaling models:

| Model | Formula | MSE |
|---|---|---|
| Power law | d = 0.722 L⁻⁰·³⁹¹ | 1.53×10⁻⁴ |
| Exponential | d = 0.287 e⁻⁰·⁰⁰⁸⁷ᴸ | 3.17×10⁻⁴ |
| Logarithmic | d = -0.0809 ln(L) + 0.501 | 2.62×10⁻³² |
| Linear | d = -0.00151 L + 0.275 | 6.45×10⁻⁴ |

The logarithmic model achieves machine precision—unprecedented in complexity theory!
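Since the logarithmic model is linear in ln(L), the fit is ordinary least squares. A sketch (with stand-in d_c values generated from the law itself, since the measured crossings are not included in this card):

```python
import numpy as np

L = np.array([8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256], dtype=float)
d_c = -0.0809 * np.log(L) + 0.501   # stand-in for measured mu = 0.5 crossings

# Logarithmic model: linear least squares in ln(L)
slope, intercept = np.polyfit(np.log(L), d_c, 1)
mse = np.mean((slope * np.log(L) + intercept - d_c) ** 2)
print(f"d_c(L) = {slope:.4f} ln(L) + {intercept:.3f}, MSE = {mse:.2e}")
# On data lying exactly on the law, MSE bottoms out at machine precision
```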

2. Self-Constraint Theory

Traditional keyword-based methods vs. our approach:

| Feature | Keyword Method | Self-Constraint |
|---|---|---|
| Keyword list | Required | ✓ Not needed |
| Domain dependence | Strong | ✓ None |
| Math foundation | Empirical | ✓ Spectral analysis |
| Physical meaning | Weak | ✓ Strong (eigenvalues) |
| Interpretability | Low | ✓ High (geometric) |

Core insight: Constraints are not linguistic features—they are geometric properties of semantic embedding spaces.

3. Information-Constraint Phase Diagram

Universal scaling law:

∂C_c/∂I = -0.0809

Interpretation: each additional unit of information complexity lowers the critical constraint threshold C_c by 0.0809 (about 8 percentage points of constraint tolerance).


Applications

1. Algorithm Selection

Predict problem difficulty before running any solver—choose appropriate algorithm based on μ prediction.

2. Constraint Generation

Design problem instances with target difficulty by controlling (L, d) parameters.

3. Complexity Estimation

Estimate computational cost from natural language problem descriptions.

4. Educational Tools

Visualize computational phase transitions for teaching complexity theory.


Future Directions

Theory

  • Derive α, β, σ from first principles
  • Prove asymptotic properties of logarithmic law
  • Classify other NP problems into universality classes
  • Explore quantum computation phase transitions

Experiments

  • More problem types (SAT, graph coloring, knapsack)
  • Different solvers (SMT, DPLL, genetic algorithms)
  • Industrial real-world instances
  • Large-scale parallelization

Applications

  • Automated algorithm selection systems
  • Intelligent constraint generation
  • Complexity estimation APIs
  • Interactive educational software

Limitations

  1. Model dependence: NLP predictions rely on sentence-transformers/all-MiniLM-L6-v2
  2. Solver baseline: Only tested backtracking (other algorithms may differ)
  3. Problem scope: Mainly constraint satisfaction (need more problem types)
  4. Small-size effects: Discrete artifacts for L < 16
  5. Language: Only validated on English text

Technical Details

Benchmark Problem: OpenXOR

A minimal NP-hard problem with:

  • Search space: 2ⁿ (exponential)
  • Solution density: ≈ 2⁻ᵏ for k checkpoints
  • Minimal DSL: Only 2 operations (XOR, NOP)
  • No confounds: Pure constraint satisfaction
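The exact OpenXOR definition is in the paper; the toy sketch below only illustrates the counting argument, under the assumption that each checkpoint acts as an independent random parity constraint over the XOR/NOP choices, so k checkpoints give solution density ≈ 2⁻ᵏ:

```python
import itertools
import random

def random_checkpoints(n: int, k: int):
    """k random parity constraints over subsets of the n op positions
    (assumed checkpoint structure, for illustration only)."""
    return [(random.sample(range(n), n // 2), random.randint(0, 1))
            for _ in range(k)]

def satisfies(program, checkpoints):
    # program: tuple of 0/1 flags, 1 = XOR applied at that position, 0 = NOP
    return all(sum(program[i] for i in subset) % 2 == parity
               for subset, parity in checkpoints)

n, k = 12, 4
cps = random_checkpoints(n, k)
sols = sum(satisfies(p, cps) for p in itertools.product((0, 1), repeat=n))
print(f"density = {sols / 2**n:.4f}, expected ~ {2**-k:.4f}")
```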

Self-Constraint Computation

For problem text T with words {w₁, ..., wₙ}:

  1. Get embeddings: V = [v₁, ..., vₙ] ∈ ℝⁿˣᵈ
  2. Compute covariance: Σ = Cov(V)
  3. Eigenvalue decomposition: Σ = Σᵢ λᵢ uᵢuᵢᵀ
  4. Extract constraint: C = 1 - λ_min/λ_max

Physical intuition:

  • λ_min ≈ λ_max (isotropic) ⇒ unconstrained (C ≈ 0)
  • λ_min ≪ λ_max (compressed) ⇒ constrained (C ≈ 1)
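A sketch of steps 1-4, using the model named in the Limitations section. Encoding each word separately with the sentence encoder is our reading of the procedure, and λ_min is clamped at zero since the covariance can be rank-deficient for short texts:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def self_constraint(text: str) -> float:
    """C = 1 - lambda_min / lambda_max over the word-embedding covariance."""
    words = text.split()
    V = model.encode(words)                 # (n_words, 384) embeddings
    cov = np.cov(V, rowvar=False)           # (384, 384) covariance matrix
    eigvals = np.linalg.eigvalsh(cov)       # real eigenvalues, ascending
    lam_min = max(float(eigvals[0]), 0.0)   # clamp tiny negative noise
    return 1.0 - lam_min / float(eigvals[-1])

print(self_constraint("find a hamiltonian cycle visiting every vertex once"))
```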

Information Complexity

I = ln(n+1) × (1 + ln(1 + σ²_sem)) × r_unique

where:

  • n = word count, so ln(n+1) grows with problem size
  • σ²_sem = semantic diversity of the word embeddings
  • r_unique = unique-word ratio (information density)
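A sketch of the complexity measure, where we take σ²_sem to be the mean per-dimension variance of the word embeddings (our assumption for "semantic diversity"):

```python
import math
import numpy as np

def information_complexity(text: str, embeddings: np.ndarray) -> float:
    """I = ln(n+1) * (1 + ln(1 + sigma^2_sem)) * r_unique."""
    words = text.lower().split()
    n = len(words)
    sigma2_sem = float(np.var(embeddings, axis=0).mean())  # semantic diversity
    r_unique = len(set(words)) / n                         # unique-word ratio
    return math.log(n + 1) * (1.0 + math.log(1.0 + sigma2_sem)) * r_unique
```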

Acknowledgments

The "pea experiment" methodology was inspired by classical Monte Carlo area estimation. This work demonstrates the power of statistical methods in theoretical computer science.


License

MIT License - See LICENSE file for details


Contact

For questions, collaborations, or discussions:

Zixi Li (Oz Lee)
Email: lizx93@mail2.sysu.edu.cn
Affiliation: Noesis Lab (Independent Research Group)


Related Work

  • The Incompleteness of Reasoning: HuggingFace Dataset
  • Previous work on computational boundaries and reasoning limits

Last Updated: January 2025
Version: Revision 9dcb0f8
