This research paper showcases spectacular discoveries across multiple disciplines. The main question, still unanswered, is how the mathematical engineering behind the scenes could be applied to modern AI, in particular to deep neural networks (DNNs) and LLMs, to dramatically accelerate the convergence of slow algorithms. The most notorious example is the laborious and expensive training of transformer models, which could benefit from quantum-like methods inspired by my results. The technology also features an interesting application to data synthesis: not just generating numbers that match a pre-specified distribution, but building artificial structures that mimic the properties of real ones. All of this is based on a generic universal function, rivaling those at the core of modern DNNs, that comes from number theory and the Riemann Hypothesis framework, while setting up a new path to solve this famous problem, open since 1859.

Illustrations
Figure 1 shows the convergence of a math series as extra terms are added. The series is supposed to converge to zero the more terms you add, that is, the further you go to the right on the X-axis. The behavior is very surprising: most math functions exhibit some degree of smoothness, but this one does not. Indeed, it behaves like the loss function in the training history plot of a DNN, with each value corresponding to an epoch. Yet, unlike DNNs, there is nothing random behind it. What’s more, this series has the exact same chaotic zeros as the famous Riemann zeta function, but it is a different function!
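For intuition, here is a minimal sketch of the kind of behavior described above. It does not reproduce the paper’s series; instead it uses the Dirichlet eta function, a well-known series that shares the zeta function’s zeros in the critical strip, evaluated at the first nontrivial zero. Its partial sums converge to zero, but in a jagged, non-smooth way reminiscent of a DNN loss history.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in series: Dirichlet eta, eta(s) = sum_{n>=1} (-1)^(n+1) / n^s,
# which vanishes at every nontrivial zero of the Riemann zeta function.
s = 0.5 + 14.134725141734693j        # first nontrivial zero of zeta
N = 3000                             # number of terms in the partial sums
n = np.arange(1, N + 1, dtype=float)
signs = np.where(n % 2 == 1, 1.0, -1.0)
terms = signs * np.exp(-s * np.log(n))   # (-1)^(n+1) * n^(-s)
partial_sums = np.cumsum(terms)

# Plot the modulus of the partial sums: it heads to zero, but not smoothly.
plt.plot(n, np.abs(partial_sums))
plt.xlabel("number of terms")
plt.ylabel("|partial sum|")
plt.title("Jagged convergence to zero at a nontrivial zeta zero")
plt.show()
```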

In Figure 1, I replaced the prime numbers 3, 5, and 7 with fake ones that are not even integers. What you see is the behavior of the transformed zeta function after the change: it preserves the zeros but introduces significant chaos. In Figure 2, I removed the primes 3, 5, and 7 instead. Think of it as the analogue of distillation in DNNs. The red curve is the standard zeta function. The green curve (yes, there is only one, even if you seem to see several) is the result after distillation. It still has the exact same zeros, but the values jump from one level to another each time you add a term to the resulting series, that is, as you move to the right on the X-axis. We are now dealing with 3 major quantum states, each with its own sub-quantum states. Most importantly, one of the quantum states shows much faster convergence to zero than the standard zeta function in red.
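To see why removing a few primes cannot destroy the zeros, note that dropping 3, 5, and 7 from the Euler product amounts to multiplying zeta(s) by the factors (1 - 3^(-s))(1 - 5^(-s))(1 - 7^(-s)), none of which vanish in the critical strip. The sketch below checks this numerically at the first nontrivial zero; it illustrates the zero-preservation argument only, not the quantum-state structure shown in Figure 2.

```python
from mpmath import mp, mpc, zeta, power

mp.dps = 30
rho = mpc('0.5', '14.1347251417346937904572519836')  # first nontrivial zero of zeta

def distilled_zeta(s, removed=(3, 5, 7)):
    """Zeta with the primes 3, 5, 7 dropped from the Euler product.

    As a Dirichlet series, this sums 1/n^s over n coprime to 3*5*7;
    analytically it equals zeta(s) times (1 - p^(-s)) over the removed primes.
    """
    val = zeta(s)
    for p in removed:
        val *= (1 - power(p, -s))
    return val

print(zeta(rho))            # ~ 0 (up to numerical precision)
print(distilled_zeta(rho))  # also ~ 0: the extra factors do not remove the zero
```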

In Figure 3, I replaced all the primes with random numbers, none of them an integer. In short, I synthesized fake primes as well as the underlying math structure: a multiplicative semi-group. My random “primes” must meet some conditions, just like synthetic data is random but must mimic the distribution of the real observations it is derived from. Again, we get the exact same zeros as the standard zeta function, this time with infinitely many quantum states.
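Below is a minimal sketch of what synthesizing such a structure could look like, in the spirit of Beurling generalized primes: draw random non-integer “primes” close to the real ones, then generate the multiplicative semi-group of all their products. The conditions the paper imposes on the fake primes, which are what guarantee the same zeros, are not reproduced here; the seed, perturbation range, and bound are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed

# Synthesize "fake primes": one random non-integer value near each real prime,
# so the counting function stays comparable to that of the true primes (assumption).
real_primes = np.array([2, 3, 5, 7, 11, 13, 17, 19, 23, 29], dtype=float)
fake_primes = real_primes + rng.uniform(-0.4, 0.4, size=real_primes.size)

# Generate the multiplicative semi-group: all finite products of fake primes
# up to a bound, the analogue of the ordinary integers built from real primes.
bound = 200.0
semigroup = [1.0]
for p in fake_primes:
    new_elements = []
    for x in semigroup:
        v = x * p
        while v <= bound:
            new_elements.append(v)
            v *= p
    semigroup.extend(new_elements)
semigroup = np.sort(np.array(semigroup))[1:]   # drop the empty product (1.0)

# Dirichlet-style series over the synthetic semi-group; converges for s > 1.
s = 2.0
print(len(semigroup), np.sum(semigroup ** (-s)))
```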
Discussion
Quantum states also appear in a context linked to another major math conjecture, about the binary digits of special math constants such as e. For details, see my book “0 and 1: from Elemental Math to Quantum AI”, here. Also, I used the zeta function as a universal function to build a new generation of black-box-free, stable deep neural networks; see paper 55, here.
The full document with Python code, link to GitHub, and detailed explanations is available as research paper 56, here. To avoid missing future articles, sign up for my newsletter (same link).
About the Author

Vincent Granville is a pioneering GenAI scientist and co-founder at BondingAI.io, the LLM 2.0 platform for hallucination-free, secure, in-house, lightning-fast Enterprise AI at scale with zero weight and no GPU. He is also an author (Elsevier, Wiley), a publisher, and a successful entrepreneur with a multi-million-dollar exit. Vincent’s past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET. He completed a post-doc in computational statistics at the University of Cambridge.