Quantum Derivatives, GenAI, and the Riemann Hypothesis

Have you ever encountered a function or cumulative distribution function (CDF) that is continuous everywhere, yet differentiable nowhere? Some are featured in this article. For a CDF, it means there is no probability density function (PDF); for a standard function, it means there is no derivative. At least, not until now. The quantum derivative, the solution to differentiating such a function, is an awkward mathematical object. If it were of no use, I would not write about it. In fact, this object carries a lot of information about the original function, and I use it here to gain deep insights into the concepts of interest. As the name suggests, it is related to quantum physics.
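To make this concrete, here is a minimal sketch, assuming the quantum derivative in question is the Jackson q-derivative from quantum calculus, D_q f(x) = (f(qx) - f(x)) / ((q - 1) x), applied to a truncated Weierstrass function, the textbook example of a function continuous everywhere yet differentiable nowhere. All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def weierstrass(x, a=0.5, b=7, n_terms=30):
    """Truncated Weierstrass function: continuous everywhere and,
    in the limit, nowhere differentiable when 0 < a < 1 and ab >= 1."""
    n = np.arange(n_terms, dtype=float)
    return np.sum(a**n * np.cos(b**n * np.pi * x))

def q_derivative(f, x, q=0.99):
    """Jackson q-derivative: (f(qx) - f(x)) / ((q - 1) * x), for x != 0."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

# The q-derivative is well defined for any fixed q != 1, even though
# the classical difference quotient fails to converge as h -> 0.
print(q_derivative(weierstrass, 0.3))
```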

Now, you may wonder how this relates to number theory and the Riemann Hypothesis, and how generative AI is involved. More specifically, is there something of interest for AI and machine learning professionals? While not all my mathematical papers have a strong machine learning component, this one does. First, the number-theoretic functions discussed here, as well as their synthetic counterparts, all have quantum derivatives. The synthetic ones play the same role as augmented data in GenAI: they are created to enrich the collection of existing functions while mimicking their behavior. Working with augmented data significantly helps solidify the conclusions obtained.
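As a toy illustration of the idea (my own construction; the paper's synthetic functions may be built differently), one can take a Dirichlet series whose coefficients come from number theory, here the non-principal character mod 4, and create a synthetic counterpart by replacing those coefficients with random ±1 values on the same support.

```python
import numpy as np

rng = np.random.default_rng(42)

def dirichlet_partial_sum(coeffs, s, n_max):
    """Partial sum of sum_{n >= 1} c(n) / n^s."""
    n = np.arange(1, n_max + 1)
    return np.sum(coeffs[:n_max] / n**s)

n_max = 100_000
n = np.arange(1, n_max + 1)

# Number-theoretic coefficients: non-principal character mod 4,
# chi(n) = 1, -1, 0 for n = 1, 3, even (mod 4) respectively
chi = np.select([n % 4 == 1, n % 4 == 3], [1, -1], default=0)

# Synthetic counterpart: random +/-1 on the odd integers, mimicking
# chi's support but free of any prime-number structure
synthetic = np.where(n % 2 == 1, rng.choice([-1, 1], size=n_max), 0)

s = 0.75  # a point inside the critical strip 1/2 < s < 1
print(dirichlet_partial_sum(chi, s, n_max))        # L(s, chi_4) partial sum
print(dirichlet_partial_sum(synthetic, s, n_max))  # synthetic analog
```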

The purpose, then, is to study a particular case of the Generalized Riemann Hypothesis (GRH), paving the way to a proof of this famous unsolved problem with an original approach that has not been tried before. In short, the approach leads to spectacular approximations of the chaotic functions involved, by isolating the chaos stemming from the somewhat erratic distribution of the prime numbers. You may think this is of interest to mathematicians, but not so much to machine learning practitioners.

Quite the contrary. The main contribution is a new type of curve-fitting algorithm, rarely implemented before. The right features to use in the fitting are hard to guess, though the synthetic functions help with this task. The originality lies in performing the curve fitting incrementally, on samples of increasing size, to check when the coefficients of the non-linear regressions are stable (independent of the sample size) and when they are not. Areas of stability lead to a simplified exploration of GRH in which the prime-number complexity has been eliminated, reducing it to a standard real analysis problem, albeit one of considerable difficulty.
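A minimal sketch of the incremental fitting idea, assuming SciPy's curve_fit and an illustrative power-law model a·x^b + c as a stand-in for the paper's actual features (which are different and harder to guess):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """Illustrative non-linear model; the paper's features differ."""
    return a * x**b + c

rng = np.random.default_rng(0)
x_all = np.arange(1, 1_000_001, dtype=float)
y_all = 2.0 * np.sqrt(x_all) - 3.0 + rng.normal(0, 1, x_all.size)  # stand-in target

# Incremental fitting: refit on nested samples of increasing size and
# check whether the estimated coefficients depend on the sample size.
for size in [10_000, 100_000, 1_000_000]:
    popt, _ = curve_fit(model, x_all[:size], y_all[:size], p0=[1.0, 1.0, 0.0])
    print(f"n = {size:>9,}: a = {popt[0]:.4f}, b = {popt[1]:.4f}, c = {popt[2]:.4f}")

# Stable coefficients across sample sizes suggest the fitted law is intrinsic;
# drifting coefficients flag regimes where the approximation breaks down.
```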

In addition to interesting AI and machine learning techniques, you will learn how to use scientific libraries such as mpmath and PyPrime, and how to deal with huge datasets, indeed infinite ones! Despite consisting of exact numbers, with no observation error other than numerical precision, these infinite datasets are far more challenging than standard data arising from real-life industry problems. In particular, a pattern observed on the first few billion observations may be invalidated when working with trillions of rows. Sometimes, counterexamples cannot be found by computation alone.
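For instance, mpmath can compute non-trivial zeros of the Riemann zeta function to arbitrary precision. The snippet below uses the library's actual zetazero and zeta functions, with an illustrative precision setting.

```python
from mpmath import mp, zeta, zetazero

mp.dps = 50  # work with 50 significant digits

rho = zetazero(1)   # first non-trivial zero, on the critical line
print(rho)          # 0.5 + 14.1347251417...j
print(zeta(rho))    # vanishes to working precision
```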

Accessing the Material

Download the 12-page detailed paper, with full Python code and illustrations including quantum derivatives, here (on GitHub). The material starts on page 51, and the code is also on GitHub. Free, with no sign-up or password required.

To not miss future articles and to access members-only content, sign up for my free newsletter, here.

Author

Vincent Granville is a pioneering GenAI scientist and machine learning expert, co-founder of Data Science Central (acquired by a publicly traded company in 2020), Chief AI Scientist at MLTechniques.com, former VC-funded executive, author, and patent owner (one patent related to LLMs). Vincent's past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, and CNET.

Vincent is also a former post-doc at Cambridge University and the National Institute of Statistical Sciences (NISS). He has published in the Journal of Number Theory, the Journal of the Royal Statistical Society (Series B), and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is the author of multiple books, including "Synthetic Data and Generative AI" (Elsevier, 2024). Vincent lives in Washington State and enjoys doing research on stochastic processes, dynamical systems, experimental math, and probabilistic number theory. He recently launched a GenAI certification program, offering state-of-the-art, enterprise-grade projects to participants.

3 thoughts on "Quantum Derivatives, GenAI, and the Riemann Hypothesis"

    1. Yes, I am in the process of developing open-source Python libraries for various techniques that enhance deep neural networks, see here. If this type of optimizer is of interest beyond experimental math, I would love to have it added to the standard set of tools.
