Differences between transformer-based AI and the new generation of AI models
I frequently refer to OpenAI and the like as LLM 1.0, in contrast to our xLLM architecture, which I present as LLM 2.0. Over time, I have received a lot of questions. Here I address the main differentiators. First, xLLM is a no-blackbox, secure, auditable, double-distilled agentic LLM/RAG for trustworthy Enterprise AI, using 10,000× fewer (multi-)tokens, […]
Read More
BondingAI Acquires GenAItechLab, Adds Core Team Members
BondingAI's acquisition of GenAItechLab.com was recently completed, including all the IP related to the xLLM technology, the material published on MLtechniques, and the most recent technology pertaining to deep neural network watermarking. GenAItechLab was founded in 2024 by Vincent Granville, a world-class leader and well-known scientist building innovative and efficient AI solutions from scratch, hallucination-free, […]
Read More
Language Models: A 75-Year Journey That Didn’t Start With Transformers
Language models have existed for decades — long before today's so-called "LLMs." In the 1990s, IBM's alignment models and smoothed n-gram systems trained on hundreds of millions of words set performance records. By the 2000s, the internet's growth enabled "web as corpus" datasets, pushing statistical models to dominate natural language processing (NLP). Yet, many […]
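For readers unfamiliar with the smoothed n-gram systems the excerpt mentions, here is a minimal sketch of a Laplace-smoothed bigram model in Python. The toy corpus and function name are illustrative choices of mine, not taken from the article; 1990s systems trained the same kind of counts on hundreds of millions of words.

```python
from collections import Counter

# Toy corpus; real 1990s n-gram systems used far larger text collections.
corpus = "the cat sat on the mat the dog sat on the rug".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus)

def bigram_prob(w1, w2, k=1.0):
    """P(w2 | w1) with add-k (Laplace) smoothing, so unseen bigrams
    get a small nonzero probability instead of zero."""
    return (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * len(vocab))

print(bigram_prob("the", "cat"))   # seen bigram: relatively high
print(bigram_prob("cat", "dog"))   # unseen bigram: small but nonzero
```

Smoothing is the key design choice: without it, any sentence containing a single unseen word pair would receive probability zero.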
Read More
How to design LLMs that don’t need prompt engineering
Standard LLMs rely on prompt engineering to fix problems (hallucinations, poor responses, missing information) that stem from issues in the backend architecture. If the backend (corpus processing) is properly built from the ground up, it is possible to offer a full, comprehensive answer to a meaningful prompt, without the need for multiple prompts or rewording your […]
Read More
The Rise of Specialized LLMs for Enterprise
In this article, I discuss the main problems of standard LLMs (OpenAI and the like), and how the new generation of LLMs addresses these issues. The focus is on Enterprise LLMs. LLMs with Billions of Parameters: Most LLMs still fall into that category. The first ones (ChatGPT) appeared around 2022, though BERT is […]
Read More
Watermarking and Forensics for AI Models, Data, and Deep Neural Networks
In my previous paper posted here, I explained how I built a new class of non-standard deep neural networks, with various case studies based on synthetic data and open-source code, covering problems such as noise filtering, high-dimensional curve fitting, and predictive analytics. One of the models featured a promising universal function able to represent any […]
Read More
Video: The LLM 2.0 Revolution
What if you could build a secure, scalable RAG+LLM system – no GPU, no latency, no hallucinations? In this session, Vincent Granville shares how to engineer high-performance, agentic multi-LLMs from scratch using Python. Learn how to rethink everything from token chunking to sub-LLM selection to create AI systems that are explainable, efficient, and designed for […]
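The full approach is covered in the session itself; as a rough illustration of what "sub-LLM selection" could look like in Python, here is a toy keyword-overlap router of my own devising (the sub-LLM names and keyword sets are hypothetical, not taken from the talk): the query is scored against each specialized sub-LLM's keyword set and dispatched to the best match.

```python
import re

# Hypothetical registry: each specialized sub-LLM is tagged with keywords.
SUB_LLMS = {
    "finance": {"revenue", "earnings", "forecast", "portfolio"},
    "devops":  {"kubernetes", "deployment", "latency", "pipeline"},
    "legal":   {"contract", "liability", "compliance", "clause"},
}

def route(query: str) -> str:
    """Return the sub-LLM whose keyword set best overlaps the query."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    scores = {name: len(words & kws) for name, kws in SUB_LLMS.items()}
    return max(scores, key=scores.get)

print(route("What is the latency of our deployment pipeline?"))  # devops
```

A real system would score overlaps against a corpus-derived taxonomy rather than hand-written keyword lists, but the dispatch logic keeps the same shape.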
Read More
Stay Ahead of AI Risks – Free Live Session for Tech Leaders
Exclusive working session about trustworthy AI, for senior tech leaders. View the PowerPoint presentation here. AI isn't slowing down, but poorly planned AI adoption will slow you down. Hallucinations, security risks, bloated compute costs, and "black box" outputs are already tripping up top teams, burning budgets, and eroding trust. That's why this session blends three things you […]
Read More
Benchmarking xLLM and Specialized Language Models: New Approach & Results
Standard benchmarking techniques that use an LLM as a judge have strong limitations. First, they create a circular loop and reflect the flaws present in the AI judges. Second, perceived quality depends on the end user: an enterprise LLM appeals to professionals and business people, while a generic one appeals to laymen. The two have almost […]
Read More
10 Tips to Boost Performance of your AI Models
These model enhancement techniques apply to deep neural networks (DNNs) used in AI. The focus is on the core engine that powers all DNNs: gradient descent, layering, and the loss function. Reparameterization — Typically, in DNNs, many different parameter sets lead to the same optimum (loss minimization); DNN models are non-identifiable. This redundancy is a strength that […]
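The excerpt cuts off, but the non-identifiability point can be made concrete in a few lines of NumPy. This is a sketch of my own using the well-known positive-scaling symmetry of ReLU networks, not code from the article: rescaling the hidden weights by a > 0 and the output weights by 1/a leaves the network function, and hence the loss, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# One-hidden-layer ReLU network: f(x) = v @ relu(W @ x)
W = rng.normal(size=(5, 3))   # hidden-layer weights
v = rng.normal(size=5)        # output weights
x = rng.normal(size=3)        # an arbitrary input point

a = 7.3                       # any positive scale factor
f_original = v @ relu(W @ x)
f_rescaled = (v / a) @ relu((a * W) @ x)  # relu(a*z) = a*relu(z) for a > 0

# Two distinct parameter sets, identical function value -> identical loss:
print(np.isclose(f_original, f_rescaled))  # True
```

This is one concrete instance of the redundancy the article describes: infinitely many parameter sets along the scaling orbit achieve exactly the same loss.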
Read More