“Generative AI and its possibilities” (2/5). The technical foundations of generative AI are simple but very powerful. Human validation remains essential for critical decisions, and in a business setting the technology should only be used after enrichment with company data.
“It’s amazing. Honestly, it’s stunning!” Who hasn’t had that thought when trying out ChatGPT? And indeed, the French word “bluffant” (stunning, but literally “bluffing”) suits this generative AI well: it is both very impressive and gives a slightly “inflated” impression of what can be done with it. The point is not to condemn the technology, but to understand what we can expect from it.
Generative AI has taken up considerable space in the press in recent months, but it is actually not all that new. French researcher Yann LeCun, Chief AI Scientist at Meta, declared some time ago:
“In terms of underlying techniques, ChatGPT is not particularly innovative. There’s nothing revolutionary about it, even if that’s how it’s perceived by the public. It’s just that, you know, it’s well put together, it’s nicely done.” 
In fact, text (or image) generation is an evolution (or culmination?) of successive advances in deep neural networks. Work on time-series processing had already pushed models to take context into account when making a prediction. And technically, the basis of generative AI is indeed to propose the next word (a token) given the preceding sequence.
Caricaturally, it’s like asking you which word comes next in the sentence “Merry Christmas and a Happy New XXXX”. No doubt you replaced the “XXXX” with “Year”, because that’s usually the word that follows. Obviously, GPT (Generative Pre-trained Transformer) models had to be given many, many, many examples to arrive at the result we all know. But from a technical point of view, the aim is “simply” to find the word that is statistically the most probable in a sentence under construction, because the machine has no real understanding of it. To vary the answers, deep-learning models include a “temperature” parameter that adjusts the generation probabilities. Microsoft, on its Azure OpenAI platform, describes it as a parameter “to be used to control the apparent creativity of generated completions”.
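The mechanics described above can be sketched in a few lines. The sketch below shows how temperature reshapes a probability distribution over candidate next tokens; the candidate words and their scores (logits) are invented for illustration, not taken from any real model.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores into probabilities.
    A low temperature sharpens the distribution (more predictable answers);
    a high temperature flattens it (more apparent 'creativity')."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate tokens
# after "Merry Christmas and a Happy New ..."
candidates = ["Year", "Day", "Baby", "Car"]
logits = [5.0, 2.0, 1.0, -1.0]

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, {c: round(p, 3) for c, p in zip(candidates, probs)})
```

Whatever the temperature, “Year” stays the most probable token; what changes is how much probability mass leaks to the unlikely alternatives, which is exactly what makes varied (and occasionally surprising) completions possible.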
Artificial Intelligence or Artificial Experience?
This leads us to question the role these new tools can play, and in an almost philosophical way, our own way of working and being.
Ever since work began on these subjects, the term “Artificial Intelligence” has been almost a false friend. We should probably be talking more about “Artificial Experience”, given that the mathematical principles that govern its existence presuppose that we subject it to a large number of cases. In the world of work, even more than elsewhere, it’s the fact of having encountered numerous cases that means a senior can react faster and better than a beginner: “That’s what you have to do, because that’s how, in this type of case, you come out on top”.
How many of us, after several years with a company, have developed veritable “libraries” of PowerPoint slides that we use and reuse to our heart’s content? Who doesn’t end up with “ready-made answers” to objections from a customer or colleague? That’s the “ready-to-think” that we sometimes denounce.
Can it be used in companies?
But within your company, are there positions or missions that first and foremost require a good knowledge of the company, its processes and its documentation? Call centers come to mind, where answers are often scripted or require in-depth knowledge of contracts and their interpretation. FAQs, and the digital assistants so present on websites, are also good candidates to benefit from generative AI. A Large Language Model (LLM) can be adapted to your company’s context through additional “training” based on an existing document base.
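In practice, this adaptation is often done not by retraining the model but by retrieving relevant passages from the document base and injecting them into the prompt (so-called retrieval-augmented generation). A minimal sketch of the idea, using simple word overlap as a stand-in for real embedding similarity, with invented example documents:

```python
from collections import Counter

# Toy company "document base" -- hypothetical FAQ snippets
documents = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Our support line is open Monday to Friday, 9am to 6pm.",
    "Contract renewals require written notice 30 days before expiry.",
]

def score(query, doc):
    """Count shared words -- a crude stand-in for semantic similarity."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query, docs, k=1):
    """Pick the k most relevant documents to prepend to the LLM prompt."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

question = "How long do refunds take?"
context = retrieve(question, documents)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The benefit of this approach is precisely the one the article calls for: the answer can be traced back to an identifiable company document, making it explainable rather than a purely statistical guess.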
On the other hand, and fortunately, not all jobs meet these automation criteria. For example, if you’re looking for an idea to distinguish yourself from the competition… well, by design, you shouldn’t do what you’re used to, like everyone else. At a stretch, you could rely on generative AI to give you an idea… of what not to do!
At the start of this article, we mentioned the “statistical” nature of this artificial intelligence. A ChatGPT-type bot will mechanically give an answer to any question put to it. Even a phrase that no human would find plausible has a non-zero probability and is therefore acceptable to the machine. These are the famous “hallucinations” that the major players are fighting with massive human validation. One of the most striking examples is that of a lawyer who asked a chatbot about case law. The machine gave him exactly what he was hoping to find… until checks proved it had all been made up! As the saying goes, “facts are stubborn”. And facts are exactly what critical decisions require. Would we accept being convicted on the basis of “habitual behavior”? Of course not! So should we make important decisions for our company without a solid business case? No more so. Here again, the machine can be seen as an aid, but we need to make sure a proposal is explainable, and therefore acceptable.
In this way, generative AI joins, with great fanfare, the toolbox these techniques make available to us. The fruit of training on massive numbers of examples, its answers nevertheless remain the application of statistical calculations. Admittedly, this is an excellent way of tapping into a company’s knowledge base and thus assisting the roles that rely on it, but we must not lose sight of the non-creative, often hard-to-justify nature of the answers produced.
Depending on the degree of task automation, however, this type of solution can prove a valuable ally once a base model has been enriched with your own data.
Patrick CHABLE – Practice Manager Data Science