AI: let’s clarify some stuff

  • 02/12/2023

With the advent of ChatGPT and the misuse of the word “AI” (and its application even to effing toothbrushes), a veil of scares and general worry has been created.

Let’s clarify why OpenAI will not release the first Skynet, how journalists (in general) misuse words, and why we are quite safe.

AI vs LLM

AI (Artificial Intelligence) and LLM (Large Language Model) are related concepts, but they refer to different aspects of technology.

| Feature | Artificial Intelligence | Large Language Model |
|---|---|---|
| Short definition | Broad (vague) term for the technologies used for “intelligent” tasks | Subset of AI designed specifically for language tasks |
| Scope | Covers natural language, computer vision, robotics… | Focused only on language-related tasks |
| Functions | Way too many to list | Tasks like translation, text generation/completion and more |
| How they learn | Usually machine learning, rule-based systems, or a combination of the two | Often deep learning techniques, such as transformer architectures |
| Model & training data | Varies widely based on the specific AI application | Large model size and extensive training on massive language datasets |
| Uses | Various industries and domains | Only language processing (chat, translation, summarization…) |
| Examples | Robotics, computer vision, game playing, healthcare systems and much more | Well, ChatGPT |

So, if you followed the table, you realize how many people misunderstand and (especially) misuse the word “AI”, often creating random chaos by making people fear something we are light years away from.
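To make the table a bit more concrete, here is a minimal sketch (mine, purely illustrative) of the same toy task solved the classic rule-based way and with a small language model. It assumes the Hugging Face transformers package is installed; gpt2 is used only because it is tiny and freely downloadable, not because it is a good chatbot.

```python
# Illustrative sketch: the same "reply to a greeting" task done two ways.
# Assumes the Hugging Face `transformers` package is installed.

from transformers import pipeline


def rule_based_reply(message: str) -> str:
    """Rule-based 'AI': a human wrote every rule by hand."""
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How can I help you?"
    if "bye" in text:
        return "Goodbye!"
    return "Sorry, I don't understand."  # anything outside the rules fails


def llm_reply(message: str) -> str:
    """LLM: no hand-written rules, the model learned patterns from a huge text corpus."""
    generator = pipeline("text-generation", model="gpt2")
    out = generator(f"Reply politely to: {message}\nReply:", max_new_tokens=20)
    return out[0]["generated_text"]


if __name__ == "__main__":
    print(rule_based_reply("Hi there"))  # works only because a rule matched
    print(llm_reply("Hi there"))         # no rules at all, just a trained model (and gpt2 will reply badly)
```

The rule-based function only handles what was explicitly coded; the model call handles (more or less) anything, which is exactly the difference in scope and learning method shown in the table.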

Why there’s no need to worry (more or less)

The only people who should worry are the “content producers”. By that I mean mainly: writers, artists (of any kind), developers and whoever else produces a piece of electronic information that can have value for the world.

ChatGPT (for example) was trained with techniques that sit on the borderline between legal and illegal (and were sometimes proved to be illegal).

I don’t want you to forget that those systems can function only when they have a massive dataset. And there is almost never someone checking the genuineness of the information that gets processed (it’s impossible to do it alone, but if you have money you can hire hundreds of people to label and check the information, and here we are talking about months – or even years – of work).

What all this means

Well, we are talking about the internet (which has become the most toxic place in the world), where finding good and reliable information has become very hard.

So, an organization like OpenAI, with enough money, time and zombie workers (people manually checking the sources and the information itself), could create something like ChatGPT. If you have ever used it, you realize that it still has many limits and sometimes blurts out statements/solutions that make no sense at all.

Even though I hate Altman, OpenAI and ChatGPT, I have to admit that for common (average) use it is kinda brilliant and amazingly well made. And all I can do is appreciate the work behind it (as a developer).

What about the present and the future?

We are talking about algorithms (I do not intend in any way to diminish this) that are developed by humans and that use human data. Therefore they will be “bugged” from the beginning.

If that’s not enough to calm you: we are light years away from the kind of AI system the general population imagines (thanks, Hollywood!).

So even if you hear people labeled as geniuses (like Altman, Musk and so on) talking about “how AI is gonna doom us all”, they are doing that based on no facts at all. But I’ll tell you more: since they have a certain influence, they have put in place systems/actions (through laws, amendments and government lobbying) to help themselves get richer and to become the only point of reference when it comes to this topic. (No, it’s not a conspiracy theory. I just looked in detail at their proposals and at the laws/amendments that are being made or will be made.)

LLMs and AI are not new concepts. You can find traces of them even in “ancient” technical literature (I remember a book from the 80s on how to develop an AI on your Macintosh – or maybe mine is a Mandela effect. UPDATE: found here). I remember when the Perl language had its initial boom and some people taught their machine how to do basic additions. So, in the end, there is nothing to worry about (unless you have copyrighted material online).

As always,

Stay Safe!