
AI chatbots have already become embedded in many people's lives, but how many of us really know how they work? Did you know, for example, that ChatGPT needs to perform a web search to answer questions about events after June 2024?
Some of the most surprising facts about AI chatbots can help us understand how they work, what they can and cannot do, and therefore how to use them more effectively.
With this in mind, here are five things you should know about these groundbreaking machines.
1. They are trained with human feedback
AI chatbots are trained in several stages, beginning with pre-training, in which models learn to predict the next word in massive text datasets. This allows them to develop a general understanding of language, facts and reasoning.
Asked, for example, "How do I make an explosive at home?", a model at the pre-training stage might provide detailed instructions. To make chatbots useful and safe for conversation, human "annotators" help guide the model toward safer and more helpful responses, a process called alignment.
After alignment, an AI chatbot might answer something like: "I'm sorry, but I can't provide that information. If you have safety concerns or need help with legal chemistry experiments, I recommend consulting certified educational resources."
Without alignment, AI chatbots would be unpredictable, potentially spreading misinformation or harmful content. This underscores the crucial role of human intervention in shaping chatbot behaviour.
OpenAI, the company that developed ChatGPT, has not disclosed how many employees trained ChatGPT, or for how many hours. But it is clear that AI chatbots like ChatGPT need a moral compass so that they do not spread harmful information. Human annotators evaluate answers to ensure neutrality and ethical alignment.
Similarly, if an AI chatbot were asked "What are the best and worst nationalities?", human annotators would rank a response like this highest: "Every nationality has a rich culture, history and contribution to the world. There is no 'best' or 'worst' nationality – each is valuable in its own way."
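This ranking step can be made concrete. In RLHF-style alignment pipelines, annotator rankings are converted into pairwise preference data that a reward model learns from. The sketch below shows only that data-conversion step, with illustrative responses; it is not OpenAI's actual pipeline or data.

```python
# Sketch: turning one annotator's best-first ranking of responses into
# (chosen, rejected) preference pairs, the training format used by
# RLHF-style reward models. Prompts and responses are illustrative.
from itertools import combinations

def preference_pairs(prompt, ranked_responses):
    """ranked_responses is ordered best-first by a human annotator.
    Every pair (better, worse) becomes one training example teaching a
    reward model to score answers the way humans rank them."""
    return [
        {"prompt": prompt, "chosen": better, "rejected": worse}
        for better, worse in combinations(ranked_responses, 2)
    ]

pairs = preference_pairs(
    "What are the best and worst nationalities?",
    [
        "Every nationality has a rich culture and contribution; "
        "there is no 'best' or 'worst'.",           # ranked highest
        "I can't rank nationalities.",               # acceptable but curt
        "Here is a ranking of nationalities: ...",   # ranked lowest
    ],
)
print(len(pairs))  # 3 ranked responses -> 3 pairwise comparisons
```

A reward model trained on many such pairs then scores new responses, and the chatbot is fine-tuned to prefer high-scoring ones.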


2. They don't learn words – they learn tokens
People naturally learn language through words, whereas AI chatbots operate on smaller units called tokens. These units can be whole words, subwords, or even obscure sequences of characters.
While tokenization generally follows logical patterns, it can sometimes produce unexpected splits, revealing both the strengths and the quirks of how chatbots interpret language. The vocabularies of modern AI chatbots typically consist of 50,000 to 100,000 tokens.
The sentence "The price is $9.99" is tokenized by ChatGPT as "The", "price", "is", "$", "9", ".", "99", whereas "ChatGPT is marvellous" is tokenized less intuitively: "chat", "G", "PT", "is", "mar", "vellous".
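A toy tokenizer makes the splitting behaviour tangible. Real chatbots use byte-pair encoding over tens of thousands of learned tokens; the greedy longest-match tokenizer below, with a tiny hand-picked vocabulary, is only a sketch of why "ChatGPT" falls apart into unintuitive pieces.

```python
# Toy greedy longest-match subword tokenizer. The hand-picked VOCAB stands
# in for the 50,000-100,000 learned tokens of a real BPE vocabulary.
VOCAB = {"the", "price", "is", "$", "9", ".", "99",
         "chat", "g", "pt", "mar", "vellous", " "}

def tokenize(text):
    text = text.lower()
    tokens, i = [], 0
    while i < len(text):
        # take the longest vocabulary entry matching at position i
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                if text[i:j] != " ":   # drop bare spaces for readability
                    tokens.append(text[i:j])
                i = j
                break
        else:
            i += 1  # unknown character: skip (real BPE falls back to bytes)
    return tokens

print(tokenize("The price is $9.99"))
# ['the', 'price', 'is', '$', '9', '.', '99']
print(tokenize("ChatGPT is marvellous"))
# ['chat', 'g', 'pt', 'is', 'mar', 'vellous']
```

Because "chatgpt" is not in the vocabulary but "chat", "g" and "pt" are, the word fragments exactly as the article describes.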
3. Their knowledge becomes outdated
AI chatbots are not continuously updated, so they can struggle with recent events, new terminology, or anything else after their knowledge cutoff. The knowledge cutoff is the last point at which a chatbot's training data was updated, meaning the model is unaware of events, trends or discoveries after that date.
The current version of ChatGPT has a knowledge cutoff of June 2024. Asked who the current president of the United States is, ChatGPT would have to perform a web search using the Bing search engine, "read" the results, and return an answer.
Bing results are filtered by relevance and source reliability. Other AI chatbots likewise use web search to return up-to-date answers.
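The flow described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `web_search`, the cutoff date constant, and the routing logic are illustrations of the general retrieval pattern, not OpenAI's actual implementation.

```python
# Sketch of search-augmented answering: questions needing fresh facts are
# routed through a (placeholder) search step; everything else is answered
# from training data. All names here are hypothetical illustrations.
from datetime import date

KNOWLEDGE_CUTOFF = date(2024, 6, 1)  # assumed cutoff from the article

def web_search(query):
    # Placeholder: a real system would call a search API (e.g. Bing)
    # and filter results by relevance and source reliability.
    return [{"title": "Example result", "snippet": "...", "source": "example.org"}]

def answer(question, needs_current_facts):
    if needs_current_facts:
        results = web_search(question)
        # The model "reads" the snippets and composes an answer from them.
        return f"Based on {len(results)} search result(s): ..."
    return f"Answered from training data (up to {KNOWLEDGE_CUTOFF})."

print(answer("Who is the current US president?", needs_current_facts=True))
```

Deciding *when* a question needs fresh facts is itself a judgment the model makes, which is one reason cutoff-related mistakes still slip through.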
Updating AI chatbots is a costly and fragile process, and how to update their knowledge efficiently remains an open research problem. ChatGPT's knowledge is believed to be refreshed as OpenAI releases new versions of ChatGPT.
4. They hallucinate surprisingly easily
AI chatbots sometimes "hallucinate", generating false or nonsensical claims, because they predict text based on patterns rather than verifying facts. These errors stem from how they work: they optimize for coherence over accuracy, rely on imperfect training data, and lack real-world understanding.
While improvements such as fact-checking tools (for example, ChatGPT's integration with Bing search for real-time fact-checking) or careful prompting (for example, explicitly instructing ChatGPT to "cite peer-reviewed sources" or to "say I don't know if you are not sure") reduce hallucinations, they cannot eliminate them entirely.
For example, when asked to summarize the main findings of a specific research paper, ChatGPT can give a long, detailed and polished answer.
It may even include screenshots and a link – but to the wrong academic papers. Users should therefore treat AI-generated information as a starting point, not as unquestionable truth.
5. They use calculators for maths
A recently popularized feature of AI chatbots is reasoning: the process of using logically connected intermediate steps to solve complex problems. It is also known as "chain-of-thought" reasoning.
Instead of jumping straight to an answer, chain-of-thought lets the AI think step by step. For example, when asked "what is 56,345 minus 7,865 times 350,468", ChatGPT gives the correct answer, because it "understands" that the multiplication must happen before the subtraction.
To compute the intermediate steps, ChatGPT uses a built-in calculator that enables precise arithmetic. This hybrid approach of combining internal reasoning with a calculator helps improve reliability on complex tasks.
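The arithmetic from the example above shows why tool use helps. A model that merely predicts tokens can fumble large multiplications, but delegating the final computation to exact arithmetic makes the answer dependable. The tiny "calculator tool" below is an illustrative sketch, not the one ChatGPT actually runs.

```python
# Minimal "calculator tool" sketch: exact arithmetic with standard operator
# precedence (multiplication before subtraction), as a chatbot's built-in
# calculator would apply it. Illustrative only.
def calculator(expression):
    # Restrict input to digits, spaces, a decimal point and + - * / so
    # that eval() here cannot execute arbitrary code.
    assert all(ch in "0123456789+-*/. " for ch in expression), "unsupported character"
    return eval(expression)

result = calculator("56345 - 7865 * 350468")
print(result)  # -2756374475 : the product 7865 * 350468 is taken first
```

Python applies the same precedence rules as conventional mathematics, so 7,865 × 350,468 = 2,756,430,820 is computed before the subtraction, giving 56,345 − 2,756,430,820 = −2,756,374,475.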
Çağatay Yıldız, Postdoctoral Researcher, Cluster of Excellence "Machine Learning", University of Tübingen
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Source: Pixabay.com