"Like a person"
New AI model o1 from OpenAI “thinks more”
ChatGPT developer OpenAI has presented a new AI model that can solve more complex tasks than previous chatbots. The software, called o1, spends more time "thinking" before giving an answer, "just like a person would".
The artificial intelligence tries out different approaches and recognizes and corrects its own mistakes, OpenAI explained in a blog post. This pays off in areas such as mathematics and software programming: the o1 model solved 83 percent of the tasks in the International Mathematical Olympiad exam, while the current ChatGPT-4o achieved only 13 percent.
At the same time, the new model still lacks many of ChatGPT's useful functions. For example, it cannot search the web for information, does not support uploading files and images, and is slower so far. From OpenAI's point of view, the new model could nonetheless help researchers with data analysis, for example, or physicists with complex mathematical formulas.
o1 also hallucinates
OpenAI's accompanying documentation also shows that the new model knowingly gave a wrong answer in 0.38 percent of cases across a test set of 100,000 queries. This happened mainly when o1 was asked to cite articles, websites or books.
Without access to web search, this was often impossible, so the software invented plausible-looking sources in its attempt to fulfill the user's request. Such hallucinations, in which AI software simply invents information, remain a generally unsolved problem.
This article has been automatically translated; read the original article here.