Is superintelligent AI just around the corner, or just a sci-fi dream?

Will machines become smarter than people?

Chan2545/iStockphoto/Getty Images

If you were to take the leaders of artificial intelligence companies at their word, their products mean that the coming decade will be unlike any other in human history. But researchers have uncovered a different reality, with even the best models failing to solve basic puzzles, while the promise of AI that can “reason” looks to be overblown. So, who should you believe?

Sam Altman and Demis Hassabis, the CEOs of OpenAI and Google DeepMind, respectively, have both made recent claims that powerful, world-altering AI systems are just around the corner. In a blog post, Altman wrote that the “2030s are likely going to be wildly different from any time that has come before”, and that we might go “from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year”.

Hassabis, in an interview with Wired, also said that in the 2030s, artificial general intelligence (AGI) will start to solve problems such as finding new and better sources of energy. After that, he suggested, it could be a time when humans “travel to the stars and colonize the galaxy”.

This vision relies heavily on the assumption that large language models (LLMs) such as ChatGPT will keep getting more capable the more training data and computing power we give them. This “scaling law” appeared to hold true for the past few years, but there are signs of it faltering. For example, OpenAI’s recent GPT-4.5 model, which probably cost hundreds of millions of dollars to train, achieved only modest improvements over its predecessor, GPT-4. And that cost is nothing compared with future spending, with reports suggesting that Meta is about to announce a $15 billion investment in an attempt to achieve “superintelligence”.

Money isn’t the only attempted solution to this problem, however – AI companies have also turned to “reasoning” models, such as OpenAI’s o1, released last year. These models use more computation time and take longer to produce an answer, feeding their own outputs back into themselves. This iterative process has been labelled “chain of thought”, in an effort to draw comparisons with the way a person might think through a problem step by step. “There were legitimate reasons to be worried about AI plateauing,” Noam Brown at OpenAI told New Scientist last year, but o1 and models like it meant the “scaling law” could continue, he argued.
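To make the idea concrete, a chain-of-thought loop can be sketched as repeatedly feeding a model’s intermediate output back into its own prompt before asking for a final answer. The sketch below is purely illustrative: generate is a hypothetical stand-in for whatever text-generation call a model provider exposes, and the prompts are invented for the example.

```python
# A minimal, illustrative sketch of an iterative "chain of thought" loop.
# `generate` is a hypothetical placeholder, not a real API.

def generate(prompt: str) -> str:
    """Stand-in for a call to a language model (hypothetical)."""
    raise NotImplementedError("plug in a real model call here")

def chain_of_thought(question: str, steps: int = 3) -> str:
    context = f"Question: {question}\nThink step by step."
    for i in range(steps):
        # Each round, the model's own output is appended to the prompt,
        # so later steps can build on (or revise) earlier reasoning.
        thought = generate(context)
        context += f"\nStep {i + 1}: {thought}"
    # Finally, ask for an answer conditioned on the accumulated "thinking".
    return generate(context + "\nNow give the final answer.")
```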

Now, however, new research has found that these reasoning models can stumble over even simple logic puzzles. For example, researchers at Apple tested reasoning models from the Chinese AI company DeepSeek and Anthropic’s Claude “thinking” models, which work along the same lines as OpenAI’s o1-family of models. They found that the models have “limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles”, the researchers wrote.

The team tested the AIs on several puzzles, such as a scenario in which a person must carry items across a river in the fewest possible moves, and the Tower of Hanoi, a game in which rings must be moved one by one between three pegs without ever placing a larger ring on top of a smaller one. Though the models could solve the puzzles at their easiest settings, they struggled as the number of rings or items to carry increased. And while we might expect an AI to spend longer thinking about a more complex problem, the researchers found that the models actually used fewer “tokens” – chunks of data – as the complexity of the problems increased.

“The damning aspect is that these are tasks that are easy to solve,” says Artur Garcez at City, University of London. “We already knew 50 years ago how to use symbolic AI reasoning to solve them.” It is possible that these new systems can be fixed and improved so that they eventually reason through complex problems, but this research shows that is unlikely to happen simply by scaling up the models or giving them more computation, says Garcez.
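The Tower of Hanoi puzzle the Apple team used is a case in point: a short, exact recursion solves it for any number of rings. The standard textbook solution is sketched below – it isn’t taken from any of the papers mentioned here, just the classic algorithm of the kind Garcez alludes to.

```python
# The Tower of Hanoi has a well-known exact algorithm: a short recursion
# produces a perfect move list for any number of rings. Shown only to
# illustrate that a clear algorithm handles what the tested models fumbled.

def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # shift n-1 rings out of the way
    moves.append((source, target))               # move the largest ring
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 rings back on top

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 255 moves, i.e. 2**8 - 1, all of them valid
```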

It is also a reminder that these models still struggle to solve scenarios that fall outside their training data, says Nikos Aletras at the University of Sheffield. “They work well in many cases, such as finding and collating information, because these models have been trained to do those kinds of things – but for other tasks, it only seemed as though they could do them,” says Aletras. “Now, I think the Apple research has found a blind spot.”

Meanwhile, other research shows that adding more “thinking” time can actually harm an AI model’s performance. Soumya Suvra Ghosal and his colleagues at the University of Maryland tested DeepSeek’s models and found that longer “chain of thought” processes led to reduced accuracy on tests of mathematical reasoning. For one mathematical benchmark, for example, they found that tripling the number of tokens a model uses can boost its performance by about 5 per cent. But using 10 to 15 times as many tokens dropped the benchmark score by about 17 per cent.

In some cases, it seems the “chain of thought” output produced by an AI bears little relation to the final answer it provides. When testing models’ ability to navigate simple mazes, Subbarao Kambhampati at Arizona State University and his colleagues found that even when the AI solved the problem, its “chain of thought” output contained mistakes that weren’t reflected in the final solution. What’s more, feeding the AI a meaningless “chain of thought” could actually produce better answers.

“Our results challenge the prevailing assumption that intermediate tokens or ‘chains of thought’ can be semantically interpreted as the traces of how AI models reason,” says Kambhampati.

Indeed, all of these studies suggest that the “thinking” or “reasoning” labels for these AI models are a misnomer, says Anna Rogers at the IT University of Copenhagen in Denmark. “For as long as I have been in this field, every popular technique I can think of has first been hyped up with some vague cognition-related claim, which was eventually proven wrong.”

Andreas Vlachos at the University of Cambridge points out that LLMs still have clear applications in text generation and other tasks.

“Fundamentally, there is a mismatch between what these models are trained to do, which is next-word prediction, and what we are trying to get them to do, which is to produce reasoning,” says Vlachos.

OpenAI disagrees, however. “Our work shows that reasoning methods such as chain of thought can significantly improve performance on complex problems, and we are actively working to extend these capabilities through better training and evaluation,” says a spokesperson. DeepSeek didn’t respond to a request for comment.
