The dangers of so-called AI experts believing their own hype

Demis Hassabis, CEO of Google DeepMind and a Nobel prizewinner for his role in developing the AlphaFold AI algorithm for predicting protein structures, made a startling claim on the 60 Minutes show in April. With the help of AI like AlphaFold, he said, the end of all disease is within reach, "maybe within the next decade or so". And with that, the interview moved on.

To those who actually work in drug development and discovery, this claim is laughable. Medicinal chemist Derek Lowe, who has worked in drug discovery for decades, wrote that Hassabis's statement made him want to spend some time staring out of his own window. But you don't need to be an expert to recognise the hyperbole: the idea that all disease could be ended within a decade or so is absurd.

Some have suggested that Hassabis's remarks are just another instance of tech leaders overpromising, perhaps to attract investors and funding. Isn't it like Elon Musk making fanciful forecasts about Martian colonies, or OpenAI's Sam Altman claiming that artificial general intelligence (AGI) is just around the corner? But while the cynical view may be tempting, it lets these experts off the hook and misses the real problem.

It is one thing when such authorities make grandiose claims outside their area of expertise (see Stephen Hawking on AI, aliens and space travel). But it could be argued that Hassabis stayed in his lane here. His Nobel citation mentioned new pharmaceuticals as a potential benefit of AlphaFold's predictions, and the release of the algorithm was accompanied by media headlines about it transforming drug discovery.

Likewise, when his fellow 2024 Nobel laureate Geoffrey Hinton, once an AI adviser at Google, maintains that the large language models (LLMs) he helped make possible work in a way that resembles human learning, it sounds as though he speaks from deep knowledge. Never mind the cries of protest from researchers of human cognition – and, in some cases, from AI researchers too.

What such moments reveal is that, strangely, some AI experts seem to have swallowed the hype about their own products, while their understanding of other relevant fields is, at best, skin-deep.

Here is another example: Daniel Kokotajlo, a researcher who quit OpenAI over concerns about its work and is now executive director of the AI Futures Project in California, has said that we are getting AIs that lie to us, and that we can be pretty sure they know what they are saying is false. This anthropomorphic language of knowledge, intent and deceit suggests that Kokotajlo has lost sight of exactly what LLMs are.

The dangers of assuming these experts know best are exemplified by Hinton's comment in 2016 that, thanks to AI, "people should stop training radiologists now". Thankfully, radiology experts didn't believe him, though some suspect a link between what he said and growing wariness among medical students about radiology as a career. Hinton has since repeated such claims – but imagine how much more weight they carry now that he has a Nobel prize. The same applies to Hassabis's comments about disease: the idea that AI will do the heavy lifting could breed complacency, when what we need is exactly the opposite, in both science and science policy.

These expert prophets tend to get little pushback from the media, and I can personally attest that even some smart scientists believe them. Many government leaders, too, give the impression of swallowing the hype of tech CEOs and Silicon Valley gurus. But I suggest we start treating their pronouncements with the scepticism they deserve, meeting their superficial confidence with doubt until the facts have been checked.

Philip Ball is a science writer based in London. His latest book is How Life Works.
