Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today’s most advanced artificial intelligence systems, remarked: “The thing that’s really amazing is, the way you program an A.I. is like the way you program a person.” Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, likewise says it is only a matter of time before AI can do everything humans can do, because “the brain is a biological computer.”
I am a neuroscience researcher, and I think they are dangerously wrong.
The greatest danger is not that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists and popular culture alike seized on the idea that the brain works like the newest machine of the day: a clock, a switchboard, a computer. The latest misconception is that our brains are like AI systems.
I have watched this shift over the past two years at conferences, in courses and in conversations in the field of neuroscience and beyond. Words like “training,” “fine-tuning” and “optimizing” are frequently used to describe human behavior. But we don’t train, fine-tune or optimize the way AI does. And such inaccurate metaphors can cause real harm.
The 17th-century idea of the mind as a “blank slate” imagined children as empty surfaces shaped entirely by outside influences. This led to rigid educational systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early-20th-century “black box” model from behaviorist psychology claimed that only visible behavior could be studied scientifically. As a result, mental health care often focused on managing symptoms rather than understanding their emotional or biological causes.
And now new misbegotten approaches are emerging as we begin to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child’s responses, theoretically keeping the student at an optimal level of learning. This approach is largely inspired by how an AI model is trained.
This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or curiosity. Consider two children learning piano with the help of a smart app that adjusts to their changing abilities. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes yet enjoys every minute. Judged solely by the metrics we apply to AI models, we would say the flawless player has outperformed the other student.
But educating children is different from training an AI algorithm. That simplistic assessment fails to account for the first student’s misery or the second’s enjoyment. Those things matter; chances are that the child who loves playing will be the one still at the piano a decade from now, and may even end up the better and more original musician, because they enjoy the activity, mistakes and all. AI-driven learning is probably here to stay, and it can be harnessed for good, but if we treat children as something to be “trained” and “optimized,” we repeat the old mistake of emphasizing output over experience.
I see this playing out with undergraduate students who, for the first time, believe they can optimize the very process of learning. Many have used AI tools over the past two years (some courses permit it, others do not) and now rely on them to produce writing that is polished, well supported and seemingly insightful. They use AI as a tool that helps them generate good essays, yet the process in many cases no longer involves much original thinking, or the discovery of what sparks a student’s curiosity.
If we continue to think within this brain-as-AI framework, we also risk losing the vital thought processes that have driven major breakthroughs in science and art. These breakthroughs came not from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. A fortunate error by a messy researcher that went on to save the lives of hundreds of millions of people.
This messiness is not important only for eccentric scientists. It matters to every human brain. One of the most interesting neuroscience discoveries of the past two decades is the “default mode network,” a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network has also been found to play a role in reflecting on the past, imagining the future and thinking about ourselves and others. Dismissing this mind-wandering behavior as a glitch, rather than embracing it as a core human feature, risks leading us to build misguided educational, professional and legal systems that leave no room for it.
Unfortunately, it has become ever easier to confuse AI with human thinking. Microsoft describes AI models like ChatGPT on its official website as tools that “mirror human expression, redefining our relationship to technology.” And OpenAI CEO Sam Altman recently highlighted his favorite new ChatGPT feature, called “memory.” This function allows the system to retain and recall personal details across conversations. If you ask ChatGPT where to eat, for example, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. “It’s not that you plug your brain in one day,” Altman explained, “but … it’ll get to know you, and it can become this extension of yourself.”
The suggestion that AI “memory” will become an extension of the self is another flawed metaphor, one that leads us away from understanding both the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape recollections based on myriad factors, AI memory can store information with far less distortion or forgetting. A life in which people outsource memory to a system that remembers nearly everything is not an extension of the self; it is a departure from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. It may begin with small things, such as choosing a restaurant, but it can easily extend to far bigger decisions, like which career path or which relationships to pursue, with the AI’s stored memories nudging us toward one choice or another.
This outsourcing may be tempting because the technology feels so human, but AI learns, understands and sees the world in fundamentally different ways, and it does not experience pain, love or curiosity as we do. The consequences of this ongoing confusion could be dire: not because AI is inherently harmful, but because instead of molding it to serve our human minds, we will allow it to reshape our minds in its image.
Iddo Gefen is a cognitive neuroscience Ph.D. candidate at Columbia University and the author of the novel “Mrs. Lilienblum’s Cloud Factory.” His Substack newsletter, Neuron Stories, connects neuroscience insights to everyday human behavior.