xAI is blaming an "unauthorized modification" for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to "white genocide in South Africa" when invoked in certain contexts on X.
On Wednesday, Grok began replying to several X posts with information about "white genocide in South Africa," even in response to unrelated subjects. The odd replies came from the X account for Grok, which responds to users with AI-generated posts whenever a person tags "@grok."
According to a post Thursday from xAI's official X account, a change was made Wednesday to the Grok bot's system prompt that directed Grok to provide a "specific response" on a "political topic." xAI says the tweak "violated [its] internal policies and core values," and that the company has "conducted a thorough investigation."
This is the second time xAI has publicly acknowledged that an unauthorized change to Grok's code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, the billionaire founder of xAI and owner of X. Igor Babuschkin, an xAI engineering lead, said that a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.
xAI said Thursday that it will make several changes to prevent similar incidents from happening in the future.
Starting now, xAI will publish Grok's system prompts on GitHub, along with a changelog. The company also said it will "put in place additional checks and measures" to ensure that xAI employees can't modify the system prompt without review, and that it will have a team respond to incidents with Grok's answers that aren't caught by automated systems.
Despite Musk's frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AI like Google's Gemini and ChatGPT, cursing without much restraint.
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly on safety relative to its peers, owing to its "very weak" risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.