An AI-powered system will soon be responsible for assessing potential harms and privacy risks for up to 90% of the updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.
According to NPR, a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of any potential update. Until now, those reviews have been handled mainly by human evaluators.
Under the new system, Meta product teams will reportedly fill out a questionnaire about their work and receive a list of requirements that an update or feature must meet before launch.
This AI-centric approach will allow Meta to update its products more quickly, but a former executive tells NPR it creates "higher risks" that problematic changes will not be caught before "they start causing problems in the world."
In a statement, a Meta spokesperson said the company has "invested over $8 billion in our privacy program" and is committed to meeting its regulatory obligations.
"As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people's experience," the spokesperson said. "We use technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of complex issues."
This post has been updated with additional quotations from Meta’s statement.