Meta reportedly plans to use AI instead of humans to assess product risks

According to internal documents reviewed by NPR, Meta is reportedly planning to replace human risk assessors with AI as the company edges closer to full automation.
Historically, Meta has relied on human analysts to assess the potential harms of new technologies on its platforms, including updates to algorithms and safety features, as part of a process known as privacy and integrity reviews.
In the near future, however, these assessments may be taken over by AI, as the company hopes to automate up to 90 percent of the work.
NPR reports that although Meta previously said AI would be used only to evaluate "low-risk" releases, the company is now rolling out the technology for decisions on AI safety, youth risk, and integrity, including misinformation and violent content moderation. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, giving engineers greater decision-making power.
While the automation may speed up app updates and developer releases, in line with Meta's efficiency goals, insiders say it could also pose greater risks to billions of users, including unnecessary threats to their data privacy.
In April, Meta's Oversight Board issued a series of decisions that simultaneously validated the company's stance on allowing "controversial" speech and rebuked the tech giant for its content moderation policies.
"As these changes are rolled out globally, the Board emphasizes that it is now essential to identify and address adverse impacts on human rights that may result from them," the decision reads. "This should include assessing whether reducing reliance on automated detection of policy violations could have uneven consequences, especially in countries experiencing current or recent crises, such as armed conflict."
Earlier this month, Meta also shut down its human fact-checking program, replacing it with crowdsourced Community Notes and relying more heavily on its content moderation algorithms, internal technology that is known to have both missed and wrongly flagged misinformation and other posts violating the company's recently overhauled content policies.
Topics: Artificial Intelligence, Meta