Unpublished U.S. government report on AI safety

Last October, at a computer security conference in Arlington, Virginia, dozens of AI researchers took part in a first-of-its-kind "red teaming" exercise, stress-testing a cutting-edge language model and other artificial intelligence systems. Over two days, the teams identified 139 novel ways to make the systems misbehave, including generating misinformation or leaking personal data. More importantly, they exposed shortcomings in a new U.S. government standard designed to help companies test their AI systems.

The National Institute of Standards and Technology (NIST) did not release a report detailing the exercise, which was completed toward the end of the Biden administration. The document could have helped companies evaluate their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several NIST AI documents that were withheld for fear of clashing with the incoming administration.

"It became very difficult, even under [President Joe] Biden, to get any papers out," said a source at NIST. "It felt like climate change research or cigarette research."

Neither NIST nor the Department of Commerce responded to requests for comment.

Before taking office, President Donald Trump said he planned to reverse Biden's executive order on AI. Since then, Trump's administration has steered researchers away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action Plan explicitly calls for revisions to NIST's AI Risk Management Framework to "eliminate references to misinformation, diversity, equity and inclusion, and climate change."

Ironically, though, Trump's AI Action Plan also calls for exactly the kind of exercise the unpublished report covered. It calls on numerous agencies, along with NIST, to "coordinate an AI hackathon initiative to solicit the best and brightest from American academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities."

The red-teaming exercise was organized through NIST's Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems. The event took place at the Conference on Applied Machine Learning in Information Security (CAMLIS).

The CAMLIS red-teaming report describes efforts to probe several state-of-the-art AI systems: Llama, Meta's open-source large language model; Anote, a platform for building and fine-tuning AI models; a system that blocks attacks on AI systems from Robust Intelligence, a company acquired by Cisco; and a platform for generating AI avatars from the firm Synthesia. Representatives from each company also took part in the exercise.

Participants were asked to use the NIST AI 600-1 framework to evaluate the AI tools. The framework covers risk categories including the generation of misinformation or cybersecurity attacks, the leakage of private user information or critical information about related AI systems, and the potential for users to become emotionally attached to AI tools.

The researchers discovered various techniques for getting the models and tools under test to jump their guardrails, generate misinformation, leak personal data, and help craft cybersecurity attacks. The report says those involved found some elements of the NIST framework more useful than others, and that some of NIST's risk categories were insufficiently defined to be useful in practice.
