
A poisoned file can leak “secret” data through ChatGPT

The latest AI models are not just standalone text-generating chatbots; they can easily be connected to your data to provide personalized answers to your questions. OpenAI’s ChatGPT can be linked to your Gmail inbox, allowed to search your GitHub code, or used to find dates in your Microsoft calendar. But these connections have the potential to be abused, and researchers have shown that a single “poisoned” file is all it takes.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI’s Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, which Bargury calls AgentFlayer, he shows how developer secrets, in the form of API keys stored in a demonstration Drive account, could be extracted.

The vulnerability highlights how connecting AI models to external systems, and sharing more data across them, increases the potential attack surface for malicious hackers and multiplies the ways in which vulnerabilities can be introduced.

“There is nothing the user needs to do to be compromised, and nothing the user needs to do for the data to go out,” Bargury, the chief technology officer at security firm Zenity, told WIRED. “We have shown that this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad,” Bargury said.

OpenAI did not immediately respond to WIRED’s request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked to accounts. It says the system allows you to “bring tools and data into ChatGPT” and “search files, pull in real-time data, and reference content in chats.”

Bargury said he reported the findings to OpenAI earlier this year, and the company quickly introduced mitigations to prevent the technique he used to extract data through Connectors. The way the attack works means that only a limited amount of data can be extracted at once; full documents could not be exfiltrated as part of the attack.

“While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,” said Andy Wen, senior director of security product management at Google Workspace.
