A poisoned document can leak “secret” data through ChatGpt


Last productive Artificial intelligence models are not just independent text chats-in mode, they can easily connect to your data to give personal answers to your questions. ChatGpt Openai can be linked to your Gmail Inbox, allowed to inspect your GitHub code or find appointments on the Microsoft calendar. But these connections have the opportunity to be harassed – and researchers have shown that it can only do one “poisoned” document to do so.

The new findings of Michael Bargari’s and Tamir Eisha Sharbat security researchers, leaked today at the Black Hacker Hacker Conference in Las Vegas, show how the weakness in OpenAI has allowed to extract sensitive information from a Google Drive account using an indirect rapid injection attack. In a show of the attack, the nickname Agentflayer, the company shows how the developer secrets can be extracted in the form of API keys stored in a demonstration drive account.

This vulnerability shows how to connect AI models to external systems and to share more data across them increases the potential attack level for malicious hackers and potentially multiply ways to introduce vulnerabilities.

Bargury, CTO at Zenity Security Company, tells Wired, “There’s nothing that the user does to endanger, and there is nothing the user to do to get the data out.” “We have just shown that it is quite zero; we only need your email, we share the document with you, and that’s the same. So yes, that’s a lot of bad.”

Openai did not immediately respond to Wired request to comment on vulnerability in connections. Earlier this year, the company introduced connections for ChatGPT as a beta feature, and its website lists at least 17 different services that can be related to its accounts. The system says the system allows you to “insert your tools and data into the umbrella” and “search files, draw live data and the right reference content in the chat.”

Bargury says he reported Openai’s findings earlier this year and has rapidly declined to prevent the technique he used to extract data through connections. The way the attack works means that only a limited amount of data can be extracted simultaneously – FULL documents cannot be eliminated as part of the attack.

“Although this is not specific to Google, it shows why it is important to create strong support against rapid injection attacks,” said Andy Van, a senior manager of security products in Google recently.

Leave a Reply

Your email address will not be published. Required fields are marked *