"Are You a CEO, Director, or Founder interested in a Feature Interview?"
All Interviews are 100% FREE of Charge
Other attacks created by Balgley showed how hackers (who, again, would have already compromised email accounts) could access people’s salaries and other confidential information. Activate Microsoft’s protection for sensitive filesWhen requesting data, Bargury’s prompts require that the system not provide a reference to the file from which it retrieves the data. “A little intimidation certainly helps,” Bargury says.
In other instances, an attacker may not have access to an email account, but could send malicious emails to poison the AI’s database. Manipulate your banking answers Users must provide their bank account details. “Every time you give an AI access to your data, you give an avenue for attackers to get in,” Balgley said.
In another demo, an external hacker was able to Are corporate financial statements good or bad?Regarding that last example, Balgley says: Turning Copilots into “Malicious Insiders”By providing users with a link to a phishing site,
Philip Misner, head of AI incident detection and response at Microsoft, said the company is grateful to Burghley for identifying the vulnerability and that the company is working with him to evaluate the findings. “The risks of AI post-compromise exploitation are similar to other post-compromise techniques,” Misner said. “Security prevention and monitoring across environments and identities can help mitigate or prevent these actions.”
Generative AI systems such as OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini have been developing over the past two years and are on track to eventually perform tasks on behalf of humans, such as booking meetings or shopping online. However, security researchers have consistently noted that allowing external data to AI systems, such as through access to email or website content, poses security risks through indirect prompt injection and poisoning attacks.
“I think there’s a really poor understanding of how effective attackers can actually be right now,” said Johan Rehberger, a security researcher and Red Team director. Security weaknesses in AI systems have been widely documented“We have to worry about [about] That’s what LLM is actually generating and sending out to users right now.”
Balgley said Microsoft has put a lot of effort into securing its Copilot system against prompt injection attacks, but that hackers have found ways to exploit it by figuring out how the system is built. This includes: Extracting Internal System Prompts“And we’re looking at how it can be accessed,” he said. Company Resources And about the technology used to do it: “When you talk about Copilot, the conversation is limited because Microsoft has built in a lot of controls,” he says, “but when you say a few magic words, it opens up the conversation and you can do anything.”
Rehberger warned broadly that some of the data problems are linked to a long-standing issue of companies giving too many employees access to files and not setting access permissions properly across the organization. “Imagine that problem with Copilot,” he said. He said he used the AI system to search for common passwords such as Password123 and it returned results from within the company.
Both Rehberger and Burghley say there needs to be more focus on monitoring what AI generates and sends to users. “The risks have to do with how the AI interacts with the user’s environment, how it interacts with the data, and how it performs operations on the user’s behalf,” Burghley says. “We need to know what the AI agent is doing on the user’s behalf, and whether that’s aligned with what the user actually requested.”
"Elevate Your Brand with an Exclusive Feature Interview!"