Google Threat Intelligence Group (GTIG) has published a new report warning about AI model extraction/distillation attacks, in which private-sector firms and researchers use legitimate API access to ...
Researchers identified an attack method dubbed “Reprompt” that could allow attackers to infiltrate a user’s Microsoft Copilot session and issue commands to exfiltrate sensitive data. By hiding a ...
Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a legitimate URL. The hackers in this case were white ...
Researchers discover Gemini AI prompt injection via Google Calendar invites Attackers could exfiltrate private meeting data with minimal user interaction Vulnerability has been mitigated, reducing ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results