Background waves

The glaring security risks with AI browser agents

GettyImages-1290478576.jpg

New AI-powered web browsers such as OpenAI’s ChatGPT Atlas and Perplexity’s Comet are trying to unseat Google Chrome as the front door to the internet for billions. A key selling point of these products is their web browsing AI agents, which promise to complete tasks on a user’s behalf by clicking on websites and filling out forms. However, people may not be aware of the major risks to user privacy that come along with agentic browsing—a problem the entire tech industry is trying to address.

Risks and Weaknesses of AI Browsers

Cybersecurity experts argue that AI browser agents pose a greater risk to privacy when compared to traditional browsers. Users should consider how much access they grant web browsing AI agents, carefully weighing the benefits against potential risks. To work efficiently, AI browsers like Comet and ChatGPT Atlas request a high level of access, such as viewing and acting within a user’s email, calendar, or contact list. In testing by TechCrunch, these agents prove moderately useful for simple tasks, particularly when granted broad access. Nevertheless, current AI browser agents often struggle with complex tasks or require significant time to complete them, which can make them seem more like a novelty than a productivity booster.

However, this broad access comes with significant drawbacks. A primary concern is “prompt injection attacks”—a vulnerability that can be exploited if bad actors hide harmful instructions on web pages. When an agent analyzes such a page, it could be tricked into executing commands from an attacker. Without enough safeguards, these attacks can unintentionally expose user data like emails or logins or cause agents to take actions such as making unintended purchases or social media posts on behalf of the user. This type of attack is relatively new and is closely linked to the emergence of AI agents, and currently, there is no definitive solution to prevent it entirely.

The launch of ChatGPT Atlas may encourage more people to try out AI browser agents, which in turn could make these security risks more pressing. Brave, a browser company focused on privacy and security, published research concluding that indirect prompt injection attacks are a “systemic challenge facing the entire category of AI-powered browsers.” While initially observed in Perplexity’s Comet, the issue is now recognized as widespread across the industry. Shivan Sahib, a senior research & privacy engineer at Brave, points out that although there is a massive opportunity to improve user convenience, the browser performing actions on behalf of users introduces fundamentally new security risks.

Industry Response and User Protection Strategies

OpenAI’s Chief Information Security Officer, Dane Stuckey, publicly acknowledged that “prompt injection remains a frontier, unsolved security problem,” and that adversaries will continue to attempt new attacks on ChatGPT agents. Meanwhile, Perplexity’s security team emphasized that prompt injection attacks are so severe they “demand rethinking security from the ground up,” and they noted these attacks manipulate the AI’s decision-making, turning it against its user.

To address these challenges, both OpenAI and Perplexity have developed several safeguards. OpenAI introduced a “logged out mode” for ChatGPT Atlas, so the agent isn’t logged into user accounts while browsing, limiting both the risk and the agent’s usefulness. Perplexity claims to have developed a detection system capable of identifying prompt injection attacks in real time. Cybersecurity experts recognize these improvements as positive, but they warn there is no guarantee that these defenses make web browsing agents immune to attackers.

Steve Grobman, CTO at McAfee, explains that prompt injection attacks exploit the inability of large language models to distinguish the source of instructions clearly. There is a somewhat vague separation between core model instructions and the data it consumes, making it difficult to eliminate the issue. These attacks are constantly evolving, moving from simple hidden text prompts on web pages to more sophisticated methods such as hidden instructions embedded in images.

Users can take several practical steps to protect themselves when using AI browsers. Rachel Tobac, CEO of SocialProof Security, recommends using unique passwords and enabling multi-factor authentication on AI browser accounts since attackers will likely target these credentials. Users should also consider restricting the access granted to AI agents, especially by siloing them from sensitive banking, health, or personal accounts. As AI browsers mature, their security is expected to improve, so users may wish to wait before allowing broad access to these tools.

Tags: siguria kibernetike, agjentët AI, privatësia online, prompt injection, shfletues me inteligjencë artificiale, mbrojtja e të dhënave