AI browsers like Atlas from OpenAI and Comet from Perplexity promise convenience. But they come with major cybersecurity risks, forming a new playground for hackers.
AI powered web browsers compete with traditional browsers like Google Chrome and Brave, aiming to attract billions of daily internet users.
A few days ago, OpenAI released Atlas, while Perplexity’s Comet has been around for months. AI-powered browsers can type and click through pages. Users can tell it to book a flight, summarize emails, or even fill out a form.
Basically, AI-powered browsers are designed to act as digital assistants and navigate the web autonomously. They are being hailed as the next big leap in online productivity.
Security researchers flag AI browser flaws
But most consumers are unaware of the security risks that come with the use of AI browsers. Such browsers are vulnerable to sophisticated hacks through a new phenomenon called prompt injection.
Hackers can exploit AI web browsers, gain access to users’ logged-in sessions, and perform unauthorized actions. For example, hackers can access emails, social media accounts, or even view banking details and move funds.
According to recent research by Brave, hackers can embed hidden instructions inside web pages or even images. When an AI agent analyzes this content and sees the hidden instructions, it can be tricked into executing them as if they were legitimate user commands. AI web browsers cannot tell the difference between genuine and fake user instructions.
Brave engineers experimented with Perplexity’s Comet and tested its reaction to prompt injection. Comet was found to process invisible text hidden within screenshots. This approach enables attackers to control browsing tools and extract user data with ease.
Brave’s engineers called these vulnerabilities a “systemic challenge facing the entire category of AI-powered browsers.”
Prompt injection is hard to fix
Security researchers and engineers say that prompt injection is difficult to fix. That’s because artificial intelligence models do not understand where instructions come from. They can’t differentiate between genuine and fake prompts.
Traditional software can tell the difference between safe input and malicious code, but large language models (LLMs) struggle with that. LLMs process everything, including user requests, website text, and even hidden data, and treat it as one big conversation.
That’s why prompt injection is dangerous. Hackers can easily hide fake instructions inside content that looks safe and steal sensitive information.
AI companies admit prompt injection is a serious threat
Perplexity stated that such attacks don’t rely on code or stolen passwords but instead manipulate the AI’s “thinking process.” The company built multiple defense layers around Comet to stop prompt injection attacks. It uses machine learning models that detect threats in real time and has integrated guardrail prompts that keep the AI focused on user intent. Moreover, the browser requires mandatory user confirmation for sensitive actions like sending an email or purchasing an item.
Security researchers believe AI-powered browsers should not be trusted with sensitive accounts or personal data until major improvements are rolled out. Users can still utilize AI web browsers, but with no access to tools, disabled automated actions, and should avoid using them when logged in to banking accounts, emails, or healthcare apps.
The Chief Information Security Officer (CISO) of OpenAI, Dane Stuckey, acknowledged the dangers of prompt injection and wrote on X, “One emerging risk we are very thoughtfully researching and mitigating is prompt injections, where attackers hide malicious instructions in websites, emails, or other sources to try to trick the agent into behaving in unintended ways.”
He explained that OpenAI’s goal is to make people “trust ChatGPT agent[s] to use your browser, the same way you’d trust your most competent, trustworthy, and security-aware colleague or friend.” Stuckey said the team at OpenAI is “working hard to achieve that.”
Join Bybit now and claim a $50 bonus in minutes














English (US)