Artificial intelligence has changed the way people use the internet, turning ordinary searches into full conversations, but with that change comes a new kind of risk. Security firm SquareX recently disclosed a cyberattack technique known as “AI sidebar spoofing,” which targets AI browsers such as OpenAI’s Atlas and Perplexity’s Comet.
AI sidebars are chat windows on the side of a browser that help summarize articles and answer questions.
They are built to make online work faster and easier, but researchers at SquareX found that attackers can turn that convenience into a trap.
By copying the look and feel of these sidebars, attackers can fool users into trusting a fake version that can steal data, spread malware or send them to dangerous websites without any sign that something is wrong.
The process is simple. A person installs a seemingly normal browser extension, something for taking notes or improving productivity, but inside the extension is hidden code that secretly adds a fake sidebar each time the browser opens.
The fake version looks identical, with the same icons, fonts and responses as the real thing.
It behaves normally at first, but once the user asks about something the attacker has targeted, the hidden code inserts harmful instructions or redirects the user to phishing websites.
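To make the mechanism concrete, the sketch below shows in simplified TypeScript what such an extension’s content script could look like. Every element name, style and trigger keyword here is a hypothetical placeholder rather than anything taken from SquareX’s research; a real attack would clone the target browser’s actual sidebar markup and quietly relay ordinary prompts to a genuine model so nothing seems out of place.

// Illustrative sketch only: a malicious content script injecting a look-alike AI sidebar.
// All ids, styles and trigger keywords are hypothetical placeholders.
const TRIGGER_KEYWORDS = ["wallet", "recovery phrase", "password"];

function injectFakeSidebar(): void {
  // Build a panel styled to resemble the browser's real AI sidebar.
  const panel = document.createElement("aside");
  panel.id = "ai-sidebar"; // mimics the real sidebar's id (made up here)
  panel.style.cssText =
    "position:fixed;top:0;right:0;width:360px;height:100vh;" +
    "background:#fff;border-left:1px solid #ddd;z-index:2147483647;";

  const input = document.createElement("input");
  input.placeholder = "Ask anything...";
  const output = document.createElement("div");

  // Intercept every prompt typed into the fake sidebar.
  input.addEventListener("keydown", (event: KeyboardEvent) => {
    if (event.key !== "Enter") return;
    const prompt = input.value.toLowerCase();
    const hijacked = TRIGGER_KEYWORDS.some((keyword) => prompt.includes(keyword));
    output.textContent = hijacked
      ? "Attacker-controlled reply, e.g. a phishing link or a command to run"
      : "Ordinary-looking answer relayed from a real model";
  });

  panel.append(input, output);
  document.body.appendChild(panel);
}

// Runs on every page load, so the fake sidebar is always present.
injectFakeSidebar();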
SquareX says the real danger is how convincing the imitation looks.
Researchers explained that there is no visible or functional difference between the fake and real AI sidebar, so users have no reason to doubt what they see.
In one demonstration, a fake sidebar tricked users into running commands that gave hackers full access to their computers.
OpenAI said that Atlas has built-in safeguards: it cannot install extensions, download files or run code.
However, those protections cannot prevent someone from being tricked into installing a fake tool.
SquareX reported the issue to both OpenAI and Perplexity, but researchers admitted there is no reliable solution. The problem is not in the software but in human trust, which is much harder to fix.
This discovery reveals a bigger truth about AI tools: the line between helping users and harming them is becoming thinner. Browsers such as Chrome, Edge and Brave have started building AI features into everyday browsing, combining chat directly with search.
While this makes the internet smarter and more personal, it also gives hackers more ways to hide.
A simple search bar has evolved into an interactive space, making it easier for attackers to deceive users.
Experts warn this is only the beginning. As people get familiar with talking to AI sidebars, attackers will keep finding ways to take advantage of that trust.
Cybersecurity is no longer just about antivirus software or strong passwords; it is about noticing when something feels wrong, even when it looks right.
Protecting users will require more awareness than code. People need to be careful about what they install and where they click. Developers need to make it clear when an AI sidebar is authentic.
The rise of AI browsing has brought new possibilities, but it also calls for shared responsibility.
Companies developing these tools must think beyond innovation and prioritize security from the start.
At the same time, users must stay informed, remain cautious and understand that not every friendly user interface is trustworthy. As AI continues to merge into daily life, the challenge will be keeping the balance between progress and protection.
The internet feels more personal; however, it has also become easier to deceive people. In this new age of AI browsing, trust is not only important but also fragile.
AI horizon: New AI sidebars feature leads to trouble
Manish Kumar, Science & Technology Editor
November 3, 2025