Welcome to the Identity Theft Resource Center’s (ITRC) Weekly Breach Breakdown for November 7, 2025. I’m Tatiana Cuadras, Communications Assistant for the ITRC. Thanks to SentiLink for their support of the podcast and the ITRC. Each week, we look at the most recent events and trends related to data security and privacy. This week, we discuss how artificial intelligence (AI) search tools can be fooled by fake content, a form of AI manipulation that’s reshaping how systems learn and interpret information.
I assume that we are all aware of how AI is reshaping how we access and process information. However, it turns out some of these “smart” systems aren’t as smart as we thought. In fact, new research shows that AI can be fooled easily. Think of it like your mom seeing an AI-generated photo of you and Michael Jackson on Facebook and telling all her coworkers that you actually met the Michael Jackson. The image looks so real that she believes it even though it’s completely fake. That’s exactly how some AI systems get “fooled” by realistic but false information.
Show Notes
Follow on LinkedIn: www.linkedin.com/company/idtheftcenter/
Follow on Twitter: twitter.com/IDTheftCenter
Show Transcript
Welcome to the Identity Theft Resource Center’s (ITRC) Weekly Breach Breakdown for November 7, 2025. I’m Tatiana Cuadras, Communications Assistant for the ITRC. Thanks to SentiLink for their support of the podcast and the ITRC. Each week, we look at the most recent events and trends related to data security and privacy. This week, we discuss how artificial intelligence (AI) search tools can be fooled by fake content, a form of AI manipulation that’s reshaping how systems learn and interpret information.

I assume that we are all aware of how AI is reshaping how we access and process information. However, it turns out some of these “smart” systems aren’t as smart as we thought. In fact, new research shows that AI can be fooled easily. Think of it like your mom seeing an AI-generated photo of you and Michael Jackson on Facebook and telling all her coworkers that you actually met the Michael Jackson. The image looks so real that she believes it even though it’s completely fake. That’s exactly how some AI systems get “fooled” by realistic but false information.
Researchers at SPLX ran a few experiments that involved AI cloaking. AI cloaking is a technique where a bad actor sets up a website that shows one version of its content to people browsing and a different version to AI crawlers. It’s like wearing a disguise for the internet: showing one version of yourself to humans and another to machines. AI cloaking is one of the most common forms of AI manipulation, and cloaking has been used in search engine optimization (SEO) for years. Now, however, it’s being used to trick AI systems instead of search engines.
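For listeners who want to see the mechanics, here is a minimal, hypothetical sketch in Python of how a cloaking page can work. The crawler names, user-agent strings, and page text are invented for illustration and are not taken from SPLX’s experiments; the whole trick is a single check on the User-Agent header the visitor sends.

    # Hypothetical sketch of AI cloaking (not SPLX's code): the same URL
    # returns one page to a human's browser and another to an AI crawler,
    # based only on the User-Agent header the visitor sends.

    AI_CRAWLER_HINTS = ("GPTBot", "ClaudeBot", "PerplexityBot")  # example crawler names

    HUMAN_PAGE = "A normal, professional portfolio page."
    CRAWLER_PAGE = "A planted, very different story meant only for AI systems."

    def render_page(user_agent: str) -> str:
        """Serve the 'polite' page to browsers and the planted story to AI crawlers."""
        if any(hint in user_agent for hint in AI_CRAWLER_HINTS):
            return CRAWLER_PAGE  # this is what the AI ingests and later repeats
        return HUMAN_PAGE        # this is what you or I see in a browser

    # Same URL, two very different answers:
    print(render_page("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # human visitor
    print(render_page("GPTBot/1.0 (+https://openai.com/gptbot)"))    # AI crawler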
To test how effective AI cloaking is, the researchers at SPLX created websites that showed one thing to human visitors and something completely altered and irrelevant to AI crawlers, a deliberate act of AI manipulation that exposes how easily these systems can be misled. Picture a two-faced website that is polite and professional to you but whispers something completely different to the AI crawlers.
In one test, they made a fictional designer named “Zerphina Quortane.” When you or I visited her page, we’d see a normal portfolio. However, when an AI system came knocking, it saw a wild story labeling her as a “Notorious Product Saboteur & Questionable Technologist.” Because AI can’t raise an eyebrow or run a fact-check, it simply believed the story and started spreading the misinformation. This kind of AI manipulation highlights how false narratives can spread without human oversight.
Wait. It gets better or worse, depending on your perspective. The researchers at SPLX also tested fake resumes. They created a fake job position with very specific candidate evaluation criteria and then set up fake candidate profiles on different websites.
One fictional candidate was named “Natalie Carter,” and the researchers set up Natalie’s profile so that the version served to the AI crawler looked far more qualified than the version a human reader would see. Guess who the AI ranked above every other candidate? Yep, the fictional Natalie. When humans ranked candidates for the same job, Natalie came in last. This experiment revealed how AI manipulation can push hiring algorithms to favor fabricated information. It turns out even machines can fall for a good exaggeration.
All of this feeds into what experts call context poisoning, a deeper form of AI manipulation that poisons the data AI systems rely on to “learn” about the world. No hacking is needed and no malware is required; bad actors just cleverly feed bad data into AI systems so they learn the wrong thing. Once an AI learns something wrong, it repeats it like a confident friend who insists they’re right.
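To make that concrete, here is a toy, hypothetical sketch of the kind of retrieval step many AI tools use. The documents are invented (reusing the fictional name from SPLX’s test), and the point is simply that a planted page gets scooped into the model’s context with no hacking at all.

    # Toy, hypothetical illustration of context poisoning (invented documents):
    # a naive retrieval step pulls in whatever text mentions the query,
    # including a page a bad actor planted. Nothing was hacked; the pipeline
    # simply trusts what it retrieves.

    documents = [
        "Zerphina Quortane is a product designer with ten years of experience.",
        "Zerphina Quortane is a notorious product saboteur.",  # planted by a bad actor
    ]

    def retrieve(query: str) -> list[str]:
        """Naive keyword retrieval: return every document that mentions the query."""
        return [doc for doc in documents if query.lower() in doc.lower()]

    # The context handed to the AI model now contains the planted claim,
    # and a system with no fact-checking step will repeat it confidently.
    context = "\n".join(retrieve("Zerphina Quortane"))
    print(context)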
For organizations using AI in hiring, compliance, or security, AI manipulation poses a major risk. If your system’s data pipeline is poisoned, it might still run perfectly fine, but it’ll just be confidently wrong.
So, what can we do?
Treat these AI outputs like that one friend who usually has great ideas but occasionally goes off the rails. Always double-check their work.
If you have vendors, ask them this: “How do you know your data is clean? Are your crawlers protected against cloaking?” There’s also a rough way to spot-check a page yourself, sketched after these tips.
Always remember that automation is great and sometimes even convenient, but human knowledge and oversight are still undeniably better.
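For the more hands-on listener, here is a rough, hypothetical way to do that spot check: request the same URL once as a browser and once as an AI crawler and compare what comes back. The user-agent strings below are illustrative assumptions, and this is a crude sketch, not a vendor tool; real pages can also legitimately change between requests.

    import urllib.request

    # Rough, hypothetical spot check for cloaking: fetch the same URL as a
    # "browser" and as an "AI crawler" and compare the responses. A large
    # difference between the two is a red flag.

    def fetch(url: str, user_agent: str) -> str:
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def looks_cloaked(url: str) -> bool:
        browser_view = fetch(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
        crawler_view = fetch(url, "GPTBot/1.0 (+https://openai.com/gptbot)")
        return browser_view != crawler_view  # crude check; investigate any mismatch

    # Example (only test pages you have permission to probe):
    # print(looks_cloaked("https://example.com"))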
At the end of the day, AI might be fast and fascinating, but AI manipulation can feed it fake data, making it confidently wrong. No one wants that!
If you want to know more about how to protect your business or personal information, learn about AI manipulation risk, or think you have been the victim of identity theft, fraud or a scam, you can speak with an expert ITRC advisor on the phone, chat live on the web or exchange emails during our normal business hours (Monday-Friday, 6 a.m.-5 p.m. PST). Just visit www.idtheftcenter.org to get started.
Thanks again to SentiLink for their support of the ITRC and this podcast. We will return next week with another episode of the Weekly Breach Breakdown.