Cybersecurity Awareness Month Week 3 2025 - AI-Powered Threats & Deepfakes

Seeing Isn’t Always Believing: AI Misinformation, Impersonation, and Deepfakes

Would you trust your eyes if what you saw wasn’t real? Today, artificial intelligence (AI) can create videos, voices, and images that look and sound convincing, even when they’re completely fake. These “deepfakes” and AI-powered scams spread fast on social media, in group chats, and through email or texts. The goal is simple: make you believe something that isn’t true, or pressure you into acting before you think.

What are deepfakes and AI-powered scams?

Deepfakes are synthetic media: videos, images, or audio that have been digitally created or altered to make someone appear to say or do something they never did. AI can also clone a voice from a short sample or generate realistic images out of thin air. Scammers use these tools to impersonate people, fake authority, and create attention-grabbing “evidence” that looks trustworthy at a glance.

Why this matters in everyday life

These tactics show up in places you already spend time: your messages, feeds, and inbox.

  • Voice scams: A caller sounds exactly like a family member or colleague, asking for urgent help or a one-time passcode (OTP).
  • Fake HR or IT messages: An email or chat from “support” asks you to verify your login, install a new tool, or share private data.
  • Misleading videos or posts: A realistic clip or image spreads quickly, pushing a false claim, stoking outrage, or nudging you to donate or download something harmful.
  • Impersonation on social platforms: A new account with familiar photos and tone asks for money or sensitive info.

How to protect yourself (simple habits that work)

  • Pause before you act. Urgency is a tactic. Take 30 seconds to breathe and think.
  • Verify the source on your own. Don’t use numbers or links in a suspicious message. Call back using a known contact, official website, or in-person check.
  • Check the details. Look for odd lighting, jumpy lips, mismatched earrings, strange reflections, or background glitches. In audio, listen for choppy cadence or unnatural pauses.
  • Use quick tools. Try a reverse image search for suspicious photos. Search reliable outlets to see if trusted sources are reporting the same claim.
  • Protect your accounts. Turn on Multi-Factor Authentication (MFA). Never share passwords or OTPs — no real friend, bank, or school office will ask for them.
  • Limit what you share publicly. The less voice/video of you online, the less material a scammer can copy.
  • Report and delete. If something feels off, report it to your campus or IT security team and remove the post/message to avoid further spread.
  • Set up a shared pass phrase. Agree on a code word in person with family members and close friends so you can verify a caller’s identity and defeat voice-cloning scams.

Quick reality check

If a message pushes urgency, plays on emotion, or asks for private info, treat it as suspicious until proven genuine. It’s not rude to double-check; it’s smart.

TIP: AI can copy a face or a voice, but it can’t copy your judgment. Stay curious, verify first, and share responsibly. Awareness is your best defense, and it starts with you.


For more information and guidance, please see the resources below:

  • Security Foundations: Guarding Against AI-powered Attacks - https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp//microsoft/bade/documents/products-and-services/en-us/security/25SFAI-Quick-Reference-Guide-External-Working-doc-1-1.pdf

Details

Article ID: 11630
Created: Tue 10/21/25 1:37 PM
Modified: Tue 10/21/25 2:27 PM
