94% of Americans don't understand privacy risks when using AI at work

94% of Americans miss privacy risks when using AI at work, and 24% can’t spot AI-driven scams. NordVPN’s data warns the AI era is widening privacy gaps—and scammers are getting sharper.

Meanwhile, 1 in 4 can't spot AI-powered scams targeting them.

Cybersecurity company NordVPN today, on Data Privacy Day, revealed new insights into the privacy risks Americans face as AI tools become embedded in their daily work routines.

According to data from the National Privacy Test (NPT) covering the full year of 2025, 94% of Americans do not know which privacy issues to consider when using AI for work. As millions of workers turn to AI assistants such as ChatGPT, Copilot, and other generative tools to boost productivity, they may be unknowingly exposing sensitive personal and company data.

"The rapid adoption of AI in the workplace has outpaced our understanding of its risks. People are typing confidential information into AI tools without realizing where that data goes, how it's stored, or who might have access to it," says Marijus Briedis, chief technology officer at NordVPN. 
"Unlike a conversation with a colleague, interactions with AI tools can be logged, analyzed, and potentially used to train future models. When employees share client details, internal strategies, or personal information with AI assistants, they may be creating privacy vulnerabilities they never intended," says Briedis. 

The risks of AI don’t end at work 

While Americans struggle to protect their data when using AI tools, they are also increasingly becoming targets of AI-powered attacks. The same technology that boosts workplace productivity is being weaponized by cybercriminals to create scams that are more convincing than ever before. 

According to the National Privacy Test, nearly one in four Americans (24%) cannot correctly identify common scams carried out with AI technology, such as deepfakes and voice cloning. As AI capabilities extend beyond voice cloning to fabricating entire videos, complete with realistic body movements of characters that resemble real people, these scams are becoming increasingly difficult to detect.

The consequences for individuals are already severe. According to previous NordVPN research, 78% of Americans encountered online scams in the past two years, and 20% of them lost money as a result. Nearly half (46%) of US respondents admitted to clicking an email link that they later realized was part of a cyber scam.

"AI has simplified cybercrime. You no longer need technical expertise to craft a convincing phishing email, clone someone's voice, or build a fake shopping website that looks identical to the real thing," says Briedis.
"Scammers use AI to design almost identical replicas of popular retail sites. The barrier to entry for cybercriminals has never been lower." 

The threat is only expected to grow. NordVPN experts predict that AI-driven attacks will be among the key cybersecurity risks in 2026, with cybercriminals using increasingly sophisticated methods to exploit vulnerabilities. 

Expert advice on protecting yourself in the AI era 

To help Americans navigate the growing AI vulnerability gap, Marijus Briedis offers the following risk-prevention measures.

When using AI tools at work: 

  • Never input confidential company data, client information, or personal details into AI assistants.
  • Understand that your conversations with AI tools may be logged, stored, and potentially used to train future models.
  • Check your organization's AI usage policies before using generative AI for work tasks.

To protect yourself from AI scams: 

  • Be skeptical of unexpected calls or messages, even if the voice sounds familiar — establish a family code word for emergencies.
  • Verify requests for money or sensitive information through a separate, trusted communication channel.
  • Remember that AI can now create convincing fake videos and images — seeing is no longer believing.
  • Use reputable security tools and keep your software updated to protect against emerging threats.

Methodology: The National Privacy Test is an open-access survey that allows anyone in the world to take the test and compare their results with global averages. In 2025, 36,667 respondents from 192 countries answered 22 questions evaluating their online privacy skills and knowledge.