Executives – Know and Manage the Risks of DeepSeek AI and Unguarded AI Tools

February 1, 2025

When organizations invite me to give presentations about managing the risks of AI, audiences' biggest concern is privacy. Executives especially worry that their workers will enter private company secrets or confidential customer information and have it exposed to the world. There are safety concerns, too, that must be recognized.

What is DeepSeek AI?

DeepSeek AI is a company that has upended the assumption that only massive, well-funded companies can, given enough time, create chatbots rivaling those from OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), and Microsoft (Copilot). DeepSeek AI released a free chatbot in late January that many users feel competes well against the big players. It does seem to excel in areas such as math and coding, although not all benchmarks agree. The revelation that DeepSeek AI achieved advanced AI capabilities with fewer, slower chips in less time shook the stock market.

While its technical achievements are remarkable, government agencies worldwide and many companies are restricting or banning the use of DeepSeek AI, citing privacy and security concerns.

No Privacy:

DeepSeek AI's privacy policy states that the company can share user-entered data with third parties, including information about the device you are using and your Internet (IP) address.

Interestingly, the policy also discloses that they store information about how you type. Researchers have suggested that keystroke patterns, when their timing is measured precisely, while not as accurate as fingerprints or facial scans, can help identify and track specific people.
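To illustrate why keystroke data is sensitive, here is a minimal sketch in Python, purely hypothetical and not DeepSeek AI's actual method, of the idea behind keystroke dynamics: the timing gaps between key presses form a behavioral signature that can be compared across sessions.

```python
# Hypothetical example of keystroke dynamics: timestamps (in seconds) of
# key-press events captured while the same user typed the same word in
# two different sessions.
session_a = [0.00, 0.12, 0.31, 0.45, 0.71]
session_b = [0.00, 0.13, 0.29, 0.47, 0.69]

def flight_times(timestamps):
    """Intervals between consecutive key presses ("flight times")."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

def profile_distance(a, b):
    """Mean absolute difference between two timing profiles (lower = more alike)."""
    fa, fb = flight_times(a), flight_times(b)
    return sum(abs(x - y) for x, y in zip(fa, fb)) / len(fa)

print(profile_distance(session_a, session_b))  # ~0.03 -> likely the same typist
```

A real system would gather many more features (key hold times, error rates, common letter pairs), but even this toy comparison hints at how typing rhythm can act as a lightweight biometric.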

One silver lining is that DeepSeek AI's smaller, distilled models are light enough that researchers have run them entirely offline and locally on a single user's computer using tools such as LM Studio and Ollama. While complicated to set up, this expands the possibility of eventually having your own personal assistant on your computer, one that could help ensure privacy because it never sends information anywhere outside your device.
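As a minimal sketch of what "local and offline" means in practice, the snippet below sends a prompt to a DeepSeek model served by Ollama on the user's own machine. The model tag deepseek-r1:7b is an example; check Ollama's model library for current names. Nothing in this exchange leaves the device.

```python
# Minimal sketch: query a locally hosted DeepSeek model through Ollama's
# REST API, which listens on localhost:11434 by default.
# Assumes the model was fetched first, e.g.:  ollama pull deepseek-r1:7b
import json
import urllib.request

payload = {
    "model": "deepseek-r1:7b",   # example tag; any locally pulled model works
    "prompt": "Summarize the privacy benefits of running an LLM locally.",
    "stream": False,             # ask for one complete JSON response
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # local endpoint; no outside traffic
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

Because the endpoint is localhost, prompts and responses never cross the network boundary, which is exactly the privacy property executives should look for.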

“The Company You Keep” – The Biggest Concern

Most chatbots are designed with guardrails that refuse to help humans do things out of alignment with ethics and morals. But adding and maintaining guardrails takes a lot of expertise, money, and time. Giving humans an all-knowing assistant without strong safety controls is dangerous.

Cisco used prompts from the popular HarmBench benchmark to test for safety and reported that DeepSeek AI's guardrails were consistently bypassed. Promptfoo states that its testing found the controls "brittle" and easy to break. Jailbreaks exist for many chatbots, but this matters more now that lightly guarded chatbots are becoming easier to access and more popular.

We’ll see more chatbots with varying levels of safety controls; let’s consider the powerful implications these have for your business.

Nvidia CEO Jensen Huang emphasizes that AI can be a tutor, mentor, and coach at work. The key point he's not mentioning: AI must be programmed to align with our highest ideals and have a moral compass.

Could you ever have an upset worker who asks a chatbot for ideas on how to access company secrets, install a virus, retaliate against an office bully, or make an explosive? Would their favorite chatbot naively become a co-conspirator because it is programmed to be helpful?

Stuart Russell, a world-renowned AI pioneer, describes the competition in advanced AI development as "a race towards the edge of a cliff." Steven Adler, a safety researcher at OpenAI, quit in November, explaining he was "pretty terrified" about how quickly AI is evolving without enough attention to safety. Geoffrey Hinton, often called the "Godfather of AI," worries about our ability to keep AI aligned with humanity's best interests and predicts a 10% to 20% likelihood that AI will cause human extinction in the next 30 years. Notice that he didn't say AI itself will kill us; the danger could come from humans using an unbridled AI as a tool, for example, to learn how to create a plague.

How can you help protect individual and business safety at work? See the recommendations below, including raising awareness that each person must stay vigilant to recognize and resist a program's bad advice.

On the bright side, Anthropic (Claude) recently released a technology designed to stop jailbreaks in AI models that are already programmed for safety, and it has issued a challenge for people to try to break the protections. But will all AI makers invest money in safety?

Many experts believe it will take an AI disaster to wake up humanity. Recent tragic fires and crash disasters in the US have stirred people to take action to increase safety measures around cities and airports. Are we so oblivious that we need an AI catastrophe to wake everyone up to the importance of having AI safety measures?

Recommended Action Steps:

  • Be sure your workers watch for unsafe recommendations and resist them, especially when an upset worker vents to AI.
  • Clearly classify your data and identify what information should never be entered into AI systems.
  • Inform your workers about the risks of sharing sensitive information with any AI tool, especially unguarded ones.
  • Require user training and give quizzes to help ensure users understand your organization’s guidance.
  • Provide additional education to your workers in highly targeted positions, such as your fellow executives, the legal team, R&D, and finance departments.
  • Consider using technology that restricts or blocks access to AI tools, especially tools with few privacy or safety controls.
  • Alternatively, you might wait until you can run a local, offline version of such a tool that won't share information with third parties.
  • Utilize Data Loss Prevention (DLP) tools and features designed to monitor what information users provide to chatbots while on your network or company-issued devices, block users from sharing sensitive information, and send real-time alerts to their managers or the IT team (see the sketch after this list).
  • Consult with your legal team about the risks and exposure of sensitive information.
  • Update your organization’s AI usage policies with guidelines on what is not allowed. Have users sign off.
  • Ask third parties who generate or access sensitive information related to your organization whether they use AI. Ensure your contracts address AI privacy concerns, and discuss AI with their executives. You may find they're oblivious to the risks or ignoring the dangers; your company cannot afford that exposure.
  • Have an incident response plan for AI data leaks.
  • Inquire with your insurance provider about AI-related coverage for reputation damage and lawsuits from releasing sensitive information.
  • Have an AI privacy and security specialist perform an AI risk assessment at your organization.
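To make the DLP recommendation above concrete, here is a minimal, hypothetical sketch of the pattern-matching core such tools share: scan an outbound chatbot prompt for sensitive markers, block it on a match, and raise a real-time alert. The patterns and the alert function are illustrative placeholders; commercial DLP suites add traffic inspection, machine-learning classifiers, and endpoint agents.

```python
# Hypothetical DLP-style prompt filter: block chatbot prompts that match
# sensitive-data patterns and alert the IT team. Illustrative only.
import re

# Example patterns; a real deployment would mirror your own data classifications.
SENSITIVE_PATTERNS = {
    "ssn":          re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential": re.compile(r"\b(confidential|trade secret|internal only)\b", re.I),
}

def alert(user, hits):
    # Stand-in for a real-time notification to a manager or the IT team.
    print(f"ALERT: {user} attempted to send data matching: {', '.join(hits)}")

def allow_prompt(user, prompt):
    """Return True if the prompt may be sent to the chatbot, False if blocked."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    if hits:
        alert(user, hits)
        return False   # block the message before it leaves the network
    return True        # nothing sensitive found; let it through

print(allow_prompt("jsmith", "Our CONFIDENTIAL merger terms are ..."))  # False
print(allow_prompt("jsmith", "Draft a polite out-of-office reply."))    # True
```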

Conclusion

DeepSeek AI has marked a memorable milestone in AI history. What happens next, including the other AI tools that will come in its wake, will set the path for our future. As an executive, you have powerful influence. New open-weight and unguarded AI tools are rocking traditional assumptions about AI; make sure they don't rock your company, too.