Six Essential AI Safety Practices for Leaders

by Mike Foster | December 16, 2025

As organizations increasingly adopt AI tools, it’s crucial to implement basic safety measures that help maintain your competitive advantage, prevent costly breaches, and preserve client trust. But with so many considerations, where do you start? Here are six essential AI safety tips every leader should follow:

1. Choose Which AI Tools You Will Trust with Your Data

Third-party tools offer features such as recording your meetings and summarizing the notes, ingesting all your data to augment their responses, and more.

Review each tool’s privacy policy before you use it. If the policy states that the tool and company keep your information private, but then explains that they share data with third parties over whom the provider has limited control, treat the tool as having no meaningful privacy protections.

Sharing sensitive information with third parties, such as your customers’ data, your business practices, or anything else you want to protect, is concerning: once shared, it can go anywhere those third parties choose to send it.

That’s why some organizations stick with the major chatbots, which are under more scrutiny. But don’t give up on third-party tools entirely; some of them can be very useful. Just be sure to weigh the risk of sensitive data exposure against the benefits.

2. Clear Your Chat Histories Periodically

Chat histories are very useful for going back and picking up conversations where you left off, potentially weeks or even months later. The reality is, even with a search function, it can be difficult to go back and find a specific chat when you have too many to look through.

The reason to remove old chats is so that a threat actor cannot read them if they break in with your login credentials or by some other means. If you don’t need the old chats, delete them.

Some chatbots state that they will remove your chats 30 days after you delete them. Because these policies can change frequently, always check the current policy for every tool you use.

Some enterprise subscriptions to chatbots permit your IT department to set policies to automatically delete all chats older than the number of days you specify.

3. Disable Automatic Sharing of Meeting Notes

AI-generated meeting notes are unreliable until a human edits and finalizes them.

If you’ve used AI at all, you’re familiar with the term hallucination. Participants know the context of the meeting; AI has to infer it. AI tools are often designed to estimate and present the most likely meaning of a conversation, even when they’re not certain.

If a meeting is full of words like “it,” “they,” “that,” and “thing,” the AI has to guess what the speakers mean, and it can guess so badly that the summary is inaccurate. Sometimes the notes capture a meaning that is the exact opposite of what was actually discussed.

A key step is to disable the automatic sharing of meeting notes after the meeting finishes. The notes must always be reviewed by a human, preferably you, so any mistakes in the summary can be corrected before it goes out. People may make important decisions based on that summary. Meetings capture assigned and accepted tasks, the status of decisions, and other key information, so it’s essential to confirm the accuracy of the summaries.

Some organizations have elected not to record meetings at all, both to protect the privacy of the discussion and to prevent inaccurate summaries from leaving the organization. If they do have AI take notes, they think twice before sending them to someone outside the organization. Once meeting notes or a summary containing misinformation leave your hands, you have no control over where that information goes.

4. Anonymize Member or Client Information When You Give Information to AI

For example, if you’re drafting a sensitive email to someone who’s upset, you might substitute fictitious names for the person’s real name and the organization’s name, just in case there’s an information leak. Anonymization can be very simple: just use the name “Jim” wherever you would normally use “Tom.” This one’s up to you, but some people sleep better at night knowing they didn’t put their customer’s actual name into an AI tool.

Then, after you finish tuning up your correspondence and before you send the message or document, simply do a find-and-replace, outside of the AI tool, to restore the real names of the person and the company.
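To make the idea concrete, here is a minimal Python sketch, for illustration only, of that swap-and-restore workflow. The names and the ALIASES mapping are hypothetical; this is just one possible way to script the two extra steps, not a required tool.

# Minimal sketch (illustrative only): swap real names for placeholders before
# pasting text into an AI tool, then restore them afterward, outside the tool.
# All names below are hypothetical.

ALIASES = {
    "Tom": "Jim",                    # hypothetical person's name
    "Acme Widgets": "Example Corp",  # hypothetical organization name
}

def anonymize(text: str) -> str:
    """Replace real names with fictitious placeholders before using an AI tool."""
    for real, alias in ALIASES.items():
        text = text.replace(real, alias)
    return text

def restore(text: str) -> str:
    """Swap the placeholders back to the real names, outside the AI tool."""
    for real, alias in ALIASES.items():
        text = text.replace(alias, real)
    return text

if __name__ == "__main__":
    draft = "Dear Tom, thank you for your patience with the Acme Widgets order."
    safe_draft = anonymize(draft)   # paste this version into the AI tool
    print(safe_draft)               # "Dear Jim, ... the Example Corp order."
    edited = safe_draft             # imagine this text came back from the AI tool
    print(restore(edited))          # real names restored before you hit send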

Many people forgo anonymization most of the time because it adds two extra steps, but they use it in special cases. Keep in mind that changing people’s and organizations’ names might still not be enough to anonymize the discussion if you enter a unique event, location, project name, or another bit of context that ties back to the actual person or organization.

5. Disable the AI Model’s Training Features in the Settings

The most common concern I hear from business executives is that their organization’s sensitive information will leak into the public domain. The term “training” refers to a large language model learning from your chats. If you provide information such as a customer list and training is disabled, the chatbot should not retain your sensitive information or surface it to another user at another company, unbeknownst to you, anywhere on the planet.

Most chatbots allow you to disable learning or training based on the information you enter, and sometimes the training setting is “off” by default.

Disabling training typically means your data is not used to improve the public AI model. There is no guarantee that data isn’t stored, reviewed by a human, or exposed through a security incident.

6. Always Use Strong Passwords and Multi-Factor Authentication on All of Your AI Accounts

If a stranger or other unauthorized party were able to log in to your chatbot account, they could read all your saved chats and learn a lot about you and your organization. They could craft fraudulent email messages so convincing that you or members of your team would fall for them without hesitation. Threat actors could also use your chatbot in unethical ways that would appear to come from you, and you could get locked out of your account for misbehavior. Another risk is that threat actors are designing tailored prompts that cause chatbots to bypass their alignment boundaries. Furthermore, attackers can use compromised chatbot accounts as a trusted pathway into your systems and data. Just as you benefit from AI’s power, attackers can use your AI’s power against you.

As with any website or service, use the strongest sign-in protection the chatbot supports. A password alone is insufficient. Passwordless multi-factor authentication is usually the strongest option available; it relies on your phone, a fingerprint, facial recognition, a physical USB security key, or another method that doesn’t require entering a password but still uses more than one factor.

If the service doesn’t support passwordless login, using an authenticator app on your phone with number matching is sometimes the next best option.

If an authenticator is not available, use a text or email message as your second factor. It is far better than having no multi-factor authentication.

Remember that authentication protection, no matter how advanced, is not immune to the techniques threat actors use to bypass MFA. Be wary of unexpected login prompts; they may be attempts by a threat actor to gain access through you.

Conclusion

Those are some basic AI safety tips for leaders. These are all very simple to accomplish, and there’s a good chance you’re already doing most or all of them. Please forward this to your friends so that they can make sure they’re following these steps too.

About the Author

Mike Foster, CISSP®, CISA®
Cybersecurity Consultant and Keynote Speaker
📞 805-637-7039
📧 mike@fosterinstitute.com
🌐 www.fosterinstitute.com

Mike Foster is a leading cybersecurity consultant with decades of experience helping organizations across North America secure their digital assets. He holds CISSP® and CISA® certifications and is the author of The Secure CEO. As the founder of The Foster Institute, Mike has delivered over 1,500 keynote presentations and consulting engagements, equipping executives and IT leaders to strengthen their cybersecurity posture and defend against evolving threats.