In addition to the clauses typically included in an AI usage policy, such as data privacy requirements, acceptable use guidelines, and compliance with privacy regulations like GDPR or CCPA, several essential clauses are often overlooked. Review the list below to see whether any are missing from your policy:
Tool Approval: You could include a procedure for approving AI tools before they’re used, especially for work that involves private or company-sensitive information. For example: “Before using a new AI tool, check with the security or IT team… Make sure it’s on the approved list.”
Human Accountability: Consider stating that the person, not the AI, is ultimately responsible for decisions made and documents sent out. AI suggestions should be reviewed by someone who understands the context, especially since AI is prone to hallucinating, to telling the user what they want to hear, and to being out of alignment with your culture. For example, “If an AI tool writes an email or gives advice… read it before sending it out or acting on it.”
Confidentiality Protection: Remind employees not to share confidential company or customer information with AI platforms unless approved. For example: “Don’t copy and paste customer names, contracts, or financial reports into any AI tools unless explicitly approved in writing.”
Incident Reporting: To help drive home the seriousness of privacy, tell them to notify you with wording such as “If an AI tool shares the wrong info or leaks something by accident… report it like you would a security breach.”
Usage Boundaries: You could state which activities AI may be used for (e.g., summaries, brainstorming) and which are off-limits (e.g., signing contracts, making hiring decisions). For example: “AI can help draft ideas, summarize documents, and produce narratives… but don’t use it to make final calls on people or legal matters.”
Work Documentation: Consider telling people to save a copy (or cc someone) of all AI-generated work outputs, especially if they’ll be used in decisions or presented to clients. For example, you could say, “If an AI tool creates something you plan to use or send… save a copy of the input and output so we can check it later if needed.”
Ethical Guidelines: Include something about the ethical use of AI tools, such as: “Only use AI tools in ways that are ethical, fair, and respectful of others. Just because a tool can do something doesn’t mean it should.”
Risk Assessment: You could also get them to think a little more by saying, “Before using AI for any task… ask yourself: could this create bias, mislead someone, or share something private? Ask us if you have any doubt.” (You might want to replace “us” with a specific person.)
Harassment Prevention: Address the use of AI for harassment or anything that violates someone’s rights. For example: “Never use AI tools to create or spread harmful, threatening, or harassing content. Report it right away if you see it.”
Societal Impact: You could also include text to get your team thinking about AI’s effects on people and society. For example, “When using AI, ask whether it could hurt someone’s rights or reputation or lead to larger problems in society… If in doubt, stop and ask.”
Mandatory Training: Training is essential for responsible AI use. Include a clause stating that employees must participate in training on the topic. You could phrase it: “We’ll offer training to help you understand how to use AI safely and fairly… and you must participate.”
Approved Tools: Name the AI tools you have approved. You might say, “The only allowed AI tool(s) at (your organization’s name) is/are (tool or tools), used with the identity and credentials you’ve been provided by (your organization’s name). No other versions and no other AI tools are permitted; they are expressly prohibited unless explicitly approved ahead of time by (person’s or department’s name). Don’t sign up for AI tools using your work email or passwords unless approved.”
Usage Monitoring: Some tools help your IT team track and block access to AI services. You might consider adding some accountability, such as: “AI usage can be so dangerous that we keep records of which tools you use, so we can refer to that information later if there are any problems.” (This is an example of when it is essential to ask your organization’s legal counsel whether monitoring employees’ web activity is permissible.)
Data Ingestion: Caution them about the ingestion of data. An example would be, “Be aware that AI tools with document or email access permissions may ingest, index, and learn from content you create, including saved documents such as spreadsheets and letters, and even unsent email drafts. Even if you delete content later, the information may remain accessible through AI systems that have previously processed it. Never enter sensitive, confidential, or potentially problematic content into any document or email draft, even temporarily. If you follow the once-common practice of ‘venting’ by typing emotion-filled documents you never intend to share, write them by hand rather than in documents or email messages.”
Meeting Privacy: Be sure to address that online meetings are no longer completely private due to AI. Something like, “Know that meetings are no longer private spaces to have conversations. Content discussed in meetings may be captured in AI-generated transcripts, summaries, or recordings, making even previously casual conversations potentially discoverable in legal proceedings. Avoid discussing sensitive personnel matters, confidential information, or ‘off-the-record’ topics in meetings where AI transcription or summarization tools are active. With some commonly used operating systems and tools, this recording is always enabled and difficult to block.”
Summary Review: Give guidance on meeting summaries, such as “Disable automatically sending AI-generated meeting summaries to attendees. As the meeting organizer, you must review summaries to ensure accuracy before sending them. AI technology can be prone to hallucinations and errors in transcription, especially if the audio quality is less than optimal. People may use the summaries to make decisions, so the summaries must be accurate.”
Policy Updates: Document that you’ll be updating your policy regularly. You could include “Check this policy at least once a month or when we ask you to. We will update it as new tools, laws, risks, or AI-related situations arise.”
I’m not a lawyer, and this is not legal advice; check with your legal counsel. These are essential clauses that some organizations wish they had included only after experiencing a bad outcome. As you review this list, you may think of other clauses specific to your organization or industry that you want to add.
A solid AI policy is essential. Please forward this to your colleagues and friends so they can make sure their policies include these often-overlooked clauses, too.