Using AI at CWA Canada: A Guide to Staying Safe & Ethical

This framework is intended to guide CWA Canada leaders, members, and staff.

Why We Need an AI Strategy

As artificial intelligence reshapes the digital lives of Canadians, CWA Canada is standing at the forefront to ensure these technologies serve workers rather than exploit them. This guide provides a practical framework for using AI tools while defending the integrity of our craft and the rights of every media worker in our union.

Human-sourced information is the foundation of public trust in news and labour advocacy. As World Press Freedom Canada warns, we must “break the stranglehold that a handful of foreign, mostly American mega-corporations maintain on the digital lives of Canadians.” 

Our AI strategy is a worker-friendly lens designed to ensure that technological advancement does not come at the cost of our job security, health, or professional standards. These guidelines are here to support you in your work, ensuring that we continue to provide the fact-based journalism and strong union representation our communities depend on.

The Core Principle: Humans in the Lead

Our union operates on the “Human-in-the-loop” principle. This means a person must always be the primary driver of the creative and editorial process. AI “output” — which can include text, summaries, transcriptions, translations, images, and data analyses — lacks the empathy, ethical nuance, and accountability of a union member.

The Golden Rule: AI is the tool, but the human is always the creator and the one responsible for the final work.

AI does not replace human judgment, editorial responsibility, or ethical decision-making. Whether you are using AI to sharpen a headline or analyze a dataset, the final product is your responsibility.

The “Green Light” List: Safe Ways to Use AI

When used responsibly, some AI tools can automate routine tasks. The following low-risk tasks are permitted:

  • Brainstorming and Ideation: Generating suggestions for story ideas or document structures.
  • Outlining: Creating initial frameworks for reports, articles, or internal documents.
  • Drafting Internal Content: Preparing first drafts of non-sensitive internal materials for review.
  • Summarizing: Creating concise versions of public or non-sensitive materials.
  • Editing for Clarity: Using tools to check grammar, structure, or style consistency.
  • Administrative Support: Creating meeting agendas or first drafts for internal review.

The “Red Light” List: Prohibited and Risky Uses

To protect our colleagues and the integrity of the bargaining unit, certain uses of AI are strictly prohibited. These restrictions are in place to safeguard our collective rights:

  • Final Decisions About People: Never use AI to make final determinations regarding complaints, membership status, grievances, or disciplinary actions. Using AI for discipline is a direct threat to worker rights.
  • Unverified Publishing: Never publish content externally without a full human review for factual accuracy, bias, and tone.
  • Relying on AI as Fact: Never treat AI output as a statement of fact. You must verify and cross-reference all AI-generated claims against trusted human sources.
  • Deception and Fake Content: Never use AI to create misleading, fabricated, or deceptive content.
  • Impersonation: Never use AI to mimic the voice, likeness, or writing style of a colleague, source, or any real person without explicit authorization.

Protecting Our Information: What Not to Type into AI

Data entered into general AI tools is often used to train models owned by foreign tech giants, meaning your input is no longer private. To protect our union’s strategic advantage and member privacy, the following sensitive information must never be entered into unapproved tools:

  • Member Records: Including membership history, contact details, or identifiable personal information.
  • Grievance and Labour Relations: Details regarding active grievances, complaints, or personnel matters.
  • Bargaining Positions: Confidential negotiation strategies, “bottom lines,” or unpublished strategic plans.
  • Legal Material: Legal advice, confidential strategies, or draft legal documents.
  • Internal Financials: Confidential planning documents, donor info, or sensitive budget details.

Quick FAQ: Your Questions Answered

Q: Will AI take my job?

A: CWA Canada is fighting for strong contract language to protect you. We view AI as a tool for production, not a replacement for your expertise. Our proposed AI Article states: “The Employer agrees that AI will not be used in a manner that results in the loss of employment of bargaining unit employees.”

Q: Do I have to tell people if I used AI?

A: Yes. Disclosure is mandatory whenever AI’s contribution is “materially significant” — for example, when an AI tool was used to summarize a 50-page report for your final article. We follow the “No Surprises” rule: audiences should never have to question whether something we’ve produced is real or AI-generated.

Q: What if the AI makes a mistake?

A: You are accountable for the final output, just as you would be for any other work you produce.

What to Do Next: A Three-Step Checklist

Before starting any AI-assisted project, run through this checklist:

  • Check the Tool: Verify what the software can access and the vendor’s transparency.
  • Check the Data: Ensure you are not uploading personal member data, grievance details, or sensitive bargaining strategies.
  • Review Every Word: Perform a meaningful human review. Fact-check every claim and ensure the tone meets our standards before sharing or publishing.

Authors: George Butters, Hayley Juhl, and Lois Kirkup (April 2026)
