Quebec’s Law 25 and AI: what’s changing for SMBs

“We use a Canadian server, so we’re compliant.” That’s the most common phrase we hear when discussing AI and Law 25. It’s also the furthest from the truth.

Quebec’s Law 25 goes far beyond data residency: it changes how you must manage every AI project that touches personal information. Here are the four blind spots we see most often.

1. Explicit consent

If your AI solution processes personal information (customer emails, medical records, call transcripts), you need clear, free, and informed consent for that specific use. A general consent to your privacy policy doesn’t cover AI. You have to name it: “We use a language model to categorize your requests.”
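One practical way to honor this is to record consent per named purpose rather than as a single blanket flag. The sketch below is a minimal illustration of that idea; the field names and the `ai_request_categorization` purpose string are our own assumptions, not anything prescribed by Law 25.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One consent entry per person, per named purpose (illustrative schema)."""
    person_id: str
    purpose: str            # e.g. "ai_request_categorization"
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def has_consent(records: list[ConsentRecord], person_id: str, purpose: str) -> bool:
    """True only if this person granted consent for this exact purpose.
    A blanket 'privacy_policy' consent does not satisfy an AI-specific purpose."""
    return any(
        r.granted and r.person_id == person_id and r.purpose == purpose
        for r in records
    )

records = [
    ConsentRecord("c-001", "privacy_policy", True),
    ConsentRecord("c-001", "ai_request_categorization", False),
]

# General consent alone does not cover the AI use:
assert not has_consent(records, "c-001", "ai_request_categorization")
```

Storing the purpose as data also gives you an audit trail: when a regulator asks what a person consented to and when, the answer is a query, not an archaeology project.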

2. Privacy impact assessments

Law 25 requires a PIA for any project to acquire, develop, or overhaul an information system that involves personal information — which covers most AI projects — and it must be done BEFORE deployment. It’s not a formality: it’s a documented analysis of risks, mitigation measures, and decisions made. If you’re audited and this analysis doesn’t exist, you have a problem.

3. The right to erasure vs trained models

If you train or fine-tune a model with client data, and a client requests erasure, you also need to remove their data’s influence from the model. In practice, this often means retraining. Better to design with this constraint from day one.
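Designing for this from day one mostly means one thing: every training example stays traceable to its source client, so an erasure request maps to a concrete subset you can drop before retraining. A minimal sketch of that idea, with made-up client IDs:

```python
# Tag every training example with its source client so an erasure
# request translates into a filter, not a forensic search.
def build_training_set(
    examples: list[tuple[str, str]],   # (client_id, text) pairs
    erased_clients: set[str],
) -> list[str]:
    """Return the texts of all clients who have NOT requested erasure."""
    return [text for client_id, text in examples if client_id not in erased_clients]

examples = [
    ("c-001", "ticket about billing"),
    ("c-002", "ticket about login"),
]
clean = build_training_set(examples, erased_clients={"c-001"})
# Retrain or fine-tune on `clean`; c-001's data no longer contributes.
```

The filter itself is trivial; the discipline is keeping the `client_id` attached through every preprocessing step so the filter is still possible a year later.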

4. The role of the PIPO

The Personal Information Protection Officer isn’t just a title on an org chart. They must be consulted on AI projects, and their title and contact information must be published on your website (typically in your privacy policy).

What to do

Before launching an AI project that touches personal information: map the data being processed, conduct a PIA, update consent notices, document the decision chain, and involve the PIPO. It’s less heavy than it sounds when done at the diagnostic phase. It’s unmanageable when added at the end.
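The mapping step can be as simple as one row per data flow, so the PIA, the consent notice, and the PIPO review all start from the same facts. A minimal sketch, where every field name is our own convention rather than anything mandated by Law 25:

```python
# Minimal data-mapping register: one entry per data flow touched by the AI project.
data_map = [
    {
        "data": "support emails",
        "contains_pi": True,
        "ai_use": "LLM-based request categorization",
        "consent_purpose": "ai_request_categorization",
        "pia_done": False,          # blocker: PIA required before deployment
        "pipo_consulted": False,    # blocker: PIPO must review the project
    },
]

# A flow involving personal information may not launch until both boxes are checked.
blockers = [
    row for row in data_map
    if row["contains_pi"] and not (row["pia_done"] and row["pipo_consulted"])
]
```

Even a spreadsheet version of this register does the job; the point is that “done at the diagnostic phase” means these rows exist before the first line of integration code.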

Have an AI project?

Let's talk. 30 minutes to understand your context and identify the real opportunities.

Start a conversation