Stage 04 · Intelligence · By Taim Al-Bakri · May 10, 2026 · ~9 min read

Why BI is hard in Canadian healthcare.

You have four sites. Leadership wants one number per KPI. Site A counts a patient by chart number; Site B counts by visit. The quarterly report contradicts itself every quarter. This is a solvable problem — but it’s not a software problem.

We work in healthcare a lot. Primary care networks, specialist clinics, long-term care operators, community health centres. The scale varies; the problems don't. Every healthcare BI engagement we walk into has some version of the same four issues. They're not exotic. They're not caused by bad people or bad technology. They're caused by the nature of healthcare operations in Canada: multi-site, multi-EMR, high regulatory overhead, and chronically under-resourced data teams.

This piece names those four patterns plainly, explains what the fix looks like, and answers the questions we get on every fit call. No vendor pitches. No acronym soup.

The four patterns we keep seeing

Multi-site definition drift

Site A counts patients by chart number. Site B counts by visit, because that's what their EMR makes easy to export. Site C has a hybrid approach that made sense when they set it up three years ago and nobody's touched it since. Nobody agreed on a definition because nobody had a reason to — until leadership asked for a consolidated patient count and got three different numbers from three sites that collectively serve the same geography.

This gets worse when multiple EMR vendors are in play. OSCAR and Accuro store data differently, export differently, and use different field names for conceptually identical things. patient_id in one system and chart_no in another might both be the canonical identifier — or they might not be. You don't know until you trace it.
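The "trace it" step above can be made concrete. A minimal sketch, assuming hypothetical column names per system (the real ones have to be verified against your actual exports), of normalizing each EMR's identifier into one canonical key before anything downstream consumes the data:

```python
# Hypothetical sketch: map each EMR export's identifier column to one
# canonical patient key. Column names are illustrative -- trace your own
# exports before trusting any of them.

CANONICAL_ID_COLUMN = {
    "oscar": "patient_id",   # assumption: confirmed by tracing the OSCAR export
    "accuro": "chart_no",    # assumption: confirmed separately -- never guessed
}

def to_canonical(source_system: str, row: dict) -> dict:
    """Return the row with a single canonical_patient_id field added.

    Prefixing with the source system keeps identifiers distinct per system;
    linking the *same* patient across systems is a separate record-linkage
    problem and is deliberately out of scope here.
    """
    id_col = CANONICAL_ID_COLUMN[source_system]
    return {**row, "canonical_patient_id": f"{source_system}:{row[id_col]}"}

row = to_canonical("accuro", {"chart_no": "A-1042", "visit_date": "2026-04-01"})
```

The payoff is that the mapping lives in one named table rather than in whoever last opened the export, which is exactly the kind of decision the definition workshop below should write down.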

The problem isn't the data. The data is fine; it's internally consistent per system. The problem is that nobody has ever sat down across all four sites and agreed on one definition per metric — written down, versioned, and enforced at the model layer rather than in someone's head. That conversation feels bureaucratic until you've shipped a quarterly board report that contradicts itself and spent a week in damage control.

The cheapest place to define what a "patient encounter" means is before you build anything. The most expensive place is in a board meeting where two slides disagree.

EMR exports that aren't reports

OSCAR exports. Accuro exports. PointClickCare exports. None of them summarize. The export gives you a flat file — rows of raw transactional data, one row per event — and then Excel does the rest. Badly.

What this looks like in practice: one person downloads the export, opens it in Excel, applies a set of manual filters and pivot tables they built eighteen months ago, and produces a summary they email to leadership. That person is the single point of failure for your entire reporting pipeline. When they're on vacation, nobody gets numbers. When they leave, institutional knowledge walks out with them. When they make a mistake — a filter applied wrong, a column that shifted position in the latest export — it ships to leadership before anyone notices.

We see this at organizations that have been operating for years, are genuinely well-run clinically, and have no idea their data infrastructure is this fragile. The fragility is invisible until it fails at the worst possible moment.

PHIPA and PIPEDA are not optional constraints

Row-level security isn't a nice-to-have. In a multi-site healthcare context, it's a foundational requirement: site managers should see their site only; directors might see a cluster of sites; executives should see across the network; auditors need to verify what was accessed and when. These aren't theoretical requirements — they're the baseline for responsible data governance under PHIPA, and for some organizations PIPEDA as well.

The mistake we see constantly is designing the data model first and trying to bolt security on afterward. It's a miserable retrofit. The security architecture has to inform the model design from day one: how dimensions are structured, how the user-to-site mapping is stored, where the RLS filter expressions sit. If you build a beautifully clean semantic model and then realize the security layer requires restructuring the fact table, you're doing two weeks of work twice.

Audit logging matters too. If a privacy officer asks "who accessed patient encounter data for Site C in Q3?" you need to be able to answer that. Power BI's activity log can do this when configured correctly. It doesn't happen by default; it has to be designed in.

Board-prep that burns the team

The most common version of this: one person, usually a clinic manager or a senior admin, spending a full week before every board meeting manually assembling the deck. They pull exports from multiple systems, build pivot tables, copy numbers into slides, format charts, check the math, send it to leadership for review, incorporate feedback, recheck the math, and do it again next quarter. Every quarter. Predictably. On a schedule that has never changed in three years.

This is not a people problem. The person doing it is usually excellent at their job. It's a process problem: a process that should have been automated is being run manually because nobody built the pipeline. A correctly built semantic model with automated PDF export can compress that week to a scheduled job that runs overnight. The slide deck gets assembled from live data. The manager's week stays focused on clinical operations.

The total cost of that manual board-prep process, across a network of four sites, over three years, is not small. We've never had a client calculate it before we walked in. They're always surprised when they do.

What the fix actually looks like

The solution isn't a new EMR. It isn't a different dashboard tool. It's a governed semantic model with one definition per KPI, row-level security at the model layer, and an automated report generation pipeline.

Here's what that means concretely. You start with a definition workshop: half a day, the right people in the room across all sites, whiteboarding what each KPI actually means. What is a patient encounter? What counts as an active patient? What's the denominator for your panel size calculation? These feel like obvious questions; they're not. Get them on paper, get agreement, and version that document. That document is the spec for everything that follows.

The semantic model encodes those definitions. Not in a spreadsheet. Not in a Power BI report that one person built. In a shared model that every report inherits from. When the definition of "active patient" changes — and it will change — you update it in one place and every report updates automatically. No hunting through twenty Excel files to find where someone hardcoded the old filter logic.
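To make the "one place" idea concrete: in a real Power BI deployment the definition lives in the semantic model as a DAX measure, but the principle is language-independent. An illustrative Python sketch, with an assumed 18-month activity window standing in for whatever your workshop actually agrees on:

```python
# Illustrative only: "one definition per KPI" expressed as a single shared
# function every report calls. The 18-month window is an assumption, not a
# recommendation -- your definition workshop sets the real value.

from datetime import date, timedelta

ACTIVE_WINDOW_DAYS = 540  # assumption: "active" = seen in the last ~18 months

def is_active_patient(last_encounter: date, as_of: date) -> bool:
    """The one, versioned definition of an active patient."""
    return (as_of - last_encounter) <= timedelta(days=ACTIVE_WINDOW_DAYS)
```

When the window changes, `ACTIVE_WINDOW_DAYS` changes in one file and every consumer inherits it — the same behaviour a shared semantic model gives every downstream report.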

Row-level security goes into the model at build time, not afterward. The user-to-site mapping lives in a table the model references. Adding a new site means adding a row to that table, not rebuilding the security architecture.
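The shape of that mapping table is simple enough to sketch. In Power BI this is a table in the model plus a DAX filter keyed on the signed-in user; the Python below only illustrates the structure, and every user, role, and site name is hypothetical:

```python
# Sketch of the user-to-site mapping behind row-level security. All names
# are hypothetical. In Power BI the equivalent is a mapping table plus a
# role filter on the signed-in user's identity.

USER_SITES = {
    "manager.a@example.ca": {"site_a"},                            # site manager: one site
    "director@example.ca":  {"site_a", "site_b"},                  # director: a cluster
    "exec@example.ca":      {"site_a", "site_b", "site_c", "site_d"},  # network-wide
}

def visible_rows(user: str, rows: list[dict]) -> list[dict]:
    """Filter fact rows down to the sites this user may see."""
    allowed = USER_SITES.get(user, set())   # unknown user sees nothing
    return [r for r in rows if r["site"] in allowed]

facts = [{"site": "site_a", "encounters": 120},
         {"site": "site_c", "encounters": 95}]
```

Onboarding a fifth site means adding entries to `USER_SITES` (or rows to the equivalent model table), not touching the filter logic — which is the whole point of putting security in the model at build time.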

The automated pipeline pulls from EMR exports on a schedule, loads them into a staging layer, and feeds the semantic model. The board-prep PDF is a scheduled job. The manager gets their week back.
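The staging step above can be sketched in a few lines, assuming nightly CSV drops from each EMR into a shared directory. The directory layout and field names are hypothetical, and a production pipeline would add validation, logging, and failure alerts on top of this:

```python
# Minimal sketch of the export -> staging step, assuming nightly CSV drops.
# Paths and columns are hypothetical; production code needs validation,
# logging, and alerting that this sketch omits.

import csv
import pathlib

def load_staging(export_dir: str) -> list[dict]:
    """Read every site's nightly CSV export into one staging list."""
    staged = []
    for path in sorted(pathlib.Path(export_dir).glob("*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path.name  # keep lineage for auditability
                staged.append(row)
    return staged

# A scheduler (e.g. a nightly cron job) would call load_staging, refresh the
# semantic model, and then trigger the board-prep PDF export.
```

Keeping the source filename on every staged row is a small habit that pays off the first time a privacy officer or an auditor asks where a number came from.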

A properly built Power BI environment for a four-site healthcare network can run for years with minimal maintenance. The hard work isn't the technology — it's the three weeks before that, getting four site directors to agree on what a patient encounter means.

Questions we get on every healthcare fit call

"We use [EMR vendor]. Will Power BI connect?"

Almost certainly yes. OSCAR, Accuro, and PointClickCare all have export paths — we've built pipelines from CSV extracts, ODBC connections, and REST API endpoints. The approach depends on what your EMR vendor supports and what your IT environment allows. If your EMR has any export capability at all, we can route it into a staging layer. The more interesting question is usually about refresh cadence: do you need data updated daily, hourly, or near-real-time? Most operational healthcare dashboards are well-served by a nightly refresh. Near-real-time requires a different (more expensive) architecture, and we'll tell you on the call whether you actually need it.

"Do you handle PHIPA compliance?"

We design with PHIPA principles in mind: minimum necessary access, audit logging, row-level security baked into the model before a single dashboard ships. Every access control gets documented so a privacy officer can verify it. But compliance is your responsibility and your privacy officer's. We're not legal counsel and we don't sign off on your privacy impact assessments. What we deliver is the technical controls, documented clearly enough that an audit can verify them. That's the part we can own.

"Can the data stay in Canada?"

Yes. We deploy to Azure Canada Central or Canada East by default. No data crosses the border unless you explicitly request a non-Canadian region. For organizations with specific data residency requirements, we document the storage location as part of the technical controls package.

"How small is too small?"

A single-site clinic with one EMR and a team of ten is usually better served by a tightly built operational dashboard than a full semantic-model engagement. The governance overhead of a multi-site architecture doesn't pay off at that scale. We'll tell you on the fit call which side of that line you're on — and if the answer is "you don't need us for the full engagement," we'll say that directly.

The inflection point is typically two or more sites, or a single site where the manual reporting burden has become a real operational cost. If you're spending multiple person-days per month on data assembly that should be automated, you're past the threshold.

Healthcare BI problems are almost never data problems. The data exists. It's being captured by your EMR. It's being exported. The problems are definition problems, governance problems, and process problems. The technology is the easy part. Getting four site directors to agree on what a patient encounter means — that's where the work actually is. Everything we build is downstream of that conversation.

Taim Al-Bakri

Leads BiWize's Intelligence practice. Has shipped multi-site Power BI environments for healthcare, manufacturing, and banking clients. More about the team →

Multi-site reporting that actually agrees with itself?

30-minute fit call. We'll review your EMR stack, your definition problems, and what a working cross-site scorecard looks like.

Book a fit call →