We've run enough manufacturing BI engagements — on Dynamics 365, Sage 300, Sage X3, NetSuite, SAP Business One — to recognize the pattern when we walk in. The tools are usually fine. The data is usually adequate. The problems are structural, and they repeat so consistently across shops that we can name them before we've opened the ERP.
Here are the four. If you recognize two or more, the fix is probably closer and cheaper than you think.
The patterns
ERP ≠ reporting
Dynamics, Sage, NetSuite — all of them generate transactions reliably. None of them were designed to answer "which product line is most profitable this quarter, by region, trended over 18 months." That's not a failing; it's a category distinction. The ERP is a transaction ledger. It records what happened. It was built to close a GL, track inventory movements, and cut invoices — not to produce comparative margin analysis across a three-dimensional hierarchy.
The question your ops team actually needs answered requires a separate semantic layer: a model that sits between the raw ERP tables and the dashboards, where you define margin, where you establish the regional hierarchy, and where you decide how 18 months of data gets aggregated for trend analysis. That model doesn't exist by default in any ERP we've seen. Building it is the first thing we do on every manufacturing engagement.
The symptom that points here: your finance team runs a report in the ERP and your ops team pulls a different number from their own spreadsheet, and both claim to be looking at "revenue." They're both right by their own definitions — and that's the problem.
Spreadsheet roulette
Operations runs on a 17-tab Excel file that one person built three years ago and still maintains single-handedly today. They understand the lookup chains; they know which cells are hard-coded; they know that the tab labelled "DO NOT TOUCH" really means it. Nobody else does.
That person goes on vacation. The weekly ops report either doesn't get published, gets published late, or — worst case — gets published by someone who doesn't know which version is current. We've seen manufacturers running Q3 ops meetings off a spreadsheet last updated in Q2. Nobody caught it because nobody else understood the file well enough to check.
This is a single point of failure that almost every mid-market manufacturer we've met can identify immediately when we ask. They know the name of the person. They know the file. They've thought about what happens when that person leaves. They just haven't fixed it yet.
The spreadsheet isn't the problem. The dependency on a single person to maintain it is.
The fix isn't to eliminate spreadsheets. It's to move the business logic out of the file and into a governed model, and let the spreadsheet be what it's good at: flexible ad hoc analysis on top of clean, trustworthy data.
Monthly close takes 8 days when it should take 2
Half of those 8 days isn't accounting work. It's reconciliation work. The GL says one number. The ERP says another. The spreadsheet from the plant manager says a third. Before your controller can close the books, they have to figure out which source is right, why the others diverged, and how to force an agreement.
That reconciliation process is the symptom of a deeper problem: there is no single source of truth. Three systems are each producing their own version of the month, and the close process is really a negotiation between them.
When we build a governed semantic model tied to a single staging layer, the close dynamic changes. The model pulls from one place. The definitions are agreed on before anyone runs a report. When numbers diverge, the question shifts from "which system is right" to "did the data load correctly" — a much faster question to answer. Clients who close in 8 days typically get to 2 within one quarter of going live on a clean model.
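A minimal sketch of what that "did the data load correctly" check looks like in practice: compare control figures captured from the source extract against what actually landed in staging. Table names, column names, and figures here are illustrative, and the staging database is stood in for by an in-memory SQLite instance.

```python
import sqlite3

# Hypothetical staging table after a load from the ERP extract.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_gl_entries (entry_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO stg_gl_entries VALUES (?, ?)",
    [(1, 1200.00), (2, -450.50), (3, 980.25)],
)

# Control figures captured from the source system at export time.
source_row_count = 3
source_amount_total = 1729.75

row_count, amount_total = conn.execute(
    "SELECT COUNT(*), ROUND(SUM(amount), 2) FROM stg_gl_entries"
).fetchone()

# A failed or partial load surfaces here, before anyone argues
# about which report is right.
assert row_count == source_row_count, "row count mismatch: reload the extract"
assert amount_total == source_amount_total, "total mismatch: check the mapping"
print("load reconciled")
```

When the check fails, the load is rerun; when it passes, every downstream number is arguing from the same facts.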
Inventory visibility lags by a week
You know last week's inventory levels. Today's requires a phone call to the warehouse. For a single-site manufacturer, that's annoying. For a distributor or multi-warehouse operation, it's a cash flow problem dressed up as a reporting problem.
Inventory decisions made on week-old data lead to overstock in one location and stockouts in another. Lead times get padded to compensate. Working capital gets tied up in buffer stock that exists because nobody trusts the numbers enough to run lean. The reporting lag is downstream of the same structural problem — no automated, governed pipeline pulling warehouse data into a live model. The WMS updates hourly; nothing consumes that update in a way that decision-makers can see.
A properly built staging layer with automated refresh closes this gap without requiring a new WMS, a new ERP, or any change to how the warehouse operates. The data is already there. It just isn't flowing anywhere useful.
What the fix looks like
The solution is the same across every manufacturing engagement we've run, with variation only in complexity and timeline.
A thin Azure SQL staging layer pulls from the ERP, the WMS, and wherever else the source data lives. It historizes data cleanly — which means you can run 18-month trend analysis without relying on the ERP's native reporting, which often can't look back further than its rollup periods.
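"Historizes cleanly" usually means daily snapshots keyed by an as-of date, so trend queries never depend on the ERP's own rollups. A minimal sketch of the idea, with SQLite standing in for the staging database and hypothetical table and SKU names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Current state as the ERP exposes it: one row per SKU, no history.
conn.execute("CREATE TABLE erp_inventory (sku TEXT PRIMARY KEY, on_hand INTEGER)")
conn.executemany("INSERT INTO erp_inventory VALUES (?, ?)",
                 [("SKU-001", 140), ("SKU-002", 75)])

# Staging keeps one snapshot per refresh, keyed by an as-of date.
conn.execute("""CREATE TABLE stg_inventory_snapshot (
    as_of_date TEXT, sku TEXT, on_hand INTEGER,
    PRIMARY KEY (as_of_date, sku))""")

def snapshot(conn, as_of: str) -> None:
    # Each scheduled refresh appends today's state instead of
    # overwriting it - which is what makes an 18-month trend possible.
    conn.execute(
        "INSERT OR REPLACE INTO stg_inventory_snapshot "
        "SELECT ?, sku, on_hand FROM erp_inventory", (as_of,))

snapshot(conn, "2024-05-01")
conn.execute("UPDATE erp_inventory SET on_hand = 90 WHERE sku = 'SKU-001'")
snapshot(conn, "2024-05-02")

# Trend query: on-hand over time - the live ERP table can't answer this.
rows = conn.execute(
    "SELECT as_of_date, sku, on_hand FROM stg_inventory_snapshot "
    "WHERE sku = 'SKU-001' ORDER BY as_of_date").fetchall()
```

The ERP table only ever knows "now"; the snapshot table is where "trended over 18 months" becomes a one-line query.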
On top of that sits a Power BI semantic model: one definition per metric, agreed on before we write a single DAX measure. Gross margin means the same thing in the CFO's board deck as it does in the plant manager's daily scorecard. Revenue is recognized the same way in both places. Active SKUs are counted by the same rule. The model is the contract — and once it's signed, every dashboard built on top inherits those decisions automatically.
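The "one definition per metric" contract can be sketched as a single definition that every consumer reuses. In Power BI this would be one DAX measure; the Python below is a conceptual stand-in, and the names and figures are illustrative.

```python
# One definition of gross margin, owned by the model, not by any report.
def gross_margin(revenue: float, cogs: float) -> float:
    return round((revenue - cogs) / revenue, 4)

product_line = {"revenue": 500_000.0, "cogs": 340_000.0}

# Both the board deck and the plant scorecard call the same definition,
# so they cannot disagree on what "gross margin" means.
board_deck_margin = gross_margin(product_line["revenue"], product_line["cogs"])
plant_scorecard_margin = gross_margin(product_line["revenue"], product_line["cogs"])
```

The point isn't the arithmetic; it's that the formula lives in exactly one place, so a change to it propagates to every dashboard at once.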
Row-level security means regional GMs see their region. Plant managers see their plant. Finance sees everything. That gets configured once in the model, not replicated across twelve reports.
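Conceptually, row-level security is one filter rule per role, defined centrally in the model (in Power BI, a DAX filter expression attached to a role). A language-neutral sketch of the idea, with hypothetical roles, regions, and plants:

```python
# Hypothetical role-to-filter mapping, defined once at the model level.
ROLE_FILTERS = {
    "regional_gm_east": lambda row: row["region"] == "East",
    "plant_mgr_p1":     lambda row: row["plant"] == "P1",
    "finance":          lambda row: True,  # finance sees everything
}

sales = [
    {"region": "East", "plant": "P1", "revenue": 100},
    {"region": "East", "plant": "P2", "revenue": 250},
    {"region": "West", "plant": "P3", "revenue": 400},
]

def visible_rows(role: str, rows):
    # Every report built on the model applies the same rule automatically;
    # nothing is re-implemented per dashboard.
    return [r for r in rows if ROLE_FILTERS[role](r)]

east_view = visible_rows("regional_gm_east", sales)   # East rows only
finance_view = visible_rows("finance", sales)         # all rows
```

One rule per role, applied at the model, is what makes "configured once, not twelve times" literal rather than aspirational.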
Automated refresh means inventory data updates on a schedule without human intervention. The close process runs off the same model that drives the live operational dashboard — not a separate Excel file that someone reconciles manually.
That's the architecture. It's not exotic. The work is in the definitions and the governance, not in the infrastructure.
Questions we get on manufacturing fit calls
"We're on Sage 300/X3. Does Power BI talk to it?"
Yes. Standard Sage data extracts work for Sage 300. Sage X3 supports direct ODBC. The same approach applies to NetSuite via SuiteAnalytics Connect, Dynamics 365 via Dataverse and Synapse Link, and SAP Business One via its reporting extract. If your ERP has any reporting extract capability — and every ERP we've encountered does — we can route it into a staging layer and build on top of it. The connector is rarely the hard part.
"Do we need a data warehouse, or can Power BI work directly off the ERP?"
For single-ERP shops under roughly 50 staff, a direct connection often works fine. You get clean enough data with acceptable refresh latency, and the overhead of a staging layer isn't justified.
For multi-source environments — ERP plus CRM plus inventory system plus e-commerce plus a spreadsheet or two — a thin Azure SQL warehouse handles the joins between those systems and historizes data in a way that Power BI's native connectors can't do cleanly on their own. It's also cheaper than most people expect. We're talking about a small Azure SQL instance with a handful of automated pipelines, not a Snowflake deployment with a six-figure annual contract.
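The joins that staging layer handles look like this in miniature: ERP open orders against WMS on-hand stock, by SKU, to surface shortages neither system can see alone. SQLite stands in for the Azure SQL instance, and all table names and quantities are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical extracts landed from two source systems into staging.
conn.execute("CREATE TABLE stg_erp_orders (sku TEXT, open_qty INTEGER)")
conn.execute("CREATE TABLE stg_wms_stock (sku TEXT, on_hand INTEGER)")
conn.executemany("INSERT INTO stg_erp_orders VALUES (?, ?)",
                 [("SKU-001", 50), ("SKU-002", 20)])
conn.executemany("INSERT INTO stg_wms_stock VALUES (?, ?)",
                 [("SKU-001", 140), ("SKU-002", 5)])

# The cross-system question: which SKUs have more open demand than stock?
shortages = conn.execute("""
    SELECT o.sku, o.open_qty, s.on_hand
    FROM stg_erp_orders o
    JOIN stg_wms_stock s ON s.sku = o.sku
    WHERE o.open_qty > s.on_hand
""").fetchall()
```

Neither the ERP nor the WMS can run that query on its own, because each only holds half the rows. That join is most of what the "warehouse" actually is.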
"How long until the first useful dashboard?"
Three to four weeks for a starter scorecard on a single ERP: gross margin by product line, inventory position, open orders, on-time delivery. Enough to replace the ops spreadsheet and the close reconciliation in one shot.
Eight to twelve weeks for a multi-source model with executive, operational, and sales-rep layers. That timeline includes the KPI workshop, source audit, staging build, model build, report build, and handoff. Anyone promising a useful dashboard in two days is selling a template connected to a demo database. That's not a model; that's a screenshot with live data.
"Will my plant managers actually use it?"
That's the Stage 05 question, and it matters more than the Stage 04 question. A dashboard that nobody opens is worth less than the spreadsheet it replaced.
This is why training and handoff are built into every Stage 04 engagement we run. Operational dashboards are co-designed with the people who will live in them — not designed for them and then handed over. Sessions are recorded as Loom videos so the ops team can re-watch onboarding without scheduling a call with us. Adoption is the deliverable, not the artifact.
The adoption question also has a data answer: plant managers use dashboards that show them information they trust and can act on the same day. If the dashboard lags a week, they'll go back to the phone call. Fix the data freshness first; usage follows.
Why manufacturing specifically
The four patterns above aren't unique to manufacturing — service businesses and distributors run into the same walls. But manufacturers have them in a concentrated form.
Complex source systems: ERP, CRM, WMS, sometimes MES. Complex organizational structures: multi-plant, multi-region, multi-product-line, often multi-currency. And close cycles with real financial consequences if the numbers are wrong — because inventory misstated by $200K isn't just a reporting problem, it's a covenant breach, an audit finding, or a working-capital gap that someone has to explain to the bank.
The fix is almost always the same: one governed semantic model, one definition per metric, row-level security, automated refresh. Three to twelve weeks of work depending on complexity and source-system count. After that it runs itself, and the next report takes days instead of months because everything it needs is already defined.
We've run this pattern enough times that we can walk into most manufacturing environments, ask four questions, and tell you with reasonable confidence which of the four problems is costing you the most. That's what the fit call is for.