Field Note · May 3, 2026 · 9 min read

What an Integration Assessment Actually Maps

A Phase 1 Integration Assessment is not a strategy deck. It's a structured map of the operational, data, identity, and governance reality that determines whether AI integration will work in your environment. Here's what we actually map, and why each layer matters.

Most AI strategy work we get called in to evaluate is a deck of opportunities, a maturity assessment, and a roadmap. The opportunities are plausible. The maturity score is a number. The roadmap is sequenced. The whole package looks like the work of a serious firm.

It is also functionally useless for actually integrating AI into the business.

The gap is not in the analysis. The gap is in what the analysis is grounded in. Strategy decks are grounded in interviews and frameworks. Integration assessments are grounded in systems, data, identity, governance, and the operational reality of how decisions actually get made and how work actually gets done. The first produces direction. The second produces a buildable plan. They are different artifacts, requiring different work, with different fees because they take different effort to produce.

We run a Phase 1 Integration Assessment as the entry point to most engagements. It takes four to six weeks and produces a single deliverable: a complete operational map plus a phased implementation plan that's ready to build against. Here's what we actually map, and why each layer matters more than it seems.

The Systems Map

Most AI initiatives stall because the team designing them doesn't have a complete picture of where work actually happens. The CRM is named in every interview. The data warehouse is on the slide. What gets missed is the shadow systems — the spreadsheet that controls the quoting process, the Slack channel that functions as the actual approval workflow, the email thread that holds the institutional memory for a decision made eighteen months ago.

The systems map is a complete inventory of every place work happens, ranked by how load-bearing each system actually is. We get this by structured interviews with operators (not just executives), by access to the actual systems, and by tracing one or two real workflows end-to-end from initiation to closure. The output is a layered diagram: systems of record, systems of engagement, shadow systems, and integration surfaces between them.

This map matters because it answers a question every AI integration depends on: when we put AI into this environment, what does it touch, what does it bypass, and what's brittle about the path between them? A RAG system that retrieves from SharePoint but ignores the Confluence wiki where the engineering team actually documents things will produce wrong answers in production. An AI assistant that lives in a separate web app but ignores the Slack channel where the team actually communicates will get bypassed by month three. The systems map prevents these failures by making them visible before the build starts.
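To make the layered inventory concrete, here is a minimal sketch of how an entry in the systems map might be captured. The layers, field names, and example systems below are illustrative assumptions, not our actual template:

```python
from dataclasses import dataclass, field
from enum import Enum


class Layer(Enum):
    SYSTEM_OF_RECORD = "system of record"
    SYSTEM_OF_ENGAGEMENT = "system of engagement"
    SHADOW_SYSTEM = "shadow system"


@dataclass
class SystemEntry:
    name: str
    layer: Layer
    load_bearing: int  # 1 (peripheral) to 5 (the business stops without it)
    owner: str         # team or person accountable for the system
    integrates_with: list[str] = field(default_factory=list)  # integration surfaces


# A fragment of what the map might contain once shadow systems are traced.
inventory = [
    SystemEntry("Salesforce CRM", Layer.SYSTEM_OF_RECORD, 5, "RevOps",
                integrates_with=["data warehouse", "quoting spreadsheet"]),
    SystemEntry("Quoting spreadsheet", Layer.SHADOW_SYSTEM, 4, "Sales ops analyst",
                integrates_with=["Salesforce CRM"]),
    SystemEntry("#deal-approvals Slack channel", Layer.SHADOW_SYSTEM, 4, "VP Sales"),
]

# Rank by how load-bearing each system actually is.
for entry in sorted(inventory, key=lambda e: e.load_bearing, reverse=True):
    print(f"{entry.load_bearing}  {entry.layer.value:22}  {entry.name}")
```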

The Data Landscape

Data assessment is where most strategy decks substitute aspiration for reality. The deck says "we'll build a RAG system over the customer knowledge base." The reality is that the customer knowledge base is fragmented across six systems with different ownership, different update cadences, different permission models, and different content quality.

Our data landscape pass produces a per-source profile for every data source in scope: where it lives, what it contains, who owns it, how often it changes, what permissions structure it uses, what the content quality is, and what the consequences are of stale data being retrieved. The profile is opinionated: we explicitly flag sources we'd recommend not building against in the first phase, and explain why.
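As an illustration, a per-source profile can be held as a small structured record. This is a sketch; the fields mirror the list above, and the example values are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class DataSourceProfile:
    name: str
    location: str            # where it lives
    contents: str            # what it contains
    owner: str               # who owns it
    update_cadence: str      # how often it changes
    permission_model: str    # what permissions structure it uses
    content_quality: str     # "high", "mixed", or "poor"
    staleness_risk: str      # consequence of stale data being retrieved
    build_in_phase_1: bool   # our recommendation for the first phase
    rationale: str           # why we recommend for or against it


confluence_wiki = DataSourceProfile(
    name="Engineering Confluence wiki",
    location="Confluence Cloud",
    contents="runbooks, architecture decisions, postmortems",
    owner="Platform engineering",
    update_cadence="daily",
    permission_model="space-level groups synced from the identity provider",
    content_quality="mixed",
    staleness_risk="a deprecated runbook gets retrieved and drives wrong operational guidance",
    build_in_phase_1=True,
    rationale="high usage, clear ownership, permissions can be mirrored per space",
)
```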

Three findings recur across most assessments. First, the source the team most wants to use is usually the one in the worst shape. Second, the permissions architecture is more complex than anyone realized, and mirroring it to the AI layer is non-trivial. Third, there's at least one source nobody mentioned in interviews that turns out to be load-bearing once we start tracing actual workflows. None of these surface in a strategy deck. All of them surface in an assessment.

The Identity and Access Reality

Every enterprise AI integration touches identity at some point. The team using the AI has roles, permissions, group memberships, and access scopes that come from the corporate identity provider. The AI's behavior — what it can retrieve, what it can do, what it can show — has to mirror that identity model. Otherwise the AI is either over-permissioned (a junior analyst sees C-suite documents) or under-permissioned (the system is so locked down it's useless).

We map the identity architecture in concrete detail. What identity provider? Okta or Entra ID or something else? What's the SAML or OIDC configuration? How are groups structured, and how are they used downstream? What's the SCIM provisioning state? What's the MFA posture? What's the session model? Are there service accounts, and how are they managed? What's the policy on guest access, contractor access, B2B federation?
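For illustration, the answers can be captured as a single structured snapshot per environment. This is a sketch with hypothetical values, not a prescription for any particular identity provider:

```python
from dataclasses import dataclass, field


@dataclass
class IdentitySnapshot:
    idp: str                  # e.g. Okta, Entra ID, or something else
    federation_protocol: str  # SAML or OIDC, and the key configuration details
    group_model: str          # how groups are structured and used downstream
    scim_provisioning: str    # automated, partial, or manual
    mfa_posture: str          # enforced, conditional, or absent
    session_model: str        # token lifetimes, re-auth requirements
    service_accounts: list[str] = field(default_factory=list)
    external_access: str = ""  # guest, contractor, and B2B federation policy


snapshot = IdentitySnapshot(
    idp="Okta",
    federation_protocol="OIDC for first-party apps; SAML for two legacy vendors",
    group_model="department groups pushed downstream; app roles mapped per group",
    scim_provisioning="automated for core apps, manual for the rest",
    mfa_posture="enforced for all interactive logins",
    session_model="12-hour sessions, re-authentication for admin actions",
    service_accounts=["etl-runner", "reporting-bot"],
    external_access="contractors in a separate OU with scoped group membership",
)
```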

This work is unglamorous. It's also the work that makes or breaks the build phase. The teams that skip it discover identity issues during integration testing, when the cost of changing course is highest. We discover them during assessment, when the cost is a paragraph in the implementation plan.

The Governance Posture

The governance section of the assessment is where most strategy decks become marketing. "We'll build with security in mind" is not a posture. "We'll align to industry best practices" is not a posture. A posture is a specific, defensible set of decisions about how the AI will handle PII, how it will log access, how long it will retain data, what regulatory frameworks apply, what the incident response plan is, and what the audit trail looks like.

We map the governance posture against three dimensions. First, regulatory exposure: HIPAA, GDPR, CCPA, SOX, GLBA, FERPA, industry-specific frameworks. Some of these matter; most don't, but knowing which is which is the assessment's job. Second, AI-specific governance: NIST AI RMF, ISO 42001, EU AI Act risk tiers — which apply, which we should align to voluntarily, which we should ignore. Third, operational governance: who owns the AI in production, who reviews incidents, who has authority to roll back a model upgrade, who signs off on a new data source being added to the corpus.
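As a sketch of what "specific and defensible" looks like, the posture can be recorded as a set of explicit, reviewable decisions. The frameworks and decisions below are hypothetical examples, not recommendations for any particular deployment:

```python
from dataclasses import dataclass


@dataclass
class GovernancePosture:
    regulatory_exposure: dict[str, bool]  # framework -> applies to this deployment?
    ai_governance: dict[str, str]         # framework -> "align", "voluntary", or "out of scope"
    pii_handling: str                     # how the AI handles PII
    retention: str                        # how long data and logs are kept
    audit_trail: str                      # what is logged, and where it goes
    production_owner: str                 # who owns the AI in production
    incident_review: str                  # who reviews incidents
    rollback_authority: str               # who can roll back a model upgrade
    corpus_signoff: str                   # who approves a new data source


posture = GovernancePosture(
    regulatory_exposure={"GDPR": True, "CCPA": True, "HIPAA": False, "SOX": False},
    ai_governance={"NIST AI RMF": "align", "ISO 42001": "voluntary", "EU AI Act": "out of scope"},
    pii_handling="PII redacted at ingestion; retrieval scoped to the requester's own accounts",
    retention="prompts and responses retained 90 days; retrieval logs retained 1 year",
    audit_trail="per-request log of user, sources retrieved, and model version, shipped to the SIEM",
    production_owner="Director of Data Platform",
    incident_review="weekly review by the AI operations group",
    rollback_authority="production owner, with notice to Legal for governance-relevant changes",
    corpus_signoff="data owner plus production owner",
)
```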

The output is a governance posture document that can survive a Legal review and an audit. Not a slide. A document. Most teams have never produced one for an AI deployment. The companies most likely to actually integrate AI at scale are the ones that produce one before they build, not after they get burned.

The Adoption Architecture

The most under-built layer in a typical AI strategy is the human one. The deck assumes adoption will happen because the AI is useful. Operating reality says adoption fails by default unless someone has built the architecture for it.

We map adoption as a system, not as a marketing exercise. Who are the champions, and what's their capacity to actually drive adoption (not just attend a launch event)? What's the executive sponsor's actual engagement model, and how will that hold past month three? What's the measurement framework for adoption, not vanity metrics but actual measures of who is using the system for what tasks? What's the intervention playbook when adoption dips, and who owns it? What's the cadence of communication, training, office hours, and reinforcement that will sustain usage past the launch wave?
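One illustrative way to keep the measurement honest is to count, from the application's own usage events, how many distinct people used the system for each task in a given period. The event format and names below are assumptions for the sketch:

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage events: (user, task, day). In practice these come from the
# application's own logs, not from survey responses or login counts.
events = [
    ("ana", "ticket triage", date(2026, 5, 4)),
    ("ana", "ticket triage", date(2026, 5, 5)),
    ("raj", "knowledge lookup", date(2026, 5, 5)),
    ("raj", "ticket triage", date(2026, 5, 6)),
]

# Adoption measured per task: how many distinct people used the system for that
# task, not how many accounts exist or how many messages were sent.
users_per_task = defaultdict(set)
for user, task, _day in events:
    users_per_task[task].add(user)

for task, users in users_per_task.items():
    print(f"{task}: {len(users)} active users")
```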

This is the layer where most AI deployments fail. Not because the technology was wrong, but because nobody owned the human side of the integration after launch. The assessment surfaces the gap and proposes the structure to close it.

The Opportunity Portfolio

The opportunity portfolio is the layer that looks most like a traditional strategy deck — and is the most different from one in practice.

A strategy deck identifies AI opportunities in the abstract: customer service automation, document summarization, lead scoring, knowledge management. Each is a category. None of them is buildable.

The opportunity portfolio identifies specific opportunities in the operational reality of the systems map. "Customer service automation" becomes "automate the first-response triage in Zendesk for tier 1 tickets that match these specific patterns, using the knowledge base from sources X and Y, accessible to agents via this specific UI surface, with this specific rollback capability when the AI gets it wrong." That's an opportunity that can be scoped, estimated, and built against.

We score each opportunity on three axes: impact (revenue, cost, time, risk), feasibility (technical, organizational, data-readiness), and integration cost (how much of the systems, data, identity, governance, and adoption architecture has to be built or extended to support it). The top five become the candidate set for Phase 2. The rest get parked with explicit rationale, which becomes the roadmap for future phases or for the client's own internal team to address.
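A minimal sketch of that scoring, with illustrative weights and 1-to-5 scales (the real exercise is a judgment call backed by the operational map, not a formula):

```python
from dataclasses import dataclass


@dataclass
class Opportunity:
    name: str
    impact: int            # 1-5: revenue, cost, time, risk
    feasibility: int       # 1-5: technical, organizational, data-readiness
    integration_cost: int  # 1-5: how much architecture must be built or extended


def score(opp: Opportunity) -> float:
    # Higher impact and feasibility raise the score; higher integration cost lowers it.
    return opp.impact * 0.4 + opp.feasibility * 0.4 - opp.integration_cost * 0.2


candidates = [
    Opportunity("Tier 1 ticket triage in Zendesk", impact=4, feasibility=4, integration_cost=2),
    Opportunity("Contract clause extraction", impact=3, feasibility=2, integration_cost=4),
    Opportunity("Lead scoring refresh", impact=3, feasibility=4, integration_cost=3),
]

ranked = sorted(candidates, key=score, reverse=True)
for opp in ranked[:5]:  # the top five become the candidate set for Phase 2
    print(f"{score(opp):.1f}  {opp.name}")
```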

The Implementation Plan

Everything above feeds the implementation plan, which is the document the client actually uses. Phased build sequence. Dependencies between phases. Resource requirements. Decision gates. Risk register. The plan is buildable as written — not because we've done the build, but because we've validated every assumption it depends on against the operational reality the assessment uncovered.

This is what makes the assessment expensive in time and high-leverage in outcome. By the end of Phase 1, the client has a plan they can either build with us or hand to another vendor. Either is a legitimate outcome. The plan is the deliverable. What they do with it is their choice.

The Honest Test

If you're evaluating an integration assessment from any vendor — us or anyone else — the honest test is to look at the deliverable they describe and ask whether it could be built from. A strategy deck full of opportunities and a maturity score cannot be built from. A complete operational map with a phased plan grounded in systems, data, identity, governance, and adoption can be.

That's the difference between a strategy artifact and an integration artifact. They are not the same thing. Pricing them the same way confuses the market. The work to produce the second is several times the work to produce the first, and the second is the only one that actually drives a successful build.

That's the work we do.


Iron Pine helps mid-market companies integrate AI into how they actually operate — grounded in your data, embedded in your workflows, adopted by your people, and operated with production discipline.

Talk to us about an Integration Assessment · Try the AI Health Check
