Case Study · April 19, 2026 · 9 min read

Placement Intelligence Platform

How we built a multi-tenant lab staffing intelligence platform end-to-end in twelve working sessions — from registry construction through Bullhorn CRM integration — for a PE-held laboratory staffing company now operating it as a live SaaS.

Engagement: Direct Engagement
Status: In production · Monthly retainer
12,089 — Facilities monitored
2,131 — Candidates enriched
5,615 — AI-scored matches
88% — Average match score
Python · Next.js · Supabase · Claude API · Bullhorn · SerpAPI · Apollo

Client: PE-held laboratory staffing company
Engagement type: Direct Engagement (Phases 1, 2, and 3)
Current status: In production. Client on monthly platform and support retainer.

The Problem

The client operates in laboratory staffing — a specialized recruiting vertical placing medical technologists, pathologists, and lab technicians into CLIA-certified facilities across the United States. The market is fragmented: thousands of lab facilities scattered across hospital systems, reference labs, specialty clinics, and academic medical centers. Each facility has its own hiring rhythm, its own career site format, its own ATS, and its own definition of what a qualified candidate looks like.

The recruiting team's day-to-day operating reality reflected that fragmentation. Recruiters were manually monitoring dozens of career sites for new openings, copy-pasting role descriptions into spreadsheets, hand-matching candidates against requirements, and entering qualified leads into Bullhorn one record at a time. The work consumed hours each day, was error-prone, and could not scale beyond the team's headcount. A new role posting at a target facility might sit undiscovered for days. A qualified candidate already in the database might never be matched against the role they were perfect for, simply because no human had time to make the connection.

The team had evaluated three off-the-shelf solutions. One was a generic ATS plugin that did not understand lab staffing's specialized credentials. One was a recruiting CRM that promised AI matching but in practice returned generic job-board scraping with no scoring layer. One was a data provider that supplied facility lists but did not integrate with hiring workflows. None addressed the actual problem: the team needed an intelligence platform that monitored the right facilities, found the right roles, scored the right candidates, and pushed the right results into Bullhorn — the CRM the recruiters already worked in every day.

The Approach

We engaged through a Direct Engagement: Iron Pine led from initial scoping through to live production, with the client's recruiting leadership owning workflow validation and the operating cadence.

The Phase 1 Integration Assessment surfaced four findings that shaped the build:

The data layer had to come first. No matter how sophisticated the AI scoring would be, it would be useless against an incomplete or stale facility registry. We scoped a registry construction phase as the foundation — pulling CLIA-certified lab facility data from federal sources, enriching it with location and specialty metadata, and tracking it as a living dataset that could be re-scraped and verified on a cadence.

Discovery, scoring, and CRM push were three separate problems. Combining them into a single pipeline was tempting but wrong. Discovery (finding new role postings) needed to run on a schedule against thousands of career sites. Scoring (matching candidates to roles) needed to be an on-demand operation triggered by either a new role or a new candidate. CRM push (getting the qualified leads into Bullhorn) needed to be operator-controlled with two-step deduplication so recruiters did not duplicate existing client accounts. Each was its own service with its own data model.

Bullhorn was the workflow surface, not just an integration. The recruiters lived in Bullhorn. The platform's value was not measured by the dashboard's quality but by how cleanly leads arrived in Bullhorn with proper attribution, accurate company records, and no duplicates. The integration layer needed real engineering attention, not a "fire-and-forget" sync.

Multi-tenancy from day one. The first deployment served one client, but the architecture had to support a future where the same platform served multiple staffing firms. Row-level security (RLS), tenant scoping, and the projects/selections data model were built in from the start, even though only one tenant existed at launch.

The Phase 2 Integration Build then ran across twelve working sessions, each producing a shippable component. The cadence was deliberate: a session focused on registry construction, then a session focused on candidate enrichment, then a session focused on the discovery scrape pipeline, and so on. Every session ended with code in production and a session work packet documenting what was built.

The Architecture

The platform runs on the standard Iron Pine stack with two repos — Python backend and Next.js frontend — separated because their deployment pipelines differ.

Backend (Python). Data collection, enrichment, and scoring pipelines run as scheduled scripts. The CLIA facility registry is built and maintained from federal data sources, enriched with hospital and lab classification metadata. Career site discovery uses SerpAPI for the long tail of facilities and direct scraping for the major hospital systems with stable career site structures. Candidate enrichment runs through Apollo.io for verified contact data on technologists in target geographies. AI scoring runs through the Claude API with structured prompts and temperature 0 for deterministic match scoring against a defined rubric.
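To make the deterministic-scoring idea concrete, here is a minimal sketch of how a rubric-driven match scorer can be structured around a temperature-0 model call. The rubric fields, point weights, and JSON schema below are illustrative assumptions, not the client's actual scoring rubric; the commented-out API call shows where the Claude request would sit.

```python
import json

# Hypothetical rubric -- criterion names and point weights are illustrative,
# not the client's actual scoring schema.
RUBRIC = {
    "credential_match": 40,   # e.g. certification vs. role requirement
    "specialty_match": 30,    # e.g. microbiology, blood bank, histology
    "geography_match": 20,    # candidate region vs. facility location
    "availability": 10,       # Travel/PRN flags vs. role type
}

def build_scoring_prompt(role: dict, candidate: dict) -> str:
    """Structured prompt asking the model for JSON only, scored against the rubric."""
    return (
        "Score this candidate against this role using the rubric. "
        'Respond with JSON only: {"scores": {<criterion>: <points>}}\n'
        f"Rubric (criterion: max points): {json.dumps(RUBRIC)}\n"
        f"Role: {json.dumps(role)}\n"
        f"Candidate: {json.dumps(candidate)}"
    )

def parse_match_score(raw: str) -> int:
    """Validate the model's JSON and recompute the total from per-criterion scores."""
    data = json.loads(raw)
    total = 0
    for criterion, max_points in RUBRIC.items():
        points = int(data["scores"][criterion])
        if not 0 <= points <= max_points:
            raise ValueError(f"{criterion} out of range: {points}")
        total += points
    return total

# The actual call runs at temperature 0 for repeatable scoring, e.g.:
#   client = anthropic.Anthropic()
#   msg = client.messages.create(model=..., temperature=0, max_tokens=512,
#                                messages=[{"role": "user", "content": prompt}])
#   score = parse_match_score(msg.content[0].text)
```

Recomputing the total from the per-criterion scores, rather than trusting a model-reported total, is the key design choice: it keeps the rubric authoritative even when the model drifts.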

Frontend (Next.js). A multi-tenant production dashboard with role-based access. Recruiters see a Lead Browser with status filters, optimistic UI updates for status changes, and Travel/PRN flags for the role types that move fastest. The Pipeline view shows scored matches with sortable score columns and a select-all CSV export for offline review. The Candidates view shows the enriched candidate database with filtering by region, specialty, and CAP accreditation status.

Bullhorn integration layer. The most engineered part of the platform. Custom OAuth token management with refresh token rotation, stored in Supabase. Entity mapping tables (bullhorn_company_map, bullhorn_lead_map, bullhorn_candidate_map) track which Iron Pine records have been pushed to Bullhorn and prevent duplicates across runs. The push workflow includes a two-step dedup modal: the recruiter first searches Bullhorn for an existing company by name and location, then either selects an existing match or creates a new ClientCorporation. Every pushed company gets an "IRON PINE" ClientContact for native Bullhorn-side filtering by attribution source. Source field is set to "Iron Pine AI" on all pushed entities.
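The shape of that push workflow can be sketched as a small idempotent function. This is a simplified illustration under stated assumptions: `company_map` stands in for the `bullhorn_company_map` table, and `search_bullhorn`, `create_corporation`, and `link_existing` are placeholder callables, not real Bullhorn SDK calls.

```python
# Sketch of the mapping-table guard that keeps pushes idempotent across runs.
# All function parameters here are placeholders for the real search/create/
# select operations; the flow is what matters.

def push_company(company: dict, company_map: dict,
                 search_bullhorn, create_corporation, link_existing) -> int:
    """Return the Bullhorn ClientCorporation id for this company, pushing at most once."""
    # Step 0: already pushed in a previous run -> reuse the mapped id.
    if company["id"] in company_map:
        return company_map[company["id"]]

    # Step 1: automatic search by name + location (recruiter reviews the hits).
    hits = search_bullhorn(name=company["name"], state=company["state"])

    # Step 2: recruiter either selects an existing match or creates a new record.
    if hits:
        bullhorn_id = link_existing(company, hits)  # user-controlled selection
    else:
        bullhorn_id = create_corporation(company)   # new ClientCorporation

    company_map[company["id"]] = bullhorn_id        # record mapping -> no dupes later
    return bullhorn_id
```

The mapping table is what makes repeated runs safe: a re-push of the same lead short-circuits at step 0 instead of creating a second ClientCorporation.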

Auth. Supabase Auth with @supabase/ssr cookie-based sessions and middleware-enforced auth gates. Branded SendGrid invite emails using the Iron Pine email scanner buffer pattern (a small static page that protects one-time-use tokens from corporate email security systems consuming them before the user clicks). Role hierarchy enforced via the profiles table as the source of truth, not user_metadata.

Data model. Supabase PostgreSQL with row-level security on every user-facing table. Python collectors bypass RLS via direct psycopg2 for scheduled enrichment work. The schema separates company records, role records, candidate records, and match records, with a status taxonomy on leads that supports the team's actual operating workflow (new, qualified, contacted, submitted, placed, archived).
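A status taxonomy only supports a workflow if invalid transitions are rejected somewhere. As a minimal sketch: the statuses below come from the source, but the allowed transitions between them are an assumption for illustration, not the client's actual rules.

```python
# Illustrative transition guard for the lead status taxonomy.
# Statuses are from the platform; the ALLOWED transitions are assumed.
ALLOWED = {
    "new":       {"qualified", "archived"},
    "qualified": {"contacted", "archived"},
    "contacted": {"submitted", "archived"},
    "submitted": {"placed", "archived"},
    "placed":    set(),   # terminal
    "archived":  set(),   # terminal
}

def advance_lead(current: str, target: str) -> str:
    """Move a lead to `target`, rejecting transitions outside the taxonomy."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move lead from {current!r} to {target!r}")
    return target
```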

The Outcomes

By the end of Phase 2, the platform was monitoring 12,089 CLIA-certified facilities across the United States, 5,932 of which had discoverable career URLs. The candidate database had grown to 2,131 enriched contacts across the four highest-priority states. The first live match engine run produced 5,615 scored matches with an average match score of 88 percent, and 338 leads with full coverage on the recruiter's primary criteria.

The Bullhorn integration shipped with full push workflow for companies, jobs, and candidates, with the two-step dedup pattern verified against the client's existing 800+ company records. Recruiters stopped duplicating accounts. The IRON PINE contact attribution allowed the team to filter the entire Bullhorn instance for Iron Pine-sourced leads natively, which became the operational measurement layer the client used to evaluate the platform's contribution to pipeline.

The platform is now in production. The client is on a monthly retainer covering platform hosting, ongoing maintenance, and support — the standard Iron Pine Phase 3 Adoption & Expansion Retainer pattern. Quarterly reviews surface new expansion priorities; recent work has extended candidate coverage into pathologist specialties and added CAP accreditation enrichment as a filterable attribute on the company registry.

The Lessons

Five things from this engagement now inform how we approach similar staffing or vertical-intelligence builds:

Two-step dedup is not optional in CRM integrations. Real-world CRM data is messy. Company names accumulate years of variation — "Quest Diagnostics" and "Quest Diagnostics LLC" and "Quest (Fort Myers, FL)" all live in the same database, often as separate records. Auto-search will miss many of these matches. The two-step dedup modal — first an automatic search, then a user-controlled refinement search with name and location filters — is the safety net that prevents duplicate creation while preserving recruiter agency. We have since built this pattern into our standard CRM integration playbook.

Attribution is a first-class feature. The IRON PINE contact pattern emerged from a simple need: the client's recruiters wanted to see, in Bullhorn, which leads came from the platform versus which came from other sources. By creating a dedicated ClientContact named "IRON PINE" on every pushed company, the recruiting team could filter the entire Bullhorn instance for Iron Pine-sourced leads natively, without custom reporting. This became the operational measurement layer for the engagement.

Discovery economics dominate the platform's operating cost. SerpAPI credits at $150/month for 15,000 queries are the largest single line item. Building primary scrapers for the largest career site templates eliminated thousands of monthly SerpAPI calls, dropping operating cost by roughly 40 percent without reducing coverage. We now treat "primary scrape plus API fallback" as the default architecture pattern for any discovery pipeline, with the API call reserved for the long tail.
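The "primary scrape plus API fallback" pattern reduces to a short routing function. This is a sketch under stated assumptions: the `career_site_template` field, the scraper registry, and the query string are illustrative, and `serpapi_search` is a placeholder for the metered API call.

```python
# Try a free direct scraper for known career-site templates first;
# spend a SerpAPI credit only on the long tail or on scraper failure.

def discover_roles(facility: dict, scrapers: dict, serpapi_search) -> list:
    """Return role postings, preferring a direct scrape over a paid API call."""
    template = facility.get("career_site_template")
    scraper = scrapers.get(template)
    if scraper is not None:
        try:
            return scraper(facility["career_url"])   # zero marginal cost
        except Exception:
            pass                                     # fall through to paid path
    # Long tail (or scraper failure): one metered SerpAPI query.
    return serpapi_search(f'{facility["name"]} medical technologist jobs')
```

Because the cost asymmetry is so large (scrapes are free, API calls are metered), even a scraper that covers only the biggest hospital-system templates removes a disproportionate share of the monthly spend.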

The proving-ground pattern saves real money. Before scaling discovery scripts nationally, we ran them against a smaller, well-known facility set first. This caught six bugs that would have wasted significant SerpAPI credits if discovered at scale. We have since formalized this as standard practice on every discovery pipeline build — small, known dataset first, then expand.
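The proving-ground run can be expressed as a gate in front of the national run. A minimal sketch, with an assumed hit-rate threshold: facilities with known-good career pages should produce postings, so a low hit rate signals a pipeline bug before credits are spent at scale.

```python
# Run the discovery pipeline against a small, well-known facility set and
# fail fast if results drift from expectations. The 80% threshold is an
# illustrative default, not the client's actual acceptance criterion.

def proving_ground(pipeline, known_facilities: list, min_hit_rate: float = 0.8) -> float:
    """Run `pipeline` on facilities with known-good career URLs; fail fast on drift."""
    hits = 0
    for facility in known_facilities:
        roles = pipeline(facility)
        if roles:                      # we expect postings at these facilities
            hits += 1
    rate = hits / len(known_facilities)
    if rate < min_hit_rate:
        raise RuntimeError(f"hit rate {rate:.0%} below {min_hit_rate:.0%}; "
                           "fix the pipeline before the national run")
    return rate
```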

The retainer is where the work actually compounds. The Phase 2 build delivered the platform. The Phase 3 retainer is where the candidate database doubled, the pathologist specialty expansion shipped, the Bullhorn integration matured to handle production-scale dedup edge cases, and the operating cadence stabilized. Most of what makes the platform valuable to the client today is retainer-phase work, not initial-build work. This is consistent with what we see across all our engagements, and it is the structural reason the retainer is the default Phase 3 in our engagement model rather than an optional add-on.


Iron Pine helps mid-market companies integrate AI into how they actually operate — grounded in your data, embedded in your workflows, adopted by your people, and operated with production discipline.

Talk to us about an Integration Assessment · Try the AI Health Check
