Real-Time Pass-Through API vs Sync and Cache: The 2026 HIPAA Guide
Compare pass-through vs sync-and-cache APIs for HIPAA compliance. Learn how data-at-rest triggers BAA requirements and how zero-storage APIs reduce risk.
If your enterprise deal just stalled because the prospect's InfoSec team flagged your integration middleware for caching patient data, this is the architectural breakdown you need. The choice between a real-time pass-through API vs sync and cache for HIPAA compliance dictates whether your enterprise deals close or die in security review.
A real-time pass-through API processes data entirely in transit without persisting payloads, while a sync-and-cache API polls, stores, and serves normalized copies from its own database. Under HIPAA regulations, that architectural distinction determines whether your integration vendor is classified as a Business Associate—and whether you need a Business Associate Agreement (BAA), a sub-processor disclosure, and months of additional security scrutiny.
The reality of building integrations in the healthcare sector is unforgiving. When you sell to hospitals, clinics, or enterprise healthcare providers, your architecture is put under a microscope. Traditional integration middleware relies on a sync-and-cache model, expanding your compliance footprint and creating unmanaged shadow data.
This guide breaks down the compliance mechanics of both architectures, explains exactly where the HIPAA conduit exception applies (and where it doesn't), and walks through the technical trade-offs that engineering leaders and product managers must evaluate before choosing an integration strategy for healthcare SaaS.
The $7.4 Million Problem: Why Healthcare Integrations Fail Security Reviews
Healthcare data breaches averaged $7.42 million per incident in 2025, and healthcare has been the costliest sector for breaches for 14 consecutive years, according to IBM's annual Cost of a Data Breach Report. The containment timeline averaged 279 days—more than nine months of exposure before the organization even fully understands what happened.
Those numbers matter directly to your integration architecture. IBM's 2025 findings highlight how fast-moving threats exploit third-party vendors and unsecured environments, with healthcare seeing millions of records containing PII and PHI stolen from overlooked systems.
Here is the scenario that plays out in every upmarket healthcare deal: your sales team closes a six-figure contract with a hospital network or digital health platform. Procurement sends over a Standardized Information Gathering (SIG) Core questionnaire. Somewhere in that massive spreadsheet, Question #47 acts as a tripwire: "Does any third-party sub-processor store, cache, or replicate our data?"
If your current vendor uses a sync-and-cache model, you have to answer "yes"—even if your own application never stores the data. That single affirmative answer triggers a cascade of follow-up scrutiny regarding encryption standards, data residency, retention policies, breach notification timelines, and BAA coverage. Deals don't just slow down—they die entirely because the integration vendor refuses to sign a BAA or cannot prove strict data isolation.
The root cause is architectural. Your choice between a pass-through and a sync-and-cache integration model isn't just a performance trade-off. It is a compliance decision with seven-figure consequences.
Sync and Cache Architecture: The Hidden HIPAA Liability
To understand why enterprise security teams reject traditional unified APIs, you have to look at how they actually move data.
A sync-and-cache unified API is an integration pattern where a middleware platform continuously polls upstream third-party APIs on a schedule, retrieves data in large batches, normalizes the payloads into a common schema, and stores that data persistently in its own managed database. When your application queries the unified API, it reads from that cached copy—not from the source system.
This is the dominant architecture for unified APIs that advertise fast response times and offline query capabilities. It works well for non-regulated data. For healthcare, however, it creates a massive compliance landmine.
Why Cached ePHI Makes Your Vendor a Business Associate
HHS guidance is explicit: Cloud Service Providers (CSPs) that provide cloud services involving creating, receiving, or maintaining ePHI meet the definition of a business associate, even if the CSP cannot view the ePHI because it is encrypted and the CSP does not have the decryption key. The conduit exception is limited to transmission-only services.
A sync-and-cache API does not merely transmit Electronic Protected Health Information (ePHI). It receives, normalizes, and maintains it in a database. That is textbook Business Associate territory. The consequences include:
- You must execute a comprehensive Business Associate Agreement (BAA) with the integration vendor before any ePHI flows through their system.
- The vendor must implement strict HIPAA Security Rule safeguards, conduct risk analyses, and report breaches.
- Oregon Health & Science University paid $2.7 million after an investigation revealed that ePHI had been stored on a cloud platform without a BAA.
- Your vendor appears as a sub-processor on every security questionnaire, expanding your compliance surface area.
The Shadow Data Problem
Sync-and-cache architectures create what security teams call shadow data—normalized copies of sensitive records that exist on a third-party's infrastructure, outside your direct control. You do not manage the encryption keys. You do not control the retention schedule. You cannot guarantee deletion when an integrated account is disconnected.
For healthcare SaaS, this shadow data problem is particularly toxic. If your integration vendor stores a cached copy of employee health records from a customer's HRIS, and that vendor suffers a breach, your customer's patients or employees are affected—and your organization is caught in the blast radius.
```mermaid
flowchart LR
    A[Your App] -->|API Call| B[Sync & Cache<br>Unified API]
    B -->|Scheduled Poll| C[Upstream API<br>e.g. Workday / Epic]
    B -->|Stores ePHI| D[(Vendor's<br>Database)]
    D -.->|Shadow Data<br>Exists Here| E[Compliance<br>Liability]
    style D fill:#ff6b6b,color:#fff
    style E fill:#ff6b6b,color:#fff
```

The diagram above illustrates the fundamental flaw. The middleware database acts as a persistent man-in-the-middle. It holds customer data hostage, increasing the surface area for a potential breach and violating the core security principle of data minimization.
Real-Time Pass-Through APIs: Processing Data in Transit
The alternative to building a shadow database is processing data entirely in transit.
A real-time pass-through API acts as a stateless proxy. When your application makes a request, the middleware immediately routes that request to the upstream third-party API in real time, transforms the response in memory using expression-based mapping (e.g., JSONata), and returns the normalized result to your application. No payload data is written to disk. No database stores a copy of the response.
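The in-memory step can be sketched in a few lines. Truto uses JSONata expressions for the mapping; the plain function below is an illustrative stand-in (the field names are assumptions, not a real provider schema) that shows the principle—transform and return, never persist:

```javascript
// A stateless, in-memory normalization step: the raw upstream payload
// is transformed and returned without ever being written to disk.
function normalizeEmployee(raw) {
  return {
    id: raw.employee_id,
    name: `${raw.first_name} ${raw.last_name}`,
    email: raw.work_email,
  };
}

async function passThroughFetch(upstreamUrl, token) {
  const res = await fetch(upstreamUrl, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const raw = await res.json();
  // The transform happens in memory; `raw` is eligible for garbage
  // collection as soon as the normalized result is returned.
  return normalizeEmployee(raw);
}
```

Nothing in this pipeline touches a database—the only durable state a real implementation needs is the credential vault, which holds tokens, not payloads.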
This is the architecture Truto utilizes. When a request hits the Truto zero-storage unified API, the system looks up the integrated account, securely retrieves the necessary OAuth tokens from an encrypted vault, and fetches the resource directly from the provider. Once the response is delivered to your application, the data ceases to exist on Truto's infrastructure.
The key architectural properties include:
- No data at rest: Customer payload data is processed entirely in memory during transit. Once the response is returned, the data is gone.
- Real-time freshness: Every response reflects the current state of the source system. No stale cache. No sync lag.
- Smaller compliance footprint: Without persistent ePHI, the integration layer's HIPAA exposure is fundamentally different from a sync-and-cache model.
```mermaid
flowchart LR
    A[Your App] -->|Real-Time Request| B[Truto Pass-Through<br>Stateless Proxy]
    B -->|Direct API Call| C[Upstream API<br>e.g. Epic / HRIS]
    C -->|Raw Response| B
    B -->|In-Memory<br>Transform| A
    style B fill:#51cf66,color:#fff
```

Where the Conduit Exception Actually Applies
The HIPAA Conduit Exception is narrow and excludes an extremely limited group of entities from having to enter into business associate agreements. The exception covers entities that transmit PHI but do not have persistent access to the transmitted information and do not store copies of the data. They simply act as conduits through which PHI flows.
A pass-through API that truly processes data only in transit—without logging payloads, without caching responses, and without writing ePHI to any persistent store—has a much stronger argument for conduit-like treatment. But be careful: vendors that are often misclassified as conduits include email service providers, cloud service providers, and messaging services. These are NOT considered conduits and all must enter into a BAA.
The distinction is surgical. If the integration layer touches ePHI in any persistent way—logging request bodies, storing webhook payloads, caching responses in a database—the conduit exception evaporates. Temporary storage that occurs only as part of routing or delivery generally fits the conduit exception. Once storage extends beyond transient transmission, conduit status ends and business associate obligations apply.
Legal nuance matters here. The conduit exception is a fact-specific determination. Even if your API middleware uses a pure pass-through architecture, consult healthcare counsel before relying on conduit status to avoid a BAA. The safest path for most healthcare SaaS companies is to execute a BAA with any integration vendor that touches ePHI—and then use a pass-through architecture to minimize the operational and financial exposure under that agreement.
Honest Trade-Offs of Pass-Through Architectures
Pass-through APIs are not magic. They come with real engineering constraints that you should evaluate honestly:
| Trade-off | Pass-Through | Sync & Cache |
|---|---|---|
| Latency | Depends on upstream API response time (can be slow) | Fast reads from local cache |
| Availability | If upstream API is down, your integration is down | Can serve from cache during outages |
| Bulk queries | Must paginate through upstream API in real time | Can query cached data with SQL-like flexibility |
| Compliance surface | Minimal - no ePHI at rest on vendor infrastructure | Significant - cached ePHI triggers full BA obligations |
| Data freshness | Always current | May be stale by minutes or hours |
For healthcare use cases where compliance is a hard requirement, the latency trade-off is usually acceptable. A 500ms slower API call is a minor engineering problem. A $7.4 million breach is not.
How Truto Handles Rate Limits and Retries (Without Caching)
A common engineering objection to pass-through architectures is the management of rate limits. "If you are not caching data, how do you handle rate limits from upstream APIs?"
This is a critical question for healthcare data pipelines where you might be pulling records across dozens of connected accounts. Traditional sync-and-cache tools mask this problem. They absorb rate limit errors by placing your requests into a persistent queue, applying exponential backoff, and retrying the request later.
The problem? A persistent queue is a database. If an integration tool queues a POST request containing patient data to handle a retry, that ePHI is now stored at rest on their servers. The zero data retention promise is broken, and a hidden data-in-flight vulnerability is created.
Truto takes a radically different, highly deterministic approach. Truto does not retry, throttle, or apply backoff when an upstream API returns a rate-limit error (HTTP 429).
The 429 response passes directly back to you, the caller. No request is silently queued. No retry loop runs on Truto's infrastructure. No ePHI sits in a buffer waiting to be reprocessed.
Instead, Truto normalizes the chaotic, provider-specific rate limit information into standard IETF headers. Upstream APIs express rate limits in wildly inconsistent ways—Salesforce uses Sforce-Limit-Info, HubSpot returns X-HubSpot-RateLimit-* headers, and some APIs bury rate limit data in response bodies. Truto normalizes all of these into consistent response headers:
- ratelimit-limit: The maximum number of requests permitted in the current window.
- ratelimit-remaining: The number of requests remaining before you hit the limit.
- ratelimit-reset: The number of seconds until the rate limit window resets.
This normalization is based on the IETF RateLimit header fields draft, which defines a set of standard HTTP header fields to enable rate limiting, including RateLimit-Policy for quota policies and RateLimit for the currently remaining quota.
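On the producing side, the mapping is conceptually simple. The sketch below shows what this kind of normalization involves—the X-HubSpot-RateLimit-* names follow HubSpot's documented convention, but the function itself is illustrative, not Truto's implementation:

```javascript
// Map provider-specific rate-limit headers onto the IETF draft names.
// Each provider branch parses that provider's own convention; the
// fallback passes standard headers through unchanged if present.
function normalizeRateLimitHeaders(provider, upstreamHeaders) {
  if (provider === 'hubspot') {
    return {
      'ratelimit-limit': upstreamHeaders['x-hubspot-ratelimit-max'],
      'ratelimit-remaining': upstreamHeaders['x-hubspot-ratelimit-remaining'],
      // HubSpot reports its window in milliseconds; the IETF draft
      // expects seconds until reset.
      'ratelimit-reset': Math.ceil(
        Number(upstreamHeaders['x-hubspot-ratelimit-interval-milliseconds']) / 1000
      ),
    };
  }
  return {
    'ratelimit-limit': upstreamHeaders['ratelimit-limit'],
    'ratelimit-remaining': upstreamHeaders['ratelimit-remaining'],
    'ratelimit-reset': upstreamHeaders['ratelimit-reset'],
  };
}
```

The important property is that this translation is pure and stateless—no counter, queue, or payload needs to survive beyond the request.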
Here is what reading those headers looks like in practice:
```javascript
// Minimal helper; any promise-based delay works here.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const response = await fetch('https://api.truto.one/unified/employees', {
  headers: {
    'Authorization': 'Bearer YOUR_TOKEN',
    'X-Integrated-Account-ID': 'acct_abc123'
  }
});

const remaining = parseInt(response.headers.get('ratelimit-remaining'), 10);
const resetInSeconds = parseInt(response.headers.get('ratelimit-reset'), 10);

if (response.status === 429 || remaining === 0) {
  // You decide the backoff strategy
  await sleep(resetInSeconds * 1000);
  // Retry with your own logic inside your HIPAA-compliant environment
}
```

By passing these standardized headers back to your application, Truto empowers your engineering team to implement precise, deterministic backoff logic on your own infrastructure. You retain complete control over the data lifecycle. If a request needs to be queued, it gets queued in your secure, HIPAA-compliant environment, governed by your own internal policies, rather than leaking into an opaque third-party middleware queue.
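A reusable wrapper keeps that retry loop deterministic and entirely inside your own infrastructure. A sketch, assuming the normalized headers described above—the attempt count and fallback delay are tuning assumptions, not prescribed values:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry inside YOUR environment, driven by the normalized headers.
// No payload ever waits in a third-party queue between attempts.
async function fetchWithBackoff(url, options = {}, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    // Honor the server-reported reset window; fall back to 1s if the
    // header is missing or unparseable.
    const resetSeconds = Number(response.headers.get('ratelimit-reset')) || 1;
    if (attempt < maxAttempts) await sleep(resetSeconds * 1000);
  }
  throw new Error(`Rate limited after ${maxAttempts} attempts: ${url}`);
}
```

Because the wrapper reads only the standard ratelimit-* headers, the same code works regardless of which upstream provider sits behind the unified API.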
For a deeper dive into architecting resilient systems around these headers, review our guide on best practices for handling API rate limits.
The Proxy API: An Escape Hatch for Custom Endpoints
Another reality of enterprise integrations is that unified data models never cover 100% of a provider's API surface. You will inevitably encounter a customer who needs access to a highly specific, undocumented endpoint or a custom object unique to their NetSuite, Workday, or Epic deployment.
Sync-and-cache platforms struggle with this. If the data does not fit their rigid schema, their database cannot store it, and you cannot query it. You are forced to build a parallel, custom integration alongside the unified API.
Truto solves this through its Proxy API. Because the architecture is entirely pass-through, the platform provides direct, unmapped access to integration resources. The proxy layer handles the OAuth token refresh lifecycle proactively before expiry, injects the correct authentication headers, and routes your request directly to the provider. What you send is what the third-party API receives, and what it returns is what you get back.
This zero-code execution pipeline ensures that you can hit any endpoint, at any time, without waiting for a vendor to update their unified schema. It is a critical requirement for teams building deep, enterprise-grade integrations where flexibility is just as important as compliance.
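In practice, a proxy call looks like an ordinary HTTP request. The sketch below is hypothetical—the /proxy/ path shape is an assumption modeled on the unified-API example earlier in this guide, not confirmed Truto endpoint documentation:

```javascript
// Hypothetical proxy call: the path segment after /proxy/ is assumed to
// be forwarded verbatim to the connected provider, with authentication
// injected by the middleware before the request leaves.
async function proxyRequest(providerPath, accountId, token) {
  const response = await fetch(`https://api.truto.one/proxy/${providerPath}`, {
    headers: {
      Authorization: `Bearer ${token}`,
      'X-Integrated-Account-ID': accountId,
    },
  });
  // What the provider returned is exactly what you get back.
  return response.json();
}
```

The point of the sketch is the shape of the contract: your application speaks the provider's native API, and the middleware contributes only routing and credentials—never schema constraints, never storage.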
Evaluating API Middleware for Enterprise Compliance
When evaluating integration tools for healthcare or enterprise SaaS, product managers and engineering leaders must look past the marketing copy. "Secure" and "compliant" are meaningless adjectives unless backed by architectural realities. To determine which unified API does not store customer data, use this technical checklist to evaluate any integration tool for enterprise compliance:
Architecture Verification
- Verify Zero Data Retention: Ask explicitly, "Do you store our API request or response payloads on disk, even temporarily in a queue?" Ask for architectural documentation regarding webhook payloads, request/response logging, and sync job outputs. If the answer involves a database, you have a compliance liability.
- Ask Where Data Transformation Happens: In-memory transformations (using expression languages like JSONata) leave no persistent trace. ETL pipelines that write to intermediate storage do.
- Check BAA Willingness: Will the vendor sign a Business Associate Agreement? Even with a pass-through architecture, the safest approach is to have a BAA in place. If they use a sync-and-cache model and refuse a BAA, you cannot legally use them for healthcare data.
Encryption and Transport Security
- TLS 1.2+ for All Data in Transit: This is table stakes, not a differentiator, but still requires verification.
- No Payload Logging in Production: Verify that the vendor's observability stack logs metadata (status codes, latency, resource types) without capturing request or response bodies containing ePHI.
- Proactive Authentication Management: The platform should refresh OAuth tokens proactively before expiry to avoid failed requests that might trigger retry queues.
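One way to make the payload-logging check concrete is to ask what a log record actually contains. A compliant record looks like the sketch below (field names are illustrative)—metadata only, with request and response bodies never referenced:

```javascript
// Build a log record from request metadata only. The payload is
// deliberately never passed in, so ePHI in a body cannot leak into
// log storage by accident.
function buildLogRecord({ method, path, status, latencyMs, resourceType }) {
  return {
    method,
    path,          // route template, not raw URLs carrying identifiers
    status,
    latencyMs,
    resourceType,
    timestamp: new Date().toISOString(),
    // Intentionally absent: request body, response body, query params.
  };
}
```

If a vendor cannot show you something structurally equivalent—where bodies are unreachable from the logging path, not merely redacted after the fact—treat the "no payload logging" claim with skepticism.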
Transparency in Error Handling
- Audit the Retry Mechanism: Ask how they handle HTTP 429 errors. If they claim to automatically retry failed requests, ask where the payload is stored between retries. Automatic retries require durable state. Rate limit errors should be visible to you, not absorbed.
- Contextual Upstream Errors: Upstream errors should pass through with context. A generic "500 Internal Server Error" from your integration layer masks the actual problem and makes debugging healthcare data pipelines much harder.
Sub-Processor and Data Residency
- Sub-Processor Chain: How many sub-processors touch ePHI? Fewer is better. A pass-through architecture that doesn't store data inherently limits the sub-processor chain.
- Data Processing Region: For healthcare organizations with data residency requirements, the ability to control where API requests are processed (not just stored) matters immensely.
The Penalty Math That Should Drive Your Architecture Decision
HIPAA violation penalties in 2026 include civil monetary penalties ranging from $145 to $2,190,294 per violation, depending on the level of culpability. That is per violation, not per incident. A single breach affecting thousands of patient records can trigger penalties for each individual's data that was improperly handled.
OCR enforcement actions have been increasing, ending 2025 with 21 settlements and civil monetary penalties. The trend is clear: HHS is not slowing down its scrutiny of digital health platforms and their vendors.
Consider the math from the integration vendor's perspective. If your sync-and-cache middleware stores normalized copies of ePHI across hundreds of your customers' connected accounts, a single breach on their infrastructure exposes data from every one of those connections. Your exposure is not isolated to your own security posture—it is chained to theirs.
A pass-through architecture doesn't eliminate HIPAA risk. Nothing does. But it removes one of the most dangerous variables from the equation: a third-party database full of your customers' healthcare data that you don't control, can't audit, and may not even know exists until the breach notification arrives.
Choosing the Right Architecture for Your Healthcare SaaS
The decision between pass-through and sync-and-cache isn't purely ideological. It is driven by your specific use case.
Choose pass-through when:
- You sell to healthcare, finance, or government clients with strict data handling requirements.
- Your integration needs are primarily real-time reads and writes (CRUD operations).
- Your customers' InfoSec teams have flagged sub-processor data storage as a deal blocker.
- You are building HIPAA-compliant integrations for healthcare SaaS.
Consider sync-and-cache when:
- You need offline analytics or aggregate queries across connected accounts.
- Your use case requires historical data that upstream APIs do not retain.
- You are operating in a non-regulated domain where compliance isn't a strict gate.
For many healthcare SaaS teams, the practical answer is a hybrid: use a pass-through API for real-time CRUD operations on sensitive data, and sync only the non-sensitive metadata you actually need for analytics. Truto supports both patterns—the unified API for pass-through operations and a Proxy API for direct, unmapped access when unified models don't cover specific EHR or HRIS endpoints.
Strategic Wrap-Up
The architecture you choose for your integration layer fundamentally dictates your go-to-market velocity in the enterprise segment.
Sync-and-cache unified APIs optimize for developer convenience at the expense of security. They force you to trust a third-party database with your most sensitive customer data, triggering brutal InfoSec reviews and expanding your compliance footprint.
Real-time pass-through architectures optimize for trust. By processing data entirely in transit, normalizing schemas in memory, and passing rate limit controls back to the caller, you eliminate shadow data. You keep your compliance boundaries tight. You give your sales team an integration story that enterprise security teams actually want to hear.
Building integrations is hard enough. Do not let your middleware vendor become the reason you lose your next major deal. Pick the architecture that lets you answer Question #47 with confidence.
Frequently Asked Questions
- What is the difference between pass-through and sync-and-cache APIs?
- Sync-and-cache APIs continuously poll third-party systems and store normalized data in their own database. Pass-through APIs act as stateless proxies, transforming data in memory during transit without ever saving payloads to disk.
- Does a pass-through API need a HIPAA Business Associate Agreement?
- While the HIPAA conduit exception applies to transmission-only services, it is extremely narrow. Even with a pass-through architecture, most healthcare compliance teams recommend executing a BAA as the safest approach to minimize risk.
- What is the shadow data problem in unified APIs?
- Shadow data occurs when integration middleware stores an unmanaged, persistent copy of sensitive customer records on its own servers, creating a hidden compliance liability and expanding the attack surface.
- How do pass-through APIs handle rate limits without queuing?
- Instead of absorbing requests into a persistent queue, Truto passes HTTP 429 rate limit errors directly back to the caller along with standardized IETF headers. This allows the calling application to handle retries securely within its own compliant infrastructure.
- Does encrypting cached ePHI exempt an API vendor from HIPAA?
- No. HHS guidance is clear that cloud service providers storing ePHI are considered Business Associates even if the data is encrypted and the provider does not have the decryption key.