---
title: "Real-Time Pass-Through API vs Sync and Cache: The 2026 HIPAA Guide"
slug: real-time-pass-through-api-vs-sync-and-cache-the-2026-hipaa-guide
date: 2026-04-08
author: Yuvraj Muley
categories: [Engineering, Security]
excerpt: "Compare pass-through vs sync-and-cache unified APIs for HIPAA. See which integration platforms store data, how architecture affects enterprise workflows, and why it matters for healthcare SaaS deals."
tldr: "Sync-and-cache APIs store sensitive ePHI at rest, triggering strict HIPAA BAA requirements. Real-time pass-through APIs process data in transit, eliminating shadow databases and compliance liabilities."
canonical: https://truto.one/blog/real-time-pass-through-api-vs-sync-and-cache-the-2026-hipaa-guide/
---

# Real-Time Pass-Through API vs Sync and Cache: The 2026 HIPAA Guide
If your enterprise deal just stalled because the prospect's InfoSec team flagged your integration middleware for caching patient data, this is the architectural breakdown you need. The choice between a real-time pass-through API vs sync and cache for HIPAA compliance dictates whether your enterprise deals close or die in security review.

**A real-time pass-through API processes data entirely in transit without persisting payloads, while a sync-and-cache API polls, stores, and serves normalized copies from its own database.** Under HIPAA regulations, that architectural distinction determines whether your integration vendor is classified as a Business Associate—and whether you need a Business Associate Agreement (BAA), a sub-processor disclosure, and months of additional security scrutiny.

The reality of building integrations in the healthcare sector is unforgiving. When you sell to hospitals, clinics, or enterprise healthcare providers, your architecture is scrutinized under a microscope. Traditional integration middleware relies on a sync-and-cache model, expanding your compliance footprint and creating unmanaged shadow data.

This guide breaks down the compliance mechanics of both architectures, explains exactly where the HIPAA conduit exception applies (and where it doesn't), and walks through the technical trade-offs that engineering leaders and product managers must evaluate before choosing an integration strategy for healthcare SaaS.

## The $7.4 Million Problem: Why Healthcare Integrations Fail Security Reviews

<cite index="1-8,1-9">Healthcare data breaches averaged $7.42 million per incident in 2025, and healthcare has been the costliest sector for breaches for 14 consecutive years, according to IBM's annual Cost of a Data Breach Report.</cite> <cite index="3-1">The containment timeline averaged 279 days</cite>—more than nine months of exposure before the organization even fully understands what happened.

Those numbers matter directly to your integration architecture. <cite index="3-3,3-4">IBM's 2025 findings highlight how fast-moving threats exploit third-party vendors and unsecured environments, with healthcare seeing millions of records containing PII and PHI stolen from overlooked systems.</cite>

Here is the scenario that plays out in every upmarket healthcare deal: your sales team closes a six-figure contract with a hospital network or digital health platform. Procurement sends over a Standardized Information Gathering (SIG) Core questionnaire. Somewhere in that massive spreadsheet, Question #47 acts as a tripwire: *"Does any third-party sub-processor store, cache, or replicate our data?"*

If you [need an integration tool that doesn't store customer data](https://truto.one/blog/need-an-integration-tool-that-doesnt-store-customer-data/) but your current vendor uses a sync-and-cache model, you have to answer "yes." That single affirmative answer triggers a cascade of follow-up scrutiny regarding encryption standards, data residency, retention policies, breach notification timelines, and BAA coverage. Deals don't just slow down—they die entirely because the integration vendor refuses to sign a BAA or cannot prove strict data isolation.

The root cause is architectural. Your choice between a pass-through and a sync-and-cache integration model isn't just a performance trade-off. It is a compliance decision with seven-figure consequences.

## Sync and Cache Architecture: The Hidden HIPAA Liability

To understand why enterprise security teams reject traditional unified APIs, you have to look at how they actually move data.

**A sync-and-cache unified API** is an integration pattern where a middleware platform continuously polls upstream third-party APIs on a schedule, retrieves data in large batches, normalizes the payloads into a common schema, and stores that data persistently in its own managed database. When your application queries the unified API, it reads from that cached copy—not from the source system.

This is the dominant architecture for unified APIs that advertise fast response times and offline query capabilities. It works well for non-regulated data. For healthcare, however, it creates a massive compliance landmine.

### Why Cached ePHI Makes Your Vendor a Business Associate

<cite index="21-14,21-15,21-16">HHS guidance is explicit: Cloud Service Providers (CSPs) that provide cloud services involving creating, receiving, or maintaining ePHI meet the definition of a business associate, even if the CSP cannot view the ePHI because it is encrypted and the CSP does not have the decryption key. The conduit exception is limited to transmission-only services.</cite>

A sync-and-cache API does not merely transmit Electronic Protected Health Information (ePHI). It receives, normalizes, and **maintains** it in a database. That is textbook Business Associate territory. The consequences include:

*   You must execute a comprehensive **Business Associate Agreement (BAA)** with the integration vendor before any ePHI flows through their system.
*   The vendor must implement strict HIPAA Security Rule safeguards, conduct risk analyses, and report breaches.
*   <cite index="23-14">Oregon Health & Science University paid $2.7 million after an investigation revealed that ePHI had been stored on a cloud platform without a BAA.</cite>
*   Your vendor appears as a sub-processor on every security questionnaire, expanding your compliance surface area.

### The Shadow Data Problem

Sync-and-cache architectures create what security teams call **shadow data**—normalized copies of sensitive records that exist on a third-party's infrastructure, outside your direct control. You do not manage the encryption keys. You do not control the retention schedule. You cannot guarantee deletion when an integrated account is disconnected.

For healthcare SaaS, this shadow data problem is particularly toxic. If your integration vendor stores a cached copy of employee health records from a customer's HRIS, and that vendor suffers a breach, your customer's patients or employees are affected—and your organization is caught in the blast radius.

```mermaid
flowchart LR
    A[Your App] -->|API Call| B[Sync & Cache<br>Unified API]
    B -->|Scheduled Poll| C[Upstream API<br>e.g. Workday / Epic]
    B -->|Stores ePHI| D[(Vendor's<br>Database)]
    D -.->|Shadow Data<br>Exists Here| E[Compliance<br>Liability]
    style D fill:#ff6b6b,color:#fff
    style E fill:#ff6b6b,color:#fff
```

The diagram above illustrates the fundamental flaw. The middleware database acts as a persistent man-in-the-middle. It holds customer data hostage, increasing the surface area for a potential breach and violating the core security principle of data minimization.

## Real-Time Pass-Through APIs: Processing Data in Transit

The alternative to building a shadow database is processing data entirely in transit.

**A real-time pass-through API** acts as a stateless proxy. When your application makes a request, the middleware immediately routes that request to the upstream third-party API in real time, transforms the response in memory using expression-based mapping (e.g., JSONata), and returns the normalized result to your application. No payload data is written to disk. No database stores a copy of the response.

This is the architecture Truto uses. When a request hits the Truto [zero-storage unified API](https://truto.one/blog/why-truto-is-the-best-zero-storage-unified-api-for-compliance-strict-saas/), the system looks up the integrated account, securely retrieves the necessary OAuth tokens from an encrypted vault, and fetches the resource directly from the provider. Once the response is delivered to your application, the data ceases to exist on Truto's infrastructure.

The key architectural properties include:

*   **No data at rest**: Customer payload data is processed entirely in memory during transit. Once the response is returned, the data is gone.
*   **Real-time freshness**: Every response reflects the current state of the source system. No stale cache. No sync lag.
*   **Smaller compliance footprint**: Without persistent ePHI, the integration layer's HIPAA exposure is fundamentally different from a sync-and-cache model.

```mermaid
flowchart LR
    A[Your App] -->|Real-Time Request| B[Truto Pass-Through<br>Stateless Proxy]
    B -->|Direct API Call| C[Upstream API<br>e.g. Epic / HRIS]
    C -->|Raw Response| B
    B -->|In-Memory<br>Transform| A
    style B fill:#51cf66,color:#fff
```
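
The in-memory transform step can be sketched in a few lines. This is a minimal illustration of the pattern only, not Truto's actual mapping engine (which uses JSONata expressions); the upstream field names (`emp_id`, `hire_dt`) and the unified shape are invented for the example.

```typescript
// Hypothetical unified shape for the example
interface UnifiedEmployee {
  id: string;
  firstName: string;
  lastName: string;
  startDate: string; // ISO 8601 date (YYYY-MM-DD)
}

// Normalization happens entirely in memory: the raw payload exists only
// for the lifetime of this function call and is never written to disk.
function normalizeEmployee(raw: Record<string, unknown>): UnifiedEmployee {
  return {
    id: String(raw.emp_id),
    firstName: String(raw.first_name ?? ''),
    lastName: String(raw.last_name ?? ''),
    startDate: new Date(String(raw.hire_dt)).toISOString().slice(0, 10),
  };
}

// The handler holds the payload only for the duration of the request.
async function passThrough(upstreamUrl: string, token: string): Promise<UnifiedEmployee> {
  const res = await fetch(upstreamUrl, { headers: { Authorization: `Bearer ${token}` } });
  const raw = (await res.json()) as Record<string, unknown>;
  return normalizeEmployee(raw); // transformed in transit, nothing persisted
}
```

The key property is what is absent: no database write, no queue, no payload log. The function returns, and the ePHI is gone from the middleware.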

### Where the Conduit Exception Actually Applies

<cite index="11-13,11-14,11-15">The HIPAA Conduit Exception is narrow and excludes an extremely limited group of entities from having to enter into business associate agreements. The Rule applies to entities that transmit PHI but do not have persistent access to the transmitted information and do not store copies of data. They simply act as conduits through which PHI flows.</cite>

A pass-through API that truly processes data only in transit—without logging payloads, without caching responses, and without writing ePHI to any persistent store—has a much stronger argument for conduit-like treatment. But be careful: <cite index="11-23,11-24">vendors that are often misclassified as conduits include email service providers, cloud service providers, and messaging services. These are NOT considered conduits and all must enter into a BAA.</cite>

The distinction is surgical. If the integration layer touches ePHI in any persistent way—logging request bodies, storing webhook payloads, caching responses in a database—the conduit exception evaporates. <cite index="18-32,18-33">Temporary storage that occurs only as part of routing or delivery generally fits the conduit exception. Once storage extends beyond transient transmission, conduit status ends and business associate obligations apply.</cite>

> [!WARNING]
> **Legal nuance matters here.** The conduit exception is a fact-specific determination. Even if your API middleware uses a pure pass-through architecture, consult healthcare counsel before relying on conduit status to avoid a BAA. The safest path for most healthcare SaaS companies is to execute a BAA with any integration vendor that touches ePHI—and then use a pass-through architecture to minimize the operational and financial exposure under that agreement.

### Honest Trade-Offs of Pass-Through Architectures

Pass-through APIs are not magic. They come with real engineering constraints that you should evaluate honestly:

| Trade-off | Pass-Through | Sync & Cache |
|---|---|---|
| **Latency** | Depends on upstream API response time (can be slow) | Fast reads from local cache |
| **Availability** | If upstream API is down, your integration is down | Can serve from cache during outages |
| **Bulk queries** | Must paginate through upstream API in real time | Can query cached data with SQL-like flexibility |
| **Compliance surface** | Minimal - no ePHI at rest on vendor infrastructure | Significant - cached ePHI triggers full BA obligations |
| **Data freshness** | Always current | May be stale by minutes or hours |

For healthcare use cases where compliance is a hard requirement, the latency trade-off is usually acceptable. A 500ms slower API call is a minor engineering problem. A $7.4 million breach is not.

## How Truto Handles Rate Limits and Retries (Without Caching)

A common engineering objection to pass-through architectures is the management of rate limits. *"If you are not caching data, how do you handle rate limits from upstream APIs?"*

This is a critical question for healthcare data pipelines where you might be pulling records across dozens of connected accounts. Traditional sync-and-cache tools mask this problem. They absorb rate limit errors by placing your requests into a persistent queue, applying exponential backoff, and retrying the request later.

The problem? A persistent queue is a database. If an integration tool queues a POST request containing patient data to handle a retry, that ePHI is now stored at rest on their servers. The [zero data retention](https://truto.one/blog/what-does-zero-data-retention-mean-for-saas-integrations/) promise is broken, and a hidden data-in-flight vulnerability is created.

Truto takes a radically different, highly deterministic approach. **Truto does not retry, throttle, or apply backoff when an upstream API returns a rate-limit error (HTTP 429).**

The 429 response passes directly back to you, the caller. No request is silently queued. No retry loop runs on Truto's infrastructure. No ePHI sits in a buffer waiting to be reprocessed.

Instead, Truto normalizes the chaotic, provider-specific rate limit information into standard IETF headers. Upstream APIs express rate limits in wildly inconsistent ways—Salesforce uses `Sforce-Limit-Info`, HubSpot returns `X-HubSpot-RateLimit-*` headers, and some APIs bury rate limit data in response bodies. Truto normalizes all of these into consistent response headers:

*   `ratelimit-limit`: The maximum number of requests permitted in the current window.
*   `ratelimit-remaining`: The number of requests remaining before you hit the limit.
*   `ratelimit-reset`: The number of seconds until the rate limit window resets.

This normalization is based on the IETF RateLimit header fields draft, <cite index="31-3,31-4">which defines a set of standard HTTP header fields to enable rate limiting, including RateLimit-Policy for quota policies and RateLimit for the currently remaining quota.</cite>

Here is what reading those headers looks like in practice:

```typescript
// Assumes a sleep helper, e.g.:
// const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
const response = await fetch('https://api.truto.one/unified/employees', {
  headers: {
    'Authorization': 'Bearer YOUR_TOKEN',
    'X-Integrated-Account-ID': 'acct_abc123'
  }
});

// headers.get() returns null when a header is absent, so default defensively
const remaining = parseInt(response.headers.get('ratelimit-remaining') ?? '1', 10);
const resetInSeconds = parseInt(response.headers.get('ratelimit-reset') ?? '1', 10);

if (response.status === 429 || remaining === 0) {
  // You decide the backoff strategy
  await sleep(resetInSeconds * 1000);
  // Retry with your own logic inside your HIPAA-compliant environment
}
```

By passing these standardized headers back to your application, Truto empowers your engineering team to implement precise, deterministic backoff logic on your own infrastructure. You retain complete control over the data lifecycle. If a request needs to be queued, it gets queued in your secure, HIPAA-compliant environment, governed by your own internal policies, rather than leaking into an opaque third-party middleware queue.
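
A minimal caller-side implementation of that backoff logic might look like the following. The retry policy here (three attempts, respecting `ratelimit-reset` with a one-second floor) is an example choice for illustration, not a policy recommended by Truto's documentation.

```typescript
// Simple promise-based sleep helper
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retries only on HTTP 429, waiting out the window the upstream advertised
// via the normalized `ratelimit-reset` header. All queuing and retry state
// lives in your own environment, never in the middleware.
async function fetchWithBackoff(
  url: string,
  init?: Parameters<typeof fetch>[1],
  maxAttempts = 3
): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;

    // Default to a 1-second wait if the header is missing, with a safety floor
    const resetSeconds = parseInt(res.headers.get('ratelimit-reset') ?? '1', 10);
    if (attempt < maxAttempts) await sleep(Math.max(resetSeconds, 1) * 1000);
  }
  throw new Error(`Rate limited after ${maxAttempts} attempts`);
}
```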

For a deeper dive into architecting resilient systems around these headers, review our guide on [best practices for handling API rate limits](https://truto.one/blog/best-practices-for-handling-api-rate-limits-and-retries-across-multiple-third-party-apis/).

## The Proxy API: An Escape Hatch for Custom Endpoints

Another reality of enterprise integrations is that unified data models never cover 100% of a provider's API surface. You will inevitably encounter a customer who needs access to a highly specific, undocumented endpoint or a custom object unique to their NetSuite, Workday, or Epic deployment.

Sync-and-cache platforms struggle with this. If the data does not fit their rigid schema, their database cannot store it, and you cannot query it. You are forced to build a parallel, custom integration alongside the unified API.

Truto solves this through its Proxy API. Because the architecture is entirely pass-through, the platform provides direct, unmapped access to integration resources. The proxy layer handles the OAuth token refresh lifecycle proactively before expiry, injects the correct authentication headers, and routes your request directly to the provider. What you send is what the third-party API receives, and what it returns is what you get back.

This zero-code execution pipeline ensures that you can hit any endpoint, at any time, without waiting for a vendor to update their unified schema. It is a critical requirement for teams building deep, enterprise-grade integrations where flexibility is just as important as compliance.
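
In practice, a proxy call looks like an ordinary HTTP request with the connection identified in a header. The `/proxy/` path below is an assumption made for illustration and not Truto's documented route, though the `X-Integrated-Account-ID` header mirrors the unified-API example later in this guide.

```typescript
// Illustrative sketch only: the exact proxy path is hypothetical. The pattern
// is what matters - the request is forwarded verbatim to the provider, and the
// provider's raw, unmapped response comes straight back.
async function proxyCall(accountId: string, path: string, token: string): Promise<unknown> {
  const res = await fetch(`https://api.truto.one/proxy/${path}`, {
    headers: {
      Authorization: `Bearer ${token}`,        // your Truto API key
      'X-Integrated-Account-ID': accountId,    // which customer connection to route through
    },
  });
  return res.json(); // the provider's raw response, no schema mapping applied
}
```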

## Real-Time vs. Background Sync: Quick Legend

If you are looking for an integration platform with unified API support for your enterprise SaaS product, the single most important architectural question is: does the platform fetch data at request time, or sync it in the background?

These two models have fundamentally different compliance, freshness, and reliability profiles. Here is the distinction:

*   **Request-time (pass-through)**: Your app makes an API call, the integration platform forwards it to the upstream provider in real time, transforms the response in memory, and returns it. No data is stored. Every response is live.
*   **Sync-and-cache (background sync)**: The integration platform polls upstream APIs on a schedule (ranging from every few minutes to every 24 hours), stores normalized copies in its own database, and serves your queries from that cache.

Neither model is universally better. But in regulated industries like healthcare, the storage question is the compliance question. Data at rest on a third party triggers BAA obligations, sub-processor disclosures, and breach liability chains that pass-through architectures avoid entirely.

## Platform Comparison: Request-Time vs. Sync-and-Cache

When evaluating a unified API integration platform, knowing the underlying data architecture is non-negotiable. The table below classifies the major platforms based on their publicly documented behavior as of early 2026.

| Platform | Architecture | Stores Customer Data? | Enterprise Workflow Impact |
|---|---|---|---|
| **Truto** | Request-time (pass-through) | No | Real-time CRUD for employee offboarding and EHR writes; zero vendor-side data footprint for HIPAA reviews. |
| **Merge** | Sync-and-cache (background sync) | Yes - normalized copies in Merge's database | Fast cached reads for analytics dashboards; sync delays (daily to high-frequency depending on plan) mean offboarding actions are eventually consistent, not instant. |
| **Apideck** | Request-time (pass-through) | No | Real-time reads and writes across accounting, CRM, and HRIS categories; no cached data simplifies SOC 2 and HIPAA questionnaires. |
| **Nango** | Hybrid (proxy + sync engine) | Yes - synced records stored in Postgres | Proxy mode gives real-time access; sync mode caches data for RAG and reporting, requiring you to manage the compliance implications of stored records. |
| **Kombo** | Sync-and-cache (background sync) | Yes - mirrors data every ~3 hours | Deep HRIS/ATS models for European payroll; cached data means employee termination visibility lags until the next sync cycle completes. |
| **StackOne** | Request-time (no storage by default) | No (optional caching available) | Real-time proxy for AI agent workflows; optional caching for performance, but enabling it shifts compliance responsibility. |

> [!NOTE]
> **How to read this table.** "Stores Customer Data" refers to whether the platform persists API response payloads (your customers' actual records) on its own infrastructure as part of normal operation. All platforms store connection metadata like OAuth tokens. The distinction that matters for HIPAA is whether *ePHI payloads* land in a vendor-controlled database.

A few things stand out. <cite index="16-1,16-2">Merge's core architecture relies on background data synchronization, and Merge stores customer records as part of its sync architecture.</cite> <cite index="14-2">Sync frequency is the rate at which Merge initiates requests to fetch data from third-parties</cite>, and on the free or launch plan, that frequency defaults to daily. <cite index="24-16,24-17">Apideck does not store your data; API calls are processed in real-time and passed directly from the source to your app.</cite> <cite index="42-2">Kombo mirrors data in the source systems into a database at regular intervals.</cite> <cite index="51-15,51-16">StackOne proxies every request in real-time, and no customer data is persisted by default.</cite> <cite index="34-9,34-10">Nango stores cached records and synced data in Postgres</cite> when using its sync engine, though its proxy mode operates without storage.

For teams selling into healthcare, the "Stores Customer Data" column is the first filter. If the answer is "yes," you are adding a sub-processor that holds ePHI - and the BAA, breach notification, and audit requirements follow.

## Implications for Enterprise Workflows

The architecture your integration platform uses directly shapes what you can and cannot do in production enterprise workflows. Here are three scenarios where the difference is concrete:

**1. Real-time employee offboarding**

When an employee is terminated, downstream systems - identity providers, device management tools, benefits platforms - need to reflect that change immediately. With a request-time architecture, your app writes the termination to the HRIS through the unified API, and the change propagates instantly. With a sync-and-cache platform, the termination is recorded in the source system, but your integration layer won't reflect it until the next sync cycle completes. That gap - minutes to hours - is a security exposure window where the terminated employee may still have active access.
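
As a sketch, a request-time termination write is a single HTTP call. The endpoint path and `employment_status` field below are hypothetical, not a documented unified-API contract, but the shape shows why there is no sync window to wait for: the write lands in the source HRIS at request time.

```typescript
// Hypothetical unified-API write: path and payload fields are illustrative.
// The change reaches the source system immediately - no background sync cycle.
async function terminateEmployee(employeeId: string, token: string, accountId: string): Promise<void> {
  const res = await fetch(`https://api.truto.one/unified/employees/${employeeId}`, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      'X-Integrated-Account-ID': accountId,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ employment_status: 'TERMINATED' }),
  });
  if (!res.ok) throw new Error(`Offboarding write failed: ${res.status}`);
}
```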

**2. Prior authorization checks against EHR data**

A digital health product checking insurance eligibility or prior authorization status needs current data. If your integration platform is serving a cached snapshot that is three hours old, you risk acting on stale coverage information. Pass-through architectures return whatever the EHR system has right now, which is the only acceptable source of truth for clinical workflows.

**3. Aggregate analytics and reporting**

This is where sync-and-cache architectures genuinely shine. If you need to run cross-tenant analytics, generate compliance reports across hundreds of HRIS connections, or build dashboards that query historical data, a cached database gives you SQL-like flexibility that no pass-through API can match. You cannot run a GROUP BY across a live upstream API.

The practical answer for most healthcare SaaS teams is to recognize which workflows are latency-sensitive and compliance-critical (use pass-through) versus which are analytical and tolerant of eventual consistency (use sync or build your own data warehouse). Picking a unified API integration platform that supports both patterns - real-time proxy for sensitive operations and a raw proxy for direct API access - gives you the most flexibility without doubling your vendor count.

## Evaluating API Middleware for Enterprise Compliance

When evaluating integration tools for healthcare or enterprise SaaS, product managers and engineering leaders must look past the marketing copy. "Secure" and "compliant" are meaningless adjectives unless backed by architectural realities. To determine [which unified API does not store customer data](https://truto.one/blog/which-unified-api-does-not-store-customer-data-in-2026/), use this technical checklist to evaluate any [integration tool for enterprise compliance](https://truto.one/blog/which-integration-tools-are-best-for-enterprise-compliance-soc2-hipaa/):

### Architecture Verification

1.  **Verify [Zero Data Retention](https://truto.one/blog/what-does-zero-data-retention-mean-for-saas-integrations/)**: Ask explicitly, "Do you store our API request or response payloads on disk, even temporarily in a queue?" Ask for architectural documentation regarding webhook payloads, request/response logging, and sync job outputs. If the answer involves a database, you have a compliance liability.
2.  **Where Does Data Transformation Happen?**: In-memory transformations (using expression languages like JSONata) leave no persistent trace. ETL pipelines that write to intermediate storage do.
3.  **Check BAA Willingness**: Will the vendor sign a Business Associate Agreement? Even with a pass-through architecture, the safest approach is to have a BAA in place. If they use a sync-and-cache model and refuse a BAA, you cannot legally use them for healthcare data.

### Encryption and Transport Security

4.  **TLS 1.2+ for All Data in Transit**: This is table stakes, not a differentiator, but still requires verification.
5.  **No Payload Logging in Production**: Verify that the vendor's observability stack logs metadata (status codes, latency, resource types) without capturing request or response bodies containing ePHI.
6.  **Proactive Authentication Management**: The platform should refresh OAuth tokens proactively before expiry to avoid failed requests that might trigger retry queues.
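
The payload-free logging requirement in item 5 is easy to verify in code review: a compliant log record contains only metadata. A minimal sketch of what such a record might look like (the field names are assumptions, not any vendor's schema):

```typescript
// A log entry that captures operational metadata without ever touching
// request or response bodies, headers, or query-string parameters.
interface RequestLog {
  method: string;
  path: string;
  status: number;
  latencyMs: number;
  resourceType?: string;
}

function buildLogEntry(
  method: string,
  url: string,
  status: number,
  startedAt: number,
  resourceType?: string
): RequestLog {
  return {
    method,
    path: new URL(url).pathname, // strip the query string, which may carry identifiers
    status,
    latencyMs: Date.now() - startedAt,
    resourceType,
    // Deliberately absent: headers, request body, response body.
  };
}
```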

### Transparency in Error Handling

7.  **Audit the Retry Mechanism**: Ask how they handle HTTP 429 errors. If they claim to automatically retry failed requests, ask where the payload is stored between retries. Automatic retries require durable state. Rate limit errors should be visible to you, not absorbed.
8.  **Contextual Upstream Errors**: Upstream errors should pass through with context. A generic "500 Internal Server Error" from your integration layer masks the actual problem and makes debugging healthcare data pipelines much harder.

### Sub-Processor and Data Residency

9.  **Sub-Processor Chain**: How many sub-processors touch ePHI? Fewer is better. A pass-through architecture that doesn't store data inherently limits the sub-processor chain.
10. **Data Processing Region**: For healthcare organizations with data residency requirements, the ability to control where API requests are processed (not just stored) matters immensely.

## The Penalty Math That Should Drive Your Architecture Decision

<cite index="42-3">HIPAA violation penalties in 2026 include civil monetary penalties ranging from $145 to $2,190,294 per violation, depending on the level of culpability.</cite> That is per violation, not per incident. A single breach affecting thousands of patient records can trigger penalties for each individual's data that was improperly handled.

<cite index="42-18">OCR enforcement actions have been increasing, ending 2025 with 21 settlements and civil monetary penalties.</cite> The trend is clear: HHS is not slowing down its scrutiny of digital health platforms and their vendors.

Consider the math from the integration vendor's perspective. If your sync-and-cache middleware stores normalized copies of ePHI across hundreds of your customers' connected accounts, a single breach on their infrastructure exposes data from every one of those connections. Your exposure is not isolated to your own security posture—it is chained to theirs.

A pass-through architecture doesn't eliminate HIPAA risk. Nothing does. But it removes one of the most dangerous variables from the equation: a third-party database full of your customers' healthcare data that you don't control, can't audit, and may not even know exists until the breach notification arrives.

## Choosing the Right Architecture for Your Healthcare SaaS

The decision between pass-through and sync-and-cache isn't purely ideological. It is driven by your specific use case.

**Choose pass-through when:**
*   You sell to healthcare, finance, or government clients with strict data handling requirements.
*   Your integration needs are primarily real-time reads and writes (CRUD operations).
*   Your customers' InfoSec teams have flagged sub-processor data storage as a deal blocker.
*   You are building [HIPAA-compliant integrations for healthcare SaaS](https://truto.one/blog/how-to-build-hipaa-compliant-integrations-for-healthcare-saas/).

**Consider sync-and-cache when:**
*   You need offline analytics or aggregate queries across connected accounts.
*   Your use case requires historical data that upstream APIs do not retain.
*   You are operating in a non-regulated domain where compliance isn't a strict gate.

For many healthcare SaaS teams, the practical answer is a hybrid: use a pass-through API for real-time CRUD operations on sensitive data, and sync only the non-sensitive metadata you actually need for analytics. Truto supports both patterns—the unified API for pass-through operations and a Proxy API for direct, unmapped access when unified models don't cover specific EHR or HRIS endpoints.

## Strategic Wrap-Up

The architecture you choose for your integration layer fundamentally dictates your go-to-market velocity in the enterprise segment.

Sync-and-cache unified APIs optimize for developer convenience at the expense of security. They force you to trust a third-party database with your most sensitive customer data, triggering brutal InfoSec reviews and expanding your compliance footprint.

Real-time pass-through architectures optimize for trust. By processing data entirely in transit, normalizing schemas in memory, and passing rate limit controls back to the caller, you eliminate shadow data. You keep your compliance boundaries tight. You give your sales team an integration story that enterprise security teams actually want to hear.

Building integrations is hard enough. Do not let your middleware vendor become the reason you lose your next major deal. Pick the architecture that lets you answer Question #47 with confidence.

> Stop losing enterprise deals to integration compliance blockers. Book a technical deep dive with our engineering team to see how Truto's zero-storage pass-through architecture handles ePHI securely.
>
> [Talk to us](https://cal.com/truto/partner-with-truto)
