How to Build ERP Integrations (NetSuite, SAP) Without Storing Customer Data
Learn how to build ERP integrations with NetSuite, SAP, and D365 using a zero data retention architecture that passes enterprise InfoSec reviews.
B2B SaaS teams face a brutal reality when moving upmarket. You build an AI agent or a financial workflow tool that needs to read and write data to enterprise ERPs like NetSuite, SAP, or Microsoft Dynamics 365. The engineering team ships the integration. The product looks great in demos. Then, the deal dies in procurement.
If you are trying to integrate your B2B SaaS product with enterprise ERPs, and your integration middleware writes any customer financial data to a database, you are building a compliance liability. Your InfoSec review will fail, your deal will stall, and your competitor who figured out pass-through architecture will close the contract instead. If you want to know how to build ERP integrations securely, the answer requires abandoning the traditional sync-and-cache model entirely in favor of a pass-through, zero data retention architecture.
This guide breaks down exactly why caching ERP data is an architectural mistake that kills enterprise deals, the technical realities of integrating with complex systems like NetSuite and SAP, and how to build a zero data retention integration layer that normalizes data on the fly and passes enterprise procurement without drama.
The Enterprise Procurement Wall: Why Caching ERP Data Kills Deals
Enterprise deals die in procurement, not in product demos. And the number one thing that kills them is your data storage architecture.
When your sales team pushes a six-figure contract to the final stage, the buyer's InfoSec team sends over a security questionnaire. Enterprise procurement teams use data residency and security questionnaires as a primary filter for software vendors. One of the first questions you will encounter is: "Does the vendor store, cache, or persist any customer data? If so, describe the data types, retention periods, and storage locations."
If your integration layer syncs NetSuite general ledger entries, SAP purchase orders, or Dynamics 365 invoice records into your own managed database just so your application can query it faster, you now have to answer "yes" to that question. That single answer triggers a massive compliance review and a cascade of follow-up requirements: SOC 2 Type II audit evidence, data residency documentation, encryption-at-rest certificates, data processing agreements, sub-processor lists, and a retention policy that satisfies the buyer's legal team.
The financial stakes behind these requirements are real. IBM's Cost of a Data Breach Report 2024 found the global average breach hit a record $4.88 million - a 10% increase from 2023 and the largest spike since the pandemic. Lost business and post-breach customer and third-party response costs drove the year-over-year cost spike. Because of this, security teams are hyper-sensitive to vendor data storage. When they look at your architecture diagram, they are tracing the flow of their sensitive data. They are not going to approve a vendor that stores their chart of accounts on infrastructure they cannot control.
According to Gartner's 2024 Global Software Buying Trends report, the ability to support the integration process is the number one sales-related factor in driving a software decision. But that integration capability becomes a massive liability the moment it introduces data storage. The vendors who win are the ones who can demonstrate connectivity without custody.
The procurement trap: Many integration platforms - including embedded iPaaS tools and some legacy unified APIs - sync and cache third-party data to normalize it. This works fine for SMB deals. It destroys enterprise deals where financial data residency is non-negotiable. You are suddenly responsible for the data residency requirements of their financial records.
To bypass this procurement wall, you must adopt a Zero Data Retention (ZDR) architecture. This means your integration layer processes third-party API payloads entirely in memory, never writing them to persistent storage.
The Technical Reality of ERP Integrations (NetSuite, SAP, D365)
Building an integration with a modern SaaS tool like a CRM is relatively straightforward. If you have built integrations with Salesforce or HubSpot, you might think you understand the complexity. You don't. ERP systems are an entirely different category of engineering pain.
Data collected over the years by Panorama Consulting Group shows that 50% to 64% of ERP implementations fail the first time around, with integration issues cited as a significant challenge. The schema variability alone makes traditional sync-and-cache architectures fragile.
Let's look at Oracle NetSuite, SAP, and Dynamics 365 to understand this complexity.
NetSuite: Three API Surfaces and a SOAP Sunset
NetSuite is one of the most complex enterprise integrations you will encounter. Unlike simpler REST APIs, NetSuite requires orchestrating across three distinct API surfaces, each with its own capabilities and limitations. If you attempt to cache this data, you will spend all your engineering cycles managing state synchronization across these different endpoints.
- SuiteTalk REST API and SuiteQL: The REST record API handles basic CRUD operations but returns a single record at a time with limited filtering. For performant read operations, you must use SuiteQL - NetSuite's SQL-like query language. SuiteQL runs through the REST API as POST requests but behaves like a database query layer, enabling JOINs across related tables (e.g., joining vendor entity addresses, subsidiary relationships, and currency tables in a single call). However, SuiteQL is read-only. Writes must still use the REST record API. Your integration layer must seamlessly route read operations to the SuiteQL endpoint and write operations to the REST endpoint.
- SOAP API Fallbacks: Oracle NetSuite is actively deprecating its legacy SOAP endpoints. Starting with the 2026.1 NetSuite release, all newly built integrations should use REST web services with OAuth 2.0. With the 2027.1 release, it will no longer be possible to build new integrations using SOAP web services. From the 2027.2 release, only the latest SOAP endpoint will be supported (older endpoints remain available but unsupported), ahead of the final removal of SOAP web services in the 2028.2 release. However, certain data structures - like detailed sales tax item configurations - are not fully exposed in SuiteQL. Fetching these records still requires the legacy SOAP getList operation. Authentication for SOAP uses Token-Based Authentication (TBA) formatted as a SOAP header with an HMAC-SHA256 signature, completely different from the REST OAuth 2.0 flow.
- RESTlet (SuiteScript) Deployments: Certain capabilities are impossible through REST or SuiteQL alone. For example, generating a PDF of a Purchase Order or fetching dynamic form field metadata (including select options and mandatory flags that change based on user context) requires deploying a custom SuiteScript Suitelet into the customer's NetSuite account.
If you cache data, you have to build infrastructure to poll these three distinct surfaces, reconcile the data, handle the differing authentication schemes, and store the result. It is a maintenance nightmare. For a deeper technical walkthrough, see our guide on architecting a reliable NetSuite API integration.
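The read/write split across NetSuite's surfaces can be sketched as a simple router. This is a minimal illustration, not production code: the function and return-shape are hypothetical, though the endpoint paths follow NetSuite's documented SuiteQL and record API conventions.

```python
# Hedged sketch: route reads to SuiteQL (read-only, supports JOINs) and
# writes to the SuiteTalk REST record API. Return shape is illustrative.

def route_netsuite_request(operation, record_type, payload=None):
    """Route a unified operation to the appropriate NetSuite API surface."""
    if operation == "read":
        # SuiteQL runs as a POST against the query endpoint.
        return {
            "method": "POST",
            "path": "/services/rest/query/v1/suiteql",
            "body": {"q": f"SELECT * FROM {record_type}"},
        }
    # Writes must use the REST record API; SuiteQL cannot mutate data.
    return {
        "method": "POST" if operation == "create" else "PATCH",
        "path": f"/services/rest/record/v1/{record_type}",
        "body": payload or {},
    }
```

A real router also handles the SOAP and Suitelet fallbacks described above, but the core rule - reads via SuiteQL, writes via the record API - stays the same.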
SAP and D365: Schema Complexity at Scale
SAP exposes data through OData services with deeply nested entity structures, custom function imports, and customer-specific extensions that vary per implementation. Dynamics 365 uses a Web API built on OData v4 with its own set of quirks - batch request limits, change tracking tokens, and entity metadata that can shift between versions.
The common thread across NetSuite, SAP, and D365 is that every customer's ERP instance is wildly different. OneWorld vs. standard edition in NetSuite. Multi-currency vs. single-currency. Custom fields, custom forms, custom record types. An integration that works perfectly for one customer's NetSuite instance might break on another's because they have different subsidiary structures, different chart of accounts configurations, or different custom validation rules on their purchase order forms. You end up maintaining customer-specific transformation logic that scales linearly with your customer count.
What is Zero Data Retention Architecture?
As we've detailed in our breakdown of what zero data retention means for SaaS integrations, Zero Data Retention (ZDR) architecture is the design pattern where your integration layer processes third-party API payloads entirely in memory, never writing them to persistent storage. The middleware acts as a stateless proxy: it authenticates, transforms, and forwards requests between your application and the upstream ERP, then discards the payload the moment the response is delivered.
No database writes. No log files containing financial records. No cached copies of your customer's general ledger sitting on infrastructure you have to secure and audit.
Here is what the flow looks like in a true pass-through system:
sequenceDiagram
participant App as Your Application
participant Proxy as Pass-Through Layer
participant ERP as NetSuite / SAP / D365
App->>Proxy: GET /unified/accounting/invoices
Proxy->>Proxy: Authenticate (OAuth token refresh)<br>Transform query params<br>Resolve ERP-specific endpoint
Proxy->>ERP: Native API call (SuiteQL / OData / REST)
ERP-->>Proxy: Raw ERP response
Proxy->>Proxy: Map response to<br>unified schema (in memory)
Proxy-->>App: Normalized JSON response
Note over Proxy: No data persisted.<br>Payload discarded.

The memory lifecycle works in eight distinct steps:
- Your application sends a unified request (e.g., GET /unified/accounting/contacts).
- The proxy layer loads the customer's OAuth credentials into memory (decrypting them only for the duration of the call).
- A declarative mapping translates your unified request into the ERP-specific format (e.g., a SuiteQL query or OData parameters).
- The proxy executes the HTTP request to the ERP.
- The ERP returns the raw payload to the proxy.
- A response mapping transforms the raw payload back into your unified schema in memory.
- The proxy delivers the normalized JSON to your application.
- The memory is cleared. The payload is discarded. No database writes occur.
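The eight steps above can be sketched as a single stateless handler. The helper callables (load_credentials, translate, execute, map_response) are hypothetical stand-ins for the proxy's internals; the point is that every intermediate value lives only in local scope and is discarded when the function returns.

```python
# Hedged sketch of the in-memory request lifecycle. Nothing here writes
# to disk or a database; all state is function-local and garbage-collected.

def handle_unified_request(unified_request, load_credentials, translate,
                           execute, map_response):
    """Process one unified request entirely in memory."""
    token = load_credentials(unified_request["account_id"])  # step 2: decrypt creds for this call only
    native_request = translate(unified_request, token)       # step 3: unified -> ERP-specific format
    raw_payload = execute(native_request)                    # steps 4-5: HTTP round trip to the ERP
    normalized = map_response(raw_payload)                   # steps 6-7: in-memory schema mapping
    # step 8: token, native_request, and raw_payload go out of scope here;
    # no database write ever occurs.
    return normalized
```

Because the handler is a pure pipeline over injected functions, it holds no state between requests beyond what the credential store provides.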
The key architectural properties of this system include:
- Stateless request processing: Each request is self-contained. The proxy holds no state between requests beyond the OAuth credentials needed to authenticate.
- In-memory transformation: Field mapping, schema normalization, and data enrichment all happen in the request lifecycle. Nothing touches disk.
- Credential isolation: OAuth tokens and API keys are encrypted at rest and decrypted only for the duration of the API call. The proxy refreshes OAuth tokens proactively before they expire, so your application never handles raw credentials.
- Pass-through error propagation: When the upstream ERP returns an error (including rate limits), that error passes through to your application with normalized metadata. Your app owns the retry logic.
This architecture completely bypasses enterprise InfoSec data residency objections because you are not storing their data. You are simply providing a secure, real-time pipe. You lose the ability to query historical data locally, and you inherit the upstream API's latency, but you gain a compliance posture that enterprise procurement teams actually approve. For a deeper look at why this matters, see our breakdown on compliance-strict SaaS environments.
Normalizing Complex ERP Schemas Without a Database
The traditional argument for caching ERP data is that you need a persistent layer to normalize it. NetSuite calls a vendor's tax ID vatregnumber. SAP calls it TaxNumber1. Dynamics 365 calls it accountnumber. You cannot expose three different field names to your application, so the legacy approach is to store the data in your own database schema and normalize it there.
But that argument does not hold up architecturally. If you cannot store the data in a database to normalize it, how do you translate between your application's format and the ERP's format? You normalize in real time using declarative mapping expressions evaluated during the request lifecycle.
Instead of writing integration-specific code (if provider == 'netsuite'), you define integration behavior entirely as data. You store JSON configuration blobs that describe how to talk to the API, and use a functional transformation language like JSONata to describe how to translate the data.
JSONata is a Turing-complete expression language purpose-built for reshaping JSON objects. It supports conditionals, string manipulation, array transforms, and custom functions. Because expressions are pure functions - they transform input to output without modifying state - they are perfect for in-memory processing. At runtime, the engine evaluates the expression against the raw API response. The output is a normalized object. The input is discarded.
Here is a concrete example. A JSONata expression mapping a complex NetSuite response to a unified contact schema on the fly:
response.{
"id": $string(id),
"first_name": firstname,
"last_name": lastname,
"name": firstname & ' ' & lastname,
"email": email,
"phone": phone,
"company_name": companyname,
"tax_number": vatregnumber,
"status": isinactive = "F" ? "active" : "inactive",
"currency": currency.refName,
"created_at": $fromMillis(datecreated_unix * 1000)
}

And the equivalent mapping for a different ERP (like Dynamics 365) that uses completely different field names:
response.{
"id": $string(Id),
"first_name": FirstName,
"last_name": LastName,
"name": FirstName & ' ' & LastName,
"email": EmailAddress,
"phone": Telephone1,
"company_name": CompanyName,
"tax_number": TaxNumber,
"status": StatusCode = 0 ? "active" : "inactive",
"currency": TransactionCurrencyId,
"created_at": CreatedOn
}

Both expressions produce the exact same output shape. Neither requires storing the raw response. The mapping expression is the source of truth for how each ERP's data translates to your common model. The transformation engine evaluates this expression against the raw ERP payload as it streams through the proxy. The function has no idea what integration it is talking to; it simply evaluates the expression and returns the result.
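To make the "engine knows nothing about the integration" point concrete, here is a toy transformation engine in the spirit of the JSONata mappings above. Real systems evaluate JSONata expressions; as a simplifying assumption, each unified field here is a small pure function over the raw payload.

```python
# Toy in-memory transformation engine. Each mapping is declarative data;
# the transform function is generic and integration-agnostic.

NETSUITE_CONTACT_MAPPING = {
    "id": lambda r: str(r["id"]),
    "name": lambda r: f"{r['firstname']} {r['lastname']}",
    "tax_number": lambda r: r.get("vatregnumber"),
    "status": lambda r: "active" if r.get("isinactive") == "F" else "inactive",
}

D365_CONTACT_MAPPING = {
    "id": lambda r: str(r["Id"]),
    "name": lambda r: f"{r['FirstName']} {r['LastName']}",
    "tax_number": lambda r: r.get("TaxNumber"),
    "status": lambda r: "active" if r.get("StatusCode") == 0 else "inactive",
}

def transform(raw, mapping):
    """Evaluate a mapping against a raw payload; the input is then discarded."""
    return {field: fn(raw) for field, fn in mapping.items()}
```

Two wildly different raw payloads produce the same unified shape, and nothing about either ERP leaks into the engine itself.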
The 3-Level Override Hierarchy
As mentioned, enterprise ERP instances differ wildly. A standard NetSuite instance will have dozens of custom fields, custom objects, and unique validation rules. A rigid unified API will break when it encounters these customizations.
To handle this per-customer variability without writing custom code for every enterprise customer, a zero-storage architecture must implement a declarative override hierarchy:
| Level | Scope | Example |
|---|---|---|
| Platform Base | All customers on a given ERP | The default JSONata mapping that works for standard NetSuite vendor configurations. |
| Environment Override | One customer's deployment environment | Customizations applied to a specific region (e.g., adding custom_field_123 to the response for all EU customers). |
| Account Override | One specific connected account | Per-customer mapping overrides. If one enterprise customer's SAP instance uses a highly specific custom object for purchase orders, you apply an override specifically to their connected account. |
This layered override hierarchy means you handle enterprise edge cases through configuration, not code. A customer can add their own custom fields to the unified response without your engineering team changing any backend code or running a database migration.
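The hierarchy itself reduces to an ordered merge of configuration layers. A minimal sketch, assuming mappings are flat dicts of unified-field to source-field (real configs would be richer, and custbody_tax_id is a hypothetical custom field):

```python
# Hedged sketch of the three-level override hierarchy. Later layers win:
# platform base < environment override < connected-account override.

def resolve_mapping(platform_base, environment_override=None, account_override=None):
    """Merge declarative mapping config without mutating any input layer."""
    resolved = dict(platform_base)
    for layer in (environment_override, account_override):
        if layer:
            resolved.update(layer)  # customer-specific keys shadow the defaults
    return resolved
```

Because resolution is pure data-on-data, adding a per-customer override is a configuration change, never a code deployment or database migration.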
Pro Tip for PMs: When evaluating integration tools, ask how they handle custom fields. If the answer involves "syncing custom schemas to our database," you will face massive latency and compliance hurdles. Demand in-memory transformation.
Handling Rate Limits and Pagination on the Fly
We must practice radical honesty about the trade-offs of pass-through architecture. When you do not cache data, you hit upstream APIs directly. You inherit their rate limits, their pagination behaviors, their latency, and their downtime. There is no local cache to absorb burst traffic.
The Reality of ERP Rate Limits
ERPs strictly enforce rate limits to protect their infrastructure. When an upstream API like NetSuite or SAP returns an HTTP 429 Too Many Requests error, a true pass-through proxy does not absorb or retry that error.
The proxy passes that 429 error directly back to the caller.
However, dealing with dozens of different rate limit formats across different ERPs is a nightmare for your engineering team. To solve this, the integration layer must normalize upstream rate limit information into standardized headers per the IETF specification:
- ratelimit-limit: The maximum number of requests permitted in the current window.
- ratelimit-remaining: The number of requests remaining in the current window.
- ratelimit-reset: The time at which the current rate limit window resets.
Your application is responsible for reading these standardized headers and implementing exponential backoff.
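On the application side, that backoff loop is short. A minimal sketch, where send is a hypothetical callable returning (status_code, headers, body) from the pass-through proxy:

```python
import random
import time

# Caller-owned retry sketch: the proxy surfaces standardized ratelimit-*
# headers and passes 429s through untouched; the application decides when
# and whether to retry.

def request_with_backoff(send, max_attempts=5):
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return body
        # Prefer the server's reset hint; fall back to exponential backoff.
        delay = float(headers.get("ratelimit-reset", 2 ** attempt))
        time.sleep(delay + random.uniform(0, 0.1))  # jitter avoids thundering herds
    raise RuntimeError("rate limited after max retries")
```

Because the loop lives in your code, you can skip it entirely for low-priority background jobs or tighten it for latency-sensitive user requests.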
Why the caller should own retry logic: Your application knows its own traffic patterns, priority queues, and acceptable latency thresholds. A middleware layer that silently retries on your behalf introduces unpredictable latency and makes it impossible to implement intelligent backoff strategies like prioritizing real-time user requests over background sync jobs. This ensures your application remains in control of its own scheduling, rather than relying on a black-box middleware queue.
Building rate limit handling into your application layer is real work. But the alternative - a caching middleware that absorbs rate limits by serving stale data - creates a different set of problems: stale reads, cache invalidation complexity, and a persistent data store that procurement teams will flag.
Declarative Pagination
Different ERPs paginate data differently. SAP OData uses $skip and $top parameters with server-driven paging via @odata.nextLink. NetSuite SuiteQL uses offset and limit. Dynamics 365 uses a similar OData pattern but with different default page sizes and continuation tokens. Others use cursor-based pagination or link headers.
Your pass-through layer must abstract this away. A good pass-through proxy normalizes pagination into a consistent interface - cursor-based by default. The integration configuration defines the pagination strategy declaratively. When your application requests GET /unified/accounting/invoices?limit=100, the proxy reads the configuration, translates the limit into the ERP's specific pagination format, executes the request, and extracts the next-page indicator from the response, translating it back into a standard next_cursor string. Your application only ever deals with this normalized cursor, regardless of how the underlying ERP handles pagination.
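A sketch of what that declarative translation looks like. The strategy table and the opaque base64 cursor encoding are illustrative assumptions, not any vendor's actual contract:

```python
import base64
import json

# Hedged sketch: each provider's pagination style is declared as data, and
# the proxy round-trips an opaque cursor so the caller never sees offsets.

PAGINATION_STRATEGIES = {
    "netsuite": lambda limit, offset: {"limit": limit, "offset": offset},
    "sap_odata": lambda limit, offset: {"$top": limit, "$skip": offset},
}

def _decode_offset(cursor):
    if not cursor:
        return 0
    return json.loads(base64.urlsafe_b64decode(cursor))["offset"]

def to_native_params(provider, limit, cursor=None):
    """Translate a unified cursor into the ERP's native pagination params."""
    return PAGINATION_STRATEGIES[provider](limit, _decode_offset(cursor))

def next_cursor(limit, cursor=None):
    """Produce the opaque cursor the application passes back for the next page."""
    payload = json.dumps({"offset": _decode_offset(cursor) + limit}).encode()
    return base64.urlsafe_b64encode(payload).decode()
```

The application only ever sees the cursor string; whether the ERP underneath speaks offset/limit, $skip/$top, or continuation tokens is a configuration detail.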
Building a Pass-Through Integration Strategy
If you are a Product Manager or Engineering Leader tasked with shipping enterprise ERP integrations, here is how you build a strategy that unblocks sales and withstands InfoSec scrutiny.
Step 1: Audit Your Data Flow
Map every point where third-party ERP data touches your infrastructure. If any of those points involve a database write, a cache layer, a log file with financial data, or a message queue that persists payloads - that is your compliance exposure. Be ruthless. Even "temporary" caches are discoverable in a security audit.
Step 2: Decouple Integration Logic from Business Logic
Never write integration-specific code in your core application repository. Your application should only speak one language: your unified data model. Offload the complexity of OAuth token refreshes, HMAC signature generation, pagination normalization, and payload translation to a dedicated integration layer.
Step 3: Standardize on Declarative Configurations
Adding a new ERP integration should be a data operation, not a code operation. Use JSON configurations to define how to talk to the API (base URL, auth scheme, endpoints) and JSONata expressions to define how to translate the data. This allows you to ship new connectors without waiting for a backend deployment cycle and handle per-customer schema customization gracefully.
Step 4: Implement Proactive Token Refresh Mechanisms
Enterprise auth is difficult. Access tokens expire frequently. Your integration layer must utilize durable lock mechanisms to ensure that when a token expires, only one refresh operation runs per connected account at a time. Concurrent callers should simply await the in-progress operation, preventing race conditions that lead to revoked tokens. The proxy must refresh OAuth tokens proactively before they expire, so your application never handles raw credentials.
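The single-flight refresh pattern can be sketched in a few lines. In production the lock would be a durable, distributed lock (for example, in Redis) so it holds across processes; a threading.Lock stands in here, and refresh_fn is a hypothetical callable that hits the OAuth token endpoint.

```python
import threading
import time

# Hedged sketch of proactive, single-flight token refresh. Only one refresh
# runs per account; concurrent callers block on the lock and reuse the result.

class TokenCache:
    def __init__(self, refresh_fn, skew_seconds=60.0):
        self._refresh_fn = refresh_fn   # calls the OAuth token endpoint, returns (token, ttl)
        self._lock = threading.Lock()
        self._token = None
        self._expires_at = 0.0
        self._skew = skew_seconds       # refresh this many seconds BEFORE expiry

    def get(self):
        with self._lock:  # concurrent callers await the in-progress refresh
            if self._token is None or time.time() >= self._expires_at - self._skew:
                self._token, ttl = self._refresh_fn()
                self._expires_at = time.time() + ttl
            return self._token
```

The proactive skew window matters: refreshing before expiry means no request ever fails mid-flight with a stale token, and the lock prevents the duplicate-refresh races that cause some providers to revoke tokens.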
Step 5: Evaluate Build vs. Buy and Architecture Options
Building a pass-through ERP integration layer in-house means you are signing up for multi-surface API orchestration (REST + SuiteQL + SuiteScript), error expression evaluation for non-standard formats, and OAuth lifecycle management. That is 6-12 months of senior engineering time per ERP, and you will maintain it forever. The true cost of building integrations in-house is almost always higher than teams estimate.
You have three real options:
- Build custom connectors: Maximum control, maximum maintenance burden.
- Embedded iPaaS: Visual workflow builders that often require data to pass through persistent infrastructure. Watch for data residency implications.
- Pass-through unified API: A proxy layer that normalizes ERP data in real time without storage.
The right strategy depends on where you are in your upmarket journey. If you need an integration tool that doesn't store customer data to unblock procurement, a pass-through unified API gives you the compliance posture and coverage across multiple ERPs without building dedicated integration teams.
Step 6: Validate the Compliance Story
Before committing to any approach, pressure-test it against a real enterprise security questionnaire. Can you truthfully answer "No" to "Does the vendor store customer financial data"? Can you provide architecture diagrams showing data flows without persistent storage? Can you demonstrate that credentials are encrypted at rest? If not, you are going to hit the same procurement wall.
How Truto Powers Zero Data Retention ERP Integrations
Truto's unified API operates as a true pass-through proxy for ERP integrations, guaranteeing zero data retention by design. When your application calls GET /unified/accounting/invoices, the platform authenticates against the customer's ERP, translates the request into the native API format, evaluates the response mapping in memory, and returns a normalized JSON response. No ERP data is written to the platform's infrastructure.
For NetSuite specifically, the platform orchestrates seamlessly across all three API surfaces - SuiteTalk REST for CRUD, SuiteQL for reads that need JOINs and complex filtering, and a deployed Suitelet for capabilities like PDF generation and dynamic field metadata. Queries dynamically adapt to each customer's NetSuite edition - including or excluding JOINs for currency and subsidiary tables based on feature detection at connection time.
The override system supports per-customer schema customization through the three-level configuration hierarchy, so when one enterprise customer's NetSuite instance has custom fields on their purchase orders, you add a mapping override for that account without touching the integration code or affecting any other customer.
All of this runs with zero integration-specific code in the runtime. The mapping expressions, API configurations, and authentication flows are stored as declarative configuration, keeping your compliance story pristine.
Next Steps for Engineering Teams
Moving upmarket requires a fundamental shift in how you handle third-party data. Caching ERP records is a relic of SMB integration strategies. If enterprise deals are stalling in your pipeline because of InfoSec objections to your integration architecture, the fix is structural, not incremental. Switching from a sync-and-cache model to a pass-through model is the single highest-leverage change you can make to unblock procurement.
Start by auditing where ERP data touches your infrastructure today. Identify which deals are blocked by data residency concerns. By adopting a zero data retention architecture powered by declarative JSONata mappings, you can ship deep, read/write integrations with NetSuite, SAP, and Dynamics 365 in days instead of months.
The companies closing enterprise ERP deals right now are not the ones with the most features. They are the ones whose architecture passes the security questionnaire. You keep your codebase clean, your compliance risk low, and your sales team unblocked.
FAQ
- What is Zero Data Retention architecture?
- It is a design pattern where an integration layer acts as a stateless proxy, processing third-party API payloads entirely in memory without writing them to persistent storage. The middleware discards payloads immediately after delivering the response.
- Why is caching ERP data a compliance risk?
- Storing customer financial data like general ledger codes or invoices makes you responsible for the data residency and security of those records. This triggers strict enterprise procurement reviews, SOC 2 audits, and data processing agreements.
- Why is NetSuite deprecating SOAP web services?
- Oracle is phasing out SOAP in favor of REST and SuiteQL. Starting with the 2026.1 release, newly built integrations should use REST web services with OAuth 2.0, and from the 2027.1 release new SOAP integrations can no longer be created. SOAP will be fully removed with the 2028.2 release, forcing teams to migrate to REST and OAuth 2.0.
- How do you handle API rate limits without a database?
- A true pass-through proxy does not absorb or retry HTTP 429 rate limit errors. It normalizes the upstream API's response into standardized IETF headers (ratelimit-limit, ratelimit-remaining) and passes the error to your application, which owns the retry and backoff logic.
- How do you normalize data without storing it?
- You use declarative mapping languages like JSONata to translate between your application's unified schema and the ERP's native format on the fly. This in-memory transformation requires no persistent database storage.