Per-Customer Data Model Customization Without Code: The 3-Level JSONata Architecture
Learn how a 3-level declarative override architecture (platform, environment, account) lets B2B SaaS teams handle infinite enterprise schema variations without code.
Your unified API just torched a six-figure enterprise deal, and it has nothing to do with your core product.
The technical evaluation went perfectly. The prospect's VP of Engineering was ready to sign. Then their Salesforce administrator sent over their organization's schema: 147 custom fields on the Contact object, a highly modified Deal_Registration__c custom object with nested relationships that powers their partner pipeline, and a Revenue_Forecast__c rollup field their CFO watches every quarter.
If you want the contract, your software needs to read and write to all of it.
But your standard unified API flattens all of that data into first_name, last_name, and email. The custom objects are dropped entirely to preserve a consistent schema. To get the data your prospect needs, your engineering team is now writing raw SOQL against a passthrough endpoint—which completely defeats the entire point of buying integration infrastructure in the first place.
You cannot build a bespoke integration for this one customer without derailing your engineering roadmap. But you also cannot tell the enterprise buyer to change their business processes to fit your application's standardized data model. This is the exact moment where traditional integration strategies break down.
Per-customer data model customization without code is the architectural pattern that prevents this. It replaces hardcoded integration logic with a declarative override hierarchy—platform base, environment override, and account override—where every schema variation is stored as configuration data in a database. No pull requests. No sprint planning. No deployment windows.
This guide breaks down why rigid schemas fail in enterprise deals, the specific architectural flaws of alternatives like code-first adapters, and how to implement a 3-level override system that scales to any enterprise schema.
The Enterprise Integration Trap: Why Rigid Schemas Kill Deals
The enterprise integration trap occurs when a SaaS application relies on a rigid, standardized data model that cannot accommodate the highly customized schemas (custom objects and fields) required by enterprise customers, resulting in stalled deployments and lost deals.
Enterprise CRM, HRIS, and ERP instances are rarely used out-of-the-box. They are essentially bespoke databases built on top of vendor infrastructure. Estimates vary: one study puts the average organization at 371 SaaS applications, with enterprise firms averaging 473, while AppSeConnect counts 897 applications across departments in the average enterprise, up 6.4% year-over-year. Either way, the result is a web of disconnected tools that share highly specific, customized data.
Salesforce Enterprise Edition alone allows up to 800 custom fields per object. In Salesforce Unlimited Edition, that number climbs to 900 custom fields per object.
Standard unified APIs fail enterprise customers because they only cover standard fields. They operate on the lowest common denominator. If Salesforce, HubSpot, and Pipedrive all share 15 core fields, the unified API standardizes those 15 fields and discards the rest.
When your enterprise buyer demands that your platform syncs with their Partner_Tier__c field, a rigid schema leaves you with two bad options:
- Fork your codebase: Build a custom integration path just for this customer.
- Use a passthrough API: Bypass the unified API entirely, forcing your team to write raw, provider-specific API calls, handle provider-specific pagination, and manage provider-specific rate limits.
Both options destroy engineering velocity. Custom integrations can cost $50,000 to $150,000 per year, including maintenance, vendor changes, and QA. You need an architecture that treats custom fields as native elements of the payload, not as edge cases or engineering favors.
The Flaws of Code-First and "Custom Field Array" Approaches
The integration market has attempted to solve the custom field problem (a topic we cover in depth in our guide on how unified APIs handle custom fields), but most solutions push the complexity back onto your engineering team or your end-users. Let's look at the three dominant approaches and why they fail at scale.
1. The Code-First Adapter Pattern
Platforms like Nango argue that pre-built unified APIs inherently fail at custom fields, positioning their code-first infrastructure as the only way to handle enterprise schemas. The philosophy is simple: "Unified APIs are too rigid. Write your own integration code for each provider."
This approach gives you total control, but it also gives you total responsibility. In this architecture, your codebase is littered with adapter scripts:
if (provider === 'hubspot') {
return hubspotAdapter.getContactsWithCustomFields(params);
} else if (provider === 'salesforce') {
return salesforceAdapter.getContactsWithCustomFields(params);
}

This scales linearly with pain. Every new integration adds more code to maintain. Your SalesforceAdapter class handles SOQL queries and PascalCase field names. Your HubSpotAdapter class handles filterGroups and nested properties objects. A bug in the Salesforce pagination logic does not get fixed for HubSpot. When an API changes, your engineers have to rewrite code, redeploy, and hope they didn't break the other integrations. You don't get economies of scale—you get economies of pain.
2. The "Custom Field Array" Anti-Pattern
Platforms like Apideck attempt to handle custom data by keeping the rigid common model for standard fields, and shoving anything that doesn't fit into a generic custom_fields array.
Instead of receiving a clean, native JSON object, your application receives this:
{
"first_name": "John",
"last_name": "Doe",
"custom_fields": [
{
"id": "Partner_Tier__c",
"name": "Partner Tier",
"value": "Gold"
}
]
}

This creates two massive problems for your engineering team:
- Your application has to parse two different data structures. Standard fields come from the unified response. Custom fields come from a separate array with its own key-value format. You end up writing secondary mapping logic in your application anyway to iterate through arrays and extract the data you actually need.
- Custom fields are second-class citizens. They cannot be queried, filtered, or sorted the same way standard fields can. The "unified" part of the API only applies to the fields the platform decided to model.
3. The Remote Fields Exception
Platforms like Merge.dev position "Field Mapping" and "Remote Fields" using JMESPath as a premium feature to extend their rigid common models. While better than a generic array, it still treats custom data as an exception rather than the rule, often requiring complex setup in their proprietary UI rather than programmatic control via your own database infrastructure.
The 3-Level Override Architecture: Platform, Environment, and Account
To solve this without writing code, you must move integration logic out of the runtime and into the database.
Truto uses an architecture based on the Interpreter Pattern. The runtime is a generic execution engine that takes a declarative configuration describing how to talk to an API, and a declarative mapping describing how to translate the data.
To support per-customer API mappings, this configuration is structured in a 3-level declarative override hierarchy. Each level is deep-merged on top of the previous one at runtime, with more specific levels overriding less specific ones.
graph TD
A["Level 1: Platform Base<br>Default mapping for all customers"] --> D{Deep Merge Engine}
B["Level 2: Environment Override<br>Per-customer environment customization"] --> D
C["Level 3: Account Override<br>Per-connected-account customization"] --> D
D --> E["Final Merged Mapping<br>Deep-merged JSONata result used at runtime"]
style A fill:#e8f4fd,stroke:#2196F3,stroke-width:2px
style B fill:#fff3e0,stroke:#FF9800,stroke-width:2px
style C fill:#fce4ec,stroke:#E91E63,stroke-width:2px
style D fill:#fff2cc,stroke:#d6b656,stroke-width:2px
style E fill:#e8f5e9,stroke:#4CAF50,stroke-width:2px

Here is how each level works:
Level 1: Platform Base Mapping
The base layer defines the default behavior that works for 80% of users. It is stored in a generic database table (e.g., unified_model_resource_method). This contains the standard mapping expressions that translate Salesforce's FirstName to first_name and HubSpot's properties.firstname to first_name. It covers standard fields and handles the common query patterns out of the box.
Level 2: Environment Override
A SaaS company might have different needs for their staging environment versus their production environment, or they might want to globally override a mapping for all their users. The environment_unified_model_resource_method table stores these overrides. If you want to map a specific field for every customer in your production environment, you update this JSON blob. No code deploy required.
Level 3: Account Override
This is where enterprise deals are saved. Individual connected accounts (specific tenants) can have their own mapping overrides stored directly on their integrated_account record.
If Customer A has two Salesforce orgs connected—one for North America and one for EMEA, each with different custom field configurations—the account-level override handles the divergence. Each connected account can have its own mapping adjustments.
At runtime, the merge uses array-overwrite semantics (override arrays replace base arrays, they don't concatenate), which prevents accidental field duplication.
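The merge semantics described above are straightforward to sketch. Here is a minimal Python illustration of a deep merge with array-overwrite behavior applied across the three levels (function names are illustrative, not Truto's actual implementation):

```python
from copy import deepcopy

def deep_merge(base: dict, override: dict) -> dict:
    """Deep-merge override onto base. Dicts merge recursively;
    arrays and scalars in the override replace the base value outright."""
    result = deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            # Array-overwrite semantics: lists replace, never concatenate,
            # which prevents accidental field duplication.
            result[key] = deepcopy(value)
    return result

def resolve_mapping(platform: dict, environment: dict, account: dict) -> dict:
    """Level 1 -> Level 2 -> Level 3; more specific levels win."""
    return deep_merge(deep_merge(platform, environment), account)

platform_base = {"fields": ["first_name", "last_name"], "method": "GET"}
account_override = {"fields": ["first_name", "last_name", "partner_tier"]}
merged = resolve_mapping(platform_base, {}, account_override)
# The account's field list replaces the base list; "method" survives untouched.
```

Because lists replace rather than concatenate, an account override can safely restate the full field list it wants without worrying about duplicates from the base layer.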
| Override Level | Scope | Use Case |
|---|---|---|
| Platform Base | All customers, all environments | Standard field mappings, default query translations |
| Environment Override | One customer's environment | Add custom fields globally, change filter behavior, route to custom endpoints |
| Account Override | One specific connected account | Handle org-specific schema quirks, per-tenant custom object routing |
What can be overridden at each level:
- Response mapping: How response fields are translated (add custom fields to the unified schema).
- Query mapping: How filter parameters are constructed (support custom filter logic).
- Request body mapping: How create/update payloads are formatted (include provider-specific fields).
- Resource routing: Which API endpoint to call (route to custom object endpoints).
- HTTP method: Which verb to use (POST instead of GET for search).
- Pre/post processing steps: Fetch or transform additional data before or after the main call.
Using JSONata for Declarative Data Transformation
The override hierarchy is only useful if the mapping language is expressive enough to handle real-world API complexity. This is where JSONata comes in.
JSONata is a lightweight, Turing-complete query and transformation language purpose-built for reshaping JSON data without requiring imperative code.
Why JSONata over, say, JMESPath or custom scripting?
- Declarative, not imperative: JSONata's declarative approach means you can describe what you want to achieve without getting bogged down in the procedural details. A mapping expression describes the shape of the output, not the steps to produce it.
- Turing-complete: JSONata provides constructs for traversing, filtering, sorting, and transforming JSON trees. It can handle conditionals, string manipulation, array flattening, date formatting, and recursive logic—all within a single expression string.
- Storable as data: A JSONata expression is just a string. It can live in a database column, be versioned, overridden via the 3-level hierarchy, and hot-swapped without restarting any service.
- Side-effect free: Expressions are pure functions. They transform input to output without modifying state, which makes them safe to evaluate in parallel and easy to test.
By storing integration logic as declarative JSONata data, your platform can execute complex data transformations without branching logic.
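The structural contrast with the adapter pattern can be shown with a toy interpreter. Instead of JSONata, this sketch uses a trivial dotted-path mapping language, but the point is the same: the mapping lives in a data store and one generic function evaluates it for every provider (all names here are hypothetical):

```python
from functools import reduce

# Mappings are rows in a data store, not branches in code.
MAPPING_STORE = {
    "salesforce": {"first_name": "FirstName", "email": "Email"},
    "hubspot":    {"first_name": "properties.firstname", "email": "properties.email"},
}

def get_path(record: dict, path: str):
    """Resolve a dotted path like 'properties.firstname' against a record."""
    return reduce(
        lambda obj, key: obj.get(key) if isinstance(obj, dict) else None,
        path.split("."),
        record,
    )

def transform(provider: str, record: dict) -> dict:
    """One generic code path for every provider: load the mapping, evaluate it."""
    mapping = MAPPING_STORE[provider]
    return {out_key: get_path(record, in_path) for out_key, in_path in mapping.items()}

sf = transform("salesforce", {"FirstName": "John", "Email": "j@acme.com"})
hs = transform("hubspot", {"properties": {"firstname": "John", "email": "j@acme.com"}})
# Both providers yield the same unified shape from the same engine.
```

Swapping in JSONata instead of dotted paths changes the expressiveness of the mapping language, not the architecture: the engine stays generic and the per-provider knowledge stays in the database.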
Example 1: Handling Salesforce Custom Fields and SOQL
Salesforce returns flat PascalCase fields, up to six different phone number types, and uses SOQL for filtering. A base JSONata response mapping for Salesforce handles all of this purely as data:
response.{
"id": Id,
"first_name": FirstName,
"last_name": LastName,
"email_addresses": [{ "email": Email }],
"phone_numbers": $filter([
{ "number": Phone, "type": "phone" },
{ "number": MobilePhone, "type": "mobile" },
{ "number": HomePhone, "type": "home" }
], function($v) { $v.number }),
"custom_fields": $sift($, function($v, $k) { $k ~> /__c$/i and $boolean($v) })
}

Notice the custom_fields handling. Salesforce custom fields end with __c. This single expression automatically detects any field matching that regex pattern and maps it natively.
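The suffix-based detection is easy to mirror outside JSONata. A hedged Python sketch of the same idea (purely illustrative, not Truto's engine):

```python
import re

# Salesforce custom fields carry the __c suffix.
CUSTOM_FIELD_PATTERN = re.compile(r"__c$", re.IGNORECASE)

def extract_custom_fields(record: dict) -> dict:
    """Keep only keys ending in __c with truthy values,
    mirroring the JSONata $sift expression's regex-and-$boolean filter."""
    return {
        key: value
        for key, value in record.items()
        if CUSTOM_FIELD_PATTERN.search(key) and value
    }

contact = {
    "Id": "003xx",
    "FirstName": "John",
    "Partner_Tier__c": "Gold",
    "Legacy_Flag__c": None,  # falsy values are dropped, like $boolean($v)
}
custom = extract_custom_fields(contact)
# -> {"Partner_Tier__c": "Gold"}
```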
If a customer needs to query based on a custom field, the JSONata query mapping dynamically generates the SOQL WHERE clause:
(
$whereClause := query ? $convertQueryToSql(
query.{
"partner_tier": partner_tier,
"email": email_addresses
},
["partner_tier", "email"],
{
"partner_tier": "Partner_Tier__c",
"email": "Email"
}
);
{
"q": query.search_term ? "FIND {" & query.search_term & "} RETURNING Contact" : null,
"where": $whereClause ? "WHERE " & $whereClause : null
}
)

Example 2: Handling HubSpot FilterGroups and Properties
HubSpot returns contacts with nested properties objects and semicolon-separated email addresses, and uses a completely different search paradigm called filterGroups. The exact same unified request from your application runs through a different JSONata mapping stored in the database:
{
"id": response.id.$string(),
"first_name": response.properties.firstname,
"last_name": response.properties.lastname,
"email_addresses": [
response.properties.email
? { "email": response.properties.email, "is_primary": true },
response.properties.hs_additional_emails
? response.properties.hs_additional_emails.$split(";").
{ "email": $ }
],
"custom_fields": response.properties.$sift(function($v, $k) {
$k in $diff
})
}

Here, HubSpot custom fields are properties that fall outside a known default set—the mapping uses set difference ($diff) to capture them.
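The set-difference approach can also be sketched in ordinary code. A minimal Python illustration (the default-property list here is hypothetical, not HubSpot's actual set):

```python
# Properties the platform already models; everything else is custom.
KNOWN_DEFAULT_PROPERTIES = {"firstname", "lastname", "email", "hs_additional_emails"}

def hubspot_custom_fields(properties: dict) -> dict:
    """Treat anything outside the known default set as custom,
    mirroring the $diff-based $sift in the mapping above."""
    return {
        key: value
        for key, value in properties.items()
        if key not in KNOWN_DEFAULT_PROPERTIES
    }

props = {"firstname": "Jane", "email": "jane@acme.com", "partner_tier": "Gold"}
custom = hubspot_custom_fields(props)
# -> {"partner_tier": "Gold"}
```

The contrast with Salesforce matters: there is no naming convention like __c to key off, so the mapping detects custom data by subtracting the known defaults instead.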
For queries, the JSONata mapping translates standard requests into HubSpot's filterGroups:
rawQuery.{
"filterGroups": $firstNonEmpty(first_name, email_addresses) ? [{
"filters": [
first_name ? { "propertyName": "firstname", "operator": "CONTAINS_TOKEN", "value": first_name },
email_addresses ? { "propertyName": "email", "operator": "EQ", "value": email_addresses }
]
}]
}]

Both mappings produce the same unified output shape, so the classic failure mode of unified data models breaking on custom Salesforce objects simply disappears. The runtime engine does not know what Salesforce or HubSpot are. It loads the JSONata string from the database, binds the incoming request data to it, and evaluates the expression.
Example 3: Applying an Account Override
If one specific customer's Salesforce instance has a Partner_Tier__c and Revenue_Forecast__c custom field, you apply a JSONata override specifically to their account ID at Level 3:
{
"response_mapping": "$merge([$, { 'partner_tier': custom_fields.Partner_Tier__c, 'revenue_forecast': custom_fields.Revenue_Forecast__c }])"
}

When this specific customer makes a request, the generic execution engine deep-merges their account override on top of the platform base. The engine evaluates the combined JSONata expression, and the customer gets their custom fields natively as top-level keys in the response. Other customers are completely unaffected.
Handling the Upstream Chaos: Rate Limits and Errors
Customizing the data model is only half the battle. When you allow per-customer custom data models, handling upstream errors and rate limits becomes highly complex. If a customer writes a bad SOQL query override that triggers a 400 error, or if they pull massive custom objects that trigger a 429 Too Many Requests, the system must handle it predictably.
Standardized Rate Limit Headers
Every upstream API communicates rate limits differently. Salesforce uses Sforce-Limit-Info. HubSpot uses X-HubSpot-RateLimit-Daily. The proliferation of custom header names like X-RateLimit-UserLimit, X-Rate-Limit-Limit, and x-ratelimit-minute is exactly the problem the IETF set out to solve with its RateLimit header fields specification.
Truto normalizes the wildly different rate limit information from hundreds of upstream APIs into these standardized response headers based on the IETF specification:
- ratelimit-limit: The maximum number of requests permitted in the current window.
- ratelimit-remaining: The number of requests remaining before hitting the limit.
- ratelimit-reset: The number of seconds until the rate limit window resets.
This gives callers consistent rate limit data regardless of whether the upstream provider is Salesforce, NetSuite, or HubSpot.
Truto takes a stance of radical honesty: it does not retry, throttle, or apply exponential backoff on rate limit errors.
Many integration platforms attempt to automatically retry on rate limit errors. This is an architectural anti-pattern. Automatic retries at the middleware layer hide latency from the caller, exhaust connection pools, and often lead to cascading failures when multiple agents hit the same rate-limited API.
When an upstream API returns HTTP 429, Truto passes that error directly back to the caller. The caller or AI agent is strictly responsible for reading the standardized ratelimit-remaining and ratelimit-reset headers and implementing their own precise retry and backoff logic. This ensures your application retains complete control over its execution flow and queuing strategy.
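Under this contract, the caller's backoff logic stays short. Here is a hedged sketch of what a caller might do with the normalized headers (header names follow the IETF convention described above; `do_request` is a stand-in, not a real client):

```python
import time

def call_with_backoff(do_request, max_attempts: int = 5):
    """Retry on HTTP 429 by honoring the normalized ratelimit-reset header.
    `do_request` is a stand-in returning (status_code, headers, body)."""
    for attempt in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, body
        # ratelimit-reset is seconds until the window resets; fall back to
        # exponential backoff if the header is missing.
        wait = int(headers.get("ratelimit-reset", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("rate limit retries exhausted")
```

Because the headers are identical across Salesforce, NetSuite, and HubSpot, this one function serves every integration; the caller decides the queuing and retry policy, not the middleware.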
Normalized Error Responses
APIs return errors in wildly different formats. Slack returns { "ok": false, "error": "invalid_auth" } with an HTTP 200 OK. Salesforce returns { "errorCode": "REQUEST_LIMIT_EXCEEDED" }. Freshdesk returns HTTP 429 when the API plan doesn't include API access at all—not when you're actually rate-limited.
Truto handles this the same way it handles field mapping: with declarative JSONata error expressions stored as configuration data. Each integration can define an error expression that evaluates the raw API response and produces a structured error with a proper HTTP status code and human-readable message. The same expression-evaluation engine that maps response fields also normalizes errors. No provider-specific error-handling code needed.
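To make the idea concrete, here is a hedged Python sketch of declarative error normalization. Truto's real configs use JSONata expressions; this sketch substitutes predicate/extractor pairs stored as data, purely to show the shape of the approach (the rule format and provider entries are hypothetical):

```python
# Each entry pairs a matcher with a normalizer; both live in configuration,
# not in provider-specific code paths.
ERROR_RULES = {
    "slack": (
        # Slack signals errors in the body even under HTTP 200 OK.
        lambda status, body: body.get("ok") is False,
        lambda status, body: {
            "status": 401 if body.get("error") == "invalid_auth" else 400,
            "message": body.get("error", "unknown_error"),
        },
    ),
    "salesforce": (
        lambda status, body: "errorCode" in body,
        lambda status, body: {
            "status": 429 if body["errorCode"] == "REQUEST_LIMIT_EXCEEDED" else status,
            "message": body["errorCode"],
        },
    ),
}

def normalize_error(provider: str, status: int, body: dict):
    """Return a structured error if the provider's rule matches, else None."""
    matches, normalize = ERROR_RULES[provider]
    return normalize(status, body) if matches(status, body) else None

err = normalize_error("slack", 200, {"ok": False, "error": "invalid_auth"})
# -> {"status": 401, "message": "invalid_auth"}
```

The payoff is the same as with field mapping: correcting how one provider's errors are interpreted is a configuration change, not a release.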
Shipping Integrations as Data-Only Operations
The ultimate business value of the 3-level JSONata architecture is operational velocity.
In a traditional setup, adding support for a new enterprise custom object requires a product manager to write a ticket, an engineer to write the adapter code, a code review, a CI/CD pipeline run, and a production deployment. This process takes days or weeks. Praying it doesn't break the other 50 integrations is standard operating procedure.
With a declarative architecture, adding or customizing an integration becomes a data-only operation. You update a JSON blob in a database table. That's it. The change is live immediately.
graph LR
A["Unified API Request"] --> B["Generic Execution Engine<br>(one code path)"]
B --> C["Integration Config<br>(JSON - how to call the API)"]
B --> D["Integration Mapping<br>(JSONata - how to transform data)"]
B --> E["Customer Overrides<br>(JSONata - per-environment/account)"]
C --> F["Third-Party API"]
D --> G["Unified Response"]
E --> G
style B fill:#e8f5e9,stroke:#4CAF50,stroke-width:2px
style C fill:#e8f4fd,stroke:#2196F3,stroke-width:2px
style D fill:#fff3e0,stroke:#FF9800,stroke-width:2px
style E fill:#fce4ec,stroke:#E91E63,stroke-width:2px

When a specific enterprise customer needs their custom Deal_Registration__c object mapped to your application, your support team or implementation engineers can apply the JSONata override directly to that customer's account record. The customer sees their custom data sync immediately. Your core engineering team never even hears about it.
The maintenance implications compound over time. When every integration flows through the same generic execution engine, bug fixes and performance improvements benefit all integrations simultaneously. A pagination improvement helps Salesforce, HubSpot, and NetSuite equally. The maintenance burden scales with the number of unique API patterns, not the number of integrations—and that is a fundamentally different cost curve.
Where to Start
If you are evaluating how to handle per-customer schema variations in your integration architecture, here is what to assess:
- Audit your current custom field requests: How many enterprise deals or customer support tickets involve custom fields, custom objects, or provider-specific schema variations? If the answer is "a lot," a rigid common model is already costing you money.
- Evaluate the expressiveness of your mapping layer: Can your current system handle conditional logic, array transformations, and dynamic field detection? If your mappings are limited to simple field renaming, they will not survive contact with a real enterprise schema.
- Check whether customizations require code deployments: If adding a custom field to one customer's integration requires a PR, a CI build, and a production deploy, your architecture is a bottleneck. Declarative overrides stored as data should be the target state.
- Measure the blast radius of changes: When you fix a mapping for one customer, does it risk breaking another? A 3-level override hierarchy isolates customizations by design—account-level changes cannot affect environment-level defaults.
The days of telling enterprise buyers to restructure their business processes around your fixed data model are over. The SaaS companies winning those deals are the ones whose integration architecture treats schema variation as a first-class, data-driven concern—not an exception to be handled with custom code.
Frequently Asked Questions
- How do unified APIs handle custom fields in enterprise SaaS?
- Most unified APIs either drop custom fields entirely or shove them into a secondary array. Advanced architectures use declarative languages like JSONata to natively map custom fields into the primary response object based on per-customer configurations.
- What is a 3-level override architecture for API integrations?
- It is a configuration hierarchy where API mapping logic is defined at the platform level (defaults for everyone), overridden at the environment level (per-customer changes), and further customized at the individual account level (per-tenant changes). This allows infinite schema variations without changing the core codebase.
- Why use JSONata instead of code for API integrations?
- JSONata is a declarative, Turing-complete transformation language that can be stored as a string in a database. This allows platforms to execute complex data mapping and dynamic query generation without requiring code deployments or maintaining provider-specific adapter scripts.
- How should integration platforms handle API rate limits?
- Platforms should pass HTTP 429 errors directly back to the caller while normalizing rate limit information into IETF standard headers (ratelimit-limit, -remaining, -reset). Automatic retries at the middleware layer are an anti-pattern that exhausts connection pools and hides latency.