
Why Unified Data Models Break on Custom Salesforce Objects (And How to Fix It)

Rigid unified APIs strip custom Salesforce objects. Learn how declarative JSONata transformations and multi-level overrides solve schema normalization.

Nachi Raman · 15 min read

Your integration infrastructure just quietly killed a six-figure enterprise deal.

The technical evaluation was flawless. The demo was sharp. The buyer's RevOps team was excited. Then the prospect's Salesforce administrator sent over the schema for their organization: 147 custom fields on the standard Contact object, a highly modified Deal_Registration__c object with nested relationships powering their entire partner pipeline, and a Revenue_Forecast__c rollup field that feeds their quarterly board deck.

Your unified API provider's common data model flattened all of it into first_name, last_name, and email. The custom objects vanished entirely. To get the data your application actually needed, your engineering team was forced to bypass the unified schema entirely and write raw Salesforce SOQL against a passthrough endpoint. The abstraction you paid for just evaporated. You were back to reading vendor documentation, managing provider-specific authentication quirks, handling custom pagination, and maintaining custom code for a single tenant.

If you are evaluating integration infrastructure for the enterprise market, this is the custom Salesforce objects problem, and it is the single most common reason enterprise integration deals die in technical review: rigid unified data models cannot represent the schema diversity that real Salesforce deployments contain.

Traditional unified APIs force data into a rigid, lowest-common-denominator schema. Modern unified API architectures solve this by replacing rigid schemas with a declarative mapping architecture that translates per-tenant schema variations dynamically. This guide breaks down exactly why rigid schemas fail in the enterprise, the architectural flaws of the passthrough alternative, what that fallback actually costs you in API quota, and how to architect an integration layer that adapts to infinite schema variations using functional transformation languages and multi-level override hierarchies.

The Enterprise Reality: Why Salesforce Custom Objects Are the Default

Custom fields are not edge cases. They are the default state of every enterprise SaaS deployment. Salesforce is not just a CRM; it is a relational database and application platform (Platform as a Service) masquerading as a sales tool.

Salesforce's official limits documentation specifies that Enterprise Edition organizations are allowed up to 500 custom fields per object, while Unlimited Edition organizations can have up to 800 custom fields on any single object. Furthermore, an organization can have up to 3,000 total custom objects. Enterprise businesses use every single one of them.

Whenever a user creates a custom field or object in Salesforce, the system automatically appends a __c suffix to the identifier. This suffix is strictly required when writing SOQL (Salesforce Object Query Language) queries or interacting with the Salesforce REST API, and it ensures the system correctly distinguishes standard Salesforce schema from tenant-specific modifications. Custom objects get the exact same treatment.

This means a standard Contact object in a mature enterprise Salesforce org doesn't look anything like the Contact object in a fresh trial. A typical enterprise B2B record rarely holds just a name and an email. A RevOps team might have Lead_Score_Q3__c, ICP_Tier__c, Buying_Committee_Role__c, and GDPR_Consent_Date__c driving their entire go-to-market motion. A custom Deal_Registration__c object with Partner_Tier__c and Approval_Status__c fields might be the backbone of their channel sales process.

When you build an integration, your software needs to read and write to these exact fields to trigger the customer's internal workflows. If your integration layer cannot dynamically discover, read, and write to __c fields, your product is fundamentally incompatible with the customer's business operations. When a unified API strips all of this to first_name and email, you're not just losing data. You're losing the reason your customer bought Salesforce in the first place.

For a deeper look at how Salesforce custom field mechanics work at the API level, see our guide to handling custom fields and custom objects in Salesforce via API.

Why Traditional Unified APIs Fail at Schema Normalization

Schema normalization is the hardest problem in SaaS integrations. The standard approach taken by legacy unified API providers is what we call "dumb key-value mapping." A rigid common data model is a lowest-common-denominator bet that breaks the moment a customer customizes anything.

In this architecture, the provider defines a fixed JSON schema for each resource type. A "CRM Contact" might look like this:

{
  "id": "string",
  "first_name": "string",
  "last_name": "string",
  "email": "string",
  "phone": "string",
  "created_at": "datetime"
}

Behind the scenes, they maintain hardcoded integration logic that maps Salesforce's FirstName to first_name and HubSpot's properties.firstname to first_name. When a request comes in, the provider's backend executes a hardcoded script:

// The legacy approach: Hardcoded integration logic
if (provider === 'salesforce') {
  return {
    first_name: salesforceData.FirstName,
    last_name: salesforceData.LastName,
    email: salesforceData.Email
    // The other 147 custom fields are completely ignored and dropped
  };
}

This schema works perfectly for simple SMB use cases where organizations use out-of-the-box CRMs, or for a quick demo. It collapses entirely in production for the enterprise. The schema knows nothing about Revenue_Forecast__c or Partner_Tier__c because those fields don't exist in the platform's rigid model. When the rigid schema encounters a Salesforce instance with 500 custom fields, it acts as a destructive filter. It strips out the __c fields because it has nowhere to put them.

To "solve" this, legacy unified APIs offer a passthrough endpoint. They tell developers: "If you need custom fields, just use our passthrough API to send raw requests directly to Salesforce."

This is an architectural failure. The moment you use a passthrough endpoint, you abandon the unified abstraction. You are back to reading Salesforce API documentation, constructing SOQL by hand, handling PascalCase field names, managing __c suffixes, and dealing with Salesforce-specific pagination cursors.

Here's what that looks like in practice. Your unified API call was supposed to be:

GET /unified/crm/contacts?integrated_account_id=abc123

But to get the custom fields your enterprise customer needs, you end up writing:

SELECT Id, FirstName, LastName, Email, Lead_Score_Q3__c,
       ICP_Tier__c, Buying_Committee_Role__c
FROM Contact
WHERE LastModifiedDate > 2026-01-01T00:00:00Z

Your engineering team is now responsible for maintaining a separate code path just for that specific customer's custom schema. Multiply this across 50 enterprise customers and your "unified" integration is now 50 bespoke Salesforce integrations wearing a trench coat.

The Passthrough Trap and API Rate Limit Nightmares

Falling back to passthrough endpoints doesn't just add code complexity. It creates a severe N+1 query problem that aggressively consumes API quotas, creating a direct path to exhausting your customer's Salesforce API limits.

Imagine you need to sync a list of Contacts and their associated custom Revenue_Forecast__c data. Because the unified API strips out the custom data, your application must first call the unified API to get the base Contact records. Then, you must iterate over those records and fire individual passthrough requests containing raw SOQL to fetch the custom fields for each Contact. For a list of 100 contacts with 3 custom object relationships, that's potentially 400 API calls where 1 should have sufficed.
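The quota math is easier to see as code. This is a minimal sketch of the N+1 pattern: `unifiedApi` and `passthrough` are hypothetical stand-in clients that only count calls, so the cost per sync is visible.

```javascript
// Hypothetical stand-in clients; they only tally calls against the quota.
let apiCalls = 0;

const unifiedApi = {
  // One unified call returns the 100 stripped base records
  listContacts: () => {
    apiCalls += 1;
    return Array.from({ length: 100 }, (_, i) => ({ id: `contact_${i}` }));
  },
};

const passthrough = {
  // Every backfill is a separate raw SOQL request against the tenant's quota
  soql: (query) => {
    apiCalls += 1;
  },
};

const relatedObjects = ['Revenue_Forecast__c', 'Deal_Registration__c', 'Partner_Tier__c'];

const contacts = unifiedApi.listContacts();
for (const contact of contacts) {
  // One call to recover the contact's own custom fields...
  passthrough.soql(`SELECT Lead_Score_Q3__c FROM Contact WHERE Id = '${contact.id}'`);
  // ...plus one call per related custom object
  for (const objectName of relatedObjects) {
    passthrough.soql(`SELECT Id FROM ${objectName} WHERE Contact__c = '${contact.id}'`);
  }
}

console.log(apiCalls); // 401 calls for a sync that one well-shaped query could have served
```

Run this once per hourly sync and a single tenant burns roughly 9,600 calls a day on backfill alone.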

Salesforce enforces strict 24-hour API rate limits. Unlike modern APIs that rate limit per second or minute, Salesforce's total API request allocation is the total number of API calls to the REST API, the SOAP API, the Bulk APIs, and the Connect API that your org is entitled to within a rolling 24-hour period. For paid org editions, such as Enterprise Edition, this starts at 100,000 requests per 24-hour period and increases based on the licenses your org is provisioned with. Limits are calculated on a 24-hour rolling basis rather than a fixed calendar day.

Chatty API patterns—like firing hundreds of passthrough requests to backfill custom data—will quickly exhaust a tenant's entire organizational quota, taking down their internal Salesforce automations in the process. The Daily API Request Limit is a soft limit that your org can exceed temporarily, but if requests continue increasing, a system protection limit blocks all subsequent API calls. The response returns an HTTP status code 403 with a REQUEST_LIMIT_EXCEEDED error, or a standard 429 Too Many Requests error.

How your integration infrastructure handles this response dictates the reliability of your system. This is where most integration platforms get the abstraction wrong. Many claim to magically handle rate limits by automatically retrying requests or silently absorbing 429 errors behind the scenes. This is dangerous behavior that delays the problem, makes it harder to diagnose, and leads to cascading system failures.

Warning

Architectural Reality: Deterministic Rate Limiting

Truto does not retry, throttle, or apply backoff on rate limit errors. When Salesforce (or any upstream API) returns a rate limit error, Truto passes that error directly back to your application. What Truto does provide is normalization.

Currently, there is no standard way for servers to communicate quotas so that clients can throttle their requests to prevent errors. However, an IETF draft defines a set of standard HTTP header fields to enable rate limiting. Truto parses the upstream provider's proprietary rate limit data (like Salesforce's Sforce-Limit-Info headers or HubSpot's X-HubSpot-RateLimit-*) and normalizes it into standardized IETF RateLimit response headers:

| Header | Description |
| --- | --- |
| ratelimit-limit | Maximum requests allowed in the current window |
| ratelimit-remaining | Requests remaining before the limit is hit |
| ratelimit-reset | Seconds until the rate limit window resets |

Your application or AI agent is responsible for reading these standardized headers and implementing its own exponential backoff logic. This gives your application consistent, predictable rate limit data regardless of the upstream provider. No surprises, no hidden retries eating quota you didn't authorize. By forcing developers to use passthrough endpoints for custom fields, legacy unified APIs guarantee that your application will hit these rate limits faster, while leaving you completely responsible for managing the fallout.
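As a sketch of what that client-side pacing can look like, the function below decides how long to wait before the next request using only the normalized headers. The low-quota threshold and the strategy of spreading the remaining budget across the window are illustrative choices, not part of any spec.

```javascript
// Decide a delay (in ms) before the next request, from normalized
// IETF-style rate limit headers. Thresholds here are illustrative.
function nextRequestDelayMs(headers) {
  const remaining = Number(headers['ratelimit-remaining']);
  const resetSeconds = Number(headers['ratelimit-reset']);

  if (Number.isNaN(remaining) || Number.isNaN(resetSeconds)) {
    return 0; // no rate limit data: proceed normally
  }
  if (remaining <= 0) {
    return resetSeconds * 1000; // quota exhausted: wait out the window
  }
  if (remaining < 10) {
    // Running low: spread the remaining budget across the rest of the window
    return Math.ceil((resetSeconds * 1000) / remaining);
  }
  return 0; // plenty of quota: fire immediately
}

// Plenty of quota left
console.log(nextRequestDelayMs({ 'ratelimit-remaining': '4800', 'ratelimit-reset': '900' })); // 0

// Quota exhausted: wait until the window resets
console.log(nextRequestDelayMs({ 'ratelimit-remaining': '0', 'ratelimit-reset': '30' })); // 30000
```

Because the headers look the same for every provider, this one function paces Salesforce, HubSpot, and everything else.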

Enter JSONata: Declarative Transformations Over Rigid Schemas

To solve the custom fields problem, we must stop treating integration mapping as hardcoded business logic. The fix isn't a better rigid schema. It's eliminating rigid schemas entirely and replacing them with a declarative transformation layer. We need to treat it as configuration data.

This is where JSONata changes the architecture. Created by Andrew Coleman at IBM, JSONata is an open-source functional query and transformation language designed for JSON data. It performs complex data manipulations, including filtering, mapping, and reducing, with concise syntax. It is declarative and Turing-complete, letting you reshape complex, nested JSON objects without writing a single line of runtime code, and it is used in production by AWS Step Functions, Node-RED, and modern integration platforms.

In Truto's architecture, there is zero integration-specific code in the runtime engine. There are no if (provider === 'salesforce') statements. Instead, the runtime is a generic execution pipeline that reads a declarative configuration describing how to talk to the API, and a JSONata expression describing how to translate the data.

Instead of defining a rigid schema that either knows about a field or doesn't, a JSONata-based architecture uses expressions to describe how to transform any response into a target shape. Here is an example of how a JSONata mapping handles both standard and custom Salesforce fields simultaneously:

response.{
  "id": $string(Id),
  "first_name": FirstName,
  "last_name": LastName,
  "name": $join($removeEmptyItems([FirstName, LastName]), " "),
  "email_addresses": [{ "email": Email }],
  "phone_numbers": $filter([
    { "number": Phone, "type": "phone" },
    { "number": MobilePhone, "type": "mobile" }
  ], function($v) { $v.number }),
  "created_at": CreatedDate,
  "updated_at": LastModifiedDate,
  /* The magic happens here: dynamically capturing all custom fields */
  "custom_fields": $sift($, function($v, $k) {
    $k ~> /__c$/i and $boolean($v)
  })
}

Look at that last block. The $sift function iterates over the entire raw Salesforce response. The regular expression /__c$/i automatically captures every single key ending in __c (the custom field indicator) and places it in a clean custom_fields object in the unified response—without knowing what those fields are in advance. When a Salesforce admin adds Revenue_Forecast__c on a Tuesday, the mapping picks it up on Wednesday. Whether the tenant has 5 custom fields or 500, the JSONata expression handles them all dynamically. No code change. No pull request. No deployment. The unified schema adapts to the reality of the data, rather than destroying the data to fit the schema.
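To make the dynamic capture concrete, here is a plain-JavaScript equivalent of that $sift block (a sketch, not Truto's runtime): keep every key ending in __c, case-insensitively, whose value is non-empty.

```javascript
// Plain-JS analogue of the JSONata $sift expression above:
// keep keys matching /__c$/i with truthy values.
function extractCustomFields(record) {
  return Object.fromEntries(
    Object.entries(record).filter(
      ([key, value]) => /__c$/i.test(key) && Boolean(value)
    )
  );
}

const salesforceContact = {
  Id: '003xx0000012345',
  FirstName: 'Ada',
  Email: 'ada@example.com',
  Lead_Score_Q3__c: 87,
  ICP_Tier__c: 'Tier 1',
  Legacy_Flag__c: null, // empty values are sifted out
};

console.log(extractCustomFields(salesforceContact));
// { Lead_Score_Q3__c: 87, ICP_Tier__c: 'Tier 1' }
```

Note that the function never enumerates field names; any __c key added tomorrow is captured automatically.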

The same unified resource for HubSpot uses a completely different JSONata expression to handle HubSpot's nested properties structure:

response.{
  "id": $string(id),
  "first_name": properties.firstname,
  "last_name": properties.lastname,
  "email_addresses": [
    properties.email ? { "email": properties.email, "is_primary": true }
  ],
  "created_at": createdAt,
  "updated_at": updatedAt,
  "custom_fields": properties.$sift(function($v, $k) {
    $k in $diff
  })
}

Both expressions produce the same output shape. Both auto-capture custom fields. Both are stored as data in a database—not as TypeScript functions in a source repository.

This same pattern applies to request mapping. Where a rigid API would offer a fixed ?email=john@example.com filter, JSONata can dynamically construct the right native query format for each provider. When you send a unified request to filter contacts, a JSONata expression translates your unified query parameters into a valid Salesforce SOQL WHERE clause:

(
  $whereClause := query ? $convertQueryToSql(
    query.{
      "created_at": created_at,
      "name": $firstNonEmpty(name, first_name, last_name) 
        ? { "LIKE": "%" & $firstNonEmpty(name, first_name, last_name) & "%" }
    },
    ["created_at", "name"],
    {
      "created_at": "CreatedDate",
      "name": "Name"
    }
  );
  {
    "q": query.search_term 
      ? "FIND {" & query.search_term & "} RETURNING Contact(Id, FirstName, LastName)",
    "where": $whereClause ? "WHERE " & $whereClause
  }
)

Because the mapping lives in configuration rather than code, even complex SOQL generation executes purely through data. For HubSpot, the same unified filter parameters become filterGroups with operators like CONTAINS_TOKEN and EQ. One interface, radically different native query languages, zero branching logic. Your engineering team never writes runtime code to handle the mapping.
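For comparison, here is a minimal sketch of the HubSpot side of that translation. The unified query shape and the firstname/email property mapping are assumptions for illustration; the filterGroups structure and the EQ / CONTAINS_TOKEN operators come from HubSpot's CRM search endpoint.

```javascript
// Translate a unified filter object into a HubSpot Search API body.
// The unified shape ({ email, name }) is a hypothetical example.
function toHubSpotFilters(query) {
  const filters = [];
  if (query.email) {
    filters.push({ propertyName: 'email', operator: 'EQ', value: query.email });
  }
  if (query.name) {
    filters.push({ propertyName: 'firstname', operator: 'CONTAINS_TOKEN', value: query.name });
  }
  return { filterGroups: [{ filters }] };
}

console.log(JSON.stringify(toHubSpotFilters({ name: 'ada' })));
// {"filterGroups":[{"filters":[{"propertyName":"firstname","operator":"CONTAINS_TOKEN","value":"ada"}]}]}
```

In a declarative architecture this function body would itself be a JSONata expression stored as data, not JavaScript in your repo.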

Architecting a Multi-Level Override Hierarchy

Extracting custom fields into a custom_fields object is only half the battle. In enterprise B2B software, different tenants use the exact same custom field for completely different purposes. Tenant A might use Industry_Type__c to route leads. Tenant B might use Vertical_Segment__c for the exact same purpose. Furthermore, enterprise customers have fully custom objects, custom relationships between objects, and per-org naming conventions. A single static mapping expression can't cover every variation, and if your integration relies on one, you will eventually have to branch your codebase to handle Tenant A versus Tenant B.

To handle custom Salesforce fields across enterprise customers, modern integration architectures implement a multi-level override hierarchy. This allows you to customize mappings at increasing levels of specificity without deploying new code or modifying the base configuration.

Truto implements a three-level customization stack that deep-merges configurations at runtime:

graph TD
    A["Platform Base Mapping<br>Standard fields for all tenants"] -->|"Deep Merge"| B["Environment Override<br>Customer-specific field additions"]
    B -->|"Deep Merge"| C["Account Override<br>Individual Salesforce org quirks"]
    C --> D["Final JSONata Evaluation Context<br>Applied at runtime"]
    
    style A fill:#f9f9f9,stroke:#333,stroke-width:2px
    style B fill:#e6f3ff,stroke:#0066cc,stroke-width:2px
    style C fill:#d9ead3,stroke:#38761d,stroke-width:2px
    style D fill:#fff2cc,stroke:#d6b656,stroke-width:2px

| Override Level | Scope | Typical Use Case | Who Manages It |
| --- | --- | --- | --- |
| Platform Base | All tenants, all Salesforce accounts | Standard field mapping, __c auto-capture | Platform engineering team |
| Environment | One customer's environment | Promoting common custom fields to top-level schema | Solutions engineering or customer success |
| Account | One specific Salesforce org | Handling org-specific naming or custom object relationships | Customer self-service or support |

Level 1: Platform Base Mapping

This is the default mapping provided by the integration platform. It handles the baseline translation between the unified schema and the provider API. It includes the standard SOQL generation and the baseline $sift logic for extracting __c fields. This works for 80% of use cases out of the box.

Level 2: Environment Override

As a SaaS vendor, you might require specific data formatting that differs from the platform defaults. A customer's environment can override any aspect of the mapping. If Customer A needs Lead_Score__c promoted from custom_fields to a top-level lead_score field, they add an environment override:

{
  "lead_score": response.Lead_Score__c,
  "icp_tier": response.ICP_Tier__c
}

The override is deep-merged on top of the base mapping at runtime. Standard fields continue to work. The custom field promotion just layers on top. You could also add a JSONata rule that forces all incoming email addresses to lowercase before they hit your application database.

Level 3: Account Override

This is the critical layer for enterprise sales. When Tenant B signs a contract but requires their highly specific Vertical_Segment__c field to map to your system's industry field, you do not write code. You attach a JSONata override directly to that specific tenant's integrated account record:

/* Account Override for Tenant B */
{
  "industry": Vertical_Segment__c,
  "custom_routing_score": $number(Lead_Score_Q3__c) * 1.5
}

At runtime, the execution engine evaluates the base mapping, evaluates the environment override, evaluates the account override, and deep-merges the results. Tenant B gets their highly customized integration behavior. Tenant A remains unaffected. Your engineering team shipped a custom enterprise integration without opening their IDE or deploying a single line of backend code.
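That runtime merge can be sketched in a few lines. The real engine merges JSONata mapping expressions before evaluation; here plain objects stand in for the three levels so the precedence order is visible.

```javascript
// Deep-merge two mapping objects; later (override) values win at the leaves,
// nested objects merge recursively.
function deepMerge(base, override) {
  const result = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = result[key];
    if (
      value && typeof value === 'object' && !Array.isArray(value) &&
      existing && typeof existing === 'object' && !Array.isArray(existing)
    ) {
      result[key] = deepMerge(existing, value); // merge nested mappings
    } else {
      result[key] = value; // override wins
    }
  }
  return result;
}

// Illustrative mapping fragments for the three levels
const platformBase = { first_name: 'FirstName', custom_fields: { sift: '__c$' } };
const environmentOverride = { lead_score: 'Lead_Score__c' };
const accountOverride = { industry: 'Vertical_Segment__c' };

// Base, then environment, then account: each level layers on top
const finalMapping = deepMerge(deepMerge(platformBase, environmentOverride), accountOverride);
console.log(finalMapping);
// { first_name: 'FirstName', custom_fields: { sift: '__c$' },
//   lead_score: 'Lead_Score__c', industry: 'Vertical_Segment__c' }
```

Because the merge is ordinary data manipulation, an account-level change never touches the base mapping shared by every other tenant.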

How unified APIs handle custom fields defines whether your platform can scale upmarket. By pushing the schema normalization logic into the data layer via JSONata and override hierarchies, you eliminate the technical debt associated with per-tenant customizations.

Zero Integration-Specific Code: The Future of SaaS Integrations

Building integrations in-house or relying on legacy unified APIs forces your engineering team to absorb the domain complexity of third-party systems. When a customer's Salesforce instance deviates from the norm, your code breaks. When you fall back to passthrough endpoints, you multiply your API consumption and take on the burden of rate limit management.

The deeper insight here isn't just about Salesforce. It's about what happens when you treat integration logic as data rather than code.

In a traditional architecture, adding support for a customer's Deal_Registration__c custom object means writing a new handler function, adding a new endpoint, deploying new code, and hoping nothing else breaks. Each new custom object request is a development ticket that competes with your product roadmap.

In a data-driven architecture, the same request is a configuration change: add a JSONata mapping expression, store it in the database, and it takes effect immediately. The runtime engine—the same code path that handles HubSpot Contacts, Salesforce Contacts, Pipedrive Persons, and every other CRM—evaluates the new expression without ever branching on a provider name.

Truto's architecture takes this to its logical endpoint. The entire platform contains zero integration-specific code. No if (salesforce) conditionals. No hubspot_contacts table. The same generic execution pipeline reads a declarative configuration describing how to talk to an API, reads a set of JSONata expressions describing how to translate data, and executes both without knowing which integration it's running.

This is an instance of the interpreter pattern at platform scale. Integration configurations and JSONata mappings form a domain-specific language (DSL) for describing API interactions. The runtime is an interpreter that executes this DSL. New integrations are new "programs," not new features in the interpreter.

The practical impact for engineering teams is significant:

  • Adding an integration is a data operation, not a code deployment. No CI/CD pipeline. No code review for a field mapping change.
  • Bugs get fixed once. When pagination logic improves, every integration benefits. When error handling is enhanced, it works the same way for Salesforce and 100 other APIs.
  • Per-tenant customizations don't require engineering resources. The three-level override hierarchy lets solutions engineers and customer success teams handle Salesforce schema variations through configuration.
  • Custom objects aren't second-class citizens. A fully custom Deal_Registration__c can have its own unified mapping, query translation, and response normalization—the same way standard objects do.
Info

The original response data from Salesforce (or any provider) is always preserved in a remote_data field alongside the unified output. This means consumers can access raw __c fields even if the unified mapping doesn't explicitly cover them—a safety net that eliminates the "lost data" anxiety that plagues rigid common data models.
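For illustration, a unified response with that safety net attached might look like the following (field names and values are hypothetical; remote_data carries the verbatim provider payload):

```json
{
  "id": "003xx0000012345",
  "first_name": "Ada",
  "custom_fields": {
    "Lead_Score_Q3__c": 87
  },
  "remote_data": {
    "Id": "003xx0000012345",
    "FirstName": "Ada",
    "Lead_Score_Q3__c": 87,
    "Unmapped_Field__c": "still accessible here"
  }
}
```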

What This Means for Your Integration Strategy

If you're evaluating integration infrastructure for enterprise Salesforce customers, here is the decision framework you should apply:

  1. Count your enterprise customers' custom fields. Audit your enterprise customers' Salesforce orgs using the sObject Describe endpoint. Catalog all __c fields on objects you need to integrate with. If the average is under 10, a rigid common model might be tolerable. If it's over 50 (and it almost certainly is), you need a declarative mapping layer.

  2. Check the passthrough tax. Every passthrough call consumes your customer's API quota. Salesforce enforces a rolling 24-hour window with daily limits, concurrent request caps of 25 long-running requests, and Bulk API batch limits of 15,000 per day. If your integration pattern requires multiple passthrough calls per record, do the math on quota consumption across your entire customer base.

  3. Demand transparency on rate limits. Your integration platform should tell you exactly how much API quota remains, not hide it behind opaque retry logic. Standardized headers (ratelimit-limit, ratelimit-remaining, ratelimit-reset) give your application the data it needs to make smart decisions about request pacing.

  4. Evaluate the override model. Ask whether you can customize field mappings per customer, per environment, and per connected account—without writing code or deploying changes. This is the capability that separates "works for demos" from "works for enterprise."
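Step 1 of the framework is mechanical once you have a Describe payload in hand. This sketch tallies __c fields from an sObject Describe result; the payload shape (a fields array with name entries) matches the Describe response, and fetching it from the API is omitted.

```javascript
// Count custom fields in a Salesforce sObject Describe result.
function countCustomFields(describeResult) {
  return describeResult.fields.filter((f) => f.name.endsWith('__c')).length;
}

// Abbreviated example Describe payload for a Contact object
const contactDescribe = {
  name: 'Contact',
  fields: [
    { name: 'FirstName' },
    { name: 'LastName' },
    { name: 'Email' },
    { name: 'Lead_Score_Q3__c' },
    { name: 'ICP_Tier__c' },
    { name: 'GDPR_Consent_Date__c' },
  ],
};

console.log(countCustomFields(contactDescribe)); // 3
```

Run this across the objects your product touches in each enterprise org and compare the averages against the under-10 / over-50 thresholds above.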

The __c suffix is the litmus test for any integration architecture. If your system requires an engineer to write code every time a customer introduces a new custom field, your integration strategy will not scale. The platforms that win enterprise deals are the ones that treat schema variation as the default, not the exception. Stop letting rigid data models kill your enterprise deals. Architect your integration layer to embrace custom objects, not erase them.

Frequently Asked Questions

Why do unified APIs fail with Salesforce custom objects?
Unified APIs typically use rigid common data models that only map standard fields like FirstName and Email. Salesforce custom objects and fields (identified by the __c suffix) are silently dropped, forcing developers to use passthrough endpoints and write raw SOQL.
What is the __c suffix in Salesforce?
The __c suffix is Salesforce's mandatory naming convention for custom fields and custom objects. It differentiates them from standard platform fields and is strictly required when writing SOQL queries or interacting with the Salesforce REST API.
How does JSONata solve the custom fields problem?
JSONata is a declarative, Turing-complete transformation language for JSON. It can use regex patterns to dynamically identify all fields ending in __c and map them into a normalized schema using compact expressions stored as configuration data, not runtime code.
How many custom fields can a Salesforce object have?
Salesforce allows up to 800 custom fields per object on Unlimited Edition, and up to 500 custom fields on Enterprise Edition. An organization can also have up to 3,000 custom objects per org.
How should unified APIs handle Salesforce API rate limits?
Rather than silently retrying and hiding 429 errors, integration platforms should pass rate limit errors directly back to the caller and normalize upstream rate limit data into standardized headers like ratelimit-limit, ratelimit-remaining, and ratelimit-reset.
