
Why Unified API Data Models Break on Custom Salesforce Objects (And How to Fix It)

Traditional unified APIs collapse under enterprise complexity by stripping out custom Salesforce objects. Learn how to fix this with declarative mapping.

Nachi Raman · 14 min read

Your integration infrastructure just cost you a six-figure enterprise deal.

The technical evaluation was flawless. The demo looked great. The buyer's RevOps team was excited. Then their largest prospect's Salesforce admin sent over the schema for their organization: 147 custom fields on the Contact object, a highly modified Deal_Registration__c object with nested relationships that drives their entire partner pipeline, and a Revenue_Forecast__c rollup field that powers their quarterly board decks.

Your unified API provider's common data model flattened all of it into first_name, last_name, and email. It completely dropped the custom objects. To get the data, your engineering team was forced to bypass the unified schema entirely and write raw Salesforce SOQL against a passthrough endpoint. You were back to reading vendor documentation, managing provider-specific authentication quirks, handling custom pagination, and maintaining custom code for a single tenant. The abstraction you paid for just evaporated, and the deal died in technical review.

If you are evaluating integration infrastructure for the enterprise market, this is the custom Salesforce objects problem, and it kills more enterprise integration deals than any bug in your core product. Traditional unified APIs force data into a rigid, lowest-common-denominator schema. Modern unified API architectures solve this by replacing rigid schemas with a declarative mapping architecture that translates per-tenant schema variations dynamically, requiring zero integration-specific code.

This guide breaks down exactly why rigid schemas fail in the enterprise, the architectural flaws of the "passthrough" alternative, how to handle the resulting rate limit nightmares, and how to architect an integration layer that adapts to infinite schema variations using functional transformation languages.

The Enterprise Reality: Why Salesforce Custom Objects Are the Default

Custom fields are not edge cases. They are the default state of every enterprise SaaS deployment.

Businesses track different types of data based on unique operational processes. A healthcare SaaS needs to track NPI numbers on a Contact record. A logistics company needs to track freight container IDs on an Opportunity. Salesforce explicitly supports and encourages this massive customization.

The big numbers to keep in your head: up to 3,000 total custom objects per org, up to 800 custom fields created directly on a single object, and a cap of 40 relationship fields per object. Managed packages raise the field ceiling further: installed packages can contribute up to 100 additional custom fields, bringing the total to 900 custom fields on one object.

Whenever a user creates a custom field or object in Salesforce, the system appends __c to its API name. This suffix is required when writing SOQL queries or using API integrations, as it lets the system distinguish custom fields and objects from their standard counterparts. Custom objects get the exact same treatment: a custom object called Invoice becomes Invoice__c in every API call, every SOQL query, and every metadata operation.
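For instance, a hypothetical custom object named Invoice with a custom Amount field must be addressed by its __c API names in every query:

```sql
SELECT Id, Name, Amount__c
FROM Invoice__c
WHERE Amount__c > 1000
```

Drop the suffix anywhere in that query and Salesforce rejects it: the API name, not the label, is the only identifier the platform recognizes.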

Think about what that means for integration infrastructure. A mid-market Salesforce customer might have 50 custom fields on their Contact object. A large enterprise might have 300+. Every one of those fields ends with __c, lives outside any standard schema, and represents data that somebody in the organization considers business-critical.

If you are building B2B software, your enterprise buyers have heavily modified their CRM. Their business logic, revenue forecasting, and lead routing live inside those __c fields. If your integration layer cannot read, write, and react to those specific fields, your integration is functionally useless to them. How do you build a native integration to a third-party API when you do not know the shape of the data until the customer connects their account?

The Lowest-Common-Denominator Trap of Unified API Data Models

To understand why integrations break, we have to look at the schema normalization problem.

API schema normalization is the process of translating disparate data models from different third-party APIs into a single, canonical JSON format. The fundamental flaw of traditional unified APIs is their rigid approach to this modeling. To make 50 different CRMs look exactly the same, providers create a lowest-common-denominator schema that only includes fields present across all platforms.

If a feature exists in Salesforce but not in Pipedrive, it gets stripped out of the unified model.

When you query a traditional unified API for a list of contacts, the internal engine executes a hardcoded mapping strategy. It pulls the Salesforce data, discards the 147 custom __c fields because they do not exist in the predefined Contact interface, and produces a schema that looks something like this:

{
  "id": "003xx000004TmiU",
  "first_name": "Jane",
  "last_name": "Chen",
  "email": "jane@acme.com",
  "phone": "+1-555-0199",
  "created_at": "2024-03-15T10:30:00Z"
}

Clean. Simple. And completely useless for any enterprise customer whose business runs on Deal_Registration__c, Preferred_Contract_Terms__c, or Customer_Health_Score__c.

Complexity underestimation is rampant in the integration space. Unified API providers tend to understate the complexity of integrations, pitching a one-size-fits-all solution, but they are generally limited to simple data and record syncing against a single common data model. The core weakness of a unified API is that it works for simple reads but collapses under enterprise complexity, because it hides the exact shape of the underlying system.

The real damage isn't just missing data—it's the false confidence the abstraction creates. Your product team builds against the unified schema, assumes the integration is complete, and only discovers the gap when an enterprise prospect runs a technical evaluation against their real Salesforce org. By then, you've already burned weeks of engineering time.

When you buy a standard unified API, you trade engineering velocity today for architectural gridlock tomorrow. You get a fast initial integration, but you lose the ability to support the complex data models your enterprise customers demand. Your unified APIs are lying to you when they claim a single schema can handle enterprise reality.

Why Passthrough Endpoints Defeat the Purpose of a Unified API

When a customer demands support for a custom object and the unified model cannot handle it, integration providers offer a standard fallback: the passthrough endpoint.

A passthrough endpoint is essentially a proxy. It handles the OAuth token refresh, but it forces your engineering team to construct the exact HTTP request the underlying provider expects, bypassing the unified schema entirely.

For Salesforce, this means writing raw Salesforce Object Query Language (SOQL). Instead of calling a clean unified endpoint:

GET /unified/crm/contacts?integrated_account_id=abc123

You are now forced to write:

GET /proxy/query?q=SELECT Id, FirstName, LastName, Email,
  Deal_Registration__c, Revenue_Forecast__c,
  Customer_Health_Score__c
  FROM Contact
  WHERE Customer_Health_Score__c > 70

Falling back to a passthrough endpoint introduces severe architectural friction. You need to know that Salesforce uses PascalCase for standard fields but __c suffixes for custom ones. You need to understand that __r is for relationship queries while __c is for field access. When working with external integrations via the Salesforce API, suffixes help in correctly identifying custom objects and fields, reducing the risk of errors—but they require deep platform-specific knowledge.

When you use a passthrough endpoint, you instantly lose all the benefits of the unified API:

  • You lose normalized pagination: You must now write custom logic to handle Salesforce's specific cursor and offset pagination methods, which differ entirely from HubSpot's or Zendesk's.
  • You lose normalized error handling: You must manually parse Salesforce's XML or JSON error responses to figure out if a query failed due to a syntax error or a missing field.
  • You lose code reusability: You are now maintaining a dedicated code path specifically for Salesforce. If the customer connects HubSpot tomorrow, you have to write a completely different set of custom API calls.

The passthrough endpoint also means you're now maintaining two integration paths for the same customer: the unified API for standard fields, and raw SOQL for everything else. Your code branches on whether a field is standard or custom.

Worse, every customer's custom schema is different. Customer A has Revenue_Forecast__c on their Contact object. Customer B has Annual_Contract_Value__c on a completely different custom object called Partnership__c. Your passthrough code now needs per-tenant SOQL queries, per-tenant field mappings, and per-tenant error handling. At that point, you haven't just fallen back to direct integration—you've built a worse version of one, with an extra network hop through the unified API provider.
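To see where this leads, here is a sketch of the branching a passthrough fallback forces into your "unified" code path. The provider names and tenant schemas below are hypothetical, but the shape of the problem is not:

```typescript
// Hypothetical sketch: every provider and every tenant schema
// adds another arm that your team has to maintain by hand.
function buildContactQuery(provider: string, tenantId: string): string {
  if (provider === "salesforce") {
    // Per-tenant SOQL: each org's custom fields are different
    const customFields: Record<string, string[]> = {
      "tenant-a": ["Revenue_Forecast__c", "Customer_Health_Score__c"],
      "tenant-b": ["Annual_Contract_Value__c"],
    };
    const extra = (customFields[tenantId] ?? []).join(", ");
    return `SELECT Id, FirstName, LastName${extra ? ", " + extra : ""} FROM Contact`;
  }
  if (provider === "hubspot") {
    // Completely different API shape: property lists, not SOQL
    return "/crm/v3/objects/contacts?properties=firstname,lastname";
  }
  throw new Error(`No passthrough logic for ${provider}`);
}
```

Every new tenant schema grows the lookup table, and every new provider grows the if/else tree. This is exactly the per-integration, per-tenant code the unified API was supposed to eliminate.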

Handling Rate Limits When Querying Custom Salesforce Objects

Querying heavily customized objects often requires complex joins, multiple describe calls to discover the schema, and large data payloads, which quickly exposes another reality of enterprise integrations: API rate limits.

Salesforce enforces strict API allocations. Enterprise Edition orgs get a 100,000 daily API request limit, plus 1,000 additional requests per user license. Limits are calculated on a rolling 24-hour basis rather than a fixed calendar day, so a massive historical data sync at 2 AM and a report run at 3 PM both count against the same rolling window.
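As a back-of-the-envelope sketch of those figures (not an official Salesforce formula; exact allocations vary by edition and contract):

```typescript
// Daily API request allocation for an Enterprise Edition org,
// using the figures cited above: 100,000 base + 1,000 per user license.
const dailyApiQuota = (userLicenses: number): number =>
  100_000 + 1_000 * userLicenses;

dailyApiQuota(50); // → 150,000 requests per rolling 24 hours
```

That sounds generous until you remember the quota is shared: a full resync of a heavily customized org, multiplied across every tool connected to it, burns through six figures of requests fast.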

Exceeding these limits gets requests blocked: Salesforce returns an HTTP 403 status with a REQUEST_LIMIT_EXCEEDED error code until usage drops back under the limit.

The problem compounds when multiple integration consumers share the same Salesforce org's API quota. When marketing automation, support tools, and business intelligence platforms all access Salesforce concurrently, combined usage often exceeds both daily quotas and concurrent request limits.

How your integration infrastructure handles this rate limit reality is critical.

Many integration platforms attempt to hide rate limits by automatically retrying requests with exponential backoff. This is a dangerous anti-pattern in distributed systems. Silent retries consume worker memory, exhaust connection pools, and mask underlying architectural inefficiencies. If your system is hammering an API and getting rate-limited, burying that error behind an automatic retry loop prevents your engineering team from optimizing the sync strategy.

Truto takes a deliberately different, radically transparent approach: it does not retry, throttle, or apply backoff on rate limit errors.

When an upstream API returns a rate-limit error, Truto passes that error directly back to the caller. What Truto does instead is normalize the rate limit information from upstream APIs into standardized response headers based on the IETF RateLimit header specification:

  • ratelimit-limit: the maximum number of requests permitted in the current window.
  • ratelimit-remaining: the number of requests remaining in the current window.
  • ratelimit-reset: the number of seconds until the rate limit window resets.

The IETF draft defines RateLimit-Limit as containing the requests quota in the time window, RateLimit-Remaining as containing the remaining requests quota in the current window, and RateLimit-Reset as containing the time remaining in the current window, specified in seconds.

By normalizing the headers, Truto gives your application consistent, predictable rate limit data regardless of whether you are talking to Salesforce, HubSpot, or Pipedrive. The caller—whether it is your backend worker or an AI agent—is responsible for reading these standardized headers and implementing its own retry, circuit breaker, or backoff logic.

This gives you complete control over your queueing architecture and prevents silent failures. You can build your own backoff logic once and apply it everywhere:

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function callWithBackoff(requestFn: () => Promise<Response>): Promise<Response> {
  const response = await requestFn();

  const remaining = parseInt(
    response.headers.get('ratelimit-remaining') || '100',
    10
  );
  const resetSeconds = parseInt(
    response.headers.get('ratelimit-reset') || '60',
    10
  );

  if (response.status === 429 || response.status === 403) {
    // Rate limited - wait until the window resets, then retry
    await sleep(resetSeconds * 1000);
    return callWithBackoff(requestFn);
  }

  if (remaining > 0 && remaining < 10) {
    // Approaching the limit - spread remaining requests across the window
    const delayMs = (resetSeconds / remaining) * 1000;
    await sleep(delayMs);
  }

  return response;
}

The philosophy here is transparency over magic. Standardized headers let you build rate-limit-aware logic that accounts for all the other consumers sharing that Salesforce org's API quota.

The Fix: Declarative Mapping and the Override Hierarchy

If rigid schemas drop custom fields, and passthrough endpoints force you to write unscalable custom code, what is the solution?

The real solution to the custom Salesforce objects problem isn't a bigger common data model. It's not more passthrough endpoints. It's a fundamentally different architecture: abandon code-based mapping entirely and treat field mapping as data.

Traditional unified API platforms maintain a code-per-integration architecture. Somewhere in their codebase, there's a SalesforceAdapter.ts with hardcoded field mappings, and an if/else tree deciding which adapter to use. Adding support for a customer's Deal_Registration__c field means modifying code, opening a pull request, deploying, and hoping nothing breaks.

Truto eliminates integration-specific code entirely. Instead of writing if (provider === 'salesforce') in your application logic, integration behavior is defined entirely as data. The runtime engine is a generic execution pipeline that reads declarative mapping configurations and evaluates them at request time.

Truto achieves this using JSONata, a functional query and transformation language purpose-built for reshaping JSON objects. JSONata is Turing-complete, declarative, and storable as a string in a database.

Here is what a declarative Salesforce contact mapping looks like conceptually in Truto:

response_mapping: >-
  response.{
    "id": Id,
    "first_name": FirstName,
    "last_name": LastName,
    "email": Email,
    "phone_numbers": $filter([
      { "number": Phone, "type": "phone" },
      { "number": MobilePhone, "type": "mobile" }
    ], function($v) { $v.number }),
    "created_at": CreatedDate,
    "updated_at": LastModifiedDate,
    "custom_fields": $sift($, function($v, $k) {
      $k ~> /__c$/i and $boolean($v)
    })
  }

That last line—$sift($, function($v, $k) { $k ~> /__c$/i and $boolean($v) })—is the key. It evaluates the incoming payload, extracts the standard fields, and uses the $sift function to dynamically capture any field matching the __c regex pattern.

It dynamically detects every custom field and includes it in the response under a clean custom_fields object. No hardcoded field list is required. No custom code is required. When a customer adds a new custom field, the mapping adapts automatically because it's an expression that evaluates against whatever Salesforce returns. The unified model remains clean, but the enterprise data is preserved.
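The same dynamic capture can be expressed in plain TypeScript to make the behavior concrete. This is an illustration of the pattern, not Truto's implementation:

```typescript
// Mirror of the JSONata $sift expression: keep only truthy
// fields whose names end in __c.
function extractCustomFields(
  record: Record<string, unknown>
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(record).filter(
      ([key, value]) => /__c$/i.test(key) && Boolean(value)
    )
  );
}

extractCustomFields({
  FirstName: "Jane",            // standard field: excluded
  Revenue_Forecast__c: 1200000, // custom and truthy: kept
  Churn_Risk__c: null,          // custom but falsy: dropped
});
// → { Revenue_Forecast__c: 1200000 }
```

The predicate never names a specific field, which is the whole point: it works identically whether an org has 5 custom fields or 800.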

For a detailed walkthrough of how this works with the __c suffix, see our guide to handling custom fields in Salesforce via API.

The Three-Level Override Hierarchy

Dynamic detection handles most cases, but enterprise customers sometimes need specific custom fields mapped to specific unified fields—not just dumped into a generic custom_fields bag. How do unified APIs handle custom fields when every tenant is different?

Truto solves this with a three-level override hierarchy that deep-merges configurations at runtime:

flowchart TD
    A["Platform Base Mapping<br>Default for all customers"] --> B["Environment Override<br>Per-customer environment"]
    B --> C["Account Override<br>Per connected Salesforce org"]
    C --> D["Final Merged Mapping<br>Evaluated at runtime"]

  1. Platform Base: The default JSONata mapping that works for 90% of standard use cases and ships natively with Truto. Maps FirstName to first_name.
  2. Environment Override: A configuration override applied to a specific customer's environment. If Customer A needs their Revenue_Forecast__c field mapped to a top-level revenue_forecast field instead of buried in custom_fields, you update their environment configuration. No code deployment needed.
  3. Account Override: A configuration override applied to a single connected account. If Customer A has two Salesforce orgs with different custom field schemas, or requires a unique query parameter to access a custom object, each org gets its own override.

These overrides are deep-merged at runtime. An account override doesn't replace the entire mapping—it patches specific fields on top of the environment mapping, which patches on top of the platform base. Because these overrides are stored as data (JSON blobs in the mapping configuration), you can customize the unified API behavior for a specific enterprise customer without deploying a single line of code. This data-driven approach is the most effective way to handle custom Salesforce fields across enterprise customers at scale.
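The merge semantics can be sketched as a recursive deep merge. The layer contents below are hypothetical, and this is a simplified illustration rather than Truto's actual merge rules:

```typescript
type Config = { [key: string]: unknown };

// Hypothetical sample layers: the base ships with the platform,
// the overrides are stored per environment and per connected account.
const platformBase: Config = {
  contact: { first_name: "FirstName", email: "Email" },
};
const environmentOverride: Config = {
  contact: { revenue_forecast: "Revenue_Forecast__c" },
};
const accountOverride: Config = {
  contact: { revenue_forecast: "Rev_Forecast_Override__c" },
};

function isPlainObject(v: unknown): v is Config {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

// Later layers patch earlier ones; nested objects merge key-by-key
// instead of being replaced wholesale.
function deepMerge(base: Config, override: Config): Config {
  const result: Config = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = result[key];
    result[key] =
      isPlainObject(existing) && isPlainObject(value)
        ? deepMerge(existing, value)
        : value;
  }
  return result;
}

// platformBase ← environmentOverride ← accountOverride
const finalMapping = [environmentOverride, accountOverride].reduce(
  deepMerge,
  platformBase
);
// finalMapping.contact now carries first_name, email, and the
// account-level revenue_forecast override.
```

Because each layer is just data, changing any of them is a database write, not a deployment.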

Furthermore, to guarantee no data is ever permanently lost, Truto preserves the raw, unmodified third-party response in a remote_data field on every unified response. Even if a deeply nested custom object was not explicitly mapped, your application can always access it from remote_data.
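In application code, that guarantee means a custom value is always reachable, even before an explicit mapping exists for it. The field names and response shape below are illustrative:

```typescript
// Hypothetical unified response shape; remote_data carries the
// raw, unmodified provider payload alongside the mapped fields.
interface UnifiedContact {
  id: string;
  custom_fields?: Record<string, unknown>;
  remote_data?: Record<string, unknown>;
}

// Prefer the mapped value; fall back to the raw Salesforce response.
function readCustomField(contact: UnifiedContact, field: string): unknown {
  return contact.custom_fields?.[field] ?? contact.remote_data?.[field];
}

const contact: UnifiedContact = {
  id: "003xx000004TmiU",
  custom_fields: {},
  remote_data: { Deal_Registration__c: "a0Bxx0000012345" },
};

readCustomField(contact, "Deal_Registration__c"); // → "a0Bxx0000012345"
```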

Zero Integration-Specific Code: Scaling Enterprise Integrations

The override hierarchy only works because the underlying architecture is completely generic. The architectural difference between traditional unified APIs and a declarative execution engine is profound.

Most unified API platforms use a strategy pattern. They maintain separate adapter code files for every integration. Salesforce gets SalesforceAdapter.ts. HubSpot gets HubSpotAdapter.ts. Adding support for a new custom object means writing new code, running tests, and deploying to production. The maintenance burden grows linearly with the number of integrations and custom edge cases. A pagination bug fix in the Salesforce adapter doesn't help the HubSpot adapter.

Truto uses the interpreter pattern at platform scale. The runtime engine is a generic execution pipeline. It takes a declarative configuration describing how to talk to a third-party API, and a declarative mapping describing how to translate the data. It executes both without any awareness of which integration it is running.

There is zero integration-specific code in the runtime logic. No switch (provider). No salesforce_contacts database tables.

Operation by operation, the two architectures compare like this:

  • Add a new CRM integration. Code-per-integration (strategy): write adapter code, test, deploy. Declarative mapping (interpreter): add JSON config plus JSONata mappings, a pure data operation.
  • Map a customer's custom field. Strategy: modify the adapter, open a PR, deploy. Interpreter: add an account-level override, a config change.
  • Fix a pagination bug. Strategy: fix it in one adapter and hope the others work. Interpreter: fix it once in the generic engine; every integration benefits.
  • Support a new custom object. Strategy: write a new adapter endpoint. Interpreter: add a resource config and mapping expressions.

This means the unified API does not force standardized data models on custom objects. Adding support for a highly customized Salesforce deployment is a data operation, not a code operation. You update the JSONata mapping for that specific tenant, and the generic engine executes it immediately.

If your system requires an engineer to write code every time a customer introduces a new custom field, your integration strategy will not scale.

What This Means for Your Integration Strategy

If you are a product manager or engineering leader evaluating integration infrastructure for enterprise Salesforce customers, here is the honest assessment:

  1. Assume every enterprise Salesforce org is unique. Treat each Salesforce org as a unique snowflake, because it is. Any integration architecture that assumes a standard schema will fail on the first real enterprise deployment.
  2. Passthrough endpoints are a sign of architectural defeat, not flexibility. If your unified API regularly forces you into passthrough mode for custom objects, you're paying for an abstraction layer you can't actually use.
  3. Rate limit visibility matters more than rate limit absorption. When multiple integrations share a Salesforce org's API quota, you need standardized headers to build intelligent backoff—not a black box that silently retries and burns quota you can't see.
  4. Per-tenant customization without code deploys is the baseline requirement. Enterprise customers' Salesforce admins change their schemas constantly. Your integration layer needs to adapt through configuration, not engineering sprints.
  5. The raw data must always be accessible. No matter how good your mapping is, preserving the original API response (including all __c fields) in a remote_data field ensures nothing is ever permanently lost.

The __c suffix is, in many ways, the ultimate integration litmus test. If your integration infrastructure can handle arbitrary custom fields and custom objects across hundreds of Salesforce orgs without writing per-tenant code, it can handle anything. If it can't, you'll discover that in the worst possible moment—during an enterprise technical evaluation, with a six-figure deal on the line.

For B2B SaaS product managers and engineering leaders, this architecture changes the math of enterprise sales. You no longer have to tell a prospect that their custom Deal_Registration__c object is "on the roadmap." You do not have to pull engineers off core product features to write custom SOQL queries.

You can adapt to infinite schema variations dynamically, close the enterprise deal, and keep your engineering team focused on your actual product.

Frequently Asked Questions

Why do unified APIs break on Salesforce custom objects?
Unified APIs use a lowest-common-denominator schema that only includes fields shared across all supported CRMs. Salesforce custom fields (ending in __c) and custom objects are unique to each org and fall outside this standardized model, so they get silently dropped.
What does the __c suffix mean in the Salesforce API?
The __c suffix is Salesforce's naming convention to identify custom fields and custom objects. It differentiates them from standard platform fields and is required in all SOQL queries and API integrations.
What is a passthrough endpoint in an integration?
A passthrough endpoint acts as a proxy, forcing developers to write raw, provider-specific queries (like SOQL for Salesforce) to access custom objects, completely bypassing the benefits of a unified API.
How does Truto handle API rate limits from Salesforce?
Truto does not automatically retry or apply backoff on 429 or 403 errors. Instead, it normalizes rate limit data into standard headers (ratelimit-limit, ratelimit-remaining, ratelimit-reset) so the caller can implement precise, context-aware backoff logic.
What is the best way to map Salesforce custom fields without writing code?
Use declarative mapping with a functional transformation language like JSONata, combined with a multi-level override hierarchy (platform, environment, account). This lets you dynamically detect __c fields and configure per-tenant mappings without code deployments.
