---
title: How to Handle Custom Fields and Custom Objects in Salesforce via API
slug: how-to-handle-custom-fields-and-custom-objects-in-salesforce-via-api
date: 2026-03-20
author: Nachi Raman
categories: [Engineering, Guides]
excerpt: "Learn how to programmatically handle Salesforce custom fields (__c) and custom objects via API without writing per-customer integration code. Covers SOQL, Metadata API, and data-driven mapping."
tldr: Handling Salesforce custom fields (__c) at scale requires data-driven mapping and a config override hierarchy — letting you extract and map custom objects dynamically without tenant-specific code.
canonical: https://truto.one/blog/how-to-handle-custom-fields-and-custom-objects-in-salesforce-via-api/
---

# How to Handle Custom Fields and Custom Objects in Salesforce via API


If you're building a B2B SaaS product, you will eventually integrate with Salesforce. And the moment you do, you will hit the wall of custom fields and custom objects.

Customer A has `Industry_Vertical__c` on their Account object. Customer B calls it `Sector__c`. Customer C has a completely custom object called `Deal_Registration__c` with 47 fields that don't exist anywhere else. Your integration code, which worked perfectly in your test org, breaks the moment it encounters a real enterprise deployment.

This is not an edge case. Salesforce holds a roughly 21% share of the global CRM market, and 80% of Fortune 500 companies rely on it. Nearly every one of those enterprise deployments is heavily customized. Large organizations routinely have hundreds of custom objects, complex permission structures, and strict data governance requirements. One customer might have 50 custom fields on their Account object while another has 12 entirely custom objects tracking a proprietary sales process.

If your integration only supports standard objects like `Account` and `Contact`, you are ignoring the actual data your enterprise customers care about—and failing to deliver [the integrations your B2B sales team actually asks for](https://truto.one/blog/how-to-build-integrations-your-b2b-sales-team-actually-asks-for/).

This guide breaks down exactly how to interact with Salesforce's custom fields (`__c`) programmatically, why rigid unified APIs and code-first integration platforms fail at scale, and how to use data-driven mapping to handle infinite per-customer schema variations without writing a single line of integration-specific code.

## The `__c` Problem: Why Salesforce API Integrations Break

Whenever a user creates a custom field in Salesforce, the system appends `__c` to the API name. Custom objects get the same treatment. You must use this suffix in SOQL queries and API calls; it is how Salesforce distinguishes custom fields from standard ones.

When you query a standard `Contact` object, you might expect a predictable JSON response:

```json
{
  "Id": "0035e00000abc12AAA",
  "FirstName": "Jane",
  "LastName": "Doe",
  "Email": "jane@example.com"
}
```

But in a real enterprise environment, the response looks more like this:

```json
{
  "Id": "0035e00000abc12AAA",
  "FirstName": "Jane",
  "LastName": "Doe",
  "Email": "jane@example.com",
  "LTV_Cohort__c": "Enterprise_Tier_1",
  "Churn_Risk_Score__c": 12.5,
  "Last_NPS_Date__c": "2025-11-04",
  "Billing_System_ID__c": "cus_987654321"
}
```

If your application's data model expects a strict schema, it will either drop these custom fields entirely or fail to deserialize the payload.

To make matters worse, Customer A might store their billing ID in `Billing_System_ID__c`, while Customer B stores it in `Stripe_Customer_ID__c`. Hardcoding these mappings in your application logic means you are no longer building a product — you are running a custom development agency for your users.

If your integration isn't architected defensively, a change to an automation rule or custom field in Salesforce can break real-time syncs overnight. Salesforce documentation explicitly warns that long-running Apex triggers or Flow automations fired by field updates can cause timeouts, breaking external callouts mid-sync.

## Standard Objects vs. Custom Objects in the Salesforce API

Before you can map custom data, you need to understand how to extract it.

**Standard objects** are the ones Salesforce ships out of the box: `Account`, `Contact`, `Lead`, `Opportunity`, `Case`. They have predictable API names and well-documented fields.

**Custom objects** are created by admins or developers to model business-specific data. They always end in `__c` — for example, `Invoice__c`, `Product_Configuration__c`, or `Warranty_Claim__c`.

The API mechanics are identical for both — that's the good news. The bad news is that you can't know the schema ahead of time. The numbers to keep in your head: up to 3,000 total custom objects per org, a ceiling of 800 custom fields on a single object (both limits vary by edition), and a cap of 40 relationship fields per object. Enterprise orgs routinely push these limits.

### REST API vs. SOQL

If you know the exact API name of the custom object, you can hit the standard sObject REST endpoints:

```bash
# Standard object
GET /services/data/v59.0/sobjects/Contact/003XXXXXXXXXXXX

# Custom object
GET /services/data/v59.0/sobjects/Invoice__c/a01XXXXXXXXXXXX
```

This works for simple CRUD operations on a single record. But it falls apart when you need to query lists of records based on custom criteria, or when you need to traverse relationships.

For enterprise integrations, SOQL is mandatory. SOQL allows you to specify exactly which standard and custom fields you want to return, and it supports relationship queries using the `__r` suffix:

```sql
SELECT Id, Amount__c, Status__c, Account__r.Name, Account__r.ARR__c
FROM Invoice__c
WHERE Status__c = 'Unpaid'
```

Executing this via the API requires a URL-encoded GET request:

```http
GET /services/data/v59.0/query/?q=SELECT+Id,+Amount__c...+FROM+Invoice__c
```

Building these SOQL queries dynamically based on what custom fields a specific customer has configured is the hardest part of [integrating Salesforce](https://truto.one/blog/salesforce-integration-is-not-as-hard-as-you-think/). You have to inspect the customer's schema, validate the field names, construct the SOQL string, and handle pagination via the `nextRecordsUrl` pointer.

> [!WARNING]
> **Watch out for API Limits:** Salesforce enforces strict resource constraints at every level. Synchronous Apex transactions are limited to 100 SOQL queries and 50,000 retrieved records. API call limits start at 100,000 requests per 24-hour period for paid editions. Inefficient queries or aggressive polling for custom object changes will burn through a customer's daily quota, causing the integration to shut down entirely until the next rolling window.

### Discovering Custom Fields with the sObject Describe API

The single most important API call for handling custom fields is `describe`. The sObject Describe resource retrieves all the metadata for an object, including information about each field, URLs, and child relationships.

```bash
GET /services/data/v59.0/sobjects/Contact/describe/
Authorization: Bearer <access_token>
```

The response includes every field on the object — standard and custom — with metadata like type, label, length, and whether the field is required:

```json
{
  "fields": [
    {
      "name": "FirstName",
      "label": "First Name",
      "type": "string",
      "custom": false
    },
    {
      "name": "NPS_Score__c",
      "label": "NPS Score",
      "type": "double",
      "custom": true
    },
    {
      "name": "Preferred_Language__c",
      "label": "Preferred Language",
      "type": "picklist",
      "custom": true
    }
  ]
}
```

You can filter for custom fields by checking the `custom: true` flag, or by pattern-matching on the `__c` suffix. This describe call should be your first step when connecting any new Salesforce integrated account — run it once, cache the schema, and use it to drive your field mapping logic.
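
As a sketch, filtering a cached `describe` payload down to just the custom fields takes only a couple of lines (the abbreviated payload below is illustrative):

```python
def extract_custom_fields(describe_response):
    """Return only the fields flagged as custom in an sObject describe payload."""
    return [f for f in describe_response["fields"] if f.get("custom")]

# Abbreviated describe payload for illustration
schema = {
    "fields": [
        {"name": "FirstName", "type": "string", "custom": False},
        {"name": "NPS_Score__c", "type": "double", "custom": True},
    ]
}
custom_names = [f["name"] for f in extract_custom_fields(schema)]  # ["NPS_Score__c"]
```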

:::tip
Call `describe` when a customer first connects their Salesforce account to your app. Store the schema locally and use it to present a field-mapping UI or to auto-configure your data pipeline. Re-fetch it periodically to catch schema changes.
:::

### Validating Field Names Before Building SOQL

A `describe` call returns every field the connected user can access. Before constructing a SOQL query, you need to validate your requested fields against this cached schema. SOQL does not support `SELECT *`; you must explicitly name every field, and referencing a field that doesn't exist (or that the user can't access) returns an `INVALID_FIELD` error.

Here's a practical pattern in Python:

```python
import requests

def get_field_metadata(instance_url, access_token, object_name):
    """Fetch field metadata via sObject Describe and return a lookup dict."""
    url = f"{instance_url}/services/data/v59.0/sobjects/{object_name}/describe/"
    headers = {"Authorization": f"Bearer {access_token}"}
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()

    fields = resp.json()["fields"]
    return {
        f["name"]: {
            "type": f["type"],
            "custom": f["custom"],
            "filterable": f["filterable"],
            "createable": f["createable"],
            "updateable": f["updateable"],
            "nillable": f["nillable"],
            "permissionable": f.get("permissionable", True),
        }
        for f in fields
        if not f.get("deprecatedAndHidden", False)
    }


def validate_fields(requested_fields, field_metadata):
    """Return only the fields that exist and are queryable. Log any that were skipped."""
    safe = []
    skipped = []
    for field in requested_fields:
        if field in field_metadata:
            safe.append(field)
        else:
            skipped.append(field)
    if skipped:
        print(f"Warning: skipping unknown or inaccessible fields: {skipped}")
    return safe
```

The `deprecatedAndHidden` flag catches fields that Salesforce has marked as deprecated. Including them in a query may work today but will break in a future API version; filter them out proactively. The `filterable` property is equally important: if you need a field in a `WHERE` clause, confirm `filterable` is `true` first, or Salesforce will reject the query.
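
To make that concrete, here's a small helper that builds on the metadata returned by `get_field_metadata` above and splits candidate `WHERE`-clause fields into usable and rejected (the field names are made up for illustration):

```python
def split_filterable(where_fields, field_metadata):
    """Partition WHERE-clause candidates by the `filterable` describe property."""
    usable, rejected = [], []
    for field in where_fields:
        meta = field_metadata.get(field, {})
        (usable if meta.get("filterable") else rejected).append(field)
    return usable, rejected

meta = {
    "NPS_Score__c": {"filterable": True},
    "Long_Notes__c": {"filterable": False},  # e.g. long text areas aren't filterable
}
usable, rejected = split_filterable(["NPS_Score__c", "Long_Notes__c", "Ghost__c"], meta)
```

Unknown fields like `Ghost__c` land in `rejected` alongside non-filterable ones, so one check covers both failure modes.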

### Building Dynamic SOQL Safely

Once you've validated the field list, you need to assemble the SOQL string and send it to the REST query endpoint. The query string must be URL-encoded in the `q` parameter. The safest approach is to let your HTTP library handle the encoding rather than manually constructing `+` and `%XX` sequences:

```python
def build_soql(object_name, fields, where_clause=None, limit=None):
    """Build a SOQL query string from validated field names."""
    field_list = ", ".join(fields)
    soql = f"SELECT {field_list} FROM {object_name}"
    if where_clause:
        soql += f" WHERE {where_clause}"
    if limit:
        soql += f" LIMIT {limit}"
    return soql


def execute_soql(instance_url, access_token, soql):
    """Execute a SOQL query via the REST API with proper URL encoding."""
    url = f"{instance_url}/services/data/v59.0/query/"
    headers = {"Authorization": f"Bearer {access_token}"}
    # Pass raw SOQL as a param — requests handles URL-encoding
    resp = requests.get(url, headers=headers, params={"q": soql})

    if resp.status_code == 400:
        error = resp.json()[0]
        if error.get("errorCode") == "INVALID_FIELD":
            raise ValueError(f"Invalid field in query: {error['message']}")
        if error.get("errorCode") == "MALFORMED_QUERY":
            raise ValueError(f"SOQL syntax error: {error['message']}")
    resp.raise_for_status()
    return resp.json()
```

The resulting HTTP request looks like:

```http
GET /services/data/v59.0/query/?q=SELECT+Id%2C+FirstName%2C+NPS_Score__c+FROM+Contact+WHERE+NPS_Score__c+%3E+8 HTTP/1.1
Authorization: Bearer <access_token>
```

> [!WARNING]
> **Never interpolate user input directly into SOQL strings.** If you accept filter values from end users, always sanitize them aggressively. SOQL injection is a real attack vector: a malicious value in a `WHERE` clause can exfiltrate records the user should not see. Treat SOQL construction with the same paranoia you'd apply to SQL.
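
A minimal escaping helper for quoted string literals (Salesforce uses the backslash as the escape character inside SOQL strings; this sketch covers quotes and backslashes, not every reserved character):

```python
def escape_soql_literal(value: str) -> str:
    """Escape a user-supplied value for use inside a single-quoted SOQL literal."""
    # Escape backslashes first, then single quotes (order matters)
    return value.replace("\\", "\\\\").replace("'", "\\'")

user_input = "O'Brien' OR Name != '"
clause = f"LastName = '{escape_soql_literal(user_input)}'"
# The injected quote is neutralized instead of terminating the literal
```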

### Handling Pagination with `nextRecordsUrl`

Salesforce returns a maximum of 2,000 records per SOQL query response by default. For any non-trivial dataset, you need to handle cursor-based pagination. The response includes a `done` boolean and, when there are more records, a `nextRecordsUrl` pointer that you follow to fetch the next page.

A typical first-page response:

```json
{
  "totalSize": 4750,
  "done": false,
  "nextRecordsUrl": "/services/data/v59.0/query/01gRM00000EbcW1YAJ-2000",
  "records": [
    { "Id": "003xx000004TmiUAAS", "FirstName": "Jane", "NPS_Score__c": 9.2 },
    { "Id": "003xx000004TmiVAAS", "FirstName": "Carlos", "NPS_Score__c": 7.8 }
  ]
}
```

To collect all records, follow the `nextRecordsUrl` until `done` is `true`:

```python
def fetch_all_records(instance_url, access_token, soql):
    """Execute a SOQL query and paginate through all results."""
    all_records = []
    result = execute_soql(instance_url, access_token, soql)
    all_records.extend(result["records"])

    while not result["done"]:
        next_url = f"{instance_url}{result['nextRecordsUrl']}"
        headers = {"Authorization": f"Bearer {access_token}"}
        resp = requests.get(next_url, headers=headers)
        resp.raise_for_status()
        result = resp.json()
        all_records.extend(result["records"])

    return all_records
```

Two things to watch for. First, each page request counts against the org's daily API call limit: a query returning 10,000 records consumes 5 API calls just for pagination. Second, the `nextRecordsUrl` cursor expires after 15 minutes in most editions. If your processing takes longer than that between page fetches, you'll need to re-run the query from scratch.
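
One way to sidestep cursor expiry is to process each page as it arrives instead of accumulating everything first. A sketch with the page fetcher injected as a callable, so the pagination logic itself stays testable:

```python
def iter_soql_pages(first_page, fetch_next):
    """Yield each batch of records, following nextRecordsUrl until done is true.

    `fetch_next` takes a nextRecordsUrl and returns the decoded JSON of the
    next page (in production, a thin wrapper around an authenticated GET).
    """
    page = first_page
    yield page["records"]
    while not page["done"]:
        page = fetch_next(page["nextRecordsUrl"])
        yield page["records"]

# Simulated two-page result set
pages = {"/q-2": {"done": True, "records": [{"Id": "003B"}]}}
first = {"done": False, "nextRecordsUrl": "/q-2", "records": [{"Id": "003A"}]}
ids = [r["Id"] for batch in iter_soql_pages(first, pages.__getitem__) for r in batch]
```

Because each page is yielded immediately, you can write records to your own store between fetches without the cursor sitting idle for the whole run.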

> [!TIP]
> Don't use SOQL's `OFFSET` clause as a pagination strategy for large datasets. `OFFSET` is capped at 2,000 rows and gets progressively slower. The `nextRecordsUrl` cursor is the only reliable way to paginate through large result sets via the REST API.

### Field-Level Security and Graceful Degradation

When you call the `describe` endpoint using an OAuth token, Salesforce filters the response based on the connected user's field-level security (FLS) settings. If a custom field is hidden from the API user's profile, it won't appear in the `fields` array at all. This means your cached schema from `describe` is already your source of truth for what the connected user can read.

But things get tricky in practice:

**Write operations need extra checks.** A field appearing in `describe` means the user can read it, but not necessarily write to it. Always check the `createable` property before inserts and `updateable` before updates. Attempting to write to a read-only field returns a `FIELD_NOT_UPDATEABLE` error.

**Cached schemas go stale.** A Salesforce admin can change field-level security at any time. If you cached the schema on Monday and a field's read permission is revoked on Tuesday, your SOQL query referencing that field will return an `INVALID_FIELD` error on Wednesday. Schedule periodic `describe` refreshes; once every 24 hours is a reasonable default, with an on-demand refresh when you encounter an unexpected error.
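
One way to implement that refresh schedule is a small TTL cache around the describe call. This is a sketch; `SchemaCache` and its parameters are our own invention, not a Salesforce API:

```python
import time

class SchemaCache:
    """Cache describe results per object, refreshing after a TTL or on demand."""

    def __init__(self, fetch_schema, ttl_seconds=24 * 3600, clock=time.monotonic):
        self._fetch = fetch_schema  # callable: object_name -> field metadata dict
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}  # object_name -> (fetched_at, schema)

    def get(self, object_name, force_refresh=False):
        entry = self._entries.get(object_name)
        expired = entry is None or self._clock() - entry[0] > self._ttl
        if force_refresh or expired:
            schema = self._fetch(object_name)
            self._entries[object_name] = (self._clock(), schema)
            return schema
        return entry[1]
```

Pass `force_refresh=True` when a query comes back with `INVALID_FIELD`, so the stale entry is replaced immediately instead of waiting out the TTL.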

**Fields can be deleted entirely.** If a Salesforce admin deletes a custom field, any SOQL query referencing it fails. Your error handling should catch this and automatically remove the stale field from your cached schema:

```python
def safe_query_with_fallback(instance_url, access_token, object_name, fields, where=None):
    """Query with automatic retry if a field has been deleted or revoked."""
    soql = build_soql(object_name, fields, where)
    try:
        return fetch_all_records(instance_url, access_token, soql)
    except ValueError as e:
        if "INVALID_FIELD" in str(e):
            # Re-fetch the schema and retry with only valid fields
            valid = get_field_metadata(instance_url, access_token, object_name)
            safe_fields = [f for f in fields if f in valid]
            if not safe_fields:
                raise
            soql = build_soql(object_name, safe_fields, where)
            return fetch_all_records(instance_url, access_token, soql)
        raise
```

This pattern gives you automatic recovery: if a field disappears, the integration retries with a reduced field list rather than failing outright. Log the dropped fields so your team can investigate and update any dependent mappings.

> [!TIP]
> The `describe` response includes a `permissionable` flag on each field. When `permissionable` is `true`, the field's visibility can be controlled via profiles and permission sets, meaning it could disappear from a user's view at any time. Fields where `permissionable` is `false` (like `Id` and `Name`) are always visible and safe to hardcode in your queries.

## How Legacy Unified APIs and Code-First Platforms Handle Custom Fields

The integration market has attempted to solve the `__c` problem in a few different ways. Most of them shift the burden back onto your engineering team.

### 1. The Rigid Schema + Passthrough Approach

Legacy unified APIs force third-party data into a rigid, lowest-common-denominator schema. They map the standard `FirstName` and `LastName`, but they strip away the custom fields that actually matter to the business.

When you inevitably need to access `Churn_Risk_Score__c`, these platforms tell you to use "Passthrough Requests". This is a polite way of saying the abstraction has failed. You are forced to write raw Salesforce API requests, construct your own SOQL queries, and handle your own pagination and rate limiting. You pay for a unified API, but you still have to build and maintain a native Salesforce integration. See [Your Unified APIs Are Lying to You: The Hidden Cost of Rigid Schemas](https://truto.one/blog/your-unified-apis-are-lying-to-you-the-hidden-cost-of-rigid-schemas/) for a deeper breakdown.

### 2. The Code-First Approach

Code-first platforms position themselves as developer-friendly by giving you a blank canvas. They require you to write custom TypeScript logic to handle custom fields and per-customer schema variations.

If Customer A needs `Industry__c` and Customer B needs `Vertical__c`, you write conditional logic in your sync scripts. This works if you have three customers. It collapses at 50, and it becomes a full-time job at 200. Your engineering team spends more time maintaining customer-specific integration code than building product features. That's the tax you pay for [code-first flexibility](https://truto.one/blog/truto-vs-nango-why-code-first-integration-platforms-dont-scale/).

### 3. The Raw Suffix Approach

Some platforms try to split the difference by requiring developers to append specific query parameters (like `?fields=raw`) to their requests. They dump the unmapped custom fields into a nested `raw` JSON object. While better than dropping the data entirely, this still forces your application layer to parse through unnormalized payloads. Your backend code has to check if the `raw` object exists, search for the `__c` suffix, and handle the type conversions manually.

### 4. The Declarative Deployment Approach

Other platforms require developers to define custom object mappings in declarative YAML files, version-controlled and deployed via CI/CD pipelines. This keeps the configuration out of your application code, but introduces massive friction. If a customer's Salesforce admin adds a new custom field on a Tuesday, your engineering team has to update a YAML file, open a pull request, wait for CI/CD checks, and deploy to production before the integration can recognize the new field. If you have 100 enterprise customers with active Salesforce admins, you're merging YAML pull requests weekly.

## The Architectural Solution: Data-Driven Mapping (No Custom Code)

The only scalable way to [build native CRM integrations](https://truto.one/blog/building-native-crm-integrations-without-draining-engineering-in-2026/) and handle Salesforce custom fields across thousands of tenants is to eliminate integration-specific code entirely. Instead of writing TypeScript or hardcoding SOQL queries, you treat API mapping as configuration data evaluated at runtime.

```mermaid
flowchart LR
    A["Your App<br>(Unified API Call)"] --> B["Generic<br>Execution Engine"]
    B --> C{"Load Config<br>from DB"}
    C --> D["Integration Config<br>(URLs, auth, pagination)"]
    C --> E["Mapping Expressions<br>(JSONata / transforms)"]
    C --> F["Per-Account Overrides<br>(customer-specific)"]
    D --> G["Salesforce API"]
    E --> G
    F --> G
    G --> H["Transform Response<br>via Mapping"]
    H --> I["Unified Response"]
```

The key insight: **the mapping between Salesforce's `__c` fields and your unified schema is a data expression, not application code.** A single generic execution engine evaluates these expressions at runtime for every integration and every customer.

### The Generic Execution Pipeline

When a request comes into a data-driven integration engine, the system does not branch based on the integration name. It follows a generic, five-step pipeline:

1. **Resolve Configuration:** The system queries the database for the integrated account credentials and the integration configuration (base URL, auth scheme, pagination style).
2. **Extract Mapping Expressions:** The system loads JSONata expressions that define how to translate the request and response.
3. **Transform the Request:** The system evaluates the request mapping expression to convert the unified request into the provider's native format (e.g., generating a SOQL query).
4. **Execute the API Call:** The proxy layer makes the HTTP request using the generic configuration.
5. **Transform the Response:** The system evaluates the response mapping expression to normalize the native payload back into the unified format.
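
The five steps above can be sketched in miniature. Everything here is illustrative: the transforms are plain Python callables standing in for a JSONata evaluator, and the config keys are made up. The point is that the pipeline function itself contains no Salesforce-specific branches:

```python
def run_pipeline(request, config_store, http_get):
    """Generic execution pipeline: resolve config, transform, call, normalize."""
    # Steps 1-2: resolve configuration and mapping expressions from the store
    config = config_store[request["integration"]]
    # Step 3: transform the unified request into the provider's native form
    native_params = config["request_mapping"](request["params"])
    # Step 4: execute the call through a generic HTTP layer
    raw = http_get(config["base_url"], native_params)
    # Step 5: normalize the native payload into the unified shape
    return config["response_mapping"](raw)

config_store = {
    "salesforce": {
        "base_url": "https://example.my.salesforce.com/services/data/v59.0/query/",
        "request_mapping": lambda p: {"q": f"SELECT Id FROM {p['object']}"},
        "response_mapping": lambda r: [{"id": rec["Id"]} for rec in r["records"]],
    }
}
fake_http = lambda url, params: {"records": [{"Id": "003A"}]}  # stands in for requests.get
result = run_pipeline(
    {"integration": "salesforce", "params": {"object": "Contact"}},
    config_store, fake_http,
)
```

Supporting a new provider means inserting a new row into `config_store`, not writing a new code path.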

### Mapping Custom Fields Dynamically with JSONata

Because the mapping layer relies on JSONata (a lightweight query and transformation language for JSON), you can extract custom fields dynamically without knowing their exact names in advance:

```jsonata
{
  "id": Id,
  "first_name": FirstName,
  "last_name": LastName,
  "email_addresses": [{ "email": Email }],
  "custom_fields": $sift($, function($v, $k) { $k ~> /__c$/i and $boolean($v) })
}
```

Look closely at the `custom_fields` line. It uses the JSONata `$sift` function to iterate over every key in the Salesforce response. It applies a regular expression (`/__c$/i`) to find any key ending in `__c`. If the key matches and the value is truthy (`$boolean` filters out nulls and empty values), it extracts the key-value pair and drops it into a clean `custom_fields` object.

This single line of configuration handles infinite custom fields. If a Salesforce admin adds fifty new `__c` fields tomorrow, this expression will automatically extract them and pass them to your application. No YAML deployments. No TypeScript updates. No pull requests.
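
For readers more comfortable in Python, the `$sift` expression is doing roughly this (a simplified analogue; the real engine evaluates JSONata, not Python):

```python
def sift_custom_fields(record):
    """Pull every truthy __c key out of a raw Salesforce record."""
    return {
        k: v for k, v in record.items()
        if k.lower().endswith("__c") and v not in (None, "", False)
    }

record = {"Id": "003A", "FirstName": "Jane", "NPS_Score__c": 9.2, "Old_Flag__c": None}
custom = sift_custom_fields(record)  # {"NPS_Score__c": 9.2}
```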

This is how Truto handles Salesforce custom fields. The mapping expressions are stored as configuration data, and the execution engine evaluates them generically. The same engine that processes Salesforce Contact mappings also processes HubSpot, Pipedrive, and every other CRM — because it doesn't contain integration-specific code. It just evaluates whatever mapping expression the config provides.

### Constructing SOQL Dynamically

The same concept applies to building requests. Instead of hardcoding query strings, you use a generic mapping expression to translate unified query parameters into native syntax: a SOSL `FIND` when a free-text search term is present, or a SOQL `WHERE` clause for structured filters:

```jsonata
{
  "q": query.search_term
    ? "FIND {" & query.search_term & "} RETURNING Contact(Id, FirstName, LastName)"
    : null,
  "where": $whereClause ? "WHERE " & $whereClause : null
}
```

The mapping engine evaluates the incoming parameters, constructs the SOQL syntax, and passes it to the generic HTTP executor. The core engine never knows it is talking to Salesforce.

## Handling Per-Customer Salesforce Customizations at Scale

Extracting all custom fields into a `custom_fields` object solves the discovery problem, but it does not solve the mapping problem—a classic [schema normalization](https://truto.one/blog/why-schema-normalization-is-the-hardest-problem-in-saas-integrations/) challenge.

Your application expects a field called `billing_id`. Customer A uses `Billing_ID__c`. Customer B uses `Invoice_System_Ref__c`. If you use a code-first platform, you have to write tenant-specific logic to handle this routing. The correct architectural approach is a **Config Override Hierarchy**.

### The Config Override Hierarchy

In a data-driven architecture, mapping configurations are just JSON stored in a database. This means you can override them at the account level without touching the base integration logic.

When the execution pipeline resolves the configuration, it checks for overrides in a specific order:

1. **Base Mapping** — The default Salesforce-to-unified-model mapping that works for most customers. Handles standard fields and captures all `__c` fields into a generic `custom_fields` bucket.
2. **Integration-Level Override** — Adjustments that apply to all Salesforce accounts (e.g., always extracting `Company_Size__c` into a specific unified field).
3. **Account-Level Override** — Per-customer overrides that handle Customer A's specific `notes__c` → `description` mapping without touching the base configuration.

To solve the `billing_id` discrepancy, you simply pass an override payload for Customer B's specific `integrated_account_id`:

```json
{
  "response_mapping": "$merge([$base_mapping, { 'billing_id': Invoice_System_Ref__c }])"
}
```

The engine merges this override with the base mapping at runtime. Customer B's `Invoice_System_Ref__c` is cleanly mapped to your application's `billing_id` property.
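
In spirit, the merge behaves like this simplified sketch (real engines merge JSONata expressions rather than flat dicts, and the layer names here are illustrative):

```python
def resolve_mapping(base, integration_override=None, account_override=None):
    """Merge mapping layers in precedence order: base < integration < account."""
    merged = dict(base)
    for layer in (integration_override, account_override):
        if layer:
            merged.update(layer)
    return merged

base = {"first_name": "FirstName", "billing_id": "Billing_ID__c"}
account = {"billing_id": "Invoice_System_Ref__c"}  # Customer B's override
mapping = resolve_mapping(base, account_override=account)
```

The base mapping stays untouched in the database; Customer B's row simply wins where the keys collide.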

You can expose this configuration via your own UI, allowing your customer success team (or the customers themselves) to define their own field mappings. You save the mapping as a JSON override via API, and the integration adapts instantly. No code changes. No deploys. No CI/CD pipeline for a field mapping change.

:::warning
Salesforce admins change things constantly. A renamed field or a new validation rule can silently break your integration. Always build defensive error handling around field access, and consider re-running `describe` calls on a schedule to detect schema drift before your customers report broken syncs.
:::

## Why Real-Time Pass-Through Beats Syncing Custom Objects

Many integration platforms attempt to solve the custom object problem by acting as an ETL pipeline — syncing the customer's entire Salesforce instance into a managed database, normalizing it, and letting you query their database instead of Salesforce.

With custom objects and fields, syncing introduces serious problems.

### Schema Explosion and Drift

You are storing data whose shape you cannot predict. Customer A has 12 custom objects with 200+ fields total. Customer B has 8 completely different custom objects. Your database schema either becomes infinitely flexible (hello, JSON blobs) or infinitely complex (hello, per-tenant tables).

Worse, when a Salesforce admin changes the data type of a custom field from `String` to `Picklist`, or deletes a field entirely, the sync pipeline will often crash or throw silent validation errors. In a real-time pass-through architecture, there is no database schema to maintain. The JSONata expression simply evaluates whatever payload Salesforce returns at that exact moment. If a field is missing, it evaluates to null. If a new field appears, the `$sift` function catches it automatically.

### Data Residency and Compliance Risk

Enterprise Salesforce instances contain highly sensitive data: PII, financial records, healthcare information, and proprietary business metrics. If you use a syncing platform, you are copying all of that into a third-party vendor's database.

If your customer requires data to remain in the EU (GDPR) or requires strict HIPAA compliance, mirroring their custom objects into a multi-tenant sync database will instantly fail their infosec review.

A real-time pass-through architecture acts as a proxy. It fetches the data from Salesforce, evaluates the JSONata mapping in memory, returns the normalized JSON to your application, and drops the payload. The data is never stored at rest in the integration layer.

### Stale Data and Webhook Unreliability

Traditional Salesforce integrations rely on polling — checking for changes on a fixed interval. This introduces 15-60 seconds of staleness at best. Salesforce does support Outbound Messages and Change Data Capture (CDC), but configuring these for custom objects requires administrative setup inside the customer's org. It is not plug-and-play. If a webhook fails to fire, your synced database becomes stale, and your application makes decisions based on outdated custom field values.

By querying the Salesforce API in real-time through a proxy layer, you guarantee that your application always reads the absolute source of truth.

The trade-off is real: pass-through means your app's latency depends on Salesforce's response time. For most read operations, that's 200-800ms. If you need sub-50ms reads on large datasets, a sync architecture may be necessary. But for the majority of B2B SaaS use cases — displaying a customer's contacts, creating a deal, updating a record — pass-through is the simpler, more secure, and more maintainable choice.

## Putting It All Together: A Reference Architecture

Here's what a production-grade Salesforce custom field integration looks like when you combine these patterns:

```mermaid
sequenceDiagram
    participant App as Your SaaS App
    participant Engine as Unified API Engine
    participant Config as Config Store
    participant SF as Salesforce API

    App->>Engine: GET /unified/crm/contacts?integrated_account_id=abc123
    Engine->>Config: Load integration config + mapping + overrides
    Config-->>Engine: Base mapping + account-level overrides
    Engine->>Engine: Build SOQL query with custom field selection
    Engine->>SF: GET /services/data/v59.0/query?q=SELECT Id,FirstName,...,NPS_Score__c
    SF-->>Engine: Raw Salesforce response
    Engine->>Engine: Evaluate mapping expressions (including __c capture)
    Engine->>Engine: Apply account-level overrides
    Engine-->>App: Unified response with custom_fields populated
```

The core steps:

1. **Connection time**: Run `describe` on each object, store the schema, let the customer (or your team) configure field mappings.
2. **Request time**: Load the base mapping plus any per-account overrides from your config store. Build the appropriate SOQL query dynamically.
3. **Response time**: Evaluate the mapping expression against the raw response. Custom fields matching `__c` are captured automatically. Specific overrides are applied on top.
4. **Ongoing**: Re-run `describe` periodically. Surface schema changes. Let non-engineering teams adjust overrides without code changes.

This architecture handles the full spectrum: standard objects with default fields, custom fields on standard objects, fully custom objects, and per-customer schema variations — all without writing integration-specific code.

If you want to see the baseline Salesforce connection setup before tackling custom objects, check out our [step-by-step Salesforce integration guide](https://truto.one/blog/integrate-salesforce/).

## Stop Hardcoding Your Integrations

The `__c` suffix is the ultimate test of an integration architecture. If your system requires an engineer to write code every time a customer introduces a new custom field, your integration strategy will not scale.

The three things to get right:

- **Dynamic schema discovery**: Use the `describe` API. Never hardcode field lists. Treat each Salesforce org as a unique snowflake, because it is.
- **Data-driven mapping, not code-driven**: Keep your field translation logic in configuration, not in application code. Extract custom fields dynamically using JSONata. The maintenance cost difference over 50+ customers is staggering.
- **Per-account overrides without deploys**: Your enterprise customers' Salesforce admins will change things. Your integration needs to adapt without an engineering sprint. Implement a config override hierarchy for per-tenant routing, and let the mapping layer do the heavy lifting.

> Stop writing bespoke integration code for every enterprise customer. See how Truto handles Salesforce custom objects and schema overrides with zero integration-specific code.
>
> [Talk to us](https://cal.com/truto/partner-with-truto)
