Zero Integration-Specific Code: How to Ship API Connectors as Data-Only Operations
Learn how to escape the SaaS integration maintenance trap by replacing hardcoded API adapters with declarative JSONata mappings and data-only configurations.
If you are trying to figure out how to scale your integration roadmap without linearly scaling your engineering headcount, you generally have two options. You can keep writing custom adapter code for every new API—a HubSpotAdapter.ts here, a SalesforceHandler.py there—and watch your maintenance burden compound. Or, you can treat integration logic as data instead of code, shipping new connectors as configuration records without touching your runtime, your CI/CD pipeline, or your deployment calendar.
Building a single third-party API integration typically adds 2 to 8 weeks of development time. For engineering teams at B2B SaaS companies, this means your product roadmap is constantly hijacked by custom glue code. You want to know how to ship new API connectors as data-only operations. The answer lies in abandoning the traditional Adapter pattern and adopting an Interpreter architecture where zero integration-specific code exists in your runtime.
This guide breaks down the architectural shift required to scale SaaS integrations, how declarative JSONata mapping works in practice, the reality of handling rate limits without hardcoded backoff logic, where the trade-offs actually are, and how to evaluate whether this approach fits your backend infrastructure.
The Technical Debt of Code-First Integrations
The default approach to building third-party integrations is the Strategy Pattern (also called the Adapter Pattern). You define a common interface—say, UnifiedContact—and then write a separate adapter class for each provider that implements it.
Inside your codebase, it looks like this:
```javascript
// The traditional approach: code per integration
if (provider === 'hubspot') {
  return hubspotAdapter.getContacts(params);
} else if (provider === 'salesforce') {
  return salesforceAdapter.getContacts(params);
} else if (provider === 'pipedrive') {
  return pipedriveAdapter.getContacts(params);
}
// ... repeat for every new provider
```

This works fine at 3 integrations. At 30, it becomes an engineering tax. At 100, it is an organizational bottleneck.
The numbers back this up. Each connection to another system requires custom API development, data mapping, and testing. Some integrations are straightforward, with well-documented APIs; others involve legacy systems lacking modern connection capabilities. App development agencies estimate that a standard CRM integration (Salesforce, HubSpot) adds 2 to 4 weeks of development time, while complex ERP integrations (like SAP or NetSuite) can add 4 to 8 weeks. That is per integration, per initial build—before you account for ongoing maintenance.
Ongoing maintenance costs consume a massive portion of integration budgets. According to Forrester, enterprises spend an average of 30-50% of their total integration budget on ongoing maintenance, including API version updates and error resolution. A single integration project can cost around $50,000, covering both engineering efforts and customer success management, with annual maintenance typically running 10% to 20% of that initial development cost. For organizations with 20 or more custom-built integrations, annual maintenance costs can easily exceed $500,000.
The maintenance burden scales linearly with the number of integrations because each adapter is its own code island. A bug fix in the Salesforce adapter does not help the HubSpot adapter. An improvement to HubSpot pagination does not touch Salesforce pagination. Every integration is a separate codebase to test, deploy, and monitor. When an API changes, your engineers have to rewrite code, redeploy, and hope they did not break the other integrations sharing that codebase.
This architectural rigidity is why you wait months for a new endpoint. It is the hidden cost of the Adapter pattern at scale. You can read more about escaping this cycle in our guide on how to reduce technical debt from maintaining dozens of API integrations.
What Does "Zero Integration-Specific Code" Actually Mean?
The alternative is the Interpreter Pattern applied at platform scale. A zero-code integration architecture means your runtime engine contains absolutely no conditional logic branching on a provider's name.
Instead of writing a new adapter class for each provider, you build a generic execution engine that reads a declarative configuration describing how to talk to any third-party API, and a declarative mapping describing how to translate between your normalized schema and the provider's native format. The engine then executes both without any awareness of which integration it is running.
Zero integration-specific code means exactly this: the runtime has no `if (hubspot)`, no `switch (provider)`, no `salesforce_contacts` table, no `hubspot_auth_handler.ts`. There are no integration-specific database columns like `hubspot_token` or `zendesk_subdomain`.
Here is the conceptual difference:
```mermaid
graph TD
  subgraph Traditional Adapter Pattern
    A[Unified API Interface] --> B[HubSpotAdapter.ts]
    A --> C[SalesforceAdapter.ts]
    A --> D[PipedriveAdapter.ts]
  end
  subgraph Interpreter Pattern
    E[Unified API Interface] --> F[Generic Execution Engine]
    F --> G[(Integration Config JSON)]
    F --> H[(JSONata Data Mappings)]
    F --> I[(Customer Overrides)]
  end
```

In the interpreter architecture, integration-specific behavior is defined entirely as data. The integration configurations form a domain-specific language (DSL) for describing API interactions. The runtime engine is an interpreter that executes this DSL. New integrations are simply new "programs" written in this DSL, not new features compiled into the interpreter.
In a system like Truto, the database schema reinforces this philosophy. Out of dozens of tables, not one contains an integration-specific column. The integration table has a generic config column storing a JSON blob. The integrated_account table has a generic context column storing credentials. The same code path that handles a HubSpot CRM contact listing also handles Salesforce, Pipedrive, Zoho, and every other CRM. Adding a new integration is a pure data operation. You can read the full architectural breakdown of how this works in Look Ma, No Code! Why Truto’s Zero-Code Architecture Wins.
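The interpreter-style dispatch can be sketched in a few lines of TypeScript. This is a hypothetical illustration, not Truto's actual implementation: the in-memory `configStore` stands in for the database `integration` table, and the Salesforce base URL and resource paths are placeholders.

```typescript
// Hypothetical interpreter-style dispatch: the engine never branches on a
// provider name. It loads a config record and executes it generically.
type IntegrationConfig = {
  base_url: string;
  resources: Record<string, { list: { method: string; path: string } }>;
};

// Stand-in for a database table: configs are rows of data, not code.
const configStore = new Map<string, IntegrationConfig>([
  ["hubspot", {
    base_url: "https://api.hubapi.com",
    resources: { contacts: { list: { method: "get", path: "/crm/v3/objects/contacts" } } },
  }],
  ["salesforce", {
    base_url: "https://example.my.salesforce.com", // placeholder tenant URL
    resources: { contacts: { list: { method: "get", path: "/services/data/v60.0/query" } } },
  }],
]);

// One code path for every integration: shipping a 101st connector means
// adding a row to configStore, not a branch to this function.
function planListRequest(integration: string, resource: string) {
  const config = configStore.get(integration);
  if (!config) throw new Error(`No config record for ${integration}`);
  const endpoint = config.resources[resource].list;
  return { method: endpoint.method.toUpperCase(), url: config.base_url + endpoint.path };
}
```

Adding an integration is a `configStore.set(...)` call (an `INSERT` in a real system); `planListRequest` itself never changes.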
Shipping Connectors as Data-Only Operations
To ship an API connector without writing code, you must separate the mechanics of HTTP communication from the logic of data translation. In a zero-integration-code architecture, adding a new integration is a data operation, not a code operation. No pull request. No CI/CD pipeline. No deployment window.
Each integration is fully described by two data artifacts:
1. The Integration Config (API Blueprint)
The first layer is the integration configuration. This is a JSON blob stored in the database that completely describes how to communicate with a third-party API. It defines the base URL, authentication scheme, available endpoints, pagination strategy, and error handling rules.
Here is an example of what an API blueprint looks like for a standard REST API:
```json
{
  "base_url": "https://api.hubapi.com",
  "credentials": {
    "format": "oauth2",
    "config": {
      "auth": {
        "tokenHost": "https://api.hubapi.com",
        "tokenPath": "/oauth/v1/token",
        "authorizePath": "/oauth/authorize"
      },
      "scope": ["crm.objects.contacts.read"]
    }
  },
  "authorization": {
    "format": "bearer",
    "config": {
      "path": "oauth.token.access_token"
    }
  },
  "pagination": {
    "format": "cursor",
    "config": {
      "cursor_field": "paging.next.after"
    }
  },
  "resources": {
    "contacts": {
      "list": {
        "method": "get",
        "path": "/crm/v3/objects/contacts",
        "response_path": "results"
      },
      "get": {
        "method": "get",
        "path": "/crm/v3/objects/contacts/{{id}}"
      }
    }
  }
}
```

This schema covers authentication (OAuth2, API key, basic auth), pagination strategy (cursor, page, offset, link header, range, or dynamic), available endpoints, default headers, query parameter serialization, and error handling. The same JSON schema works for every integration—only the values change. The generic HTTP client reads these fields and executes the appropriate strategy. It builds the URL, applies the Bearer token, handles the cursor pagination, and extracts the results array.
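To make "the client reads these fields" concrete, here is a hedged TypeScript sketch of how a generic engine might consume a blueprint like the one above. It is an illustration, not Truto's implementation: `getPath` resolves dotted config paths such as `paging.next.after`, and the cursor query parameter name (`after`) is hardcoded here for brevity, where a real blueprint would declare it too.

```typescript
// Resolve a dotted path such as "paging.next.after" against a JSON object.
function getPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<any>((cur, key) => (cur == null ? undefined : cur[key]), obj);
}

// Build one request from the blueprint: URL templating, bearer auth, and
// cursor pagination are all driven by config values, not code branches.
function buildRequest(
  config: { base_url: string; pagination: { config: { cursor_field: string } } },
  path: string,
  params: Record<string, string>,
  accessToken: string,
  lastResponse?: unknown
) {
  // Substitute {{id}}-style template variables from params.
  const resolvedPath = path.replace(/\{\{(\w+)\}\}/g, (_, name) => params[name] ?? "");
  const url = new URL(config.base_url + resolvedPath);
  // Continue from the cursor field named in the blueprint, if the last page had one.
  const cursor = lastResponse
    ? getPath(lastResponse, config.pagination.config.cursor_field)
    : undefined;
  if (cursor != null) url.searchParams.set("after", String(cursor));
  return { url: url.toString(), headers: { Authorization: `Bearer ${accessToken}` } };
}
```

Swap in a different blueprint and the same two functions serve a completely different API.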
2. The Integration Mapping (Data Transformation)
The second layer is data mapping. A set of JSONata expressions describe how to translate between unified and native formats. Each mapping handles request query translation, request body transformation, and response field mapping.
The generic engine reads these two artifacts, executes the pipeline, and produces a normalized response. The engine does not know or care whether it is talking to HubSpot, Salesforce, or a GraphQL API like Linear. It just evaluates whatever configuration and expressions it is given. This means the 101st integration does not require a single line of code to be changed, compiled, or deployed. You add configuration records to the database and the connector is live. For a deeper dive into how this architecture prevents deployments, read our guide on hot-swappable API integrations.
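The pipeline described above can be sketched as three generic stages. In this hedged illustration, plain TypeScript functions stand in for compiled JSONata expressions (so the example stays dependency-free); in the real architecture, those transforms are strings loaded from the database and compiled at runtime.

```typescript
// Plain functions stand in for compiled JSONata mapping expressions.
type Mapping = {
  query: (unified: Record<string, unknown>) => Record<string, unknown>;
  response: (raw: unknown) => Record<string, unknown>[];
};

// The generic pipeline: translate query -> call API -> normalize response.
// The HTTP call is injected, so this function knows nothing about providers.
function listResource(
  fetchJson: (url: string, query: Record<string, unknown>) => unknown,
  endpointUrl: string,
  mapping: Mapping,
  unifiedQuery: Record<string, unknown>
): Record<string, unknown>[] {
  const nativeQuery = mapping.query(unifiedQuery); // 1. unified -> native params
  const raw = fetchJson(endpointUrl, nativeQuery); // 2. execute the HTTP call
  return mapping.response(raw);                    // 3. native -> unified schema
}
```

Exactly one `listResource` exists; only the `mapping` and `endpointUrl` data vary per integration.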
How JSONata Replaces Hardcoded Business Logic
The choice of transformation language is what makes or breaks a declarative integration architecture. You need something expressive enough to handle the wild diversity of real-world APIs, but constrained enough to stay side-effect free and storable as data.
JSONata fits this requirement perfectly. JSONata is a language for querying and transforming JSON data developed by Andrew Coleman of IBM. It is a functional, Turing-complete expression language purpose-built for reshaping JSON objects. It supports conditionals, string manipulation, array transforms, custom functions, date formatting, and recursive expressions—all in a single expression string.
Major cloud providers and integration platforms are adopting this pattern. AWS added variables and JSONata support to Step Functions at re:Invent 2024, using JSONata to replace Lambda functions for data transformation and letting you write less code for the same result. The practical benefits AWS cites include reduced latency, lower costs, simplified maintenance, and elimination of Lambda runtime deprecation issues.
To see why JSONata matters for integrations, compare how two very different APIs—HubSpot and Salesforce—handle the exact same generic code path using only JSONata expressions.
Mapping Responses
HubSpot's contacts API returns data nested inside a properties object with lowercase field names. Salesforce returns flat PascalCase fields. The unified API engine evaluates a JSONata expression against the raw response to normalize it.
HubSpot JSONata Response Mapping:
```jsonata
response.{
  "id": id.$string(),
  "first_name": properties.firstname,
  "last_name": properties.lastname,
  "email_addresses": [
    properties.email ? { "email": properties.email, "is_primary": true },
    properties.hs_additional_emails
      ? properties.hs_additional_emails.$split(";").{ "email": $ }
  ],
  "phone_numbers": [
    properties.phone ? { "number": properties.phone, "type": "phone" },
    properties.mobilephone ? { "number": properties.mobilephone, "type": "mobile" }
  ],
  "created_at": createdAt,
  "updated_at": updatedAt
}
```

Salesforce JSONata Response Mapping:
```jsonata
response.{
  "id": Id,
  "first_name": FirstName,
  "last_name": LastName,
  "email_addresses": [{ "email": Email }],
  "phone_numbers": $filter([
    { "number": Phone, "type": "phone" },
    { "number": MobilePhone, "type": "mobile" },
    { "number": HomePhone, "type": "home" },
    { "number": Fax, "type": "fax" }
  ], function($v) { $v.number }),
  "created_at": CreatedDate,
  "updated_at": LastModifiedDate
}
```

Both mappings are just strings stored in a database column. The generic engine evaluates them without knowing what fields the response contains or what the expression does. Both produce the exact same normalized output:
```json
{
  "id": "123",
  "first_name": "John",
  "last_name": "Doe",
  "email_addresses": [{ "email": "john@example.com", "is_primary": true }],
  "phone_numbers": [{ "number": "+1-555-0123", "type": "phone" }],
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-06-20T14:15:00Z"
}
```

| Aspect | HubSpot | Salesforce | Handled By |
|---|---|---|---|
| ID field | `id` (string) | `Id` (string) | Response mapping JSONata |
| Name fields | `properties.firstname` | `FirstName` | Response mapping JSONata |
| Email fields | `properties.email` + semicolon-separated `hs_additional_emails` | Single `Email` field | Response mapping JSONata |
| Phone numbers | 3 fields (phone, mobile, whatsapp) | 6 fields (phone, fax, mobile, home, other, assistant) | Response mapping JSONata |
| Pagination | Cursor-based (`after` parameter) | Cursor-based (different format) | Integration config |
Mapping Complex Queries
The real power of JSONata emerges when translating query parameters. If a user searches for a contact by email, the unified API receives ?email_addresses=john@example.com.
HubSpot requires a complex filterGroups array in the request body for searching. The JSONata expression handles this transformation dynamically:
```jsonata
rawQuery.{
  "filterGroups": email_addresses ? [{
    "filters": [{
      "propertyName": "email",
      "operator": "EQ",
      "value": $firstNonEmpty(email_addresses.email, email_addresses)
    }]
  }]
}
```
```jsonata
(
  $whereClause := query ? $convertQueryToSql(
    query.{ "email_addresses": email_addresses },
    ["email_addresses"],
    { "email_addresses": "Email" }
  );
  {
    "q": query.search_term
      ? "FIND {" & query.search_term & "} RETURNING Contact(Id, FirstName, Email)"
      : null,
    "where": $whereClause ? "WHERE " & $whereClause
  }
)
```

Every difference in API design—pagination, authentication, field naming, complex search syntax—is handled by data. Not a single line of code is written, reviewed, or deployed to support these differences.
Handling Upstream Errors and Rate Limits Without Custom Code
A common architectural mistake when building unified APIs is attempting to magically absorb all upstream constraints, particularly rate limits. Developers often write complex, stateful retry mechanisms and exponential backoff queues specific to each provider's rate limit window.
This approach breaks down at scale. Different APIs use different rate limit headers, different reset window formats (seconds, milliseconds, Unix timestamps), and different penalty mechanisms for repeated violations. HubSpot returns 429 Too Many Requests with a Retry-After header. Salesforce returns a REQUEST_LIMIT_EXCEEDED error body. Some APIs use custom headers like X-RateLimit-Remaining. Others just silently drop requests.
Truto takes a radically honest, generic approach to rate limiting: transparency, not absorption. We do NOT automatically retry, throttle, or apply backoff on rate limit errors. When an upstream API returns a rate-limit error (HTTP 429), the platform passes that error directly back to the caller.
What the generic pipeline DOES do is solve the normalization problem. The IETF RateLimit header spec defines RateLimit-Limit containing the requests quota in the time window, RateLimit-Remaining containing the remaining requests quota in the current window, and RateLimit-Reset containing the time remaining in the current window, specified in seconds.
Truto normalizes the chaotic rate limit information from each upstream provider into standardized response headers following this IETF spec:
- `ratelimit-limit`: The maximum number of requests permitted in the current window.
- `ratelimit-remaining`: The number of requests remaining in the current window.
- `ratelimit-reset`: The number of seconds until the rate limit window resets.
Architectural Reality: Never trust an integration platform that claims to completely hide rate limits from your application. If a unified API holds connections open while applying backoff, it will exhaust your serverless concurrency limits. The caller or AI agent is always responsible for reading standardized headers and implementing their own retry logic.
Here is what this looks like in practice for the caller:
```javascript
// Your application code reads Truto's normalized rate limit headers
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const response = await fetch('https://api.truto.one/unified/crm/contacts', {
  headers: { 'Authorization': `Bearer ${apiToken}` }
});

const remaining = parseInt(response.headers.get('ratelimit-remaining'), 10);
const resetInSeconds = parseInt(response.headers.get('ratelimit-reset'), 10);

if (remaining < 10) {
  // Slow down proactively before hitting the limit
  await sleep((resetInSeconds * 1000) / remaining);
}

if (response.status === 429) {
  // Upstream rate limit hit - implement your own backoff
  await sleep(resetInSeconds * 1000);
  // Retry the request
}
```

By normalizing the headers rather than absorbing the errors, the platform maintains a stateless, integration-agnostic execution pipeline. Your application code implements one generic exponential backoff strategy that reads the ratelimit-reset header, and it works flawlessly across 100+ different SaaS APIs.
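That "one generic backoff strategy" can be sketched as a small retry wrapper. This is an illustrative TypeScript sketch, not a library API: `MinimalResponse` is a simplified response shape, and `sleep` is injected so callers (and tests) control timing.

```typescript
// One generic backoff loop that works for every integration, because the
// platform has already normalized rate limit info into IETF-style headers.
type MinimalResponse = { status: number; headers: Map<string, string> };

async function fetchWithBackoff(
  doFetch: () => Promise<MinimalResponse>,
  sleep: (ms: number) => Promise<void>,
  maxRetries = 3
): Promise<MinimalResponse> {
  for (let attempt = 0; ; attempt++) {
    const response = await doFetch();
    if (response.status !== 429 || attempt >= maxRetries) return response;
    // Prefer the normalized reset header; fall back to exponential delay.
    const reset = Number(response.headers.get("ratelimit-reset") ?? 0);
    const delayMs = reset > 0 ? reset * 1000 : 2 ** attempt * 500;
    await sleep(delayMs);
  }
}
```

Because the headers are uniform, this single wrapper replaces the per-provider retry queues the traditional architecture would need.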
Beyond rate limits, each integration config can define a JSONata error_expression that normalizes third-party error responses into structured, predictable error objects. If a provider returns a 200 status code with an error nested in the response body, the error expression detects it and remaps it to the correct HTTP status code. This is all configuration—no per-provider error-handling code.
For a deeper dive into rate limit strategies across providers, see our guide on handling API rate limits and retries across multiple third-party APIs.
The Cascading Benefits of Declarative Integrations
Shifting from code-first adapters to declarative data configurations creates a cascading effect across your engineering organization. When every integration flows through the same code path, improvements compound instead of fragmenting.
Zero Deployments for New Connectors
When adding a new integration is a database operation, your speed to market changes fundamentally. You no longer wait for sprint planning to add support for a niche ATS or an industry-specific CRM. A product manager or solutions engineer can write the JSON config, validate the JSONata mappings, and push the integration live in hours. This is how you unblock enterprise deals that hinge on specific software ecosystems. We cover the business impact of this in The "Tuesday to Friday" Integration: How Truto Unblocks Sales Deals.
Reliability Through Uniformity
In a code-per-integration architecture, a bug fix in the Salesforce adapter does not help the HubSpot adapter. An improvement to pagination logic has to be manually ported across 50 different files.
When every integration flows through the exact same execution pipeline, bugs get fixed once and every integration benefits. When the generic engine's OAuth token refresh logic is hardened, all 100+ integrations immediately inherit that resilience. The maintenance burden grows linearly with the number of unique API patterns (REST, GraphQL, Cursor Pagination), rather than the number of integrations.
The Three-Level Override Hierarchy
Because integration behavior is defined as data, you can build powerful override systems that are impossible in compiled code. A declarative architecture enables a three-level customization stack:
| Override Level | Scope | Example Use Case |
|---|---|---|
| Platform Base | All customers | Default field mappings for a CRM integration. |
| Environment Override | One customer's tenant | Custom filter parameters for their specific workflow, without affecting other tenants. |
| Account Override | Individual connected accounts | Map a custom Salesforce __c field into the unified response for hyper-specific edge cases. |
These overrides are simply JSON deep-merges applied at runtime. A customer can add their own custom fields to the unified response without your engineering team touching any source code. This is something most unified API platforms architecturally cannot offer, because their per-provider adapter code lacks a generic override mechanism.
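A minimal sketch of that runtime deep-merge, under the stated assumption that overrides are plain JSON objects (function names and shapes here are hypothetical):

```typescript
// Overrides as data: platform base <- environment override <- account override.
type Json = { [key: string]: unknown };

function deepMerge(base: Json, override: Json): Json {
  const out: Json = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (value && typeof value === "object" && !Array.isArray(value) &&
        existing && typeof existing === "object" && !Array.isArray(existing)) {
      out[key] = deepMerge(existing as Json, value as Json); // merge nested objects
    } else {
      out[key] = value; // scalars and arrays replace rather than merge
    }
  }
  return out;
}

// The three-level hierarchy from the table above, applied in precedence order.
function resolveConfig(platform: Json, environment: Json, account: Json): Json {
  return deepMerge(deepMerge(platform, environment), account);
}
```

A customer-specific field mapping is just another key in the account-level object; nothing recompiles.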
Free MCP Tool Generation
Because the integration behavior is entirely data-driven and backed by JSON schemas, the platform can automatically generate Model Context Protocol (MCP) tool definitions. The system reads the integration's resource configurations and documentation records, automatically generating fully-typed tools for AI agents. Every integration that has a valid declarative configuration automatically becomes available as an MCP tool, requiring zero per-integration prompt engineering or tool-binding code.
The Trade-Offs: Where Declarative Hits Its Limits
A declarative, data-driven architecture is not a free lunch. Here is where you will hit friction:
- Debugging complexity shifts: Instead of stepping through TypeScript in a debugger, you are debugging JSONata expressions evaluated against JSON payloads. JSONata's tooling ecosystem is still smaller than mainstream programming languages, though the JSONata Exerciser helps significantly.
- Expression complexity ceiling: Some APIs require multi-step orchestration that pushes JSONata expressions to 50+ lines. At that point, the expression is functionally equivalent to code—just written in a less familiar language. The architecture handles this with `before` and `after` step pipelines that chain multiple API calls, but the declarative config can get dense.
- Learning curve: Your team needs to learn JSONata. It is not hard—JSONata is a lightweight, open-source query and transformation language for JSON, inspired by XPath. But it is one more thing to onboard people on, and engineers with existing JSONata experience are uncommon.
- Schema drift detection: When a vendor silently changes their response shape (it happens more often than you'd hope), a JSONata expression will silently produce `null` fields instead of throwing a hard error. You need robust monitoring that catches this. We cover strategies for handling these upstream changes in our guide on how to survive API deprecations across 50+ SaaS integrations.
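The schema-drift risk in particular is easy to monitor for. A hedged sketch of one approach: flag unified fields that suddenly come back null across most of a response page (the function name and threshold are illustrative, not a Truto feature).

```typescript
// A lightweight drift check: if fields that are normally populated start
// coming back null, a silently changed upstream schema is the usual suspect.
function detectDriftedFields(
  rows: Record<string, unknown>[],
  expectedFields: string[],
  nullRateThreshold = 0.9
): string[] {
  if (rows.length === 0) return [];
  return expectedFields.filter((field) => {
    const nulls = rows.filter((row) => row[field] == null).length;
    return nulls / rows.length >= nullRateThreshold; // mostly-null => flag it
  });
}
```

Run this against sampled responses per integration and alert when the flagged list is non-empty.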
These trade-offs are real. But for teams managing 20+ integrations, the massive maintenance cost reduction from a shared execution engine far outweighs the learning curve of a new expression language.
Putting This Into Practice
Code-first API integrations are a legacy architectural pattern. They drain engineering resources, bloat your codebase, and create an unsustainable maintenance burden. If you are evaluating how to scale your integration roadmap, here are the concrete questions to ask:
- How many lines of code does adding a new integration touch? If the answer is "hundreds" or "it requires a new file/module," you are in Strategy Pattern territory and will scale linearly with pain.
- Can a non-engineer ship a field mapping change? If every mapping change requires a code deploy, you have coupled data transformation to your release cycle.
- How many integrations benefit when you fix a pagination bug? If the answer is "one," you are maintaining N separate pagination implementations.
- Can individual customers customize the unified schema for their setup? If not, you will spend engineering cycles on one-off requests from enterprise accounts.
The interpreter pattern—a generic engine executing declarative configuration—is not just an implementation detail. It is the architectural decision that determines whether your integration team is a cost center or a competitive advantage. By adopting tools like JSONata, you turn integration maintenance into a data entry task, you normalize rate limits instead of writing fragile backoff loops, and you empower solutions engineers to unblock deals without writing code.
Frequently Asked Questions
- What does zero integration-specific code mean?
- It means the runtime engine has no conditional branches for specific providers (no `if (hubspot)` or `switch (provider)`). All provider-specific behavior—field mappings, auth flows, pagination, query translation—is defined as declarative configuration data (JSON blobs and JSONata expressions), not compiled code.
- How do you handle API rate limits without custom code?
- Instead of hardcoding retry logic for every API, a generic platform normalizes upstream rate limit data into standard IETF headers (ratelimit-limit, ratelimit-remaining, ratelimit-reset) and passes 429 errors back to the caller. This allows applications to implement a single, unified backoff strategy.
- What is JSONata and why is it used for API data mapping?
- JSONata is an open-source, functional query and transformation language for JSON, originally developed at IBM. It is Turing-complete, side-effect free, and storable as a plain string—making it ideal for declarative data mapping that can be versioned, overridden, and hot-swapped without code deploys.
- What is the difference between the Interpreter Pattern and Strategy Pattern for integrations?
- The Strategy Pattern writes a separate adapter class per provider, scaling linearly with maintenance cost. The Interpreter Pattern builds a generic execution engine that reads declarative configuration, meaning new integrations are new 'programs' in a DSL—not new features in the engine. Only the interpreter approach achieves zero integration-specific code.