How to Migrate from Apideck Without Re-Authenticating End Users
A complete technical guide to exporting OAuth tokens from Apideck Vault and migrating to a new unified API without forcing enterprise users to reconnect.
If you are evaluating an Apideck alternative, you are likely staring down the barrel of a migration cliff. You want to switch infrastructure, but the thought of asking hundreds of enterprise customers to re-authenticate their Salesforce, HubSpot, BambooHR, or QuickBooks accounts is terrifying. Forcing users to click "Reconnect" generates support tickets, introduces immediate churn risk, and burns social capital with your best accounts.
The good news is you do not have to do this. As we've demonstrated in our migration guides for Merge.dev and Finch, you can migrate away from Apideck without asking a single end user to reconnect. The process involves exporting OAuth tokens from Apideck Vault, importing them into your new platform's credential context, mapping the old unified schema to the new one, and flipping the DNS.
This guide breaks down the exact technical strategy to extract your credentials, handle rate limits post-migration, and use declarative mappings to mimic your old API responses so your frontend code does not have to change.
The Vendor Lock-In Trap: Why Teams Migrate from Apideck
Apideck is a well-built product. The docs are clean, the Vault connection UI is polished, and the real-time pass-through architecture that avoids caching customer data is a sound design decision. Integrations are a core revenue lever, and studies show that organizations use anywhere from 100 to more than 300 SaaS applications.
While Apideck helps teams get off the ground quickly, engineering leaders typically hit three specific scaling limits that force a migration conversation:
1. Virtual webhooks default to 24-hour polling. For providers without native webhook support, such as BambooHR and many other HRIS platforms, Apideck emulates webhooks by polling enabled resources, by default every 24 hours. If you are building an applicant tracking system (ATS) integration that needs immediate status updates, or a CRM sync propagating employee terminations or deal stage changes, a 24-hour delay is a compliance incident or a stale-pipeline problem.
2. Custom field mapping is hidden behind enterprise paywalls. As your customers scale, they heavily customize their CRMs and HRIS platforms. Apideck restricts custom field mapping to its Scale plan ($1,299/month) and above; the Launch plan ($599/month) does not include it. The moment your first enterprise customer asks you to map a custom BambooHR employment type field or a custom Salesforce object, you are looking at more than doubling your monthly spend.
3. No auto-generated MCP server support. If your product team is building AI agents, those agents need secure, scoped access to third-party data. Agentic AI is becoming a standard pattern in B2B SaaS—nearly 33% of organizations with at least 1000 full-time employees have already deployed agentic AI. Apideck currently lacks native Model Context Protocol (MCP) server generation, forcing your engineers to manually build and maintain tool definitions for LLMs.
For a deeper dive into these architectural limits, see our technical breakdown in Truto vs Apideck: The Best Alternative for Enterprise SaaS Integrations.
The Migration Cliff: Why Re-Authenticating End Users is Not an Option
Unified API platforms abstract away the pain of dealing with terrible vendor API docs, inconsistent pagination, and undocumented edge cases. But the architecture of most platforms creates a dependency that is easy to miss during evaluation and incredibly painful to untangle later.
Apideck Vault acts as a centralized credential store. When your customer authenticates via the Apideck UI, the resulting OAuth access_token and refresh_token are held by Apideck. Your application only holds an Apideck consumer ID. If you simply switch integration vendors, those consumer IDs become useless. Forcing enterprise users to click "Reconnect" on their integrations is a nuclear option. The math is brutal:
- Support ticket volume: Every reconnection generates at least one support ticket. At 200 linked accounts, that is 200+ tickets in a single week.
- Churn risk: Enterprise buyers who have to re-link core systems of record will question whether your product is stable. Some will simply drop off.
- Coordination overhead: Enterprise clients often require their IT team to approve OAuth grants. Scheduling that across dozens of accounts takes weeks.
The good news: OAuth tokens are just strings with an expiry date. If you can extract them from your current vendor's credential store and insert them into a new one, the end user never knows you switched infrastructure. The token keeps working until it expires, at which point the new platform refreshes it automatically.
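The expiry mechanics above are easy to codify. Here is a minimal sketch of an expiry check, assuming the expires_at value is an ISO 8601 timestamp as in the export payload later in this guide (the function name and the 120-second skew buffer are illustrative choices, not part of any vendor API):

```typescript
// Decide whether a stored access token is still usable or due for refresh.
// Assumes expires_at is an ISO 8601 timestamp.
function isTokenExpired(expiresAt: string, skewSeconds = 120): boolean {
  const expiryMs = Date.parse(expiresAt);
  // Treat unparseable timestamps as expired so we fail safe.
  if (Number.isNaN(expiryMs)) return true;
  // Apply a clock-skew buffer so we refresh slightly early.
  return Date.now() >= expiryMs - skewSeconds * 1000;
}
```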
Step 1: Exporting OAuth Tokens from Apideck Vault
The first technical hurdle is getting your data out. You need the raw OAuth tokens, the expiration timestamps, the scopes, and any provider-specific metadata (like subdomains or tenant IDs) stored in Apideck Vault.
Apideck's Vault API exposes connection data through its Connections endpoints. However, Apideck does not expose raw access_token or refresh_token values through its standard API responses—these are stored server-side and injected into requests at runtime. Depending on your setup, you have two paths:
Option A: Use Your Own OAuth App Credentials
If you brought your own OAuth app credentials (your own client_id and client_secret) when setting up integrations in Apideck, the tokens were issued to your OAuth application. This is the best-case scenario. You can:
- Contact Apideck support and request a secure, encrypted export of your Vault data. Be specific: you need the raw access_token, refresh_token, expires_at, and any per-connection metadata for each consumer_id.
- Negotiate the export as part of your offboarding. Most vendors will cooperate, especially if the OAuth app belongs to you.
Option B: Apideck Managed OAuth Credentials
If you used Apideck's shared OAuth app credentials (Apideck's client_id), the tokens belong to Apideck's OAuth application. Migrating these tokens to a new platform that uses a different OAuth application will not work—the provider (Salesforce, HubSpot, etc.) will reject refresh attempts from mismatched client credentials.
In this case, your options are:
- Re-register a new OAuth app with each provider and use the new platform's connection flow, but stagger this over time so it is not a sudden cliff.
- Use the migration to switch to your own OAuth app, which gives you token portability going forward.
OAuth App Ownership Matters
If you do not own the OAuth app that issued the tokens, token migration is not possible without re-authentication. This is the single most important architectural decision for integration vendor portability. We wrote a deep dive on this in OAuth App Ownership: How to Avoid Vendor Lock-In.
The Target Export Payload
Whichever method you use, you are looking to build a dataset that maps your internal identifiers to the raw provider credentials. The structure you need to extract for each connection looks like this:
```typescript
// Structure each exported connection for import
interface ExportedConnection {
  consumer_id: string;  // Your internal user/account ID
  service_id: string;   // e.g., 'salesforce', 'bamboohr'
  unified_api: string;  // e.g., 'crm', 'hris'
  credentials: {
    access_token: string;
    refresh_token: string;
    expires_at: string; // ISO 8601 timestamp
    token_type: string; // Usually 'Bearer'
    scope?: string;     // Original scopes granted
  };
  metadata: Record<string, any>; // Provider-specific: instance_url, realm_id, etc.
}
```

Beware of Token Rotation Race Conditions
OAuth 2.0 refresh tokens are often single-use. If Apideck's background workers refresh a token after you have exported the data but before you have switched your routing, the exported refresh token becomes invalid. You must coordinate a hard cutover or pause Apideck syncs during the migration window.
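Before importing anything, it is worth validating each exported record against that structure so malformed rows are caught up front rather than mid-import. A minimal sketch, assuming the field names match the interface above (the function and result type names are illustrative):

```typescript
// Credential fields the import step depends on.
const REQUIRED_CREDENTIAL_FIELDS = ['access_token', 'refresh_token', 'expires_at', 'token_type'] as const;

interface ValidationResult {
  valid: boolean;
  missing: string[]; // dotted paths of absent or empty fields
}

// Check one exported connection for the fields the import step relies on.
function validateExportedConnection(conn: Record<string, any>): ValidationResult {
  const missing: string[] = [];
  for (const field of ['consumer_id', 'service_id', 'unified_api']) {
    if (typeof conn[field] !== 'string' || conn[field].length === 0) missing.push(field);
  }
  const creds = conn.credentials ?? {};
  for (const field of REQUIRED_CREDENTIAL_FIELDS) {
    if (typeof creds[field] !== 'string' || creds[field].length === 0) {
      missing.push(`credentials.${field}`);
    }
  }
  return { valid: missing.length === 0, missing };
}
```

Run this over the full export and fix gaps (often missing expires_at values for providers with non-expiring tokens) before touching the import API.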
Step 2: Importing Credentials into Truto's Generic Context
Most unified APIs maintain separate code paths for each integration. Adding custom credentials requires mapping them to rigid, integration-specific database columns.
Truto takes a radically different approach. The entire platform contains zero integration-specific code. Integration behavior is defined entirely as declarative JSON configurations. This makes importing historical credentials incredibly straightforward.
In Truto, every connected account is an IntegratedAccount, and its credentials live in a flexible, provider-agnostic JSON context object:
```json
{
  "context": {
    "oauth": {
      "token": {
        "access_token": "eyJhbGciOiJSUzI1NiIs...",
        "refresh_token": "dGhpcyBpcyBhIHJlZnJlc2g...",
        "expires_at": "2026-04-10T14:30:00.000Z",
        "token_type": "Bearer",
        "scope": "read write"
      },
      "scope": "read write"
    },
    "instance_url": "https://yourcompany.my.salesforce.com"
  }
}
```

Provider-specific metadata like Salesforce's instance_url or QuickBooks' realm_id goes into the context root. The platform resolves these using JSONata expressions in the integration configuration, with no custom code needed.
Import via API
For each connection you exported from Apideck, create an integrated account in Truto by passing the raw tokens directly into the context:
```typescript
const response = await fetch('https://api.truto.one/integrated-accounts', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${TRUTO_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    integration_id: 'salesforce',
    environment_id: 'env_prod_123',
    external_id: exportedConnection.consumer_id,
    authentication_method: 'oauth2',
    context: {
      oauth: {
        token: {
          access_token: exportedConnection.credentials.access_token,
          refresh_token: exportedConnection.credentials.refresh_token,
          expires_at: exportedConnection.credentials.expires_at,
          token_type: exportedConnection.credentials.token_type,
          scope: exportedConnection.credentials.scope
        }
      },
      // Provider-specific metadata
      instance_url: exportedConnection.metadata.instance_url
    }
  })
});
```

When Truto receives this payload, it automatically encrypts sensitive fields (like access_token and refresh_token) at rest using AES-GCM encryption.
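At a few hundred connections you will want to drive that call in a batch loop that records per-account failures instead of aborting the whole run. A sketch under stated assumptions: the importFn parameter stands in for the fetch call above, and the report shape is our own invention:

```typescript
type ImportFn = (conn: { consumer_id: string }) => Promise<void>;

interface ImportReport {
  succeeded: string[];
  failed: { consumer_id: string; error: string }[];
}

// Import connections one at a time so a failure on a single account
// never blocks the rest of the batch; collect outcomes for review.
async function importAll(
  connections: { consumer_id: string }[],
  importFn: ImportFn
): Promise<ImportReport> {
  const report: ImportReport = { succeeded: [], failed: [] };
  for (const conn of connections) {
    try {
      await importFn(conn);
      report.succeeded.push(conn.consumer_id);
    } catch (err) {
      report.failed.push({ consumer_id: conn.consumer_id, error: String(err) });
    }
  }
  return report;
}
```

Keeping the import sequential is deliberate: it makes the run resumable and avoids hammering the import endpoint during the cutover window.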
Proactive Token Refresh Architecture
One of the primary reasons integrations fail in production is reactive token refreshing—waiting until an API call returns a 401 Unauthorized before attempting to refresh the token. This creates latency spikes and often results in dropped webhooks if the refresh fails.
Truto eliminates this via proactive scheduling. The moment you import that OAuth token payload, Truto reads the expires_at timestamp. The platform schedules a durable background task to fire exactly 60 to 180 seconds before the token expires.
When the scheduled task fires, Truto uses your OAuth app's client_id and client_secret to execute the standard OAuth 2.0 refresh flow, updates the encrypted context with the new tokens, and schedules the next refresh.
If the refresh succeeds, the account stays active and your application never experiences a 401 Unauthorized. If it fails (e.g., invalid_grant because the refresh token was revoked by the user in Salesforce), the account is marked as needs_reauth and a webhook event is fired so you can prompt the specific user to reconnect.
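The scheduling arithmetic behind this is simple to reason about. Here is an illustrative sketch of computing the refresh delay from expires_at, using a lead time in the 60-to-180-second range described above (the function name and defaults are ours, not Truto's internals):

```typescript
// How long to wait before firing a proactive refresh: the time until
// expiry minus a lead window, floored at zero for already-stale tokens.
function refreshDelayMs(expiresAt: string, leadSeconds = 120, nowMs = Date.now()): number {
  const expiryMs = Date.parse(expiresAt);
  if (Number.isNaN(expiryMs)) return 0; // unknown expiry: refresh immediately
  return Math.max(0, expiryMs - leadSeconds * 1000 - nowMs);
}
```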
```mermaid
sequenceDiagram
    participant App as Your Application
    participant Truto as Truto Platform
    participant Provider as SaaS Provider (e.g., Salesforce)
    App->>Truto: Import Apideck OAuth Tokens
    Truto->>Truto: Encrypt tokens at rest
    Truto->>Truto: Schedule proactive refresh (T-minus 180s)
    Note over Truto: ... time passes ...
    Truto->>Provider: Execute Refresh Grant (Background)
    Provider-->>Truto: New Access & Refresh Tokens
    Truto->>Truto: Update encrypted context
    App->>Truto: API Request (e.g., GET Contacts)
    Truto->>Provider: Authenticated Request (Always Valid)
    Provider-->>Truto: 200 OK
    Truto-->>App: Normalized Response
```

Step 3: Handling Rate Limits Post-Migration (The Standardized Approach)
Here is where you need to be honest about what changes post-migration. When you were on Apideck, rate limit handling was opaque—Apideck managed the upstream API calls and handled (or didn't handle) rate limits internally. Many unified APIs attempt to "absorb" rate limits by holding requests in memory and applying exponential backoff.
This is a massive anti-pattern for enterprise engineering. Absorbing 429s hides backpressure from your application. Your workers stay open, waiting for HTTP responses that take 45 seconds to resolve, eventually causing cascading timeouts across your own infrastructure.
Truto does NOT retry, throttle, or apply backoff on rate limit errors.
When an upstream API returns a rate-limit error, Truto passes that 429 directly back to your caller. This keeps the platform transparent and gives you full control over your retry strategy.
However, dealing with 50 different rate limit headers across 50 different APIs is a nightmare. Some use X-HubSpot-RateLimit-Daily, others use Sforce-Limit-Info, and some just put it in the response body.
What Truto does do is normalize the rate limit information from every upstream provider into standardized response headers based on the IETF RateLimit specification:
| Header | Meaning |
|---|---|
| ratelimit-limit | Maximum requests allowed in the current window |
| ratelimit-remaining | Requests remaining in the current window |
| ratelimit-reset | Seconds until the rate limit window resets |
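Because the header names are fixed, reading them in application code is a small, provider-agnostic helper. A minimal sketch using the Fetch API's Headers object (the RateLimitInfo type is our own naming):

```typescript
interface RateLimitInfo {
  limit: number | null;
  remaining: number | null;
  resetSeconds: number | null;
}

// Parse the standardized rate limit headers into numbers,
// returning null for any header the response did not include.
function parseRateLimitHeaders(headers: Headers): RateLimitInfo {
  const toInt = (name: string): number | null => {
    const raw = headers.get(name);
    if (raw === null) return null;
    const n = parseInt(raw, 10);
    return Number.isNaN(n) ? null : n;
  };
  return {
    limit: toInt('ratelimit-limit'),
    remaining: toInt('ratelimit-remaining'),
    resetSeconds: toInt('ratelimit-reset'),
  };
}
```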
Building Your Backoff Logic
Because Truto normalizes the headers, your engineering team can build a single, unified retry queue on your side of the architecture. When your worker receives a 429, it simply reads the ratelimit-reset header, pauses the job, and safely retries.
```typescript
async function callTrutoWithBackoff(
  url: string,
  options: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      // Read the standardized header provided by Truto
      const resetSeconds = parseInt(
        response.headers.get('ratelimit-reset') || '60',
        10
      );
      // Add jitter to prevent thundering herd
      const waitMs = resetSeconds * 1000 + Math.random() * 1000;
      console.warn(
        `Rate limited. Waiting ${resetSeconds}s before retry ${attempt + 1}`
      );
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      continue;
    }
    // Proactively slow down when remaining quota is low
    const remaining = parseInt(
      response.headers.get('ratelimit-remaining') || '100',
      10
    );
    if (remaining < 10) {
      console.warn(`Low rate limit quota: ${remaining} remaining`);
    }
    return response;
  }
  throw new Error('Max retries exceeded for rate-limited request');
}
```

This is more work than having the platform absorb 429s for you. But it is also more honest. You know exactly when you are being rate limited, which provider is throttling you, and how long to wait. No black-box retry logic hiding upstream failures. For a comprehensive guide on architecting this, read our Best Practices for Handling API Rate Limits.
Step 4: Declarative Mappings to Mimic Apideck's Unified Schema
Migrating credentials is only half the battle. If you switch to a new unified API, the JSON response shapes will change. A Contact in Apideck looks different than a Contact in Truto.
Normally, this means rewriting your entire frontend UI and backend business logic to handle the new schema. With Truto, if your frontend code currently expects Apideck's unified response format, you do not have to touch it.
Truto relies on declarative JSONata expressions to map data between the provider's native format and the unified format. Because these mappings are exposed and editable, you can write a custom JSONata expression in Truto that outputs the exact JSON shape your application currently expects from Apideck.
Example: Mimicking Apideck's Contact Schema
Let's say your frontend expects Apideck's specific nested structure for contacts. Your JSONata mapping in Truto can reproduce that exact shape field-for-field:
```jsonata
{
  "id": $.id,
  "first_name": $.properties.firstname,
  "last_name": $.properties.lastname,
  "company_name": $.properties.company,
  "emails": $.properties.emails.{
    "email": value,
    "type": type
  },
  "phone_numbers": [
    {
      "number": $.properties.phone,
      "type": "primary"
    }
  ],
  "custom_mappings": {
    "apideck_legacy_id": $.properties.hs_object_id,
    "employee_band": $.properties.Employee_Band__c
  },
  "updated_at": $.properties.lastmodifieddate
}
```

This mapping is defined as configuration data, not code. You can adjust it per-provider, per-customer, or per-environment without deploying anything. If your biggest Salesforce customer has a custom Employee_Band__c field that needs to appear as custom_mappings.employee_band in the response, you add a single line to the JSONata expression for that customer's account.
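To make the target shape concrete, here is the core of that mapping written as plain TypeScript over a hypothetical HubSpot-style record. This is an illustration of the input and output shapes only, not how Truto executes mappings (it evaluates the JSONata expression itself), and the record contents are invented:

```typescript
interface HubSpotLikeContact {
  id: string;
  properties: Record<string, string>;
}

// Plain-TS equivalent of the JSONata mapping above, for illustration:
// produce the Apideck-shaped contact the frontend already consumes.
function toApideckShape(contact: HubSpotLikeContact) {
  const p = contact.properties;
  return {
    id: contact.id,
    first_name: p.firstname,
    last_name: p.lastname,
    company_name: p.company,
    phone_numbers: [{ number: p.phone, type: 'primary' }],
    custom_mappings: {
      apideck_legacy_id: p.hs_object_id,
      employee_band: p.Employee_Band__c,
    },
    updated_at: p.lastmodifieddate,
  };
}
```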
By deploying this mapping at the Truto layer, your backend receives the exact payload it is used to. You achieve a complete infrastructure migration without a single breaking change to your application logic.
Testing the Migration and Flipping the Switch
A migration of this scale requires strict operational discipline. Do not attempt a "big bang" cutover. Use a phased approach.
Pre-Migration Checklist
- Verify OAuth app ownership: Confirm you own the client_id/client_secret for every provider. If not, plan your re-auth strategy for those providers.
- Import tokens into a staging environment: Truto supports environment-level credential overrides, so you can test with production tokens in a sandboxed context.
- Validate token refresh: For each provider, trigger a manual token refresh and confirm the new access_token works against the provider's API.
- Shadow reads: Configure your application to perform "shadow reads." Fetch data from Apideck (to serve the user) and simultaneously fetch it from Truto (in the background). Diff the JSON payloads to ensure your JSONata mappings are perfectly mimicking the Apideck schema.
- Test rate limit header normalization: Make enough requests to see rate limit headers appear. Verify your backoff logic reads them correctly.
- Verify webhook delivery: If you are using Apideck's virtual webhooks, set up equivalent webhook subscriptions in Truto and confirm events arrive.
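For the shadow-read step in the checklist, a small recursive diff over the two JSON payloads is enough to surface mapping drift as a list of dotted paths. A minimal sketch (the function name is ours):

```typescript
// Recursively compare the Apideck payload with the Truto payload and
// collect dotted paths where the values differ.
function diffPayloads(a: unknown, b: unknown, path = ''): string[] {
  if (a === b) return [];
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) {
    return [path || '(root)'];
  }
  const keys = new Set([...Object.keys(a as object), ...Object.keys(b as object)]);
  const diffs: string[] = [];
  for (const key of keys) {
    const childPath = path ? `${path}.${key}` : key;
    diffs.push(...diffPayloads((a as any)[key], (b as any)[key], childPath));
  }
  return diffs;
}
```

Log the returned paths during the shadow period; an empty list across a few thousand reads is your signal that the JSONata mappings are faithful.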
Cutover Sequence
```mermaid
sequenceDiagram
    participant App as Your Application
    participant GW as API Gateway / Feature Flag
    participant Old as Apideck
    participant New as Truto
    App->>GW: API Request (integration call)
    alt Feature flag: canary group
        GW->>New: Route to Truto
        New-->>GW: Unified response
    else Feature flag: default
        GW->>Old: Route to Apideck
        Old-->>GW: Unified response
    end
    GW-->>App: Response (same shape)
```

1. Canary a single low-risk integration (e.g., a file storage connector with low traffic). Route 5% of traffic through Truto.
2. Monitor for 48 hours. Watch for token refresh failures, response shape mismatches, and rate limit behavior.
3. Expand to 100% for that integration. Then repeat for the next provider.
4. Deprovision Apideck connections only after the new platform has successfully refreshed each token at least once.
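The canary routing in the sequence above works best when it is deterministic: the same account should always land in the same bucket so a customer never flips between platforms mid-session. A sketch of hash-based bucketing (the hash scheme is an illustrative choice):

```typescript
// Deterministically assign an account to the canary group based on a
// stable hash of its ID, so routing decisions survive restarts.
function inCanary(accountId: string, percentage: number): boolean {
  let hash = 0;
  for (let i = 0; i < accountId.length; i++) {
    hash = (hash * 31 + accountId.charCodeAt(i)) >>> 0; // keep unsigned 32-bit
  }
  return hash % 100 < percentage;
}
```

Ramping from 5% to 100% is then a config change to the percentage, with no account ever switching buckets on the way up for a given percentage value.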
The Trade-Offs You Should Know About
Let's be direct about what this migration costs you:
- Engineering time: Plan for 2 to 4 weeks of dedicated engineering effort for a team with 50+ linked accounts across 5+ providers. The token import is fast; writing and testing JSONata mappings to match Apideck's exact response shape is the slow part.
- Rate limit responsibility shifts to you. On Apideck, rate limit handling was opaque. On Truto, you get standardized headers and must implement your own backoff. This is more control, but also more code.
- Provider-specific quirks don't disappear. Salesforce's SOQL-based filtering, HubSpot's association API, QuickBooks' realm_id requirement—these all still exist regardless of which unified API sits in front of them. A new platform doesn't make bad vendor APIs better.
The upside: you get custom field mappings on every plan, proactive token refresh that catches failures before users do, and a declarative architecture where adding a new integration is a data operation, not a code deployment.
What Comes Next
If you are at the point where Apideck's constraints are blocking enterprise deals or causing compliance gaps, the migration is worth doing. The average company uses over 100 SaaS apps, and your customers expect every one of them to integrate with your product.
Check your OAuth app ownership first, export your tokens before deprovisioning anything, and use declarative schema mappings to preserve your frontend contract. By understanding the mechanics of OAuth token portability and taking control of your own rate limit queues, you can upgrade your integration infrastructure without ever asking a customer to hit "Reconnect."
FAQ
- Can I export raw OAuth tokens from Apideck Vault?
- Apideck Vault does not expose raw tokens via its standard API. If you own the OAuth app credentials, you can request a secure export from Apideck support. If you used Apideck's managed OAuth app, you must re-authenticate connections with your own app.
- Does switching unified APIs require users to reconnect their accounts?
- Not if you own the OAuth app that issued the credentials. Tokens are portable strings. You can extract them and import them into a new platform's credential store transparently, so the end user never sees the infrastructure switch.
- How does Truto handle API rate limits compared to Apideck?
- Truto does not absorb or retry 429 errors. It normalizes upstream rate limits into standard IETF headers (ratelimit-limit, ratelimit-remaining, ratelimit-reset), giving your application the data needed to manage its own backpressure and queues.
- Will migrating away from Apideck break my frontend code?
- No. By using declarative JSONata mappings in Truto, you can transform upstream API responses into the exact JSON schema your application currently expects from Apideck, eliminating the need to rewrite frontend logic.