Migrating Beyond Finch: Expanding to a Multi-Category Unified API Without the Re-Authentication Cliff
Learn how to migrate from a single-category HRIS API to a multi-category unified API platform without forcing enterprise customers to re-authenticate.
If you built your initial HRIS integrations on Finch and your product roadmap now demands CRM, ATS, ticketing, or accounting integrations, you are likely hitting the architectural limits of a single-category integration strategy. Finch does one thing exceptionally well—employment data—but it does not extend beyond that. When your enterprise buyers demand native integrations with their broader software ecosystem, a single-category unified API becomes a roadblock.
The engineering reality sets in: you either integrate a second unified API platform, build point-to-point connections in-house, or migrate to a multi-category provider. (If you are currently evaluating your options, we recently broke down the best alternatives to Finch). Most engineering leaders delay this migration because of the "re-authentication cliff." Moving to a new integration infrastructure usually means emailing hundreds of enterprise customers, asking their IT admins to log back in and re-authorize OAuth permissions.
It does not have to be this way. If you approach the migration as a data operation rather than a code rewrite, you can port your existing integrations to a multi-category unified API without your end users ever knowing.
This guide breaks down the exact technical strategy to export OAuth tokens, import them into a generic credential context, handle rate limits post-migration, and use declarative mappings to mimic your old API responses so your frontend code remains untouched.
The SaaS Sprawl: Why HRIS-Only Unified APIs Eventually Hit a Wall
Finch's unified API was built specifically for the employment ecosystem. That specificity is a strength when all you need is employee directories, pay runs, and benefits enrollment. Finch only supports employment data: HRIS, payroll, benefits, and employment verification. If your product only touches HR workflows, that is fine. But product roadmaps rarely stay in one lane.
Companies use an average of 106 different SaaS tools, down roughly 5% from 112 in 2023. Even with that mild consolidation underway, the average enterprise stack is enormous. Smaller companies (1-500 employees) spend an average of $11.5M on SaaS and use 152 apps, while large enterprises (10,000+ employees) spend an average of $284M and use 660 apps. If you zoom into research from Salesforce's connectivity benchmarks, the picture is even more complex: the average enterprise uses 897 applications, and only 29% are connected.
Those applications do not sit neatly inside the HRIS box. They span learning management systems, CI/CD tools, payment gateways, instant messaging platforms, and knowledge bases.
Here is how the wall typically appears for a growing B2B SaaS company:
- Your sales team closes an enterprise deal that requires syncing employee data from BambooHR (covered by Finch) and syncing deal data from Salesforce (not covered by Finch).
- Product requests a ticketing integration—Jira, Zendesk, or ServiceNow—for your customer success workflow. Finch cannot help.
- Finance needs accounting data from QuickBooks or Xero for reconciliation features. You are on your own.
Suddenly you are managing Finch for HRIS, building direct integrations for CRM, and evaluating yet another vendor for accounting. Three integration strategies, three sets of auth management, three billing relationships. Maintaining separate webhook ingestion pipelines, distinct pagination normalizers, and different error-handling logic across disjointed systems is an operational nightmare. The overhead compounds fast.
For a deeper breakdown of how category coverage differs across unified API platforms, see our comparison: Which Integration Platform Handles the Most API Categories?.
The Migration Cliff: The Fear of Re-Authenticating Users
The re-authentication cliff is the single biggest reason product teams stay on a limited vendor longer than they should. Forcing hundreds of enterprise customers to click "Reconnect" on their core systems of record is a massive friction point. It means:
- Spike in support volume: Every enterprise IT or HR admin who sees a previously working integration suddenly marked as broken will open a ticket asking why.
- Security reviews: Re-authorizing an OAuth application often triggers a fresh security review from the client's InfoSec team.
- Customer drop-off and churn risk: Enterprise customers do not appreciate being told to re-link their systems of record because you switched infrastructure vendors. High friction increases drop-off, and authentication failures significantly increase the cost of customer support.
- Weeks of coordination: You must coordinate with enterprise clients who have strict change management processes and approval chains.
The fear is rational. When a user hits a re-authentication prompt they did not expect, a significant percentage will abandon the flow entirely. For enterprise accounts paying six figures, that is a conversation nobody wants to have.
But here is the thing: you do not have to re-authenticate anyone.
By removing the need to re-authenticate, platforms prevent user drop-off and maintain a frictionless experience. You already have the authorization grants. You already hold the OAuth access and refresh tokens. The technical strategy below explains exactly how to move that state from one vendor's database to another without breaking the token lifecycle.
Step 1: Exporting and Importing OAuth Tokens
The core idea: OAuth access tokens and refresh tokens are just strings. Whether you are holding a Salesforce access token or a Workday refresh token, the underlying OAuth 2.0 mechanics are identical. If you can export them from your current vendor and import them into a new platform's credential store, the upstream provider (BambooHR, Gusto, etc.) never knows the difference. From the provider's perspective, the same OAuth application is making the same API calls with the same tokens.
For a full walkthrough of this pattern applied to another vendor migration, see our guide to migrating from Merge.dev without re-authenticating customers.
What you need to export from Finch
For each Finch connection, you need to extract the raw credentials:
- `access_token` - the current OAuth access token
- `refresh_token` - used to obtain new access tokens when the current one expires
- `expires_at` - the token expiry timestamp
- `scope` - the granted OAuth scopes
- Provider identifier - which HRIS/payroll system (e.g., `bamboohr`, `gusto`, `adp_workforce_now`)
Note on Assisted Integrations: Finch manages tokens internally. Depending on your Finch plan and integration type, you may need to work with Finch's support team or use their API to retrieve the raw OAuth tokens for each connection. For "assisted" integrations (which make up a large portion of Finch's 250+ provider catalog), credential export may not be possible since those connections use screen-scraping or credential-based access rather than standard OAuth. Those specific connections will require re-authentication.
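The audit step can be sketched as a simple partition over your exported connections. The `ConnectionRecord` shape and the `auth_type` values below are illustrative assumptions for this guide, not Finch's actual export format:

```typescript
// Sketch: partition exported connections into token-migratable vs re-auth buckets.
// ConnectionRecord and its auth_type values are hypothetical, for illustration only.
interface ConnectionRecord {
  id: string;
  provider: string; // e.g. "bamboohr", "gusto"
  auth_type: "oauth" | "assisted";
  access_token?: string;
  refresh_token?: string;
}

function partitionConnections(connections: ConnectionRecord[]) {
  const migratable: ConnectionRecord[] = [];
  const needsReauth: ConnectionRecord[] = [];
  for (const conn of connections) {
    // Only standard OAuth connections with both tokens can be ported as-is;
    // assisted (screen-scraping or credential-based) connections cannot.
    if (conn.auth_type === "oauth" && conn.access_token && conn.refresh_token) {
      migratable.push(conn);
    } else {
      needsReauth.push(conn);
    }
  }
  return { migratable, needsReauth };
}
```

Running this over your full export gives you the two lists you need before starting: connections you can move silently, and connections that will require a re-authentication plan.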
The OAuth App Ownership Question
This is the part most teams miss. For the token migration to work, the OAuth application (the client_id and client_secret) must be the same on both sides. There are two paths:
- You own the OAuth app. If you registered your own OAuth application with BambooHR, Gusto, etc., and configured Finch to use it, you can point the new unified API platform at the same OAuth app. Tokens stay valid because the `client_id` has not changed.
- Finch owns the OAuth app. If Finch's own OAuth application was used (common for smaller customers), you will need to register your own OAuth apps with each provider and re-authenticate those connections. There is no way around this—tokens are bound to the `client_id` that issued them.
This is why OAuth app ownership matters so much when selecting any integration vendor. If you do not own the OAuth app, you do not own the relationship.
```mermaid
flowchart LR
    A["Export tokens<br>from Finch"] --> B{"Who owns the<br>OAuth app?"}
    B -->|You own it| C["Import tokens<br>into new platform"]
    B -->|Finch owns it| D["Register your own<br>OAuth apps first"]
    D --> E["Re-auth required<br>for these connections"]
    C --> F["Tokens work<br>immediately"]
    F --> G["Platform handles<br>refresh going forward"]
```

Importing into a Generic Credential Store
Most unified APIs struggle with importing tokens because they use hardcoded, integration-specific database columns. If a platform has a finch_token column or a workday_auth table, importing arbitrary third-party tokens requires custom engineering work from the vendor.
A multi-category unified API platform like Truto uses a fundamentally different architecture. Truto's database contains zero integration-specific columns. Connected accounts (often referred to as linked accounts) are stored in an integrated_account table, and the credentials live inside a generic context JSON object on each connected account record.
To import your existing users, you simply format your exported tokens into Truto's expected JSON structure and write them to the API:
```json
{
  "context": {
    "oauth": {
      "token": {
        "access_token": "eyJhbGciOiJSUzI1NiIs...",
        "refresh_token": "dGhpcyBpcyBhIHJlZnJl...",
        "expires_at": "2026-04-09T14:30:00.000Z",
        "token_type": "bearer",
        "scope": "employees:read payroll:read"
      }
    }
  },
  "authentication_method": "oauth2",
  "integration_name": "bamboohr"
}
```

Proactive Token Refresh After Import
Once tokens are injected, the new platform needs to keep them alive. Truto takes over the lifecycle management and does not wait for tokens to expire. The platform schedules a proactive refresh alarm 60 to 180 seconds before the expires_at timestamp (with a randomized buffer to spread load).
A background worker holds a distributed lock during each refresh, ensuring that concurrent API requests or sync jobs do not trigger race conditions while the token is being rotated. If the token is already expired at import time, the platform will attempt an immediate refresh using the refresh token. If that fails—because the refresh token was revoked or the OAuth app changed—the account gets flagged as `needs_reauth` and a webhook fires so you can notify the customer.
The key point: most OAuth refresh tokens do not expire when you move them between platforms. A refresh token issued by BambooHR to client_id=abc123 will work from any server that presents the same client_id and client_secret. The provider does not know or care which unified API vendor is calling.
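To make both mechanics concrete, here is a sketch of the jittered refresh schedule described above and the standard OAuth 2.0 refresh_token grant (RFC 6749, section 6). The function names are placeholders, and the token endpoint in the usage example is hypothetical—this is not Truto's or any provider's actual API:

```typescript
// Sketch: schedule a proactive refresh 60-180 seconds before expiry.
// The `rand` parameter is injectable so the jitter can be tested deterministically.
function scheduleRefreshAt(expiresAtMs: number, rand: () => number = Math.random): number {
  const bufferMs = (60 + rand() * 120) * 1000; // randomized 60-180s buffer
  return expiresAtMs - bufferMs;
}

// Sketch: build a standard refresh_token grant request. Any server that presents
// the same client_id/client_secret can issue this; the provider cannot tell
// which integration vendor is calling.
function buildRefreshRequest(opts: {
  tokenEndpoint: string;
  clientId: string;
  clientSecret: string;
  refreshToken: string;
}) {
  const body = new URLSearchParams({
    grant_type: "refresh_token",
    refresh_token: opts.refreshToken,
    client_id: opts.clientId,
    client_secret: opts.clientSecret,
  });
  return {
    url: opts.tokenEndpoint,
    method: "POST" as const,
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body.toString(),
  };
}
```

Sending this request from the new platform's servers instead of Finch's is invisible to the provider, which is exactly why imported refresh tokens keep working.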
Step 2: Handling Rate Limits Post-Migration
After migrating tokens, your application will be making API calls through a different intermediary. Rate limit behavior changes, and you need to plan for it. When migrating integration platforms, engineering teams often misunderstand how rate limiting should be handled at the proxy layer. Many developers want their unified API to magically absorb rate limits, queue requests indefinitely, and hide HTTP 429 errors.
This is a dangerous architectural anti-pattern. If a proxy layer holds HTTP connections open while applying exponential backoff, it quickly exhausts connection pools. It hides backpressure from the client, leading to cascading failures across your infrastructure.
Radical honesty is required here: Truto does not retry, throttle, or absorb rate limit errors.
When an upstream provider returns HTTP 429 (Too Many Requests), Truto passes that error directly back to your application. This is a deliberate design choice—your application knows its own traffic patterns and priorities better than any intermediary can.
Instead, Truto normalizes rate limit information from terrible, inconsistent upstream APIs into standardized response headers based on the IETF RateLimit header fields specification.
Every upstream provider formats rate limit information differently. Salesforce uses Sforce-Limit-Info. HubSpot uses X-HubSpot-RateLimit-Daily. BambooHR gives you almost nothing. The IETF spec defines a uniform way to present this data. Truto normalizes all of these into the same three headers regardless of which provider you are calling:
| Header | Meaning |
|---|---|
| `ratelimit-limit` | Maximum requests allowed in the current window |
| `ratelimit-remaining` | Requests remaining before you hit the limit |
| `ratelimit-reset` | Seconds until the rate limit window resets |
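As a rough illustration of what this normalization involves, here is a simplified sketch that maps a Salesforce-style `Sforce-Limit-Info` header into the IETF trio. Real provider formats carry more detail than this handles (Salesforce packs multiple limits into one header), so treat it as a concept demo rather than production parsing logic:

```typescript
// Sketch: normalize provider-specific rate limit headers into IETF-style ones.
// Parsing rules are deliberately simplified for illustration.
function normalizeRateLimitHeaders(upstream: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};

  // Salesforce: "api-usage=123/15000" encodes used/limit for the daily window.
  const sforce = upstream["sforce-limit-info"];
  if (sforce) {
    const match = sforce.match(/api-usage=(\d+)\/(\d+)/);
    if (match) {
      const used = parseInt(match[1], 10);
      const limit = parseInt(match[2], 10);
      out["ratelimit-limit"] = String(limit);
      out["ratelimit-remaining"] = String(limit - used);
    }
  }

  // If the provider already sends IETF-style headers, pass them through.
  for (const key of ["ratelimit-limit", "ratelimit-remaining", "ratelimit-reset"]) {
    if (upstream[key]) out[key] = upstream[key];
  }
  return out;
}
```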
Your application is responsible for reading these headers and implementing its own retry logic. This gives you absolute control over how backpressure is handled. Your backoff logic then becomes provider-agnostic:
```typescript
// Example: Implementing caller-controlled backoff using Truto's normalized headers
async function callWithBackoff(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const resetSeconds = parseInt(
        response.headers.get('ratelimit-reset') || '60', 10
      );
      // Add jitter to prevent a thundering herd of synchronized retries
      const jitter = Math.random() * 2;
      console.warn(`Rate limited. Sleeping for ${resetSeconds + jitter} seconds.`);
      await new Promise(resolve => setTimeout(resolve, (resetSeconds + jitter) * 1000));
      continue;
    }

    // Check remaining quota proactively
    const remaining = parseInt(
      response.headers.get('ratelimit-remaining') || '100', 10
    );
    if (remaining < 5) {
      // Slow down before hitting the wall
      await new Promise(resolve => setTimeout(resolve, 1000));
    }

    return response;
  }
  throw new Error('Rate limit retries exhausted');
}
```

By passing the 429 directly back to the caller alongside standardized headers, Truto ensures you have the exact metadata needed to implement resilient queueing on your end.
If you were previously relying on Finch to handle rate limiting internally (Finch batches and caches data on a daily refresh cycle), switching to a real-time API model means your application is now directly exposed to upstream rate limits. Build your backoff logic before you flip the switch.
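One pattern worth pairing with reactive backoff is a proactive client-side limiter, so bursts are smoothed before they ever reach the provider. Here is a minimal token-bucket sketch; the capacity and refill rate are illustrative values you would tune per provider using the normalized `ratelimit-*` headers:

```typescript
// Sketch: a minimal token bucket for smoothing outbound request bursts.
// Timestamps are passed in explicitly so the behavior is deterministic and testable.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true and consumes a token if a request may proceed now.
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Callers that fail `tryAcquire` can queue locally instead of burning upstream quota, which keeps the reactive 429 path as a rare fallback rather than the normal case.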
Step 3: Using Declarative Mappings to Mimic Old Responses
This is where most migration guides stop, and most migrations fail. You have moved the tokens. You have handled rate limits. But your frontend code still expects Finch's specific response schema—field names like individual_id, first_name, last_name, department.name, and Finch-specific envelope structures.
If your new unified API returns a different schema, the JSON shapes will change. A field that used to be called employee_status might now be called status. You have two choices: rewrite every frontend component that consumes HRIS data, or reshape the new API's responses to match the old format.
Rewriting your entire frontend to accommodate a new schema defeats the purpose of a fast migration.
The declarative mapping approach avoids frontend rewrites entirely. Truto uses JSONata—a functional query and transformation language purpose-built for reshaping JSON objects—to define how responses are shaped. Every field mapping is a declarative expression stored as configuration, not compiled code.
```mermaid
graph TD
    A[Third-Party API Raw Response] -->|Proxy Layer| B(Truto Engine)
    B --> C{JSONata Mapping Expression}
    C -->|Transforms Data| D[Unified Schema]
    C -->|Customer Override| E["Legacy Finch Schema Mimic<br>No Frontend Changes Needed"]
```

Here is a concrete example. Suppose Finch returned employee data in this shape:
```json
{
  "individual_id": "finch-id-123",
  "first_name": "Jane",
  "last_name": "Doe",
  "department": { "name": "Engineering" },
  "manager": { "id": "finch-id-456" },
  "is_active": true
}
```

And Truto's default unified HRIS schema returns:
```json
{
  "id": "truto-id-123",
  "first_name": "Jane",
  "last_name": "Doe",
  "department": "Engineering",
  "manager_id": "truto-id-456",
  "employment_status": "active"
}
```

Because Truto's mappings are just JSONata strings stored in the database, you can write an override expression that takes Truto's standard output and reshapes it to exactly mimic the legacy Finch structure:
```jsonata
/* Example JSONata mapping override to mimic a legacy Finch HRIS schema */
response.{
  "individual_id": id,
  "first_name": first_name,
  "last_name": last_name,
  "department": { "name": department },
  "manager": { "id": manager_id },
  "is_active": employment_status = "active"
}
```

Truto supports a three-level override hierarchy, each merged on top of the previous:
- Platform base - the default mapping that works for most customers.
- Environment override - your environment's custom mapping (this is where the Finch-compatibility shim lives).
- Account override - per-customer overrides for edge cases (e.g., a customer with highly specific custom fields).
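A rough sketch of how such a layered merge could behave: later levels override earlier ones, field by field. Truto's actual merge semantics may differ; this only illustrates the layering concept:

```typescript
// Sketch: merge a three-level mapping hierarchy, later levels winning per field.
// A Mapping here is just field name -> JSONata expression string.
type Mapping = Record<string, string>;

function resolveMapping(
  platformBase: Mapping,
  environmentOverride: Mapping = {},
  accountOverride: Mapping = {}
): Mapping {
  // Later spreads win: account > environment > platform base.
  return { ...platformBase, ...environmentOverride, ...accountOverride };
}
```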
You can apply this legacy-mimicking JSONata mapping at the environment level. When a request comes in, Truto fetches the raw data from the third-party API, evaluates the JSONata expression, and returns the exact JSON shape your frontend was already built to consume.
No code deployments. No frontend rewrites. You achieve a complete infrastructure migration entirely through data configuration. Once your frontend is fully migrated to the new schema over time, you simply remove the environment override and start consuming the standard unified format. The compatibility layer was always meant to be temporary scaffolding.
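If you want to validate the reshaping logic locally before trusting the shim in production, a plain TypeScript equivalent of the JSONata mapping is easy to unit test. The field shapes below mirror the example payloads in this guide, not the full Finch or Truto schemas:

```typescript
// Sketch: plain-TS equivalent of the JSONata shim, for local validation.
// TrutoEmployee mirrors the example payload above, not Truto's complete schema.
interface TrutoEmployee {
  id: string;
  first_name: string;
  last_name: string;
  department: string;
  manager_id: string;
  employment_status: string;
}

function toLegacyFinchShape(e: TrutoEmployee) {
  return {
    individual_id: e.id,
    first_name: e.first_name,
    last_name: e.last_name,
    department: { name: e.department },
    manager: { id: e.manager_id },
    is_active: e.employment_status === "active",
  };
}
```

Comparing this function's output against recorded Finch responses gives you a fast feedback loop while you iterate on the real JSONata expression.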
What This Migration Does Not Solve
Honesty matters here. Migrating to a multi-category unified API is not a free lunch. There are a few constraints to be aware of:
- Assisted integrations may not migrate: If you rely on Finch's "assisted" connections (screen-scraping or credential-based access for providers without APIs), those cannot be token-migrated. By default, Finch requests fresh data from providers each day, with assisted integrations refreshing every 7 days. Some of those providers simply do not have OAuth APIs. You will need to evaluate whether the new platform covers those providers natively or if you need a fallback strategy.
- Schema differences are real: Even with the JSONata compatibility layer, there will be edge cases where Finch's schema exposed data in ways the new platform does not. Audit your Finch API usage before starting the migration, not after.
- Rate limit exposure increases: Finch's daily sync model means you were never hitting upstream APIs in real-time. A real-time unified API means your application now directly contends with provider rate limits. This is more flexible but requires more careful engineering.
Expanding Your Roadmap: From HRIS to CRM, Ticketing, and Beyond
The token migration and response mapping are the hard parts. Once you have successfully migrated your OAuth tokens and mapped your response schemas, you have effectively eliminated the re-authentication cliff. Your existing users remain connected, and your application continues to function normally.
But the real value of migrating to a multi-category unified API is what happens next. The actual expansion into new categories is surprisingly fast.
You are no longer restricted to employment data. Because Truto uses the same generic execution pipeline for every integration, the infrastructure you just implemented for HRIS automatically supports over 27 distinct verticals.
Here is what becomes possible immediately after migration:
| Category | Example Integrations | Use Case |
|---|---|---|
| CRM | Salesforce, HubSpot, Pipedrive | Sync customer records with your product |
| ATS | Greenhouse, Lever, Ashby | Pull candidate data into your HR workflow |
| Ticketing | Jira, Zendesk, Linear | Surface support tickets in your product |
| Accounting | QuickBooks, Xero, NetSuite | Reconcile payroll data with financial records |
| Knowledge Base | Notion, Confluence | Feed company docs into RAG pipelines |
| Directory | Okta, Microsoft Entra ID, Google Workspace | Sync user provisioning across tools |
Each of these categories follows the same pattern: one API call, one schema, and the same credential management you already set up. If your sales team requests a Salesforce integration, your engineering team does not need to learn SOQL, figure out Salesforce's pagination quirks, or build a new webhook ingestion pipeline. They simply query Truto's CRM unified model. If a customer needs a Zendesk integration, you query the Ticketing model.
For teams building HRIS integrations specifically, the unified HRIS model covers employees, employments, compensations, time-off requests, time-off balances, locations, groups, job roles, and company benefits—with the same depth as dedicated HRIS-only vendors, but within a broader platform.
By migrating beyond a single-category provider, you transform integrations from a constant engineering bottleneck into a predictable, scalable data operation.
Making the Move: A Practical Checklist
If you are ready to execute this migration, follow these precise steps:
- Audit your Finch connections: Categorize each connection by auth type: OAuth (migratable) vs. assisted/credential-based (requires re-auth).
- Verify OAuth app ownership: If Finch's OAuth app was used, register your own apps with each provider before starting.
- Export tokens: Pull the raw tokens for all OAuth-based connections. Store them securely.
- Set up the new platform: Create integrated accounts in your new multi-category API, importing tokens into the generic credential context.
- Apply response mapping overrides: Write JSONata configurations to match Finch's response schema during the transition.
- Run both systems in parallel: Maintain a validation period. Compare responses. Fix mapping edge cases.
- Switch traffic: Route requests to the new platform. Monitor for `needs_reauth` webhook events.
- Retire the Finch dependency: Remove the response mapping compatibility layer once your frontend migration is complete.
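For the parallel-run step, a small structural diff helper makes mapping edge cases visible before you switch traffic. A naive recursive sketch (arrays are compared by index, and field names are hypothetical):

```typescript
// Sketch: report the paths where a legacy response and a shimmed response disagree.
// Useful for comparing recorded Finch payloads against the new platform's output.
function diffPayloads(legacy: unknown, shimmed: unknown, path = ""): string[] {
  if (typeof legacy !== typeof shimmed) return [path || "(root)"];
  if (legacy !== null && typeof legacy === "object" && shimmed !== null && typeof shimmed === "object") {
    const keys = new Set([...Object.keys(legacy as object), ...Object.keys(shimmed as object)]);
    const diffs: string[] = [];
    for (const key of keys) {
      diffs.push(...diffPayloads(
        (legacy as Record<string, unknown>)[key],
        (shimmed as Record<string, unknown>)[key],
        path ? `${path}.${key}` : key
      ));
    }
    return diffs;
  }
  return legacy === shimmed ? [] : [path || "(root)"];
}
```

Running this over a sample of real traffic during the validation period turns "compare responses" from a manual spot check into an automated report of mapping gaps.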
The entire process for a team with 100-200 Finch connections typically takes two to four weeks of engineering time. The payoff is immediate access to a massive integration catalog through the exact same API you already know.
Architectural Takeaway: Migrating unified APIs does not require starting from scratch. By treating OAuth tokens as portable strings and using JSONata to mimic legacy schemas, you can swap out integration infrastructure without impacting your enterprise users.
Frequently Asked Questions
- Can I migrate from Finch without re-authenticating my customers?
- Yes, if you own the OAuth applications registered with each HRIS provider. OAuth tokens are portable—you export them from Finch, import them into the new platform's generic credential store, and the upstream provider never knows the difference. If Finch owns the OAuth app, re-authentication is required.
- Does Finch support CRM, ATS, or accounting integrations?
- No. Finch is built exclusively for the employment ecosystem, including HRIS, payroll, benefits, and employment verification. To add CRM, ATS, ticketing, or accounting integrations, you need a multi-category unified API platform.
- How does Truto handle rate limits differently from Finch?
- Truto passes HTTP 429 rate limit errors directly back to the caller without silently retrying or applying hidden backoff. It normalizes upstream rate limit data into standardized IETF headers (ratelimit-limit, ratelimit-remaining, ratelimit-reset) so your application can implement reliable, custom backoff logic.
- Will my frontend code break when I switch from Finch to another unified API?
- Not if you use declarative response mappings. You can use JSONata expressions in the new unified API to reshape outgoing API responses so they exactly mimic the legacy JSON schema your frontend already expects, acting as a temporary compatibility layer.