---
title: "Migrating Beyond Finch: Expanding to a Multi-Category Unified API Without the Re-Authentication Cliff"
slug: migrating-beyond-finch-expanding-to-a-multi-category-unified-api-without-the-re-authentication-cliff
date: 2026-04-08
author: Yuvraj Muley
categories: [Engineering, Guides, By Example]
excerpt: "Learn how to migrate from Finch or Merge.dev to a multi-category unified API without re-authenticating customers, with a reconnect playbook and rollout plan."
tldr: "Migrate OAuth tokens from Finch or Merge.dev without re-authenticating users. Covers token export, partner credential portability, JSONata schema mapping, reconnect automation with webhook listeners, and a phased rollout playbook."
canonical: https://truto.one/blog/migrating-beyond-finch-expanding-to-a-multi-category-unified-api-without-the-re-authentication-cliff/
---

# Migrating Beyond Finch: Expanding to a Multi-Category Unified API Without the Re-Authentication Cliff


If you built your initial HRIS integrations on Finch and your product roadmap now demands CRM, ATS, ticketing, or accounting integrations, you are likely hitting the architectural limits of a single-category integration strategy. Finch does one thing exceptionally well—employment data—but it does not extend beyond that. When your enterprise buyers demand native integrations with their broader software ecosystem, a single-category unified API becomes a roadblock.

The engineering reality sets in: you either integrate a second unified API platform, build point-to-point connections in-house, or migrate to a multi-category provider. (If you are currently evaluating your options, we recently broke down the [best alternatives to Finch](https://truto.one/blog/what-are-the-best-alternatives-to-finch-for-employment-data-in-2026/)). Most engineering leaders delay this migration because of the "re-authentication cliff." Moving to a new integration infrastructure usually means emailing hundreds of enterprise customers, asking their IT admins to log back in and re-authorize OAuth permissions. 

It does not have to be this way. If you approach the migration as a data operation rather than a code rewrite, you can port your existing integrations to a multi-category unified API without your end users ever knowing.

This guide breaks down the exact technical strategy to export OAuth tokens, import them into a generic credential context, handle rate limits post-migration, and use declarative mappings to mimic your old API responses so your frontend code remains untouched.

While this guide uses Finch as the primary example, the same migration strategy applies to any unified API vendor - including multi-category providers like Merge.dev. If you are evaluating Merge.dev alternatives because you have outgrown its customization limits, sync-and-store architecture, or credential ownership model, the token portability and response mapping techniques below work identically. We cover Merge.dev-specific migration considerations in a [dedicated section below](#migrating-from-mergedev-what-changes), and our full [guide to migrating from Merge.dev](https://truto.one/blog/how-to-migrate-from-mergedev-without-re-authenticating-customers/) walks through the process end-to-end.

## The SaaS Sprawl: Why HRIS-Only Unified APIs Eventually Hit a Wall

Finch's unified API was built specifically for the employment ecosystem. That specificity is a strength when all you need is employee directories, pay runs, and benefits enrollment. Finch only supports employment data: HRIS, payroll, benefits, and employment verification. If your product only touches HR workflows, that is fine. But product roadmaps rarely stay in one lane.

Companies have an average of 106 different SaaS tools, down roughly 5% from the 112 recorded in 2023. Even with consolidation underway, the average stack is enormous. Smaller companies (1-500 employees) spend an average of $11.5M on SaaS and use 152 apps, while large enterprises (10,000+ employees) spend an average of $284M and use 660 apps. Research from Salesforce's connectivity benchmarks paints an even more complex picture: the average enterprise uses 897 applications, and only 29% of them are connected.

Those applications do not sit neatly inside the HRIS box. They span learning management systems, CI/CD tools, payment gateways, instant messaging platforms, and knowledge bases. 

Here is how the wall typically appears for a growing B2B SaaS company:

1. **Your sales team closes an enterprise deal** that requires syncing employee data from BambooHR (covered by Finch) *and* syncing deal data from Salesforce (not covered by Finch).
2. **Product requests a ticketing integration**—Jira, Zendesk, or ServiceNow—for your customer success workflow. Finch cannot help.
3. **Finance needs accounting data** from QuickBooks or Xero for reconciliation features. You are on your own.

Suddenly you are managing Finch for HRIS, building direct integrations for CRM, and evaluating yet another vendor for accounting. Three integration strategies, three sets of auth management, three billing relationships. Maintaining separate webhook ingestion pipelines, distinct pagination normalizers, and different error-handling logic across disjointed systems is an operational nightmare. The overhead compounds fast.

For a deeper breakdown of how category coverage differs across unified API platforms, see our comparison: [Which Integration Platform Handles the Most API Categories?](https://truto.one/blog/which-integration-platform-handles-the-most-api-categories-2026/).

## The Migration Cliff: The Fear of Re-Authenticating Users

The re-authentication cliff is the single biggest reason product teams stay on a limited vendor longer than they should. Forcing hundreds of enterprise customers to click "Reconnect" on their core systems of record is a massive friction point. It means:

*   **Spike in support volume:** Enterprise IT admins will open tickets questioning why a previously working integration is suddenly broken. Every confused HR admin who sees a broken connection will file a ticket.
*   **Security reviews:** Re-authorizing an OAuth application often triggers a fresh security review from the client's InfoSec team.
*   **Customer drop-off and churn risk:** Enterprise customers do not appreciate being told to re-link their systems of record because you switched infrastructure vendors. High friction increases drop-off, and authentication failures significantly increase the cost of customer support.
*   **Weeks of coordination:** You must coordinate with enterprise clients who have strict change management processes and approval chains.

The fear is rational. When a user hits a re-authentication prompt they did not expect, a significant percentage will abandon the flow entirely. For enterprise accounts paying six figures, that is a conversation nobody wants to have.

But here is the thing: **you do not have to re-authenticate anyone.** 

You already have the authorization grants. You already hold the OAuth access and refresh tokens. Keeping those grants intact means no drop-off and no friction for your users. The technical strategy below explains exactly how to move that state from one vendor's database to another without breaking the token lifecycle.

## Migrating from Merge.dev: What Changes

If you are migrating from Merge.dev rather than a single-category vendor like Finch, the core strategy is identical - export tokens, import them, map responses - but the details differ in important ways.

**Partner credentials determine token portability.** Merge.dev allows you to configure your own OAuth application credentials (called "partner credentials") for each integration. If you set up partner credentials with a provider like Salesforce or HubSpot, you own the OAuth app and your tokens are potentially portable. If you never configured partner credentials, Merge's own OAuth app was used, and those tokens are bound to Merge's `client_id`. Those connections will require re-authentication.

There is an extra wrinkle: even if you added partner credentials to Merge after initial setup, existing Linked Accounts that were created using Merge's default credentials stay bound to those defaults. Only connections established *after* you configured partner credentials use your OAuth app. Check your Merge dashboard under Integrations to confirm which connections actually use your credentials vs. Merge's.

**Merge's `account_token` is not the raw OAuth token.** Each Merge Linked Account has an `account_token` that authenticates requests to Merge's unified API. This is Merge's internal abstraction - not the OAuth access token from the upstream provider like BambooHR or Salesforce. To migrate, you need the actual OAuth access and refresh tokens that Merge holds on your behalf for each connection. You will likely need to coordinate with Merge's team to export these raw credentials.

**Update the OAuth redirect URI.** When you configured partner credentials in Merge, the redirect URI was set to Merge's callback URL. Before new authentications can flow through the new platform, update the redirect URI in your OAuth app configuration with each provider to point to the new platform's callback URL. This does not invalidate existing tokens - redirect URIs are only used during the initial OAuth authorization flow, not during token refresh.

**Avoid triggering RELINK_NEEDED during migration.** When a Merge Linked Account's auth cannot be refreshed, Merge flags it as `RELINK_NEEDED`, pausing data syncs. During migration, import and validate tokens on the new platform *before* disabling syncs on Merge so there is never a gap in token lifecycle management.

**Schema mapping from Merge's Common Model.** Merge uses its own Common Model schema for each category (HRIS, ATS, CRM, Accounting, Ticketing). If your frontend consumes Merge's field names and envelope structures, you will need JSONata overrides (described in Step 3 below) to reshape the new platform's unified schema to match Merge's format during the transition.

## Step 1: Exporting and Importing OAuth Tokens

**The core idea:** OAuth access tokens and refresh tokens are just strings. Whether you are holding a Salesforce access token or a Workday refresh token, the underlying OAuth 2.0 mechanics are identical. If you can export them from your current vendor and import them into a new platform's credential store, the upstream provider (BambooHR, Gusto, etc.) never knows the difference. From the provider's perspective, the same OAuth application is making the same API calls with the same tokens.
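This portability is easy to see in the mechanics of a standard OAuth 2.0 refresh grant: the request only needs the refresh token plus the client credentials, so any server holding the same `client_id` and `client_secret` can perform the exchange. A minimal sketch (the token endpoint URL and credential values here are illustrative placeholders, not real provider values):

```typescript
// Sketch: a standard OAuth 2.0 refresh_token grant request. Any server
// presenting the same client_id/client_secret can perform this exchange,
// which is exactly why migrated tokens keep working.
function buildRefreshRequest(
  tokenUrl: string,
  clientId: string,
  clientSecret: string,
  refreshToken: string
): { url: string; body: string } {
  const body = new URLSearchParams({
    grant_type: 'refresh_token',
    refresh_token: refreshToken,
    client_id: clientId,
    client_secret: clientSecret,
  });
  return { url: tokenUrl, body: body.toString() };
}

// Usage: POST this body with Content-Type application/x-www-form-urlencoded.
const req = buildRefreshRequest(
  'https://api.example-hris.com/oauth/token', // illustrative endpoint
  'abc123',
  's3cret',
  'dGhpcyBpcyBhIHJlZnJl'
);
```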

For a full walkthrough of this pattern applied to another vendor migration, see our [guide to migrating from Merge.dev without re-authenticating customers](https://truto.one/blog/how-to-migrate-from-mergedev-without-re-authenticating-customers/).

### What you need to export from Finch

For each Finch connection, you need to extract the raw credentials:

*   **`access_token`** - the current OAuth access token
*   **`refresh_token`** - used to obtain new access tokens when the current one expires
*   **`expires_at`** - the token expiry timestamp
*   **`scope`** - the granted OAuth scopes
*   **Provider identifier** - which HRIS/payroll system (e.g., `bamboohr`, `gusto`, `adp_workforce_now`)
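One way to represent each exported connection is a record mirroring the fields above, plus an auth-type flag so you can separate silently migratable connections from ones that need re-auth. The shape itself is our own sketch, not an official Finch export format:

```typescript
// Sketch of an exported credential record. The interface is our own shape
// for migration tooling, not an official Finch export format.
interface ExportedConnection {
  access_token: string;
  refresh_token: string | null;
  expires_at: string;      // ISO 8601 timestamp
  scope: string;
  provider: string;        // e.g. 'bamboohr', 'gusto', 'adp_workforce_now'
  auth_type: 'oauth' | 'assisted';
}

// Only standard OAuth connections with a refresh token can be silently
// migrated; assisted connections will require re-authentication.
function isSilentlyMigratable(conn: ExportedConnection): boolean {
  return conn.auth_type === 'oauth' && conn.refresh_token !== null;
}
```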

> [!WARNING]
> **Note on Assisted Integrations:** Finch manages tokens internally. Depending on your Finch plan and integration type, you may need to work with Finch's support team or use their API to retrieve the raw OAuth tokens for each connection. For "assisted" integrations (which make up a large portion of Finch's 250+ provider catalog), credential export may not be possible since those connections use screen-scraping or credential-based access rather than standard OAuth. Those specific connections will require re-authentication.

### The OAuth App Ownership Question

This is the part most teams miss. For the token migration to work, the **OAuth application** (the `client_id` and `client_secret`) must be the same on both sides. There are two paths:

1.  **You own the OAuth app.** If you registered your own OAuth application with BambooHR, Gusto, etc., and configured Finch to use it, you can point the new unified API platform at the same OAuth app. Tokens stay valid because the `client_id` has not changed.
2.  **Finch owns the OAuth app.** If Finch's own OAuth application was used (common for smaller customers), you will need to register your own OAuth apps with each provider and re-authenticate those connections. There is no way around this—tokens are bound to the `client_id` that issued them.

This is why [OAuth app ownership](https://truto.one/blog/oauth-app-ownership-how-to-avoid-vendor-lock-in-when-choosing-a-unified-api-provider/) matters so much when selecting any integration vendor. If you do not own the OAuth app, you do not own the relationship.

```mermaid
flowchart LR
    A["Export tokens<br>from Finch"] --> B{"Who owns the<br>OAuth app?"}
    B -->|You own it| C["Import tokens<br>into new platform"]
    B -->|Finch owns it| D["Register your own<br>OAuth apps first"]
    D --> E["Re-auth required<br>for these connections"]
    C --> F["Tokens work<br>immediately"]
    F --> G["Platform handles<br>refresh going forward"]
```

### Importing into a Generic Credential Store

Most unified APIs struggle with importing tokens because they use hardcoded, integration-specific database columns. If a platform has a `finch_token` column or a `workday_auth` table, importing arbitrary third-party tokens requires custom engineering work from the vendor.

A multi-category unified API platform like Truto uses a fundamentally different architecture. Truto's database contains zero integration-specific columns. Connected accounts (often referred to as [linked accounts](https://truto.one/blog/what-is-a-linked-account-in-unified-apis-architecture-guide/)) are stored in an `integrated_account` table, and the credentials live inside a generic `context` JSON object on each connected account record. 

To import your existing users, you simply format your exported tokens into Truto's expected JSON structure and write them to the API:

```json
{
  "context": {
    "oauth": {
      "token": {
        "access_token": "eyJhbGciOiJSUzI1NiIs...",
        "refresh_token": "dGhpcyBpcyBhIHJlZnJl...",
        "expires_at": "2026-04-09T14:30:00.000Z",
        "token_type": "bearer",
        "scope": "employees:read payroll:read"
      }
    }
  },
  "authentication_method": "oauth2",
  "integration_name": "bamboohr"
}
```
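In practice you would script this import across all your connections. A sketch of a payload builder that converts one exported record into the context shape above (confirm the exact field names and endpoint against Truto's API reference before running a real import):

```typescript
// Sketch: build the import payload for one exported connection. The payload
// shape mirrors the JSON example above; confirm exact field names against
// the platform's API reference before a real import.
function buildImportPayload(conn: {
  access_token: string;
  refresh_token: string;
  expires_at: string;
  scope: string;
  provider: string;
}) {
  return {
    context: {
      oauth: {
        token: {
          access_token: conn.access_token,
          refresh_token: conn.refresh_token,
          expires_at: conn.expires_at,
          token_type: 'bearer',
          scope: conn.scope,
        },
      },
    },
    authentication_method: 'oauth2',
    integration_name: conn.provider,
  };
}
```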

### Proactive Token Refresh After Import

Once tokens are injected, the new platform needs to keep them alive. Truto takes over the lifecycle management and does not wait for tokens to expire. The platform schedules a proactive refresh alarm 60 to 180 seconds before the `expires_at` timestamp (with a randomized buffer to spread load). 

A background worker acts as a distributed lock, ensuring that concurrent API requests or sync jobs do not trigger race conditions while the token is being refreshed. If the token is already expired at import time, the platform will attempt an immediate refresh using the refresh token. If that fails—because the refresh token was revoked or the OAuth app changed—the account gets flagged as `needs_reauth` and a webhook fires so you can notify the customer.

The key point: **most OAuth refresh tokens do not expire when you move them between platforms.** A refresh token issued by BambooHR to `client_id=abc123` will work from any server that presents the same `client_id` and `client_secret`. The provider does not know or care which unified API vendor is calling.
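The proactive-refresh behavior described above boils down to a simple scheduling calculation. A sketch, assuming the 60-180 second randomized buffer from the description (the helper itself is illustrative, not Truto's implementation):

```typescript
// Sketch: compute when to fire a proactive token refresh -- 60 to 180
// seconds before expiry, with a randomized buffer to spread refresh load
// across many connections instead of stampeding at once.
function refreshDelayMs(expiresAt: string, now: Date = new Date()): number {
  const bufferSeconds = 60 + Math.random() * 120; // random point in [60, 180)
  const delay =
    new Date(expiresAt).getTime() - now.getTime() - bufferSeconds * 1000;
  // Already expired, or inside the buffer window: refresh immediately.
  return Math.max(delay, 0);
}
```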

## Step 2: Handling Rate Limits Post-Migration

After migrating tokens, your application will be making API calls through a different intermediary. Rate limit behavior changes, and you need to plan for it. When migrating integration platforms, engineering teams often misunderstand how rate limiting should be handled at the proxy layer. Many developers want their unified API to magically absorb rate limits, queue requests indefinitely, and hide HTTP 429 errors.

This is a dangerous architectural anti-pattern. If a proxy layer holds HTTP connections open while applying exponential backoff, it quickly exhausts connection pools. It hides backpressure from the client, leading to cascading failures across your infrastructure.

Radical honesty is required here: **Truto does not retry, throttle, or absorb rate limit errors.**

When an upstream provider returns HTTP 429 (Too Many Requests), Truto passes that error directly back to your application. This is a deliberate design choice—your application knows its own traffic patterns and priorities better than any intermediary can.

Instead, Truto normalizes rate limit information from terrible, inconsistent upstream APIs into standardized response headers based on the [IETF RateLimit header fields specification](https://truto.one/docs/api-reference/overview/rate-limits). 

Every upstream provider formats rate limit information differently. Salesforce uses `Sforce-Limit-Info`. HubSpot uses `X-HubSpot-RateLimit-Daily`. BambooHR gives you almost nothing. The IETF spec defines a uniform way to present this data. Truto normalizes all of these into the same three headers regardless of which provider you are calling:

| Header | Meaning |
|--------|--------|
| `ratelimit-limit` | Maximum requests allowed in the current window |
| `ratelimit-remaining` | Requests remaining before you hit the limit |
| `ratelimit-reset` | Seconds until the rate limit window resets |

Your application is responsible for reading these headers and implementing its own retry logic. This gives you absolute control over how backpressure is handled. Your backoff logic then becomes provider-agnostic:

```typescript
// Example: Implementing caller-controlled backoff using Truto's normalized headers
async function callWithBackoff(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const resetSeconds = parseInt(
        response.headers.get('ratelimit-reset') || '60', 10
      );
      // Add jitter to prevent thundering herd
      const jitter = Math.random() * 2;
      console.warn(`Rate limited. Sleeping for ${resetSeconds + jitter} seconds.`);
      
      await new Promise(resolve => setTimeout(resolve, (resetSeconds + jitter) * 1000));
      continue;
    }

    // Check remaining quota proactively
    const remaining = parseInt(
      response.headers.get('ratelimit-remaining') || '100', 10
    );
    if (remaining < 5) {
      // Slow down before hitting the wall
      await new Promise(resolve => setTimeout(resolve, 1000));
    }

    return response;
  }
  throw new Error('Rate limit retries exhausted');
}
```

By passing the 429 directly back to the caller alongside standardized headers, Truto [ensures you have the exact metadata needed](https://truto.one/blog/best-practices-for-handling-api-rate-limits-and-retries-across-multiple-third-party-apis/) to implement resilient queueing on your end. 

> [!NOTE]
> If you were previously relying on Finch to handle rate limiting internally (Finch batches and caches data on a daily refresh cycle), switching to a real-time API model means your application is now directly exposed to upstream rate limits. Build your backoff logic before you flip the switch.

## Step 3: Using Declarative Mappings to Mimic Old Responses

This is where most migration guides stop, and most migrations fail. You have moved the tokens. You have handled rate limits. But your frontend code still expects Finch's specific response schema—field names like `individual_id`, `first_name`, `last_name`, `department.name`, and Finch-specific envelope structures.

If your new unified API returns a different schema, the JSON shapes will change. A field that used to be called `employee_status` might now be called `status`. You have two choices: rewrite every frontend component that consumes HRIS data, or reshape the new API's responses to match the old format.

Rewriting your entire frontend to accommodate a new schema defeats the purpose of a fast migration.

**The declarative mapping approach avoids frontend rewrites entirely.** Truto uses JSONata—a functional query and transformation language purpose-built for reshaping JSON objects—to define how responses are shaped. Every field mapping is a declarative expression stored as configuration, not compiled code.

```mermaid
graph TD
  A[Third-Party API Raw Response] -->|Proxy Layer| B(Truto Engine)
  B --> C{JSONata Mapping Expression}
  C -->|Transforms Data| D[Unified Schema]
  C -->|Customer Override| E["Legacy Finch Schema Mimic<br>No Frontend Changes Needed"]
```

Here is a concrete example. Suppose Finch returned employee data in this shape:

```json
{
  "individual_id": "finch-id-123",
  "first_name": "Jane",
  "last_name": "Doe",
  "department": { "name": "Engineering" },
  "manager": { "id": "finch-id-456" },
  "is_active": true
}
```

And Truto's default unified HRIS schema returns:

```json
{
  "id": "truto-id-123",
  "first_name": "Jane",
  "last_name": "Doe",
  "department": "Engineering",
  "manager_id": "truto-id-456",
  "employment_status": "active"
}
```

Because Truto's mappings are just JSONata strings stored in the database, you can write an override expression that takes Truto's standard output and reshapes it to exactly mimic the legacy Finch structure:

```jsonata
/* Example JSONata mapping override to mimic a legacy Finch HRIS schema */
response.{
  "individual_id": id,
  "first_name": first_name,
  "last_name": last_name,
  "department": { "name": department },
  "manager": { "id": manager_id },
  "is_active": employment_status = "active"
}
```
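If you want to unit-test the reshaping logic against fixture data before deploying the override, the same transform is straightforward to express in plain TypeScript (field names follow the two example schemas above):

```typescript
// Plain-TypeScript equivalent of the JSONata override above, useful for
// unit-testing the mapping against fixtures before deploying it as config.
interface UnifiedEmployee {
  id: string;
  first_name: string;
  last_name: string;
  department: string;
  manager_id: string;
  employment_status: string;
}

function toLegacyFinchShape(e: UnifiedEmployee) {
  return {
    individual_id: e.id,
    first_name: e.first_name,
    last_name: e.last_name,
    department: { name: e.department },
    manager: { id: e.manager_id },
    is_active: e.employment_status === 'active',
  };
}
```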

Truto supports a three-level override hierarchy, each merged on top of the previous:

1.  **Platform base** - the default mapping that works for most customers.
2.  **Environment override** - your environment's custom mapping (this is where the Finch-compatibility shim lives).
3.  **Account override** - per-customer overrides for edge cases (e.g., a customer with highly specific custom fields).
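Conceptually, the merge order works like layering, with later levels winning per field. A minimal sketch of that precedence (the actual merge semantics on the platform may differ; this only illustrates the order):

```typescript
// Sketch: resolve the effective mapping by layering overrides, later levels
// winning per field. Real platform merge semantics may be deeper than a
// shallow spread; this illustrates precedence order only.
type Mapping = Record<string, string>; // field name -> JSONata expression

function effectiveMapping(
  platformBase: Mapping,
  environmentOverride: Mapping = {},
  accountOverride: Mapping = {}
): Mapping {
  return { ...platformBase, ...environmentOverride, ...accountOverride };
}
```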

You can apply this legacy-mimicking JSONata mapping at the environment level. When a request comes in, Truto fetches the raw data from the third-party API, evaluates the JSONata expression, and returns the exact JSON shape your frontend was already built to consume. 

No code deployments. No frontend rewrites. The entire infrastructure migration happens through data configuration. Once your frontend has fully adopted the new schema, remove the environment override and start consuming the standard unified format directly. The compatibility layer was always meant to be temporary scaffolding.

## What This Migration Does Not Solve

Honesty matters here. Migrating to a multi-category unified API is not a free lunch. There are a few constraints to be aware of:

*   **Assisted integrations may not migrate:** If you rely on Finch's "assisted" connections (screen-scraping or credential-based access for providers without APIs), those cannot be token-migrated. By default, Finch requests fresh data from providers each day, with assisted integrations refreshing every 7 days. Some of those providers simply do not have OAuth APIs. You will need to evaluate whether the new platform covers those providers natively or if you need a fallback strategy.
*   **Schema differences are real:** Even with the JSONata compatibility layer, there will be edge cases where Finch's schema exposed data in ways the new platform does not. Audit your Finch API usage before starting the migration, not after.
*   **Rate limit exposure increases:** Finch's daily sync model means you were never hitting upstream APIs in real-time. A real-time unified API means your application now directly contends with provider rate limits. This is more flexible but requires more careful engineering.

## Expanding Your Roadmap: From HRIS to CRM, Ticketing, and Beyond

The token migration and response mapping are the hard parts. Once you have successfully migrated your OAuth tokens and mapped your response schemas, you have effectively eliminated the re-authentication cliff. Your existing users remain connected, and your application continues to function normally.

But the real value of migrating to a multi-category unified API is what happens next. The actual expansion into new categories is surprisingly fast.

You are no longer restricted to employment data. Because Truto uses the same generic execution pipeline for every integration, the infrastructure you just implemented for HRIS automatically supports over 27 distinct verticals. 

Here is what becomes possible immediately after migration:

| Category | Example Integrations | Use Case |
|----------|---------------------|----------|
| **CRM** | Salesforce, HubSpot, Pipedrive | Sync customer records with your product |
| **ATS** | Greenhouse, Lever, Ashby | Pull candidate data into your HR workflow |
| **Ticketing** | Jira, Zendesk, Linear | Surface support tickets in your product |
| **Accounting** | QuickBooks, Xero, NetSuite | Reconcile payroll data with financial records |
| **Knowledge Base** | Notion, Confluence | Feed company docs into RAG pipelines |
| **Directory** | Okta, Microsoft Entra ID, Google Workspace | Sync user provisioning across tools |

Each of these categories follows the same pattern: one API call, one schema, and the same credential management you already set up. If your sales team requests a Salesforce integration, your engineering team does not need to learn SOQL, figure out Salesforce's pagination quirks, or build a new webhook ingestion pipeline. They simply query Truto's CRM unified model. If a customer needs a Zendesk integration, you query the Ticketing model.

For teams building [HRIS integrations specifically](https://truto.one/blog/what-are-hris-integrations-the-2026-guide-for-b2b-saas-pms/), the unified HRIS model covers employees, employments, compensations, time-off requests, time-off balances, locations, groups, job roles, and company benefits—with the same depth as dedicated HRIS-only vendors, but within a broader platform.

By migrating beyond a single-category provider, you transform integrations from a constant engineering bottleneck into a predictable, scalable data operation.

## Making the Move: A Practical Checklist

If you are ready to execute this migration, follow these precise steps:

1.  **Audit your Finch connections:** Categorize each connection by auth type: OAuth (migratable) vs. assisted/credential-based (requires re-auth).
2.  **Verify OAuth app ownership:** If Finch's OAuth app was used, register your own apps with each provider before starting.
3.  **Export tokens:** Pull the raw tokens for all OAuth-based connections. Store them securely.
4.  **Set up the new platform:** Create integrated accounts in your new multi-category API, importing tokens into the generic credential context.
5.  **Apply response mapping overrides:** Write JSONata configurations to match Finch's response schema during the transition.
6.  **Run both systems in parallel:** Maintain a validation period. Compare responses. Fix mapping edge cases.
7.  **Switch traffic:** Route requests to the new platform. Monitor for `needs_reauth` webhook events.
8.  **Retire the Finch dependency:** Remove the response mapping compatibility layer once your frontend migration is complete.

The entire process for a team with 100-200 Finch connections typically takes two to four weeks of engineering time. The payoff is immediate access to a massive integration catalog through the exact same API you already know.

> [!TIP]
> **Architectural Takeaway:** Migrating unified APIs does not require starting from scratch. By treating OAuth tokens as portable strings and using JSONata to mimic legacy schemas, you can swap out integration infrastructure without impacting your enterprise users.

## Operational Playbook: Customer Communication During Migration

Even with token portability eliminating re-authentication for most connections, some subset of your customers will need to reconnect. Connections using vendor-owned OAuth apps, assisted integrations, and expired refresh tokens all produce accounts that cannot be silently migrated. This playbook gives product, CS, and engineering teams a concrete plan for handling those reconnections with minimal friction.

### One-Page Migration Runbook

| Phase | Timeline | Owner | Key Actions |
|-------|----------|-------|-------------|
| **Audit** | Week 1 | Engineering | Categorize connections: silent-migratable (your OAuth app) vs. reconnect-required (vendor OAuth app, assisted). Export token inventory. |
| **Prepare** | Week 1-2 | Engineering + Product | Set up new platform. Import migratable tokens. Write JSONata response mappings. Build reconnect UI flow. |
| **Notify** | Week 2 | CS + Product | Send advance notice to customers requiring reconnection. Set up in-app banners for affected accounts. |
| **Canary** | Week 2-3 | Engineering | Migrate 5-10% of silent-migratable connections. Run both platforms in parallel. Compare responses field-by-field. |
| **Full migration** | Week 3-4 | Engineering | Migrate remaining silent connections. Route production traffic to new platform. |
| **Reconnect** | Week 3-5 | CS + Engineering | Monitor webhook events for auth failures. Trigger automated reconnect outreach. Provide one-click reconnect links. |
| **Cleanup** | Week 5-6 | Engineering | Remove legacy response mappings. Retire old vendor dependency. |

### Sample In-App and Email Messages for Reconnect

Not every customer needs the same message. Tailor communication based on the reconnect reason and urgency.

**Pre-migration email (sent 1-2 weeks before cutover):**

> Subject: Action needed: Re-authorize your [Provider Name] integration by [Date]
>
> Hi [Name],
>
> We are upgrading our integration infrastructure to support more tools and faster data syncs. Your [Provider Name] connection needs a quick re-authorization to continue working after [Date].
>
> This takes about 30 seconds - click the link below and sign in to [Provider Name] when prompted.
>
> **[Reconnect Now →]**
>
> Nothing changes about what data we access or how we use it. The OAuth permissions are identical. If you have questions, reply to this email or contact [support email].

**In-app banner (for accounts flagged as needing re-auth):**

> ⚠️ Your [Provider Name] integration needs re-authorization. **[Reconnect now →]** This takes about 30 seconds and does not change your data permissions.

**Post-failure automated email (triggered by authentication_error webhook):**

> Subject: Your [Provider Name] integration was disconnected
>
> Hi [Name],
>
> We noticed your [Provider Name] connection stopped syncing. This usually means the authorization token expired during our infrastructure upgrade.
>
> Click below to reconnect in one step:
>
> **[Reconnect Now →]**
>
> If you do not reconnect within 14 days, syncing remains paused. No data is lost - syncing resumes from where it left off once you reconnect.

### Code Snippets: Webhook Listener and One-Click Reconnect Link

**Listening for authentication errors**

When a migrated token fails to refresh, Truto emits an `integrated_account:authentication_error` webhook. Use this to trigger automated customer outreach:

```typescript
// Webhook handler for Truto authentication error and reactivation events
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhooks/truto', async (req, res) => {
  const { event, data } = req.body;

  if (event === 'integrated_account:authentication_error') {
    const { integrated_account_id, integration_name } = data;

    // Look up the customer associated with this connected account
    const customer = await db.customers.findByIntegratedAccount(integrated_account_id);
    if (!customer) {
      console.warn(`No customer found for account ${integrated_account_id}`);
      return res.sendStatus(200);
    }

    // Generate a one-click reconnect link
    const reconnectUrl = buildReconnectUrl(integrated_account_id, integration_name);

    // Trigger outreach: email + in-app banner
    await sendReconnectEmail({
      to: customer.email,
      customerName: customer.name,
      providerName: integration_name,
      reconnectUrl,
    });

    await db.notifications.create({
      customerId: customer.id,
      type: 'integration_reauth_required',
      metadata: { integrated_account_id, integration_name, reconnectUrl },
    });
  }

  // Handle successful reconnections - clear banners and confirm
  if (event === 'integrated_account:reactivated') {
    const { integrated_account_id } = data;
    await db.notifications.dismissByAccountId(integrated_account_id);

    const customer = await db.customers.findByIntegratedAccount(integrated_account_id);
    if (customer) {
      await sendConfirmationEmail({
        to: customer.email,
        message: 'Your integration has been reconnected. Data syncing has resumed.',
      });
    }
  }

  res.sendStatus(200);
});
```

**Generating one-click reconnect deep links**

Build a signed, time-limited URL that drops the user directly into the re-authorization flow for the specific provider:

```typescript
import jwt from 'jsonwebtoken';

function buildReconnectUrl(integratedAccountId: string, integrationName: string): string {
  const baseUrl = process.env.APP_BASE_URL; // e.g., https://app.yourproduct.com

  // Create a short-lived token so the link cannot be reused indefinitely
  const reconnectToken = jwt.sign(
    { integratedAccountId, integrationName, action: 'reconnect' },
    process.env.RECONNECT_SECRET as string,
    { expiresIn: '7d' }
  );

  return `${baseUrl}/integrations/reconnect?token=${reconnectToken}`;
}

// Server-side route handler (same Express app as above): verify the token
// and kick off the OAuth re-auth flow
app.get('/integrations/reconnect', async (req, res) => {
  try {
    const payload = jwt.verify(
      req.query.token as string,
      process.env.RECONNECT_SECRET as string
    ) as { integratedAccountId: string; integrationName: string };

    // Initiate the re-auth flow through Truto's Link component,
    // targeting the specific connected account
    const linkSession = await trutoClient.createLinkSession({
      integrated_account_id: payload.integratedAccountId,
      integration_name: payload.integrationName,
    });

    res.redirect(linkSession.url);
  } catch {
    res.status(400).send('This reconnect link has expired. Please contact support.');
  }
});
```

### Rollout Plan and Rollback Considerations

**Phased rollout**

Do not migrate all connections at once. A phased approach limits blast radius:

1. **Canary group (5-10% of connections):** Pick a mix of providers and customer sizes. Run both old and new platforms in parallel for 3-5 days. Compare API responses field-by-field.
2. **Early adopters (25%):** Expand to customers who have expressed interest in new integration categories - they benefit immediately from the expanded catalog.
3. **General availability (the remainder):** Migrate the rest once canary and early adopter phases show stable token refresh rates and response parity.
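One way to implement these phases is to assign every connection to a stable bucket, so an account never flips between platforms as the rollout widens. A minimal sketch, assuming connections are identified by an `integrated_account_id` string; the hash choice and phase thresholds are illustrative:

```typescript
import { createHash } from 'crypto';

// Map an account id to a stable bucket in [0, 100). The same id always lands
// in the same bucket, so raising the rollout percentage only ever adds
// accounts to the new platform - it never removes them.
function rolloutBucket(integratedAccountId: string): number {
  const digest = createHash('sha256').update(integratedAccountId).digest();
  return digest.readUInt32BE(0) % 100;
}

type Phase = 'canary' | 'early_adopters' | 'general_availability';

// Which phase an account belongs to (illustrative 10% / 25% / rest split).
function phaseFor(integratedAccountId: string): Phase {
  const bucket = rolloutBucket(integratedAccountId);
  if (bucket < 10) return 'canary';
  if (bucket < 35) return 'early_adopters';
  return 'general_availability';
}

// Route to the new platform once the rollout percentage covers the bucket.
function useNewPlatform(integratedAccountId: string, rolloutPercent: number): boolean {
  return rolloutBucket(integratedAccountId) < rolloutPercent;
}
```

Because the bucket is derived from a hash rather than stored state, your API layer and background sync workers can each compute it independently and still agree on the routing decision.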

**What to monitor during rollout**

*   **Token refresh success rate:** Track the percentage of imported tokens that successfully refresh on the new platform. Anything below 95% in the canary phase warrants investigation before expanding.
*   **Authentication error webhook volume:** A spike in `integrated_account:authentication_error` events indicates a systemic issue - possibly a misconfigured OAuth app or missing scopes.
*   **Response diff rate:** Compare a sample of responses from both platforms. Field mismatches point to gaps in your JSONata mapping overrides.
*   **Support ticket volume:** If reconnect-related tickets exceed your normal baseline by more than 20%, pause the rollout and investigate.
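The response-diff check is straightforward to automate. The sketch below compares two normalized API responses field by field and reports the paths that differ; the object shapes and the sampling strategy are assumptions, not part of either platform's API:

```typescript
// Recursively collect the JSON paths at which two normalized responses differ.
function diffPaths(oldResp: unknown, newResp: unknown, path = ''): string[] {
  if (oldResp === newResp) return [];
  if (
    typeof oldResp !== 'object' || oldResp === null ||
    typeof newResp !== 'object' || newResp === null
  ) {
    // Leaf mismatch: report the path where the values diverge.
    return [path || '(root)'];
  }
  // Union of keys from both sides catches fields missing on either platform.
  const keys = new Set([
    ...Object.keys(oldResp as Record<string, unknown>),
    ...Object.keys(newResp as Record<string, unknown>),
  ]);
  const diffs: string[] = [];
  for (const key of keys) {
    diffs.push(
      ...diffPaths(
        (oldResp as Record<string, unknown>)[key],
        (newResp as Record<string, unknown>)[key],
        path ? `${path}.${key}` : key
      )
    );
  }
  return diffs;
}

// Diff rate over a sample of paired responses: fraction with any mismatch.
function diffRate(pairs: Array<[unknown, unknown]>): number {
  if (pairs.length === 0) return 0;
  const mismatched = pairs.filter(([a, b]) => diffPaths(a, b).length > 0).length;
  return mismatched / pairs.length;
}
```

Feeding the mismatched paths into a counter grouped by field name quickly surfaces which JSONata mapping overrides need attention.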

**Rollback strategy**

Keep the old vendor active (read-only) throughout the migration window. If something goes wrong:

1. **Revert traffic routing:** Point your application back to the old vendor's API. Since you have not deleted connections on the old platform, this is a configuration change - not a re-migration.
2. **Pause token refresh on the new platform:** Prevent the new platform from refreshing tokens while traffic routes through the old vendor. Concurrent refreshes from two platforms against the same OAuth app can cause token invalidation on some providers.
3. **Investigate before retrying:** Determine whether the issue is token-related (wrong `client_id`, expired refresh tokens), schema-related (mapping gaps), or infrastructure-related (rate limits, timeouts). Fix the root cause before re-attempting.

The safest rollback window is 7-14 days. After that, token drift between the two platforms - one refreshing, one stale - makes a clean revert progressively harder. Plan your migration with this window in mind.
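Step 1 of the rollback is only a configuration change if the routing decision is already centralized behind a single seam. A minimal sketch of that seam, where the type names, the override table, and both base URLs are illustrative:

```typescript
type Platform = 'old_vendor' | 'new_platform';

interface RoutingConfig {
  // Global kill switch: flipping this reverts every account at once.
  defaultPlatform: Platform;
  // Per-account overrides roll back a single problematic connection
  // without touching the rest of the fleet.
  overrides: Map<string, Platform>;
}

function resolvePlatform(config: RoutingConfig, integratedAccountId: string): Platform {
  return config.overrides.get(integratedAccountId) ?? config.defaultPlatform;
}

function baseUrlFor(platform: Platform): string {
  // Placeholder hosts - substitute your real vendor endpoints.
  return platform === 'new_platform'
    ? 'https://api.newplatform.example.com'
    : 'https://api.oldvendor.example.com';
}
```

Because the old vendor stays active and read-only throughout the window, flipping `defaultPlatform` back is the entire rollback for read traffic; remember to also pause token refresh on the new platform, per step 2.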

> Ready to expand your integration roadmap beyond HRIS? Talk to our engineering team about migrating your existing OAuth tokens to Truto's multi-category unified API. We have done this before and will walk you through token export, credential import, and response mapping configuration.
>
> [Talk to us](https://cal.com/truto/partner-with-truto)
