How to Integrate Multiple ATS Platforms (Greenhouse, Lever, Workable)

Learn how to integrate Greenhouse, Lever, and Workable APIs simultaneously. Covers schema normalization, auth differences, rate limits, and using a unified ATS API to ship faster.

Roopendra Talekar · 19 min read

Your product doesn't integrate with Greenhouse. The $200K ARR deal that was "basically closed" just died in legal review because the prospect's talent acquisition team won't adopt a tool that doesn't plug into their ATS. Your engineering lead says it's a two-week project. They're wrong — and you've probably heard this before if you've dealt with HRIS integrations.

Building the initial HTTP request to fetch a candidate is easy. Managing OAuth token lifecycles, translating inconsistent data models, handling brutal rate limits, and maintaining webhooks when a vendor pushes a breaking change is where the real time goes.

This guide breaks down what it actually takes to integrate with Greenhouse, Lever, and Workable simultaneously — the schema mismatches, auth headaches, pagination traps, and webhook edge cases — and how to ship these integrations without burning your team's entire quarter.

The Enterprise Deal Blocker: Why You Need Greenhouse, Lever, and Workable Integrations

The ATS market is not slowing down. According to MarketsandMarkets, the applicant tracking system market is projected to grow from $3.28 billion in 2025 to $4.88 billion by 2030, at a CAGR of 8.2%. That growth means more companies buying more ATS platforms, and more prospects expecting your product to work with whichever one they've chosen.

Your prospects aren't all on the same ATS. Greenhouse dominates structured hiring at scale — companies like DoorDash and Wayfair use it. Lever combines ATS and CRM into LeverTRM, popular with mid-market sourcing-heavy teams. Workable appeals to growing companies that want something faster to deploy. Your sales pipeline probably has prospects on all three.

HR tech stacks are notoriously disjointed. According to HR.com's 2025 State of HR Technology report, the majority of organizations use multiple paid HR solutions, with roughly 62% running two to four paid systems and only about 15% relying on a single platform. The same report found that only 10% of organizations say their HR solutions integrate extremely well, while about 21% describe integration as poor or very poor.

If your product is one more system in that stack, the customer expects you to be the part that doesn't make the mess worse. Recruiting teams don't buy your product in isolation. They buy it as a node in a bigger workflow that already includes an ATS, an HRIS, interview scheduling, assessments, and reporting. If your tool can't ingest jobs, attach candidates to applications, react to stage movement, or push outcomes back where they're needed, you're asking them to create a parallel system by hand.

The business case is binary: either you support the ATS platforms your prospects use, or you lose deals. The question is how you get there without destroying your engineering velocity.

The Hidden Costs of Building ATS API Integrations In-House

The two-week estimate your engineer gave you covers the happy path for one integration. That's roughly 10% of the actual work.

Initial build cost per integration: Development costs for a well-scoped SaaS API integration range from roughly $5,000 to $15,000 for standard workflows, but complex integrations with bi-directional sync, custom field mapping, and webhook handling regularly push into the $20,000–$50,000+ range. Multiply that by three ATS platforms and you're looking at a significant chunk of your engineering budget before you've written a line of core product code.

The maintenance multiplier: The number that kills you isn't the build cost — it's the ongoing maintenance. Annual upkeep typically runs 10–20% of the initial development investment. API versioning changes, deprecated endpoints, authentication flow updates, and undocumented behavioral changes mean someone on your team is permanently assigned to babysitting these integrations. Greenhouse recently shipped a v3 Harvest API with a completely different pagination model (cursor-based instead of their older page-number approach). If you'd built against v1, that's a rewrite.

Opportunity cost is the real killer: Every sprint spent wrestling with ATS API pagination quirks is a sprint not spent on the features that differentiate your product. If you're building an interview intelligence platform, your competitive advantage is in AI analysis of interview transcripts — not in figuring out why Greenhouse returns a 429 at 50 requests per 10 seconds while Workable uses a completely different rate limit header scheme.

If someone tells you ATS integration is just a few API calls, they're pricing the demo, not the system. The system is everything around the call:

| Workstream | What gets estimated | What production actually needs |
|---|---|---|
| Jobs sync | List jobs | Department and office mapping, archived states, pagination, caching, visibility rules |
| Candidate sync | List candidates | Candidate vs. application separation, attachments, custom fields, re-hydration, dedupe |
| Realtime updates | Add webhooks | Signature verification, retries, replay protection, queueing, dead-letter handling |
| Write actions | Move stage | Permissions, actor identity, auditability, provider-specific required IDs |
| Support | Read the docs | Surviving when the docs are wrong, stale, or too vague to matter |

There's also a nasty organizational cost. Once you build one ATS connector, sales immediately asks for a second. The moment you build a second, product assumes a third is cheap. By the time you have three, you don't have three integrations — you have the start of an integration platform whether you meant to build one or not. As we've seen in countless integration horror stories, the build vs. buy calculation almost always favors buy for non-core integrations.

Schema Normalization: The Hardest Part of ATS Integration

Schema normalization means taking vendor-specific objects and translating them into one model your product can trust — without erasing the relationships that make the data meaningful. In the ATS domain, this is notoriously difficult because every vendor has a different philosophy on how recruiting data should be structured. If you want the full argument, Why Schema Normalization is the Hardest Problem in SaaS Integrations goes deeper.

The Candidate vs. Application Data Model Split

The most fundamental divergence is how these platforms model the relationship between a person and a job application.

| Concept | Greenhouse | Lever | Workable |
|---|---|---|---|
| Person record | Candidate (separate from applications) | Opportunity (person + job context merged) | Candidate (nested under job) |
| Job application | Application (links Candidate to Job) | Part of the Opportunity object | Candidate record scoped to a job |
| Pipeline stages | JobInterviewStages per Job | Stages on the Opportunity | Stages per job pipeline |
| Moving between stages | PATCH on Application with stage_id | Stage change on Opportunity | Move action on Candidate |
| Rejection | Reject Application with RejectReason | Archive Opportunity with reason | Disqualify Candidate |

This isn't a cosmetic difference. Greenhouse treats the recruitment process as highly structured and requisition-driven: a Candidate represents the person, an Application is the transaction linking that person to a specific Job, and a single candidate can have multiple applications across different roles over time. When you query the Greenhouse API for a candidate, you receive an array of application IDs — if you want to know what stage of the interview process the candidate is in, you must make a secondary request to fetch the specific application data.

Lever was built with a CRM-first philosophy (bringing many of the same challenges we've discussed regarding native CRM integrations). Its Opportunity object conflates what Greenhouse treats as two distinct entities (Candidate and Application). Lever's API heavily centers on this Opportunity — if you want to move a candidate through a pipeline or add a note, you're almost always interacting with the Opportunity, not the core Contact record.

Workable takes a flatter approach where candidates are often tied directly to jobs in specific endpoints. Candidate creation is job-scoped on /jobs/:shortcode/candidates, and fetching candidates frequently requires filtering by job shortcode or specific pipeline stages. Workable also notes that the IDs exposed in the API are not the same as the ones users see in the application UI — a detail that trips up support debugging.

If your product's data model assumes a Greenhouse-style separation, bolting on Lever support means rethinking your relational assumptions from the ground up.

Response Shape Differences

Here's a concrete example of how the same concept — a candidate's name and email — looks across all three APIs:

// Greenhouse Harvest API response
{
  "id": 12345,
  "first_name": "Jane",
  "last_name": "Doe",
  "emails": [
    { "value": "jane@example.com", "type": "personal" }
  ]
}
 
// Lever API response
{
  "id": "abc-def-123",
  "name": "Jane Doe",
  "emails": ["jane@example.com"],
  "phones": [{"value": "+15551234567"}]
}
 
// Workable API response
{
  "id": "ABCD1234",
  "firstname": "Jane",
  "lastname": "Doe",
  "email": "jane@example.com"
}

Three vendors, three ID formats (integer, UUID, alphanumeric string), three approaches to names (split fields, single field, differently-cased split fields), and three approaches to email (array of objects, flat array, single string). Every one of these differences is a mapping you need to build, test, and maintain.
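
In practice that mapping lives best in one normalization function per concept, not scattered across the codebase. A minimal sketch in TypeScript, assuming exactly the three example payloads shown above (real responses carry many more fields, and Lever's name split is lossy by nature):

```typescript
// Minimal sketch: fold the three vendor candidate shapes into one model.
// Assumes the example payloads shown above; real payloads carry more fields.
type UnifiedCandidate = {
  id: string // always a string, since vendors disagree on ID types
  firstName: string
  lastName: string
  emails: string[]
}

type Vendor = 'greenhouse' | 'lever' | 'workable'

function normalizeCandidate(vendor: Vendor, raw: any): UnifiedCandidate {
  switch (vendor) {
    case 'greenhouse':
      return {
        id: String(raw.id), // integer -> string
        firstName: raw.first_name,
        lastName: raw.last_name,
        emails: (raw.emails ?? []).map((e: { value: string }) => e.value),
      }
    case 'lever': {
      // Lever stores a single display name; splitting on the first space
      // is lossy but is what most mappings end up doing in practice.
      const [firstName, ...rest] = (raw.name ?? '').split(' ')
      return {
        id: raw.id,
        firstName,
        lastName: rest.join(' '),
        emails: raw.emails ?? [],
      }
    }
    case 'workable':
      return {
        id: raw.id,
        firstName: raw.firstname,
        lastName: raw.lastname,
        emails: raw.email ? [raw.email] : [],
      }
  }
}
```

The point is less the specific fields than the boundary: vendor-specific shapes get translated once, at the edge, and the rest of your product only ever sees UnifiedCandidate.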

Custom Fields: Where Enterprise Deals Get Complicated

Every enterprise customer has custom fields in their ATS. Greenhouse exposes custom fields through their API, but there's a catch: custom fields on the Application object are gated behind Enterprise-tier accounts. If your customer is on a lower Greenhouse plan, those fields simply won't appear in the API response, and your integration needs to handle that gracefully instead of breaking.

Lever and Workable handle custom fields differently again — different naming conventions, different data types, different nesting structures. Normalizing company_size from Greenhouse's custom field format, Lever's tag-based system, and Workable's custom question format into a single schema is exactly the kind of work that eats sprints alive.

The Normalization Nightmare in Practice

If your application needs to display a simple list of "Active Candidates and their Current Interview Stage," your backend logic looks like this:

  • For Greenhouse: Fetch candidates → Extract application IDs → Fetch applications → Extract stage IDs → Fetch stages → Map to UI.
  • For Lever: Fetch opportunities → Extract stage IDs → Map to UI.
  • For Workable: Fetch candidates filtered by job → Read stage directly from the candidate object → Map to UI.

Writing if (provider === 'greenhouse') logic throughout your codebase creates a fragile, unmaintainable mess. A sane canonical ATS model starts with these entities: Jobs, Candidates, Applications, JobInterviewStages, Interviews, Scorecards, Offers, RejectReasons, Activities, and Attachments.

Do not collapse candidate and application into one table just because one vendor makes it tempting. A person can exist before they apply, apply to multiple jobs, get moved, get rejected, re-enter the funnel, and eventually receive an offer — all as separate events and relationships.

A few implementation rules save a lot of pain later:

  1. Keep canonical fields small and opinionated. First name, last name, emails, phones, current stage, applied date, offer date — top level.
  2. Preserve raw provider payloads. If you throw away vendor-native data too early, custom enterprise requirements come back as emergency tickets.
  3. Treat custom fields as a typed bag, not a landfill. Keep metadata like provider field ID, type, and label alongside the value.
  4. Map stage IDs, not just stage names. Stage names get renamed by customers constantly. IDs are what your write actions need.
  5. Expect hydration passes. Some list endpoints are discovery endpoints, not final truth. You list first, then fetch full detail where necessary.

Authentication, Pagination, and Rate Limits Across Recruiting Platforms

Beyond data models, each ATS has its own authentication scheme, pagination strategy, and rate limiting policy. Getting any of these wrong means your integration silently fails in production.

Authentication: Three Platforms, Three Approaches

Greenhouse uses HTTP Basic Auth for its Harvest API. Your API key is the username, the password is blank. You Base64-encode your_api_key: (note the trailing colon) and send it as an Authorization: Basic header. Greenhouse also requires that API keys be granted specific endpoint permissions individually — access is binary per endpoint.
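
The trailing colon is easy to forget and produces confusing 401s. A small sketch of the header construction (the endpoint URL in the usage comment is illustrative):

```typescript
// Sketch: Greenhouse Harvest API auth header. The API key is the Basic Auth
// username and the password is empty, hence the trailing colon before encoding.
function greenhouseAuthHeader(apiKey: string): string {
  const encoded = Buffer.from(`${apiKey}:`).toString('base64')
  return `Basic ${encoded}`
}

// Usage (illustrative):
// fetch('https://harvest.greenhouse.io/v1/candidates', {
//   headers: { Authorization: greenhouseAuthHeader('your_api_key') },
// })
```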

Lever supports OAuth 2.0, which means managing the full authorization code flow: redirect URLs, authorization codes, token exchange, and refresh token rotation. When a token expires, you need the refresh token to get a new pair. If the refresh fails, you need to prompt the customer to re-authenticate. As we've written in OAuth at Scale, token refresh is one of those problems that looks solved until you hit 500 connected accounts.

Workable offers both bearer token authentication and OAuth 2.0 for partners. The OAuth flow requires pre-authorization by Workable to receive client_id and client_secret. Scopes are explicit and additive — r_jobs, r_candidates, w_candidates — meaning you request exactly the permissions you need. Workable also has no CORS support, which matters if anyone on your team was planning to shortcut with browser-side calls.

Pagination: The Silent Complexity Multiplier

Pagination sounds simple until you're debugging why your sync job stopped halfway through a customer's 10,000 candidates.

Greenhouse Harvest API v3 uses cursor-based pagination with opaque cursors returned in the Link header (RFC 5988). The cursor value is Base64-encoded, and Greenhouse explicitly warns you not to parse or construct it yourself. When you pass a cursor, it must be the only query parameter — you can't combine it with filters. Greenhouse v1/v2 uses the older Link header approach with next, prev, and last URLs plus page and per_page query parameters (max 500 per page). If you built against v1 and need to migrate to v3, your pagination logic changes completely.

Lever uses cursor-based pagination with a next token in the response body rather than in headers. A paginated list response includes a next attribute containing a token, which you pass as the offset parameter in your subsequent request.

Workable paginates differently again, using limit and since_id (or offset depending on the endpoint) with pagination metadata inline in the response.

Three pagination strategies means three sets of iteration logic, three sets of rate-limit-aware retry policies, and three different ways your sync can silently stop returning data.
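
One way to contain the divergence is to isolate "where does the next-page token live" into a single function per vendor. A hedged sketch following the patterns described above (treat the exact Workable shape, here assumed to be an inline paging.next field, as something to verify against current docs):

```typescript
// Sketch: one place to extract the "next page" token for each vendor.
// Greenhouse v3: opaque cursor in the Link header. Lever: `next` token in the
// body, passed back as `offset`. Workable: inline metadata (shape assumed).
type Page = { headers: Record<string, string>; body: any }

function nextCursor(
  vendor: 'greenhouse' | 'lever' | 'workable',
  page: Page
): string | null {
  switch (vendor) {
    case 'greenhouse': {
      // e.g. Link: <https://...?cursor=abc123>; rel="next"
      const link = page.headers['link'] ?? ''
      const match = link.match(/<[^>]*[?&]cursor=([^&>]+)[^>]*>;\s*rel="next"/)
      return match ? match[1] : null
    }
    case 'lever':
      return page.body.next ?? null
    case 'workable':
      return page.body.paging?.next ?? null
  }
}
```

The calling loop stays identical across vendors; only this extraction (and the parameter name you send the token back under) differs.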

Rate Limits: No Two Vendors Agree

| Platform | Rate Limit | Header Format |
|---|---|---|
| Greenhouse | ~50 requests per 10 seconds | X-RateLimit-Limit, X-RateLimit-Remaining |
| Lever | ~10 req/sec steady state, bursts to 20 | Response headers (varies by endpoint) |
| Workable | ~10 requests per 10 seconds | X-Rate-Limit-Limit, X-Rate-Limit-Remaining, X-Rate-Limit-Reset |

Note the inconsistency: Greenhouse writes RateLimit as a single word (X-RateLimit-Limit), while Workable hyphenates it (X-Rate-Limit-Limit). This kind of trivial-but-breaking inconsistency is the daily reality of multi-vendor API integration. If you write a simple while loop to fetch all candidates without backoff logic, you will get throttled almost immediately — and Workable's punishing 10-requests-per-10-seconds limit means your sync jobs will fail outright without an exponential backoff queue.
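
A minimal backoff sketch, assuming Node 18+ global fetch and a Retry-After header expressed in seconds (worth verifying against each vendor's actual 429 responses):

```typescript
// Sketch: rate-limit-aware fetching with exponential backoff and jitter.
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

// Exponential backoff with "equal jitter": half fixed, half random,
// capped so retries never sleep longer than capMs.
function backoffMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt)
  return exp / 2 + Math.random() * (exp / 2)
}

async function fetchWithBackoff(url: string, init: any, maxAttempts = 5): Promise<any> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init)
    if (res.status !== 429 || attempt >= maxAttempts) return res
    // Honor the vendor's Retry-After when present; otherwise back off.
    const retryAfter = Number(res.headers.get('retry-after'))
    await sleep(retryAfter > 0 ? retryAfter * 1000 : backoffMs(attempt))
  }
}
```

The jitter matters: without it, every worker that got throttled at the same moment retries at the same moment, and you get throttled again in lockstep.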

How to Handle ATS Webhooks Without Losing Events

Polling APIs for changes is inefficient and eats your rate limits. The correct approach is webhooks — you subscribe to events like candidate_created or application_stage_changed, and the ATS pushes data to your server. But webhooks introduce their own set of engineering challenges.

Signature verification varies by vendor. Greenhouse uses HMAC-style signature verification and retries up to 7 attempts over roughly 15 hours on failures. Lever signs webhook requests using SHA-256 HMAC over the request token and timestamp, and requires HTTPS endpoints. Workable exposes webhook subscription endpoints with its own verification scheme. Each vendor uses a different hashing algorithm and header format.
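
For the Lever case, a verification sketch: HMAC-SHA256 over token plus timestamp, compared in constant time. The body field names (token, triggeredAt, signature) follow Lever's documented webhook payload, but confirm them against the current docs before relying on this:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Sketch of Lever-style webhook verification. Assumes the signing secret
// configured in Lever and the token/triggeredAt/signature fields in the body.
function verifyLeverSignature(
  body: { token: string; triggeredAt: number; signature: string },
  signingSecret: string
): boolean {
  const expected = createHmac('sha256', signingSecret)
    .update(body.token + body.triggeredAt)
    .digest('hex')
  // Constant-time comparison to avoid leaking signature prefixes via timing.
  const a = Buffer.from(expected)
  const b = Buffer.from(body.signature)
  return a.length === b.length && timingSafeEqual(a, b)
}
```

Greenhouse and Workable need their own versions of this function, with their own header names and hashing details — which is exactly why verification belongs behind a per-vendor adapter, not inline in your webhook route.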

Event ordering is not guaranteed. You might receive an application_rejected event before the candidate_created event due to network latency. Your database logic must handle upserts and missing foreign keys gracefully.

Events will get dropped. If your server goes down during a deployment and returns 500 errors, some vendors will retry a few times and then permanently drop the event. You need a dedicated message queue (SQS, Kafka, or similar) sitting in front of your application to ingest webhooks instantly, returning a 200 OK before any processing happens.

The best practice here is boring, and boring is good: webhooks should reduce latency, not carry correctness on their backs. Your webhook pipeline should follow this pattern:

  1. Verify signature.
  2. Reject replays.
  3. Put the event on a queue.
  4. Re-fetch the affected object from the vendor API.
  5. Upsert into your own store.
  6. Run periodic reconciliation anyway.

A tiny dedupe check saves you from replay storms and vendor retries:

const inserted = await redis.set(`ats:${vendor}:${deliveryId}`, '1', {
  NX: true,
  EX: 86400,
})
 
if (!inserted) return
await queue.publish('ats-sync', payload)

Treat every webhook as a hint that something changed, not as the only source of truth.

How a Unified ATS API Solves the N-to-1 Integration Problem

The pattern should be clear by now: building point-to-point integrations with each ATS vendor means maintaining N separate connectors with N different authentication flows, N different data models, N different pagination strategies, and N different webhook formats. When N is 3, it's painful. When your sales team needs Ashby, iCIMS, and JazzHR next quarter, it becomes unsustainable.

A Unified ATS API collapses this N-to-1. Instead of integrating with each ATS directly, you integrate once with a standardized interface that normalizes all ATS data into a common schema. The unified API handles the per-vendor translation behind the scenes.

graph TD
    A[Your Application] -->|GET /unified/ats/candidates| B(Unified API Router)
    B --> C{Resolve Account}
    C -->|Load Credentials| D[Fetch Integration Config]
    D --> E[Map Request via JSONata]
    E --> F[Proxy Layer HTTP Client]
    F -->|Basic Auth / Link Headers| G((Greenhouse))
    F -->|OAuth / Offset Token| H((Lever))
    F -->|Bearer Token / Limit-Offset| I((Workable))
    G -->|Raw JSON| J[Response Mapper]
    H -->|Raw JSON| J
    I -->|Raw JSON| J
    J -->|JSONata Transform| K[Normalized Candidate Object]
    K --> A

With this approach:

  • One data model — Candidates, Applications, Jobs, Interview Stages, Offers, and Scorecards all follow a single schema regardless of the source ATS.
  • One auth integration — You authenticate with the unified API once; the platform manages per-vendor OAuth tokens, API key lifecycles, and refresh flows.
  • One pagination interface — Consistent cursor-based pagination with a next_cursor parameter, regardless of whether the underlying ATS uses Link headers, body cursors, or offset pagination.
  • One webhook format — Real-time events normalized into a consistent payload structure.
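
Consumed from your side, a unified cursor interface reduces every vendor to a single loop. A sketch with the page-fetcher injected so the loop stays vendor-agnostic (the results and next_cursor field names are illustrative):

```typescript
// Sketch: one pagination loop for every ATS behind a unified API.
type CandidatePage = { results: unknown[]; next_cursor: string | null }

async function listAllCandidates(
  fetchPage: (cursor: string | null) => Promise<CandidatePage>
): Promise<unknown[]> {
  const all: unknown[] = []
  let cursor: string | null = null
  do {
    const page = await fetchPage(cursor)
    all.push(...page.results)
    cursor = page.next_cursor
  } while (cursor)
  return all
}
```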

A practical canonical interface might look like this:

type UnifiedApplication = {
  id: string
  candidateId: string
  jobId: string
  stageId?: string
  status?: string
  appliedAt?: string
  rejectedAt?: string
  hiredAt?: string
  customFields?: Record<string, unknown>
  remoteData?: unknown
}

The remoteData field is critical. It preserves the raw vendor response so you can always access vendor-specific fields that the unified schema doesn't cover.

The Honest Trade-offs

Unified APIs are not magic bullets. The trade-offs are real, and you should understand them before committing.

You lose some vendor-specific features. A unified schema is inherently a lowest-common-denominator view. Greenhouse's structured scorecard system is richer than what most unified schemas expose. Lever's CRM-specific nurture campaign data might not map cleanly. Any good unified API should preserve the raw vendor response alongside the normalized data.

You add a dependency. Your integration now depends on a third party. If the unified API provider has an outage, all your ATS integrations go down simultaneously. Evaluate uptime SLAs and architectural resilience before committing.

Schema coverage varies by resource and method, not by logo. Not every field from every ATS will be mapped. Enterprise customers with heavily customized ATS instances will need per-account mapping overrides. If a unified API vendor talks only in terms of supported logos, ask for support by resource and method. "Jobs list" is not the same as "applications move-stage." Candidate reads are not the same as attachment uploads.

For a deeper comparison of integration approaches, see 3 Models for Product Integrations: A Choice Between Control and Velocity.

Integrating Multiple ATS Platforms at Once with Truto

Truto's Unified ATS API covers the full recruiting data model: Organizations, Departments, Offices, Jobs, Candidates, Applications, Interview Stages, Interviews, Scorecards, Offers, Reject Reasons, EEOC data, Activities, Attachments, Tags, and Users. Every entity follows a standardized schema that works identically whether the underlying ATS is Greenhouse, Lever, Workable, or Ashby.

Here's what a request to list candidates looks like:

curl -X GET "https://api.truto.one/unified/ats/candidates?integrated_account_id=YOUR_ACCOUNT_ID&limit=10" \
  -H "Authorization: Bearer YOUR_API_TOKEN"

That same request works for any supported ATS. The integrated_account_id determines which ATS and which customer account the request targets. The response shape is identical regardless of source.

Zero Integration-Specific Code

Most unified API platforms maintain separate code paths per integration — essentially if (provider === 'greenhouse') { ... } branches throughout their codebase. Every new integration requires new code, new deployments, and new opportunities for regressions.

Truto takes a different approach. Integration-specific behavior — how to authenticate with Greenhouse's Basic Auth, how to parse Lever's cursor pagination, how to map Workable's flat candidate object into the normalized schema — is defined entirely as data: JSON configuration for API communication patterns, and JSONata transformation expressions for field mapping. The runtime is a generic execution engine that loads the appropriate mapping and evaluates it.

Here's how a JSONata expression maps a raw Greenhouse response into a unified candidate object:

response.{
  "id": $string(id),
  "first_name": first_name,
  "last_name": last_name,
  "name": first_name & ' ' & last_name,
  "emails": email_addresses.{
    "email": value,
    "type": type
  },
  "applications": applications.{
    "id": $string(id),
    "job_id": $string(job_id)
  },
  "custom_fields": custom_fields
}

While a Lever mapping for the same unified output looks like:

response.{
  "id": $string(id),
  "first_name": $substringBefore(name, " "),
  "last_name": $substringAfter(name, " "),
  "emails": emails.[{"email": $}],
  "created_at": createdAt,
  "updated_at": updatedAt
}

Both produce the same unified output shape. The same code path handles both — it just evaluates different transformation expressions. Adding a new ATS is a data operation, not a code deployment. When a bug in the pagination logic gets fixed, every ATS integration benefits immediately. There's no risk that fixing Greenhouse pagination breaks Lever.
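
To make the "mappings as data, one generic engine" idea concrete, here's a deliberately simplified stand-in: mappings expressed as dot-path strings or small functions, evaluated by one shared code path. Truto uses JSONata for this; the toy path evaluator below only illustrates the shape of the approach, and the two mappings are illustrative:

```typescript
// Simplified illustration of "integration behavior as data": per-vendor
// mappings are values, and a single generic engine evaluates them.
type Mapping = Record<string, string | ((raw: any) => unknown)>

// Resolve a dot-path like "profile.name" against a raw object.
const getPath = (obj: any, path: string): unknown =>
  path.split('.').reduce((o, key) => (o == null ? undefined : o[key]), obj)

function applyMapping(mapping: Mapping, raw: any): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [field, rule] of Object.entries(mapping)) {
    out[field] = typeof rule === 'function' ? rule(raw) : getPath(raw, rule)
  }
  return out
}

// Two mappings, one engine — the same code path produces the same shape:
const greenhouseMapping: Mapping = {
  id: (r) => String(r.id),
  first_name: 'first_name',
  last_name: 'last_name',
}
const leverMapping: Mapping = {
  id: 'id',
  first_name: (r) => r.name.split(' ')[0],
  last_name: (r) => r.name.split(' ').slice(1).join(' '),
}
```

Fixing a bug in applyMapping fixes it for every vendor at once, which is the property the section above is describing.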

Per-Customer Customization Without Code Changes

Enterprise ATS instances are heavily customized. Your customer at a Fortune 500 might have 30 custom fields in Greenhouse that are critical to their hiring workflow. A rigid unified schema that ignores custom fields is useless to them.

Truto handles this with a three-level override system:

  1. Platform-level mappings — The default schema mapping that works for most customers.
  2. Environment-level overrides — Your product can customize mappings for specific deployment environments.
  3. Account-level overrides — Individual connected accounts can have their own mapping customizations.

If one customer's Greenhouse instance uses a custom field called hiring_urgency that's critical to your product, you can map it into the unified response for that specific account without affecting any other customer's integration and without waiting for a code deployment.

The raw vendor response is always preserved alongside the normalized data (in a remote_data field), so you can always access the full original payload when the unified schema doesn't cover a specific field.
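
A small consumer-side pattern that follows from this: read the normalized field first, and fall back to the preserved raw payload when the unified schema doesn't cover it. The field names below (custom_fields, remote_data, hiring_urgency) mirror the examples above and are illustrative:

```typescript
// Sketch: prefer the normalized custom field, fall back to raw vendor data.
type UnifiedRecord = {
  custom_fields?: Record<string, unknown>
  remote_data?: Record<string, unknown>
}

function readField(record: UnifiedRecord, key: string): unknown {
  return record.custom_fields?.[key] ?? record.remote_data?.[key]
}

// e.g. readField(application, 'hiring_urgency')
```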

Syncing ATS Data at Scale with RapidBridge

If you're building an AI sourcing agent, you can't query the ATS API in real-time for every prompt. You need to sync the entire candidate database to your own vector store or relational database.

Truto provides RapidBridge, a pipeline engine that pulls data from third-party APIs and syncs it to your datastores via webhooks. You can configure a sync job to incrementally fetch candidates updated since the last run, automatically handling rate limits and pagination quirks.

curl -X POST "https://api.truto.one/sync-job" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "integration_name": "greenhouse",
    "resources": [
      { "resource": "ats/candidates", "method": "list" },
      { "resource": "ats/jobs", "method": "list" },
      {
        "resource": "ats/applications",
        "method": "list",
        "query": {
          "updated_at": { "gt": "{{previous_run_date}}" }
        }
      }
    ]
  }'

The previous_run_date placeholder automatically tracks when the last successful sync completed. If Greenhouse throws a rate limit error, Truto's proxy layer catches it, reads the Retry-After header, pauses, and resumes without any intervention from your engineering team. Swap integration_name to lever or workable and the rest stays the same.

You can schedule syncs on a cron, run them on-demand, or trigger them automatically when a new customer connects their ATS account. Error handling defaults to resilient mode — individual record errors are reported via webhook events without stopping the entire sync.

Post-Connection Configuration with RapidForm

Different customers want to sync different subsets of their ATS data. A staffing agency might have 500 active jobs but only want to sync candidates for specific departments. Truto's RapidForm lets you present a dynamic form to customers immediately after they connect their ATS account. The form fetches live data from the connected ATS — departments, offices, job lists — and lets the customer choose exactly what to sync. Those selections are stored as context variables and automatically used in subsequent sync jobs, with no code changes required per customer.

AI Agent Integration via MCP

If you're building LLM-powered recruiting tools (using frameworks like LangChain or LangGraph), giving your agent access to an ATS is traditionally very difficult. You'd normally need to write custom tool definitions for every API endpoint.

Truto solves this by automatically exposing connected integrations as Model Context Protocol (MCP) servers. When a customer connects their Lever account, Truto dynamically generates tool definitions based on the API schemas. Your AI agent can immediately call tools like list_all_lever_opportunities or create_a_lever_candidate through a standardized JSON-RPC 2.0 endpoint. You can filter tools to read-only operations (safe for autonomous agents) or scope them to specific resource groups using tags. The same connected account serves as the foundation for both your app integration and your AI agent — no need to build two separate integration surfaces.

A Practical Rollout Plan

If you're trying to ship multi-ATS support this quarter, here's the order that works:

  1. Quantify the revenue at risk. Count the deals in your pipeline that require Greenhouse, Lever, or Workable integrations. Attach dollar amounts. This is the number you bring to your engineering prioritization meeting.

  2. Normalize the read path first. Ship jobs, candidates, applications, and stages before touching writes. Get data flowing into your product and validate the integration value with customers before investing in write operations.

  3. Store raw payloads and provider IDs. Future you will need them for support and custom enterprise asks. Don't throw away vendor-native data.

  4. Add webhook ingestion plus reconciliation. Low latency from webhooks, correctness from periodic sync. Don't trust webhooks alone — treat them as hints, not sources of truth.

  5. Hydrate detail selectively. Lists are for discovery. Full object fetches are for correctness. Some vendors (including Workable) return partial responses on list endpoints that require follow-up reads.

  6. Externalize customer-specific mapping. Don't fork code because one enterprise account added custom requisition fields. Use per-account overrides instead.

  7. Add writes after you understand actor identity and permissions. Stage movement, rejection, and offer flows are where vendor differences get expensive. Lever needs candidate_id for stage moves, Workable requires user_id, Greenhouse wants application_id. Ship reads first, add writes once you've mapped the requirements.

  8. Instrument everything. Track webhook lag, sync lag, 429 rates, hydration failures, and per-vendor error counts. If you can't observe it, you can't fix it.

If you're a PM trying to get engineering buy-in for adopting an external integration tool, The PM's Playbook walks through exactly how to frame that conversation.

The ATS integration problem doesn't get easier by waiting. Every quarter you delay is another set of enterprise deals that go to competitors who already support the platforms your prospects use. Stop arguing about whether your team can build a Greenhouse connector. They can. The real question is whether maintaining OAuth tokens, tracking API deprecations, and writing JSON parsing logic for a growing list of ATS platforms is the best use of their sprint capacity.

FAQ

How much does it cost to build an ATS API integration?
A single well-scoped ATS integration costs $5,000–$15,000 for standard workflows, but complex integrations with bi-directional sync, custom field mapping, and webhook handling regularly push into the $20,000–$50,000+ range. Annual maintenance adds 10–20% of the initial cost. Multiply by the number of ATS platforms you need to support.
What is a unified ATS API?
A unified ATS API is a single standardized interface that normalizes data from multiple applicant tracking systems (Greenhouse, Lever, Workable, etc.) into a common schema. You integrate once instead of building separate connectors for each platform. The unified API handles per-vendor authentication, pagination, rate limiting, and schema normalization behind the scenes.
What are the main differences between Greenhouse and Lever APIs?
Greenhouse uses HTTP Basic Auth and Link header pagination, while Lever uses OAuth 2.0 and body-cursor pagination. Their data models differ fundamentally: Greenhouse separates Candidates and Applications as distinct entities, while Lever merges them into an Opportunity object. Greenhouse uses integer IDs; Lever uses UUIDs.
Are webhooks enough to keep ATS data in sync?
No. Use webhooks for lower latency, but keep periodic reconciliation jobs and hydration reads in place. Greenhouse documents webhook retries up to 7 attempts, Lever signs webhook payloads and expects verification, and Workable exposes subscription endpoints — but none of that removes the need for periodic re-sync. Treat webhooks as hints, not as your only source of truth.
How do you handle custom fields across different ATS platforms?
Each ATS exposes custom fields differently — Greenhouse gates some behind Enterprise-tier plans, Lever uses tags, Workable uses custom questions. A unified API with per-account override capabilities lets you map vendor-specific custom fields into your normalized schema without code changes for each customer.
