---
title: "Truto Agent Skills: Stop AI Hallucinations When Building Integrations"
slug: truto-agent-skills-stop-ai-hallucinations-when-building-integrations
date: 2026-04-28
author: Uday Gajavalli
categories: [Product Updates, Engineering]
excerpt: "AI coding assistants hallucinate API endpoints, auth flows, and pagination patterns. Learn how Truto Skills injects verified SKILL.md context into Cursor and Claude Code."
tldr: "Truto Skills use the open SKILL.md standard to teach AI assistants exact API conventions, JSONata syntax, and caller-managed rate limits, eliminating integration code hallucinations."
canonical: https://truto.one/blog/truto-agent-skills-stop-ai-hallucinations-when-building-integrations/
---

# Truto Agent Skills: Stop AI Hallucinations When Building Integrations


If you are looking for AI agent skills for integrations, stop trying to out-prompt a generic model. The reliable fix is to give the assistant vendor-specific context before it writes code. Your AI coding assistant has no idea how Truto actually works under the hood. It will confidently generate API calls to endpoints that don't exist, fabricate authentication flows, and invent pagination parameters. 

This isn't a bug in the AI - it's a context problem. The model's training data is inherently stale, generic, or both when it comes to specific platforms. **Agent Skills solve this by injecting accurate, structured knowledge directly into your AI's context** so it writes correct integration code on the first try.

Truto Skills packages Truto's unified API, proxy API, CLI, Link SDK, JSONata mapping patterns, and API conventions into a public `SKILL.md`-compatible repository for Cursor, Claude Code, and other compatible agents. This post covers what Agent Skills are, how to install the official Truto skills, and why a declarative integration platform is the perfect match for AI-assisted development.

## The Hallucination Problem in API Integrations

AI coding tools are now standard kit, not a toy. The 2025 Stack Overflow Developer Survey (n = 49,000+) revealed that AI tool usage in developer workflows has climbed to 80%. However, that same survey found that 66% of developers say their biggest frustration with AI tools is solutions that are "almost right, but not quite," and only 29% of developers trust AI outputs to be completely accurate.

Faster code is easy. Correct integration code is the part that still hurts. A CodeRabbit analysis of 470 open-source pull requests from December 2025 found roughly **1.7x more issues in AI-coauthored pull requests** compared to human-only PRs when the AI lacks proper domain context.

**Integration hallucination** is code that looks plausible but is wrong for the actual API contract you are calling. Ask Claude Code or Cursor to "build a Truto integration that syncs HubSpot contacts" without any additional context, and you'll get something that looks right but is wrong in subtle, time-wasting ways. 

What this looks like in practice:
- It guesses that the base URL is `https://api.truto.io` (it's actually `https://api.truto.one`).
- It invents a `/v2/contacts` endpoint when the actual path is `/unified/crm/contacts`.
- It writes `page` or `offset` loops where the caller should use `next_cursor`.
- It assumes the platform will magically absorb HTTP 429 rate limit errors.
- It drops provider-specific write fields that should be passed through the `remote_data` object.
- It mixes Truto Unified API assumptions with raw provider fields.

These are not abstract problems. Every SaaS platform has its own quirks: different auth flows, non-standard error codes, unique pagination schemes, and undocumented rate limit behaviors. An AI model trained on generic web data doesn't know that [Salesforce returns `PascalCase` field names while HubSpot nests everything under a `properties` object](https://truto.one/blog/architecting-ai-agents-langgraph-langchain-and-the-saas-integration-bottleneck/). 

When building integrations manually, engineers spend hours reading vendor documentation to figure out these edge cases. When an AI agent writes the code, it skips the reading phase and goes straight to hallucinating. Prompting harder is not a strategy. When the API rules are specific and changing, long chats without structured context usually just produce more confident nonsense.

## What Are Agent Skills (SKILL.md)?

**Agent Skills are portable, version-controlled packages of instructions that teach AI coding assistants how to perform domain-specific tasks correctly.** 

The format was created by Anthropic in October 2025 and released as an open standard on December 18, 2025. It has since been adopted by Claude Code, Cursor, GitHub Copilot, OpenAI Codex CLI, Databricks, and others. Rather than dumping a 50-page API specification into your prompt window - which burns through your context window and degrades the model's instruction-following capabilities - Agent Skills use **progressive disclosure**.

At its core, a skill is a folder containing a `SKILL.md` file with YAML frontmatter (name and description) and Markdown instructions:

```text
my-skill/
├── SKILL.md          # Required: metadata + instructions
├── scripts/          # Optional: executable code
├── references/       # Optional: documentation
└── assets/           # Optional: templates, resources
```

The three-level loading system keeps token usage incredibly lean:
1. **Metadata only** - At startup, the agent reads just the name and description of every installed skill (~50-100 tokens each). This is enough for the AI to know *when* to activate each skill.
2. **Full instructions** - When the AI decides a skill is relevant to the current task, it loads the complete `SKILL.md` into context.
3. **Reference files** - Scripts, docs, and templates load only if the instructions explicitly reference them.
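As a minimal sketch, a `SKILL.md` for a hypothetical skill might look like this (the frontmatter keys `name` and `description` come from the format described above; the skill name, URL, and rules below are invented for illustration):

```markdown
---
name: acme-api-conventions
description: Teaches the agent Acme's base URL, auth headers, and pagination rules. Use when writing code that calls the Acme API.
---

# Acme API Conventions

- Base URL is `https://api.acme.example`.
- Authenticate with an `Authorization: Bearer <token>` header.
- Paginate with the `next_cursor` field; never use `page` or `offset`.
```

The description line is what the agent reads at level one, so it should say both what the skill covers and when to activate it.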

This means you can install dozens of skills without bloating every prompt. The pattern is catching on fast. Pulumi released Agent Skills to teach AI assistants infrastructure-as-code best practices. Auth0 published skills covering 20+ framework-specific implementations. Databricks provides skills for their apps, notebooks, and ML workflows. The market has already voted: if your product has tricky conventions, a generic coding assistant is not enough.

## Introducing Truto Skills

**Truto Skills** is Truto's official skills repository: it packages everything an AI coding assistant needs to build integrations on Truto correctly, without guessing.

The repository includes five main capabilities:

| Skill | What it teaches the model | Common failure it prevents |
| --- | --- | --- |
| **`truto`** | Core Truto integration building workflow, making unified and proxy API calls, and setting up sync jobs. | Made-up API surface and wrong product abstractions. |
| **`truto-api-conventions`** | Base URL, auth header patterns, URL structure, pagination, and idempotency rules. | Subtle platform misuse that is annoying to debug. |
| **`truto-cli`** | Installing, authenticating, and using the Truto CLI for local debugging, resource management, and bulk exports. | Broken shell commands and bad bulk export choices. |
| **`truto-jsonata`** | Writing JSONata expressions for Truto config using custom `$functions` from `@truto/truto-jsonata`. | Invalid transforms and low-signal mapping code. |
| **`truto-link-sdk`** | Embedding the Truto connection flow in your frontend using `@truto/truto-link-sdk`. | Broken connect flows and UI guesswork. |

That coverage matters because Truto is not just one API. It is a unified API platform with 200+ third-party tools and multiple surfaces: Unified APIs, Proxy APIs, Sync Jobs, Webhooks, Custom APIs, MCP servers, CLI workflows, and frontend connection flows. If the assistant only knows one slice, it will keep reaching for the wrong tool.

> [!NOTE]
> **Agent Skills vs. Agent Toolsets (MCP Servers)**
> Do not confuse Agent Skills with Agent Toolsets. **Agent Skills** teach your AI coding assistant (like Cursor) how to *write code* for your application. **Agent Toolsets** (using the Model Context Protocol) give your *application's AI agents* runtime access to execute actions against third-party APIs. If you want your application's chatbot to read a Zendesk ticket, [sync Affinity contacts](https://truto.one/blog/connect-affinity-to-ai-agents-sync-contacts-enrich-profiles/), or [automate Pylon support workflows](https://truto.one/blog/connect-pylon-to-ai-agents-streamline-helpdesk-ops-data-sync/), you want [Truto Agent Toolsets](https://truto.one/blog/introducing-truto-agent-toolsets/). Skills and toolsets solve different problems, and using both together is usually the right move.

## How to Install Truto Skills in Cursor and Claude Code

Getting your AI assistant equipped with Truto skills takes less than a minute. The fastest path depends on your environment.

### Claude Code

If you are using Anthropic's Claude Code CLI, add the Truto skills repository as a plugin marketplace and install it directly (Claude Code's plugin system supports adding marketplaces straight from GitHub repos):

```bash
/plugin marketplace add trutohq/truto-skills
/plugin install truto@truto-skills
```

The skills will be automatically namespaced in your environment (e.g., `truto:truto-api-conventions`, `truto:truto-jsonata`) so they do not collide with local project skills.

### Cursor

Cursor supports fetching remote rules directly from GitHub repositories. This ensures your AI always has the latest API conventions without you needing to manually update local markdown files.

1. Open **Cursor Settings**.
2. Navigate to **Rules**.
3. Click **Add Rule** under the **Project Rules** section.
4. Select **Remote Rule (GitHub)**.
5. Enter the repository URL: `https://github.com/trutohq/truto-skills`

This automatically pulls in the skills and applies the `truto-api` rule to your project. Cursor keeps the rules synced with the source repository, so updates are reflected automatically.

### Any Agent (via `npx skills`)

For other compatible AI agents, or if you want a tool-agnostic path, you can use the open-source Skills CLI to install the skills locally into your project directory:

```bash
npx skills add trutohq/truto-skills
```

> [!TIP]
> **Prompting Tip:** A good first prompt after install is not 'build me a Salesforce integration'. Be specific. Tell the model which Truto skill to use, which API surface you want, and which behaviors it must respect. Example: *"Use `truto:truto-api-conventions` and `truto:truto-cli` to create a Node.js worker that syncs `crm/contacts`, paginates with `next_cursor`, and handles upstream rate limits in caller code."*

## Teaching the AI Truto API Conventions That Actually Matter

Providing an AI agent with API documentation is not just about showing it the endpoints. It is about teaching the AI the operational realities of the platform. The point of the `truto-api-conventions` skill is simple: kill the expensive, boring mistakes before they show up in review. Here is exactly what the AI learns.

### Unified API vs Proxy API

Truto gives you two data-plane surfaces. The Unified API transforms provider data into a common schema. The Proxy API returns data in the provider's native format. The skill teaches the AI that if you are building cross-provider business logic, it should default to Unified. If you need provider-only fields or endpoints the common model does not cover, the AI should drop to Proxy APIs (`/proxy/{resource}`) purposefully, not by accident.
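The distinction is easiest to see in the request paths themselves. A small sketch (the base URL and the `/unified/...` and `/proxy/...` path shapes come from the conventions above; the helper names are ours, not part of any Truto SDK):

```typescript
const BASE_URL = 'https://api.truto.one';

// Unified API: common schema, the default for cross-provider business logic.
function unifiedUrl(resource: string): string {
  return `${BASE_URL}/unified/${resource}`; // e.g. /unified/crm/contacts
}

// Proxy API: the provider's native payload, for fields and endpoints
// the common model does not cover.
function proxyUrl(resource: string): string {
  return `${BASE_URL}/proxy/${resource}`;
}
```

The skill's job is to make the model pick one of these surfaces deliberately for each call, instead of blending the two.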

### Handling Rate Limits and Retries (Caller-Managed)

This is the single most common mistake AI assistants make with Truto. They assume the platform absorbs rate limits and retries automatically behind the scenes. It doesn't.

When an upstream API returns an HTTP 429, Truto passes that error directly to the caller. What Truto *does* provide is normalized rate limit metadata in standardized headers per the IETF specification (`ratelimit-limit`, `ratelimit-remaining`, `ratelimit-reset`, and `Retry-After`). Your code is responsible for reading these headers and implementing retry and exponential backoff logic.

With the Truto skill loaded, the AI generates the retry logic correctly, rather than writing a naive `while` loop that crashes your application:

```typescript
async function fetchWithBackoff(
  run: () => Promise<Response>,
  attempt = 0
): Promise<Response> {
  const res = await run();

  if (res.status !== 429) {
    return res;
  }

  const retryAfter = Number(res.headers.get('Retry-After') || 0);
  const reset = Number(res.headers.get('ratelimit-reset') || 0);
  const waitSeconds = Math.max(retryAfter, reset, Math.min(2 ** attempt, 60));

  await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
  return fetchWithBackoff(run, attempt + 1);
}
```

### Normalizing Cursor-Based Pagination

Third-party APIs use wildly different pagination strategies - offset, page numbers, cursor-based, or link headers. Truto normalizes all of these into a standard cursor-based format. 

The AI learns that every list response from Truto contains a `next_cursor` field. It knows never to append `?page=2` to a Truto unified API call. It will write clean, predictable pagination loops that work identically whether you are pulling data from Jira, Greenhouse, or QuickBooks:

```typescript
type Page<T> = { result: T[]; next_cursor?: string | null };

async function listAllContacts(fetchPage: (nextCursor?: string) => Promise<Page<any>>) {
  const out: any[] = [];
  let nextCursor: string | undefined;

  do {
    const page = await fetchPage(nextCursor);
    out.push(...page.result);
    nextCursor = page.next_cursor ?? undefined;
  } while (nextCursor);

  return out;
}
```

### Unified Writes Still Need an Escape Hatch

Another easy miss is write behavior. Truto's Unified API lets you send common fields and merge provider-specific fields through the `remote_data` object. That is a clean compromise: keep the shared model for most of the request, but leave room for the vendor oddities every real integration eventually needs. The skill explicitly teaches the AI to use `remote_data` when custom provider fields are requested, preventing the model from giving up on the Unified API too early.
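As a sketch, a unified contact write carrying one provider-specific field might look like this (the unified field names follow Truto's common contact schema; the key inside `remote_data` is hypothetical):

```typescript
// Unified fields travel in the common schema; anything provider-specific
// is merged in through the remote_data escape hatch.
const createContactBody = {
  first_name: 'Jane',
  last_name: 'Doe',
  email: 'jane@example.com',
  remote_data: {
    // Hypothetical provider-only field, passed through to the vendor as-is.
    lead_status: 'OPEN',
  },
};
```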

### Writing Complex JSONata Transformations

One of Truto's most powerful features is the ability to map custom API data with JSONata. Writing complex JSONata expressions from scratch can be tedious. The `truto-jsonata` skill provides the AI with detailed examples of Truto's extended JSONata functions, such as `$mapValues`, `$firstNonEmpty`, and specific date formatters from the `@truto/truto-jsonata` package.

A response mapping that normalizes HubSpot contacts into Truto's unified schema becomes a single expression string the AI can generate for you:

```jsonata
response.{
  "id": $string(id),
  "first_name": properties.firstname,
  "last_name": properties.lastname,
  "email": properties.email,
  "created_at": properties.createdate
}
```

If you prompt Cursor with: *"Write a Truto mapping that takes a Salesforce response, extracts the custom `Industry__c` field, and maps it to `company_sector`, defaulting to 'Unknown' if missing,"* the AI will instantly generate the correct expression using the `$firstNonEmpty` function.
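A sketch of what that generated mapping might look like (treat the exact `$firstNonEmpty` call shape as illustrative; the skill documents the real signature from `@truto/truto-jsonata`):

```jsonata
response.{
  "company_sector": $firstNonEmpty([Industry__c, "Unknown"])
}
```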

## Building with the Truto CLI and Truto Skills

Writing code is only half the battle. You still need to test it, inspect the data, and manage platform resources. The `truto-cli` skill bridges the gap between writing integration logic and actually running it.

The [Truto CLI](https://truto.one/cli) covers the full admin API, data-plane APIs, and power-user features like bulk export and diffing (read our [CLI blog post](https://truto.one/blog/truto-cli/) for a deep dive). Without context, an AI will guess the CLI syntax. It will invent flags like `--account-id` instead of the required `-a`, or it will try to write a custom Python script to paginate through records when a single CLI command would do the job.

With the skill loaded, the AI knows the exact command structures. You can prompt your assistant to scaffold resources directly from your terminal.

**Managing Admin Resources**
Instead of clicking through a dashboard, tell the AI to create an integration. It knows the required fields and optimistic locking rules:

```bash
truto integrations create -b '{"name":"slack","config":{"label":"Slack","auth_type":"oauth2"}}'
```

**Testing Data-Plane APIs**
When you need to verify a unified model or proxy endpoint, the AI knows how to construct the right call. It understands that unified paths contain a slash (`crm/contacts`) while proxy paths do not (`tickets`), and that the `-a` flag is mandatory:

```bash
truto unified crm contacts -m search -a <account-id> -b '{"query":"Jane"}'
```

**Bulk Exports and Data Piping**
If you need to extract data for analysis, the AI won't write a custom pagination script. It knows the CLI has an `export` command that handles auto-pagination natively. It also knows to use `ndjson` for streaming large datasets instead of buffering everything in memory:

```bash
truto export crm/contacts -a <account-id> -o ndjson | jq '.email'
```

By combining the CLI with Truto Skills, your AI assistant becomes an operator, not just a code generator. It can scaffold the configuration, test the endpoints, and export the results without leaving the editor.

## Why Declarative Platforms Are the Best Match for AI Agents

There's a broader point here that goes beyond Truto specifically. The platforms that benefit most from AI coding assistants are the ones where "writing code" means authoring configuration and expressions rather than imperative logic.

Consider the contrast. On a code-heavy integration platform, asking the AI to add a new CRM connector means generating hundreds of lines across multiple files - an HTTP client class, authentication handler, pagination logic, response serializers, error mappers, and tests for all of it. The surface area for hallucination is enormous.

As detailed in our guide to [shipping API connectors as data operations](https://truto.one/blog/zero-integration-specific-code-how-to-ship-new-api-connectors-as-data-only-operations/), Truto uses zero integration-specific code. Everything is handled via declarative JSON configurations and JSONata mapping expressions. The runtime engine - which handles auth, pagination, error normalization, and everything else - is the same generic pipeline for every integration. No per-integration code to hallucinate.

This architectural choice wasn't made with AI in mind; it was made for extensibility and reliability. But it turns out that a platform designed around declarative data rather than imperative code is exactly the kind of system where AI assistants excel. The skill just needs to teach the AI the shape of the config and the JSONata expression syntax. The platform handles the rest.

## Stop Fighting the AI, Start Shipping Integrations

The gap between AI coding assistant adoption and trust in their output tells a clear story: developers use these tools because the speed gains are real, but they spend too much time fixing the output. A generic coding assistant is fast but generic. A skill-equipped assistant is still not infallible, but now it knows your platform rules, your escape hatches, and the difference between a real abstraction and an invented one.

If your team is already using Cursor or Claude Code to build B2B SaaS integrations, the next step is obvious: stop pasting the same Truto docs into chat threads and install the context once. You stop fighting the AI over outdated documentation and start treating it like a senior integration engineer who has memorized the entire platform specification.

> Want to stop maintaining brittle API integrations? Partner with Truto to normalize your third-party data, handle OAuth lifecycle management, and ship customer-facing integrations in days, not months.
>
> [Talk to us](https://cal.com/truto/partner-with-truto)
