
Connect Kandji to AI Agents: Automate MDM Workflows & Audits

Learn how to bypass Kandji API rate limits and complex pagination by connecting your MDM to AI agents using Truto's /tools endpoint and LangChain.

Uday Gajavalli · 7 min read

IT and DevOps teams sit on a massive repository of device telemetry inside Kandji. When an incident occurs - say, a zero-day vulnerability requires you to audit installed applications across your entire macOS fleet - querying that data means either clicking through the Kandji dashboard manually or writing custom Python scripts against their API.

Building an AI agent to handle these queries autonomously sounds like the perfect solution. You could simply ask an LLM (like ChatGPT or Claude) to "find all Macs missing the latest OS update and list their assigned users." However, connecting an LLM directly to a Mobile Device Management (MDM) platform exposes the harsh realities of API physics.

Gartner's 2026 strategic technology trends report notes that automation is no longer just surrounding the business - it is the business. But for IT teams, that automation infrastructure is heavily bottlenecked by fragile API integrations. You cannot just paste a Kandji API key into a prompt and expect a LangChain agent to audit 5,000 devices successfully.

This guide breaks down the architectural constraints of the Kandji API, explains why hand-coding agent tools is a maintenance nightmare, and shows you how to use Truto's /tools endpoint to instantly generate paginated, rate-limited, and safe tools for your AI agents.

The Architectural Constraints of the Kandji API

Before wiring up an AI agent to Kandji, you have to understand the shape of the underlying API. LLMs are incredibly smart at reasoning, but they are notoriously bad at adhering to strict API constraints without explicit guidance.

Rate Limits and the N+1 Problem

The Kandji API enforces a strict tenant-level rate limit of 10,000 requests per hour and 50 requests per second.

If you unleash a naive AI agent on your Kandji instance and ask it to "audit all applications on all devices," the LLM will likely fall into the N+1 query trap. It will call the list devices endpoint once, receive a batch of devices, and then attempt to call the list applications endpoint individually for every single device. If you have 15,000 devices in your fleet, the agent will exhaust your hourly rate limit in less than five minutes, resulting in 429 Too Many Requests errors and a crashed workflow.
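The arithmetic is easy to verify. Using the limits quoted above (10,000 requests per hour, 50 per second) and a hypothetical 15,000-device fleet:

```typescript
// Back-of-the-envelope check on the N+1 trap, using the Kandji limits
// quoted above and an illustrative fleet size.
const devices = 15_000;
const hourlyLimit = 10_000; // requests per hour, per tenant
const perSecondLimit = 50;  // requests per second

// One list-devices call, plus one list-applications call per device
const requestsNeeded = 1 + devices;

// Even bursting at the per-second ceiling, the hourly budget is gone in:
const secondsToExhaust = hourlyLimit / perSecondLimit;

console.log(requestsNeeded);   // 15001 — more than the hourly budget allows
console.log(secondsToExhaust); // 200 seconds, i.e. well under five minutes
```

The audit cannot complete in a single hour no matter how the requests are scheduled; the only fix is to change the query shape, not the pacing.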

The Pagination Trap

Kandji handles pagination using limit and offset parameters. If your agent does not explicitly understand how to pass the exact offset cursor back into the next request, it will get stuck in an infinite loop reading the first 300 devices over and over.

Writing custom JSON schemas to teach an LLM how to handle these pagination cursors for every single Kandji endpoint takes days. Maintaining those schemas when Kandji updates their API takes even longer.
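For contrast, here is what correct offset paging has to do. This is a sketch, not Kandji's client: fetchDevices is a hypothetical wrapper around the list-devices endpoint, and 300 mirrors the page size mentioned above.

```typescript
// Minimal limit/offset paging loop. Forgetting the `offset += limit`
// step is exactly the infinite-loop failure mode described above.
type Device = { serial: string };

async function fetchAllDevices(
  fetchDevices: (limit: number, offset: number) => Promise<Device[]>,
  limit = 300
): Promise<Device[]> {
  const all: Device[] = [];
  let offset = 0;
  while (true) {
    const page = await fetchDevices(limit, offset);
    all.push(...page);
    if (page.length < limit) break; // short page means we reached the end
    offset += limit;                // advance the cursor for the next request
  }
  return all;
}
```

An LLM has to reproduce this exact discipline from schema descriptions alone, which is why the pagination instructions discussed below matter so much.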

The Truto Architecture: Dynamic Tool Generation

Instead of hand-coding tool schemas and pagination logic, you can use Truto to handle the integration layer. Truto sits between your agent framework (like LangChain or LangGraph) and Kandji.

Every integration on Truto maps underlying API endpoints to standardized Resources and Methods. These are exposed as Proxy APIs, where Truto handles the authentication lifecycle, standardizes the pagination, and normalizes the rate limit headers.

For AI agents, Truto takes this a step further. The /tools endpoint dynamically reads the configuration for your connected Kandji account and generates a structured array of tools ready to be consumed by an LLM.

graph TD
    A[LLM Agent<br>LangGraph] -->|Tool Call| B(Truto Proxy API<br>Auth & Pagination)
    B -->|Fetch Schema| C{Truto /tools Endpoint}
    C -->|Dynamic Tool| B
    B -->|REST Request| D[Kandji API<br>Rate Limited]
    D -->|Raw JSON| B
    B -->|Normalized Data| A

Injected Pagination Instructions

When Truto generates a tool definition for a list method, it automatically injects limit and next_cursor properties into the JSON Schema. The description for next_cursor explicitly instructs the LLM to pass the cursor value back unchanged. This prevents the model from trying to guess or increment the offset manually, solving one of the most common failure modes in custom AI agents.
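Illustratively, the injected properties look something like this. The structure and wording below are a hand-written approximation, not Truto's literal schema output:

```typescript
// Approximate shape of a generated list-method tool definition.
// Field names and descriptions here are illustrative.
const listDevicesTool = {
  name: "list_devices",
  description:
    "List all devices in Kandji with serial number, model, and OS version.",
  parameters: {
    type: "object",
    properties: {
      limit: {
        type: "integer",
        description: "Maximum number of records to return per page.",
      },
      next_cursor: {
        type: "string",
        description:
          "Opaque pagination cursor from the previous response. " +
          "Pass it back unchanged. Never guess, modify, or increment it.",
      },
    },
  },
};
```

The imperative phrasing in the next_cursor description is the load-bearing part: it is what stops the model from fabricating its own offsets.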

Real-Time Prompt Engineering

A tool's effectiveness depends entirely on how well its description matches the user's prompt. If your LLM fails to call a specific Kandji endpoint, you do not need to update your codebase. You can log into the Truto UI, edit the description for that specific API method, and hit save. The /tools endpoint updates the schema in real-time, instantly improving your agent's routing logic.

The Kandji Tool Inventory

When you connect Kandji to Truto, the /tools endpoint automatically exposes the following core operations to your LLM. The full list of available tools and their descriptions is documented on the Kandji integration page. Together, these definitions give your agent everything it needs for comprehensive MDM audits:

  • list_devices: List all devices in Kandji with details like serial number, model, and OS version. This is the entry point for almost all fleet-wide queries.
  • get_device: Retrieve detailed information for a specific device including security settings and MDM status.
  • list_applications: List all applications installed on a specific device managed by Kandji. Essential for software compliance checks.
  • list_device_activity: Get a timeline of recent activities and status changes for a specific device. Useful for debugging offline devices.
  • list_parameters: List all parameters and library items assigned to a specific device to verify configuration profiles.

Step-by-Step: Building the Kandji AI Agent

Let's build an autonomous workflow using LangGraph and the Truto LangChain SDK. This agent will be able to answer complex natural language questions about your Kandji fleet.

Prerequisites and Setup

First, install the necessary packages for LangChain, LangGraph, and Truto.

npm install @langchain/openai @langchain/langgraph @trutohq/truto-langchainjs-toolset

Fetching and Binding Tools

We will use the TrutoToolManager to fetch the schemas from the /tools endpoint and bind them to an OpenAI model.

Warning

Never give an AI agent unrestricted write access to your MDM on day one. Always start by filtering your toolset to read-only methods, ensuring the model can audit device states without accidentally issuing a remote wipe command.

import { ChatOpenAI } from "@langchain/openai";
import { TrutoToolManager } from "@trutohq/truto-langchainjs-toolset";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MemorySaver } from "@langchain/langgraph";
 
async function runKandjiAudit() {
  // 1. Initialize the tool manager for the Kandji integrated account
  const toolManager = new TrutoToolManager({
    trutoApiKey: process.env.TRUTO_API_KEY,
    integratedAccountId: "your_kandji_account_id"
  });
 
  // 2. Fetch read-only tools to prevent accidental device modifications
  const tools = await toolManager.getTools({
    methods: ["read"] 
  });
 
  // 3. Initialize the LLM
  const model = new ChatOpenAI({
    modelName: "gpt-4o",
    temperature: 0,
  });
 
  // 4. Create the LangGraph agent
  const checkpointer = new MemorySaver();
  const agent = createReactAgent({
    llm: model,
    tools: tools,
    checkpointSaver: checkpointer,
  });
 
  // 5. Execute the workflow
  const config = { configurable: { thread_id: "audit_thread_1" } };
  const result = await agent.invoke({
    messages: [
      {
        role: "user",
        content: "Find all MacBooks missing the latest OS update and list their serial numbers."
      }
    ]
  }, config);
 
  console.log(result.messages[result.messages.length - 1].content);
}
 
runKandjiAudit();

How the Agent Executes

When you run this code, the LangGraph orchestrator takes over.

  1. The LLM analyzes the prompt and recognizes it needs a list of devices.
  2. It issues a tool_call for list_devices.
  3. The LangGraph executor intercepts this call and routes it to the Truto Proxy API.
  4. Truto handles the authentication, translates the request into Kandji's required format, and returns the raw JSON.
  5. The executor wraps the JSON in a ToolMessage and feeds it back to the LLM.
  6. The LLM parses the payload, identifies the outdated macOS versions, and generates the final natural language summary.

If the response exceeds Kandji's pagination limit, the LLM reads the injected next_cursor instruction and automatically fires a subsequent tool_call to fetch the next page of devices.
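Steps 1 through 6 can be modeled as a small loop. The sketch below stubs out both the LLM and the tool transport; a real LangGraph executor layers state, checkpoints, and error handling on top of the same basic shape:

```typescript
// Minimal ReAct-style loop: call the model, execute any requested tool,
// feed the result back, repeat until the model answers in plain text.
type Msg = {
  role: "user" | "assistant" | "tool";
  content: string;
  toolCall?: { name: string; args: unknown };
};

async function reactLoop(
  llm: (history: Msg[]) => Promise<Msg>,
  tools: Record<string, (args: unknown) => Promise<string>>,
  userPrompt: string
): Promise<string> {
  const history: Msg[] = [{ role: "user", content: userPrompt }];
  while (true) {
    const reply = await llm(history);
    history.push(reply);
    if (!reply.toolCall) return reply.content; // final natural-language answer
    // Execute the requested tool and feed the raw result back as a tool message
    const result = await tools[reply.toolCall.name](reply.toolCall.args);
    history.push({ role: "tool", content: result });
  }
}
```

Pagination falls out naturally: a tool result containing next_cursor simply prompts the model to issue one more tool call on the next turn of the loop.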

Real-World Agentic Workflows

Once you have the integration layer abstracted, you can start building highly complex, multi-step IT workflows.

The Zero-Day Vulnerability Audit

When a critical CVE drops for a specific application (e.g., an outdated version of Zoom), IT teams need answers immediately. You can instruct your agent to cross-reference the list_devices tool with the list_applications tool.

By adding a system prompt that explicitly tells the agent to filter devices by operating system before querying applications, you mitigate the N+1 rate limit problem. The agent autonomously narrows down the search space, checks the application versions on the vulnerable subset, and outputs a formatted CSV of affected users directly into a Slack channel.
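One way to encode that constraint is a system message. The wording below is illustrative, not a Truto recommendation:

```typescript
// Hypothetical system prompt that forces the agent to narrow the search
// space before any per-device calls, keeping it inside the rate budget.
const auditSystemPrompt =
  "You are an MDM audit assistant. Before checking installed software, " +
  "call list_devices once and filter the results to the smallest relevant " +
  "subset (for example, by OS version). Only call list_applications for " +
  "devices in that subset, never for the whole fleet.";

// Prepended to the messages array passed to agent.invoke()
const messages = [
  { role: "system", content: auditSystemPrompt },
  { role: "user", content: "Which Macs run a vulnerable Zoom version?" },
];
```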

Automated Offboarding Triage

Offboarding is a highly manual process prone to human error. You can connect a LangGraph workflow to listen for webhook events from your HRIS. When an employee is marked as terminated, the agent triggers automatically. It uses list_devices to find hardware assigned to that user, checks list_device_activity to ensure the device is currently online, and flags the asset for the IT team to process a remote wipe.
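The triage step itself is small once the tools are abstracted. In this sketch, listDevices and listDeviceActivity are hypothetical wrappers over the corresponding Truto tools; note the agent only flags devices, and a human approves the wipe:

```typescript
// Offboarding triage: find the departing user's hardware, check whether
// each device has recently checked in, and flag it for IT review.
type Asset = { id: string; serial: string };
type Activity = { type: string; occurredAt: string };

async function triageOffboarding(
  userEmail: string,
  listDevices: (email: string) => Promise<Asset[]>,
  listDeviceActivity: (deviceId: string) => Promise<Activity[]>
) {
  const assets = await listDevices(userEmail);
  const flagged = [];
  for (const asset of assets) {
    const activity = await listDeviceActivity(asset.id);
    const online = activity.some((a) => a.type === "checkin");
    flagged.push({ serial: asset.serial, online });
  }
  return flagged; // handed off to IT to approve the remote wipe
}
```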

Cross-Platform Fleet Visibility

If your organization uses Kandji for macOS and Microsoft Intune for Windows, you can bind tools from both platforms to the exact same agent. The LLM acts as the ultimate normalization layer, allowing you to ask, "Give me a breakdown of all offline devices across the entire company," without having to write separate aggregation logic for Apple and Microsoft endpoints.
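Mechanically, this just means flattening two fetched toolsets into one array before binding. The two fetchers below stand in for TrutoToolManager.getTools() calls against separate Kandji and Intune integrated accounts (account wiring omitted):

```typescript
// One agent, two MDMs: fetch both read-only toolsets concurrently and
// combine them into a single array for createReactAgent.
async function getFleetTools<T>(
  fetchKandjiTools: () => Promise<T[]>,
  fetchIntuneTools: () => Promise<T[]>
): Promise<T[]> {
  const [kandji, intune] = await Promise.all([
    fetchKandjiTools(),
    fetchIntuneTools(),
  ]);
  return [...kandji, ...intune]; // bind this combined array to the agent
}
```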

Next Steps for IT Automation

Connecting an LLM to an MDM is ultimately an exercise in translating natural language into safe, rate-limited, paginated API calls. Writing the underlying integration code yourself forces you to maintain complex schemas and handle vendor-specific API quirks indefinitely.

By leveraging Truto's /tools endpoint, you decouple your AI logic from the underlying API physics. Your agents get immediate access to real-time device telemetry, and your engineering team avoids the maintenance burden of building custom connectors.

FAQ

Can I use these Kandji tools with Vercel AI SDK instead of LangChain?
Yes. The Truto /tools endpoint returns standard JSON schemas that can be adapted for any agent framework, including Vercel AI SDK, CrewAI, or raw OpenAI API calls.
How do I prevent the AI agent from wiping a device?
When fetching tools from the Truto API, pass the methods=["read"] query parameter. This ensures the LLM only has access to list and get endpoints, preventing it from executing destructive actions such as a remote wipe.
Does Truto cache the Kandji device data?
No. The Proxy API passes the request directly to Kandji in real-time, ensuring your agent always has the most up-to-date device telemetry without storing sensitive MDM data at rest.
How does the agent handle Kandji's API rate limits?
Truto manages the rate limit headers at the proxy layer, while injected JSON schema descriptions instruct the LLM on exactly how to handle pagination cursors to minimize unnecessary API calls.
