Connect Kandji to ChatGPT: Automate Device Audits
Learn how to connect Kandji to ChatGPT using a managed MCP server. A step-by-step technical guide to automating MDM device audits, app compliance, and security logs.
IT and DevOps teams sit on a goldmine of device telemetry inside Kandji. When an incident occurs - say, a zero-day vulnerability requires you to audit installed applications across your entire macOS fleet - querying that data means either clicking through the Kandji dashboard manually or writing custom Python scripts against their API.
With OpenAI's rollout of Model Context Protocol (MCP) client support in ChatGPT's Developer Mode, you can give an LLM direct access to your Kandji tenant. You can literally ask ChatGPT to "find all Macs missing the latest OS update and list their assigned users."
Connecting ChatGPT to Kandji isn't as simple as pasting an API key into a prompt. You need an MCP server to translate the LLM's natural language requests into structured REST API calls. You could build and host a custom MCP server yourself, but then you are on the hook for managing Kandji's strict 10,000 requests-per-hour rate limit, handling pagination cursors, and writing comprehensive JSON schemas for every endpoint.
This guide shows you how to use Truto's SuperAI to instantly generate a managed MCP server for Kandji, connect it to ChatGPT, and execute complex device health and compliance audits using five core tools: list_devices, get_device, list_applications, list_device_activity, and list_parameters.
The Architectural Reality of the Kandji API
Before wiring up an AI agent to a Mobile Device Management (MDM) platform, you have to understand the shape of the underlying API.
Kandji's API is highly capable but heavily structured. It enforces a strict tenant-level rate limit of 10,000 requests per hour and 50 requests per second. If you unleash a naive AI agent on your Kandji instance and ask it to "audit all 5,000 devices," the LLM will likely attempt to fire off thousands of concurrent GET requests, instantly hitting the 429 Too Many Requests ceiling and crashing your integration.
Kandji paginates its list endpoints using limit and offset parameters. LLMs are notoriously bad at managing integer-based pagination state across multiple tool calls. If the model loses track of the offset, it will either skip devices or enter an infinite loop of fetching the same page.
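The pagination discipline a well-behaved client needs can be sketched as a small helper. This is a minimal illustration, not Truto's implementation: `fetch_page` is a hypothetical callable that in practice would wrap an authenticated GET to a Kandji list endpoint with the given offset and limit.

```python
def paginate(fetch_page, page_size=300):
    """Collect every record from a limit/offset endpoint.

    fetch_page(offset, limit) returns one page as a list; in practice
    it would wrap an authenticated GET against the Kandji API. A short
    page signals the end, so the loop cannot run forever -- the exact
    failure mode an LLM managing raw offsets tends to hit."""
    records, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        records.extend(page)
        if len(page) < page_size:  # last page reached
            return records
        offset += page_size
```

Keeping this state machine out of the model's hands is the point: the LLM should never be the component tracking `offset`.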
A production-grade MCP server must abstract these quirks away from the model. Native pre-built connectors are convenient until they hit these edge cases: they typically expose a tiny fraction of a vendor's API surface - usually just the basic read endpoints. If you need your AI agent to query custom Kandji parameters, parse deeply nested activity logs, or execute complex device actions, you are out of luck.
How Truto's Managed MCP Server Fixes the Boilerplate
Instead of forcing a unified data model or limiting your access, a managed MCP server exposes the raw vendor APIs directly to the LLM, but makes them intelligent.
graph TD
A[ChatGPT Developer Mode] -->|JSON-RPC 2.0 via HTTPS| B[Truto MCP Router]
B -->|Tool Call: list_devices| C[Truto Proxy API]
C -->|Handles Rate Limits & Auth| D[Kandji REST API]
D -->|Raw JSON Response| C
C -->|Transforms limit/offset to next_cursor| B
B -->|Structured Tool Result| A

Truto sits between ChatGPT and Kandji. When you connect your Kandji account to Truto, our system dynamically generates an MCP server endpoint. We read the underlying integration definitions and compile them into an array of highly descriptive MCP tools.
The LLM never sees your raw Kandji API token. It connects to a unique, cryptographically hashed Truto MCP URL. When ChatGPT calls a tool, all arguments arrive as a single flat object; the Truto MCP router uses the schemas' property keys to split that object into the correct query parameters and HTTP body, executes the request against Kandji, and returns the data.
Truto normalizes Kandji's pagination automatically. We inject a next_cursor property into the tool's query schema with explicit instructions telling the LLM to pass the cursor back unchanged. This prevents the model from hallucinating offsets and ensures reliable data retrieval.
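From the client's side, the contract is simple: echo the cursor back verbatim. A minimal sketch of what "pass the cursor back unchanged" means in practice; `call_tool` is a hypothetical stand-in for invoking an MCP list tool.

```python
def drain(call_tool):
    """Exhaust a cursor-paginated MCP tool.

    call_tool(cursor) returns (records, next_cursor). The cursor is an
    opaque token: it is passed back exactly as received, never decoded
    or parsed, which is the behavior the tool schema instructs the LLM
    to follow."""
    records, cursor = [], None
    while True:
        page, cursor = call_tool(cursor)
        records.extend(page)
        if not cursor:  # no cursor means the final page was returned
            return records
```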
Step-by-Step: Connecting Kandji to ChatGPT
Step 1: Generate a Kandji API Token
You need a scoped API token from your Kandji tenant.
- Log in to your Kandji admin console.
- Navigate to Settings -> Access.
- Scroll down to the API Token section and click Add Token.
- Name the token (e.g., "Truto MCP Integration") and click Create.
- Copy the token immediately - you will not be able to view it again.
- Click Configure to set permissions. For this audit workflow, grant GET access to Device details, Device list, Device Parameters, and Application list.
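Before handing the token to Truto, you may want to sanity-check it with a scripted call. The sketch below builds an authenticated request using only the standard library; the `/api/v1/devices` path follows Kandji's v1 REST API, and the subdomain and token in the usage comment are placeholders.

```python
import urllib.request

def kandji_request(subdomain, token, path):
    """Build an authenticated request against a Kandji tenant.

    Kandji's REST API lives at https://<subdomain>.api.kandji.io and
    expects the API token as a Bearer token. The request is returned
    unsent so callers decide when to fire it."""
    return urllib.request.Request(
        f"https://{subdomain}.api.kandji.io{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Smoke test (placeholders -- substitute your tenant and token):
# with urllib.request.urlopen(
#     kandji_request("your-subdomain", "your-token", "/api/v1/devices?limit=1")
# ) as resp:
#     print(resp.read())  # one device record if the token is valid
```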
Step 2: Create the MCP Server in Truto
Securely store that token in Truto and generate the MCP URL.
- In your Truto dashboard, navigate to Integrations and select Kandji.
- Connect your account using the API token and your Kandji tenant URL (e.g., https://<your-subdomain>.api.kandji.io).
- Once the account is linked, generate an MCP server:
- Via the UI: Go to the integrated account page, click the MCP Servers tab, click Create MCP Server, and select your desired configuration (name, allowed methods, expiry, etc.).
- Via the API: Make a POST request to https://api.truto.one/integrated-account/<integrated_account_id>/mcp with your desired config.
- Truto will output a unique endpoint URL: https://api.truto.one/mcp/<secure-token>.
Security Note: The Truto MCP URL contains a hashed token that acts as the authentication mechanism for the MCP client. Treat this URL like a password. Do not commit it to version control. For higher-security scenarios, Truto allows you to require additional API token authentication, forcing the caller to provide a valid Truto API token as a Bearer token.
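The API route from Step 2 can be scripted. This is a loose sketch only: the endpoint URL comes from the docs above, but the config keys (name, allowed methods, expiry mirror the UI options) are illustrative placeholders, so check Truto's API reference for the exact payload shape.

```python
import json
import urllib.request

def create_mcp_server(integrated_account_id, truto_api_token, config):
    """Build the POST that asks Truto to generate an MCP server.

    config carries the same options the UI offers (name, allowed
    methods, expiry); key names here are assumptions, not the
    documented schema. The request is returned unsent."""
    return urllib.request.Request(
        f"https://api.truto.one/integrated-account/{integrated_account_id}/mcp",
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Bearer {truto_api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```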
Step 3: Configure ChatGPT Developer Mode
OpenAI restricts custom MCP connectors to their Developer Mode, which is available on Pro, Plus, Business, Enterprise, and Edu plans.
- Open ChatGPT on the web.
- Go to Settings -> Apps -> Advanced settings and toggle Developer mode on.
- Click Create app (or Add MCP Server) under the custom connectors section.
- Name your connector (e.g., "Kandji MDM").
- Paste your Truto MCP URL into the Server URL field.
- Set Authentication to No Authentication (Truto handles the API auth via the token embedded in the URL).
- Save the configuration. ChatGPT will immediately perform an MCP handshake, call the tools/list endpoint, and register the available Kandji operations.
The Kandji Tool Inventory for AI Agents
When ChatGPT connects to the Truto MCP server, it gains access to specific tools derived from Kandji's API endpoints. The full list of available tools and their descriptions is documented on the Kandji integration page. Here are the five essential tools you need for device and application auditing, along with how the LLM interacts with them.
1. list_devices
Description: Retrieves a paginated list of all devices enrolled in your Kandji tenant.
When the LLM uses it: To get a high-level overview of the fleet, find a specific device by name, or identify devices running outdated operating systems.
Schema highlights: Accepts query parameters for filtering. Truto automatically appends limit and next_cursor to handle pagination safely.
{
"name": "list_devices",
"description": "List all devices in Kandji with details like serial number, model, and OS version.",
"inputSchema": {
"type": "object",
"properties": {
"platform": { "type": "string", "description": "Filter by Mac, iPad, iPhone, etc." },
"limit": { "type": "string", "description": "The number of records to fetch" },
"next_cursor": { "type": "string", "description": "The cursor to fetch the next set of records. Always send back exactly the cursor value you received without decoding, modifying, or parsing it." }
}
}
}

2. get_device
Description: Fetches the granular details of a specific device using its unique Kandji ID.
When the LLM uses it: After identifying a suspect device via list_devices, the model calls this tool to inspect its FileVault status, IP address, serial number, and exact enrollment state.
Schema highlights: Requires the id parameter. The response is a deep JSON object containing the full device posture.
3. list_applications
Description: Lists all applications installed on a specific managed device.
When the LLM uses it: To verify if a required security agent (like CrowdStrike or Tailscale) is installed, or to hunt for unauthorized shadow IT software.
Schema highlights: Requires the device id. Returns an array of app names, bundle identifiers, and version numbers.
4. list_device_activity
Description: Retrieves a chronological timeline of recent activities, commands, and status changes for a specific device.
When the LLM uses it: To investigate why a device fell out of compliance. The LLM can read the activity log to see if a user manually removed a profile or if an MDM command failed to execute.
Schema highlights: Requires the device id and supports pagination for deep historical audits.
5. list_parameters
Description: Lists all parameters and library items (profiles, scripts, custom apps) assigned to a specific device.
When the LLM uses it: To confirm that a device is receiving the correct baseline configurations from its assigned Kandji Blueprint.
Executing Real-World Audits with ChatGPT
Now that the plumbing is in place, you can interact with your MDM data conversationally. Because the LLM understands the schemas of the tools provided by the MCP server, it can chain multiple API calls together to answer complex questions.
Scenario A: The Zero-Day App Audit
Imagine a critical vulnerability is announced for a specific version of a developer tool. You need to know exactly who is running it.
Your Prompt:
"Audit all Mac devices in Kandji. Find any device that has 'Docker Desktop' installed. For those devices, tell me the version of Docker Desktop they are running, the device name, and the assigned user."
How ChatGPT executes this:
- Calls list_devices with a filter for the Mac platform.
- Iterates through the returned device IDs.
- For each ID, calls list_applications to pull the software inventory.
- Filters the application arrays in memory for "Docker Desktop".
- Synthesizes the final report and presents it to you in a clean markdown table.
Watch your rate limits: If you have 5,000 devices, asking the LLM to loop through every single one and call list_applications will burn through Kandji's 10,000 requests-per-hour limit rapidly. For massive fleets, instruct the LLM to filter the initial list_devices call as narrowly as possible before iterating.
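The same audit, done in code, shows where the pacing has to live. A minimal sketch, with assumptions flagged: `list_applications` is a hypothetical callable wrapping the per-device app-inventory call, the `app_name` field name is illustrative, and the 10 req/s default is just a safety margin under Kandji's 50 req/s cap.

```python
import time

def audit_app(device_ids, list_applications,
              target="Docker Desktop", per_second=10):
    """Scan per-device app inventories while pacing requests.

    list_applications(device_id) returns that device's app list (it
    would wrap the list_applications tool or Kandji endpoint). The
    default rate is an illustrative margin under Kandji's 50 req/s
    cap; a token bucket would pace more precisely than sleep()."""
    hits = {}
    for device_id in device_ids:
        matches = [a for a in list_applications(device_id)
                   if a.get("app_name") == target]  # field name assumed
        if matches:
            hits[device_id] = matches
        time.sleep(1.0 / per_second)  # crude throttle between calls
    return hits
```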
Scenario B: Investigating a Non-Compliant Machine
When an alert fires in your SIEM indicating a machine is missing its endpoint protection, you can use ChatGPT to investigate the root cause.
Your Prompt:
"Look up the device assigned to 'sarah.connor@company.com'. Check its current device parameters to see if the CrowdStrike profile is installed. Then, pull the recent device activity to see if there were any failed MDM commands in the last 48 hours."
How ChatGPT executes this:
- Calls list_devices filtering by the user's email to get the device ID.
- Calls list_parameters using that ID to verify the presence of the CrowdStrike configuration profile.
- Calls list_device_activity to pull the recent logs.
- Analyzes the JSON response for errors, such as "MDM command rejected" or "Profile installation failed".
- Summarizes the exact failure reason in plain English.
Architecting for Reliability
Exposing raw APIs to LLMs is inherently risky. Models hallucinate parameters, misinterpret nested JSON objects, and fail to handle network timeouts gracefully.
By using a managed MCP server like Truto, you enforce a strict contract between the AI and the API. The tools exposed to ChatGPT are gated by explicit query_schema and body_schema definitions derived directly from our internal documentation records. If an endpoint lacks proper documentation, Truto intentionally omits it from the MCP server. This acts as a quality gate, ensuring the LLM only attempts operations that are guaranteed to work.
Truto's proxy architecture means that every tool call benefits from enterprise-grade infrastructure. If Kandji returns a 503 Service Unavailable or a transient network error, the proxy handles the retry logic before the LLM even knows something went wrong. This prevents the model from generating confusing apologies and abandoning the workflow.
Strategic Next Steps for IT Automation
Connecting ChatGPT to Kandji transforms your MDM platform from a static dashboard into an interactive, agentic system. You no longer have to write custom Python scripts for one-off audits or manually cross-reference spreadsheets to find non-compliant devices.
Read-only audits are just the beginning. Once you are comfortable with the read operations, you can expose write methods - allowing the LLM to automatically move devices to quarantine Blueprints, trigger remote locks, or force OS updates based on natural language commands.
The bottleneck to AI automation is no longer the reasoning capability of the models; it is the integration layer. Stop wasting engineering cycles building and maintaining custom API wrappers.
FAQ
- What are the Kandji API rate limits?
- The Kandji API enforces a strict limit of 10,000 requests per hour per tenant, and 50 requests per second. When building AI agents, you must implement pagination and careful filtering to avoid hitting this ceiling.
- How do I enable Developer Mode in ChatGPT?
- ChatGPT Developer Mode is available on Pro, Plus, Business, Enterprise, and Edu plans. You can enable it by navigating to Settings -> Apps -> Advanced settings -> Developer mode.
- Do I need to write code to build a Kandji MCP server?
- No. Using a managed infrastructure layer like Truto's SuperAI, you can dynamically generate an MCP server from your Kandji API token without writing any custom integration code or managing server hosting.