Vibe Coding for IT

A design pattern (and category term coined by Serval) for building ITSM automations: IT admins describe an automation in natural language; an AI automation agent generates deterministic, auditable code that becomes a callable tool for a separate help-desk agent to invoke at runtime.

Terminology note: The press (TechCrunch) labeled the authoring side a “Builder agent.” The official Serval Product Security docs use “Automation agent” for the admin-facing workflow builder. The pattern is the same; “automation agent” is the term Serval prefers internally.

What It Is

The IT analog of “vibe coding” in software development — but with two opinionated constraints that distinguish it from generic LLM code generation:

  1. The output is deterministic code, not freeform LLM action. Code is reviewable, versionable, testable. The help-desk agent at runtime calls tools; it does not generate API calls inline.
  2. The code is the contract. Approval workflows, MFA requirements, scope limits, and time-bounds are encoded in code, not stored as agent prompt rules. Auditors trace behavior to source.
Traditional automation:        Drag-drop nodes → opaque rule graph
LLM-only automation:           Agent reasons → calls APIs directly (non-deterministic)
Vibe coding for IT:            NL prompt → builder agent → reviewable code → tool catalog → help-desk agent invokes deterministically
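
The “code is the contract” constraint can be sketched in a few lines. This is a hypothetical illustration, not Serval’s actual API: the names (GrantArgs, grantFigmaAccess, MAX_DURATION_DAYS) are invented, but the point holds — the scope limit and time-bound live in reviewable code, so the runtime agent can supply arguments but cannot change the rules.

```typescript
// Hypothetical sketch: a published tool whose guardrails are encoded
// in code, not stored as agent prompt rules. All names are illustrative.

interface GrantArgs {
  userEmail: string;
  role: "Viewer" | "Editor"; // scope limit: only these roles are expressible
  durationDays: number;
}

const MAX_DURATION_DAYS = 30; // time-bound encoded in code, visible to review

function grantFigmaAccess(args: GrantArgs): { ok: boolean; reason?: string } {
  if (args.durationDays > MAX_DURATION_DAYS) {
    return { ok: false, reason: `duration exceeds the ${MAX_DURATION_DAYS}-day cap` };
  }
  // ...call the provisioning API here; the help-desk agent can only
  // pass arguments into this function, never rewrite its body.
  return { ok: true };
}
```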

How It Works

  1. Author — IT admin types a description (“when an engineer asks for Figma access, check if peers have it, then provision Editor for 30 days”) in natural language.
  2. Generation — Automation agent reads relevant docs, screenshots, or APIs and emits TypeScript with tests (or code in an equivalent typed language).
  3. Review — IT admin reviews the code, can edit in plain English or directly in code, may run tests. Serval’s CLI lets the admin pull the workflow into a team.yaml + workflows/<slug>/{index.ts, workflow.yaml} tree, review/diff/branch in Git, then push back.
  4. Publish — the code becomes a named tool in the org’s tool catalog.
  5. Invocation — the help-desk agent (a separate agent) decides which tool fits the user’s request and calls it with structured arguments. No LLM sits in the runtime path unless the IT admin explicitly inserted an LLM step in the workflow body; by default, execution is fully deterministic.
  6. Audit — every invocation is logged with arguments, approver, duration, and outcome. Versioning is tracked per-publish (timestamps, authors, restore).
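
As a rough sketch of what the generated code might look like for the step-1 prompt (“check if peers have it, then provision Editor for 30 days”), assuming invented names throughout (peersHaveFigma, provisionFigma, AuditEntry) — this is not Serval’s emitted code, just the shape of it: a deterministic check, a deterministic outcome, and an audit record per invocation.

```typescript
// Hypothetical sketch of a generated workflow: deterministic steps plus
// an audit entry per run. All identifiers are illustrative.

type AuditEntry = { tool: string; args: unknown; outcome: string; at: string };
const auditLog: AuditEntry[] = [];

// Deterministic peer check: does any teammate already hold Figma?
function peersHaveFigma(requester: string, peerApps: Record<string, string[]>): boolean {
  return Object.entries(peerApps).some(
    ([user, apps]) => user !== requester && apps.includes("figma")
  );
}

function provisionFigma(requester: string, peerApps: Record<string, string[]>): string {
  const outcome = peersHaveFigma(requester, peerApps)
    ? "provisioned Editor for 30 days"
    : "escalated: no peer precedent";
  // Every invocation is logged with arguments and outcome (step 6).
  auditLog.push({ tool: "provision_figma", args: { requester }, outcome, at: new Date().toISOString() });
  return outcome;
}
```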

Why It Matters

  • Bounds non-determinism to invocation, not execution. The agent may pick the wrong tool, but it cannot “improvise” what the tool does.
  • Imports software-engineering primitives into IT operations — code review, version control, tests, types — historically absent from ITSM admin work.
  • Scales by tool catalog, not by agent prompt. New behavior = new tool, not a longer system prompt.
  • Defends against rogue-AI risk. A common enterprise objection (“the agent will do something terrible”) is mitigated because every “tool” is finite, reviewed, and code-defined.
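
The “scales by tool catalog” and “bounds non-determinism to invocation” points can be shown together in a minimal sketch. The catalog shape and tool names below are assumptions for illustration: the agent’s only degree of freedom is which catalog entry it names; what each entry does is fixed, reviewed code.

```typescript
// Hypothetical sketch: new behavior = new catalog entry, not a longer
// system prompt. Catalog shape and tool names are illustrative.

type Tool = { description: string; run: (args: Record<string, string>) => string };

const toolCatalog = new Map<string, Tool>([
  ["reset_mfa", { description: "Reset a user's MFA factors", run: (a) => `MFA reset for ${a.user}` }],
  ["grant_figma", { description: "Grant 30-day Figma Editor access", run: (a) => `Figma Editor granted to ${a.user}` }],
]);

// The help-desk agent may name the wrong tool, but it cannot improvise
// what a tool does once selected.
function invoke(name: string, args: Record<string, string>): string {
  const tool = toolCatalog.get(name);
  if (!tool) return `unknown tool: ${name}`;
  return tool.run(args);
}
```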

Trade-offs

  • Output language is a strong opinion. TypeScript-with-tests is great for engineering-adjacent IT teams; intimidating for traditional IT admins. Adoption hinges on whether the natural-language wrapper is good enough that IT admins rarely touch the code directly.
  • Automation-agent governance question. Who is allowed to vibe-code which automations? Serval’s RBAC model (5 team roles) restricts custom-workflow authoring to Builder and Manager roles, but multi-tenant authoring policy across many teams remains lightly documented.
  • API understanding is the bottleneck. “Vibe coding” works best when the upstream API is well-documented; it is brittle against undocumented or unstable APIs.
  • Counterposition: hide the code entirely. Console takes the alternative approach — natural-language policy blocks with no exposed code surface. Both can work; the choice signals which buyer persona the product courts (engineer-IT vs ops-IT).
  • Credential and scope isolation matters. Serval’s integration proxy keeps API keys server-side and fixes API scope at integration setup so workflow code can’t widen access. Without this kind of guarantee, “let the AI write code” devolves quickly into “the AI has the keys to everything.”
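
The scope-isolation idea in the last bullet can be sketched as a proxy whose key and scope set are fixed at construction. This is an illustration of the concept, not Serval’s implementation; the class and method names are invented.

```typescript
// Hypothetical sketch of an integration proxy: the API key stays
// server-side and the scope set is frozen at integration setup, so
// workflow code cannot widen access. All names are illustrative.

class IntegrationProxy {
  // Private fields: workflow code never reads the key and cannot
  // mutate the scope set after construction.
  constructor(private apiKey: string, private scopes: ReadonlySet<string>) {}

  call(scope: string, endpoint: string): string {
    if (!this.scopes.has(scope)) {
      return `denied: workflow requested scope "${scope}" outside the integration grant`;
    }
    // ...attach this.apiKey and forward the request to `endpoint` here.
    return `ok: ${endpoint} called with scope ${scope}`;
  }
}
```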

Worked Examples Shown Publicly

  • JNUC 2025: screenshot-of-Jamf-Pro → working Mac-IT automation in <60 seconds.
  • Apr 2026 sales demo: Reset MFA Factors workflow generated from NL → TypeScript + visual step flow.
  • Installable Okta-grouped templates shown in the demo: List/Get/Assign/Unassign Okta Apps, Create Okta Group, etc. — these are the “factory floor” of the Automation agent’s tool catalog.

Notable Practitioners

  • Serval — coined the term and ships TypeScript-with-tests as the output. Demonstrates “screenshot-to-tool in <60 seconds” against Jamf Pro APIs.
  • Console — adjacent pattern with Console Assistant (NL → policy block), but it hides the code surface from the user.
  • Internal LLM-on-Bash patterns at AI-forward companies — informal precursors.