Automation Audit Agent

Describe any workflow. Get quantified savings and cost estimates, a payback timeline, interactive what-if modelling, and a prioritised execution plan with solution design.

SELECT TYPE

Audit Sequence

  1. Describe workflow

    Describe your workflow and generate an automation audit.

  2. Review audit results

    Check savings estimates and opportunities on the Results tab.

  3. Save to portfolio

    Save audits from the Results tab to rank and compare workflows.

Why run this audit?

Enter a workflow to generate:

  • Quantified savings range (P10/P50/P90) & cost estimates
  • ROI timeline & confidence scoring
  • What-if modelling
  • Ranked automation opportunities
  • Proposed solution design

Describe your workflow

Example input

Trigger: Customer support email arrives.

Process: Triage > route > draft > QA > send (~1,200 tickets/month at ~$45/hour blended).

Systems: Gmail, Zendesk, Notion KB.

Rules: Keep human approval on sensitive cases; no auto-send without review.

Result: Automate triage + first-response drafting while keeping CSAT stable.

Use this format for optimal results

Trigger: {What starts the workflow}

Process: {Actual step-by-step flow + monthly volume + blended labor cost assumptions}

Systems: {Systems in scope + real constraints}

Rules: {Approval, compliance, and escalation guardrails that actually apply}

Result: {Desired business outcome + success criteria}

Your workflow

RUN AUDIT

MODEL SELECTION

OVERVIEW

A workflow-audit agent that converts one free-text process brief into a decision-ready automation report. Each run returns modelled P10/P50/P90 savings, an expected payback horizon, a confidence score, ranked opportunities, and a dependency-ordered execution sequence with rollout guardrails. Users can adjust assumptions in the What-If controls, save snapshots into a comparable portfolio, and download a server-rendered PDF report of the active scenario.
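The payback horizon is derived from implementation cost against the savings quantiles. A minimal TypeScript sketch of that arithmetic, assuming a simple cost-over-monthly-savings model; the interface fields and sample figures are illustrative, not the agent's actual schema:

```typescript
// Hypothetical shape of the modelled savings quantiles (dollars per month).
interface SavingsQuantiles {
  p10: number; // conservative monthly savings
  p50: number; // median monthly savings
  p90: number; // optimistic monthly savings
}

interface PaybackRange {
  bestCaseMonths: number;  // payback if P90 savings materialise
  expectedMonths: number;  // payback at the P50 estimate
  worstCaseMonths: number; // payback if only P10 savings materialise
}

function paybackRange(implementationCost: number, s: SavingsQuantiles): PaybackRange {
  const months = (monthlySavings: number) =>
    monthlySavings > 0 ? implementationCost / monthlySavings : Infinity;
  return {
    bestCaseMonths: months(s.p90),
    expectedMonths: months(s.p50),
    worstCaseMonths: months(s.p10),
  };
}

// e.g. a $12,000 build against $2,000/month median savings pays back in 6 months
const r = paybackRange(12_000, { p10: 1_000, p50: 2_000, p90: 4_000 });
```

Reporting the payback as a range rather than a single number is what lets the quantile framing carry through to the timeline.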

ARCHITECTURE

The implementation combines a typed React client with Next.js API routes for generation and export. `/api/audit` validates payloads, applies per-IP and per-session rate limits, attempts provider generation with model selection and fallback handling, enforces schema and consistency checks, and streams section events over SSE. The client merges streamed data with deterministic post-processing (scenario recalculation, confidence reasoning, execution planning, and guardrail mapping). Portfolio snapshots persist in localStorage for ranking and comparison, while `/api/audit/pdf` renders a governed PDF document for the active audit state.

FUNCTIONALITY

  • Free-text workflow intake with explicit model selection (or auto-routing)
  • SSE-based progressive loading across workflow, assumptions, metrics, opportunities, and design sections
  • P10/P50/P90 monthly savings estimates with expected annualised framing
  • Expected payback range derived from implementation cost versus savings quantiles
  • Confidence score with factorized rationale and methodology/source transparency
  • What-If controls that recompute metrics live from active scenario assumptions
  • Task-level breakdown, ranked opportunities, and dependency-aware execution sequence
  • Guardrails + success-metric mapping and proposed-design traceability table
  • Portfolio dashboard with save, rank, compare, load, and delete snapshot workflows
  • Downloadable server-rendered PDF export for the current audit state
  • Server-side rate limiting with explicit 429 + Retry-After contract
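The 429 + Retry-After contract in the last bullet could be satisfied by a limiter along these lines. This is a minimal in-memory, fixed-window sketch; the window length, request cap, and keying strategy here are assumed values, not the route's real configuration:

```typescript
interface LimitDecision {
  allowed: boolean;
  retryAfterSeconds?: number; // populated only when a 429 should be returned
}

const WINDOW_MS = 60_000;  // assumed 60-second window
const MAX_REQUESTS = 5;    // assumed per-key cap
const windows = new Map<string, { start: number; count: number }>();

function checkLimit(key: string, now: number = Date.now()): LimitDecision {
  const w = windows.get(key);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(key, { start: now, count: 1 });
    return { allowed: true };
  }
  if (w.count < MAX_REQUESTS) {
    w.count += 1;
    return { allowed: true };
  }
  // Over the limit: tell the client exactly when the window resets,
  // so it can populate the Retry-After header on the 429 response.
  const retryAfterSeconds = Math.ceil((w.start + WINDOW_MS - now) / 1000);
  return { allowed: false, retryAfterSeconds };
}
```

Keying by both IP and session, as the architecture notes describe, simply means calling `checkLimit` twice with different keys and denying if either check fails.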

HOW IT WORKS

A user submits a process brief from the project page. The audit API validates input, applies rate limits, generates a structured result through provider routing, then validates and normalises the payload before streaming section events to the UI. As data arrives, the client computes deterministic layers such as confidence factors, scenario math, task breakdown, execution sequencing, and guardrail tables. The user can tune assumptions, save the result into Portfolio for side-by-side prioritisation, or export the current state as a PDF for stakeholder review.
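The What-If recomputation step above can be sketched as a pure function over scenario assumptions, using the example brief's numbers (~1,200 tickets/month at a ~$45/hour blended rate). The formula and field names are illustrative assumptions, not the agent's real scenario model:

```typescript
// Hypothetical scenario inputs that What-If sliders would control.
interface ScenarioAssumptions {
  monthlyVolume: number;      // items processed per month
  minutesSavedPerItem: number;
  blendedHourlyRate: number;  // $/hour
  automationCoverage: number; // 0..1 share of items automated
}

function monthlySavings(a: ScenarioAssumptions): number {
  const hoursSaved =
    (a.monthlyVolume * a.automationCoverage * a.minutesSavedPerItem) / 60;
  return hoursSaved * a.blendedHourlyRate;
}

// 1,200 tickets × 70% coverage × 6 min saved = 84 hours, at $45/hour
const base = monthlySavings({
  monthlyVolume: 1200,
  minutesSavedPerItem: 6,
  blendedHourlyRate: 45,
  automationCoverage: 0.7,
});
```

Because the recomputation is deterministic and purely client-side, adjusting a slider updates metrics instantly without another provider call.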

OUTCOMES

  • Turns vague process descriptions into explicit automation decisions with defensible assumptions
  • Improves trust through transparent confidence/method signals and deterministic post-processing
  • Supports faster prioritisation across multiple workflows via portfolio ranking and compare flows
  • Produces shareable audit artifacts through a governed PDF export of the active scenario state
  • Maintains operational safety with explicit validation, fallback paths, and rate-limit contracts