
The Pipeline

Dino processes your API in four stages. Each stage is isolated, testable, and deterministic.
1. Discovery

Dino connects to your GraphQL endpoint and runs introspection. It captures the full schema — types, queries, mutations, subscriptions — and enumerates every operation.
  • GraphQL introspection query against your endpoint
  • Schema capture: all types, fields, arguments, directives
  • Operation enumeration: every query and mutation extracted
  • Schema snapshot saved for future diffing
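The enumeration step above can be sketched as a walk over the standard GraphQL introspection shape. The interfaces and the `enumerateOperations` helper below are illustrative assumptions, not Dino's actual types:

```typescript
// Minimal slice of the standard GraphQL introspection result.
// Field and type names here are illustrative sample data.
interface IntrospectionField { name: string }
interface IntrospectionType { name: string; fields: IntrospectionField[] | null }
interface IntrospectionResult {
  queryType: { name: string };
  mutationType: { name: string } | null;
  types: IntrospectionType[];
}

// Every field on the root Query and Mutation types is one operation.
function enumerateOperations(schema: IntrospectionResult): string[] {
  const ops: string[] = [];
  for (const root of [schema.queryType.name, schema.mutationType?.name]) {
    const type = schema.types.find(t => t.name === root);
    for (const field of type?.fields ?? []) {
      ops.push(`${root}.${field.name}`);
    }
  }
  return ops;
}

// Example: two queries and one mutation become three operations.
const sample: IntrospectionResult = {
  queryType: { name: "Query" },
  mutationType: { name: "Mutation" },
  types: [
    { name: "Query", fields: [{ name: "user" }, { name: "orders" }] },
    { name: "Mutation", fields: [{ name: "createOrder" }] },
  ],
};
console.log(enumerateOperations(sample));
// ["Query.user", "Query.orders", "Mutation.createOrder"]
```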
2. Agent Execution

Six autonomous agents run in parallel against every operation. Each agent tests one quality dimension — input handling, response correctness, access control, rate limiting, error formatting, and deprecation tracking.

No agent depends on another agent’s output. They run concurrently and produce independent findings. A failure in one agent never blocks or corrupts another.

See The 12 Agents for details.
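That isolation guarantee can be sketched with `Promise.allSettled`, which records each agent's outcome independently so one rejection never propagates to its siblings. The `Agent` and `Finding` shapes are illustrative assumptions, not Dino's actual API:

```typescript
// Illustrative agent interface: each agent scans one operation
// and returns its own findings.
interface Finding { agent: string; message: string }
interface Agent {
  name: string;
  run(operation: string): Promise<Finding[]>;
}

// allSettled lets every agent finish on its own: a crash in one
// agent is recorded as a finding, never thrown at the others.
async function runAgents(agents: Agent[], operation: string): Promise<Finding[]> {
  const results = await Promise.allSettled(agents.map(a => a.run(operation)));
  const findings: Finding[] = [];
  for (const [i, result] of results.entries()) {
    if (result.status === "fulfilled") {
      findings.push(...result.value);
    } else {
      findings.push({ agent: agents[i].name, message: `agent crashed: ${result.reason}` });
    }
  }
  return findings;
}
```

Using `Promise.all` here instead would reject the whole batch on the first agent failure, which is exactly the coupling the pipeline avoids.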
3. Catalog

Findings from all agents are merged into a unified operation catalog. Each operation gets:
  • A health score (0-100) based on findings across all dimensions
  • Metadata: argument types, return types, deprecation status
  • AI-generated descriptions (additive — the catalog exists with or without them)
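A score in that 0-100 range could be computed by deducting a per-finding penalty. The severity weights below are assumptions for illustration, not Dino's documented formula:

```typescript
// Hypothetical severity levels and penalty weights; the real
// weighting used by Dino is not specified in these docs.
type Severity = "info" | "warning" | "error";
interface Finding { severity: Severity }

const PENALTY: Record<Severity, number> = { info: 0, warning: 5, error: 20 };

// Start from a perfect score, subtract each finding's penalty,
// and clamp to the documented 0-100 range.
function healthScore(findings: Finding[]): number {
  const deducted = findings.reduce((sum, f) => sum + PENALTY[f.severity], 0);
  return Math.max(0, 100 - deducted);
}
```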
4. Report

The catalog is rendered into JSON or Markdown. Every report includes the schema snapshot it was generated from, so you can diff between runs and see exactly what changed.
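Because each report embeds its snapshot, a between-run diff reduces to comparing operation lists. A minimal sketch, assuming a snapshot carries its enumerated operations (names are illustrative):

```typescript
// Assumed snapshot shape: the operations captured at scan time.
interface Snapshot { operations: string[] }

// Report operations that appeared or disappeared between two runs.
function diffSnapshots(prev: Snapshot, next: Snapshot) {
  const before = new Set(prev.operations);
  const after = new Set(next.operations);
  return {
    added: next.operations.filter(op => !before.has(op)),
    removed: prev.operations.filter(op => !after.has(op)),
  };
}
```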

Package Architecture

Dino is a monorepo with strict dependency direction. Dependencies flow one way only.

@dino/core

Schema types, shared interfaces, deterministic primitives (Clock, Timer, RandomSource). Zero dependencies on anything above it.
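As a sketch of what such primitives look like — assuming interfaces along these lines; the actual signatures in @dino/core may differ — injecting a fixed clock and a seeded generator is what makes runs reproducible:

```typescript
// Assumed shapes for two of the deterministic primitives.
interface Clock { now(): number }
interface RandomSource { next(): number } // value in [0, 1)

// A fixed clock removes wall-time from test output.
const fixedClock: Clock = { now: () => 1700000000000 };

// A seeded linear congruential generator: the same seed always
// yields the same sequence, so scans are replayable.
function seededRandom(seed: number): RandomSource {
  let state = seed >>> 0;
  return {
    next() {
      state = (state * 1664525 + 1013904223) >>> 0; // LCG step
      return state / 2 ** 32;
    },
  };
}
```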

@dino/plugins

The six agent implementations. Each plugin tests one quality dimension. Depends only on core types.

@dino/agents

Agent orchestration — parallel execution, finding aggregation, health score computation.

@dino/cli

The dino command. Config loading, agent wiring, output rendering. Top of the dependency chain.

@dino/reasoning

AI-powered descriptions and explanations. Additive — everything works without it.

@dino/analytics

Event tracking, historical comparisons, regression detection across scan runs.

The strict dependency direction (cli → agents → plugins → core) is enforced at build time. A plugin cannot import from an agent. An agent cannot import from the CLI.
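A build-time check of that kind can be sketched as a layering rule over each package's declared dependencies. How Dino actually enforces this is not specified here; the check below is an assumption, with the layer order taken from the documented chain:

```typescript
// Layer index per package, lowest first, from the documented
// chain cli -> agents -> plugins -> core.
const LAYER: Record<string, number> = {
  "@dino/core": 0,
  "@dino/plugins": 1,
  "@dino/agents": 2,
  "@dino/cli": 3,
};

// A package may only import from strictly lower layers; anything
// sideways or upward is a violation.
function violations(deps: Record<string, string[]>): string[] {
  const bad: string[] = [];
  for (const [pkg, imports] of Object.entries(deps)) {
    for (const dep of imports) {
      if (LAYER[dep] >= LAYER[pkg]) bad.push(`${pkg} -> ${dep}`);
    }
  }
  return bad;
}
```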

What This Means for You

  • The pipeline is deterministic. Same API state in, same findings out. See Deterministic Engine.
  • Agents run in parallel. A six-agent scan is one pass with six concurrent workers, not six sequential passes.
  • Built for pipelines, not dashboards: JSON output, exit codes, and schema snapshots for diffing.
  • Every output is a file you own. JSON reports, Markdown docs, schema snapshots — stored in your repo. If you stop using Dino, your data stays.