Technical Review Pack

Public technical material for reviewing VARUX control surfaces.

This page exposes the public artifacts technical buyers and engineers usually inspect first: integration boundaries, policy objects, decision outcomes, and audit record shape.

  • Database write-path boundaries
  • Agent tool and approval boundaries
  • Policy and evidence objects

Public Scope

Enough detail to review the model before a pilot conversation.

This pack stays honest about scope. It contains public technical material only. Environment-specific deployment detail, benchmarks, and customer-specific workflows are discussed during direct review.

Included

  • Boundary placement and request path examples.
  • Shared policy object structure.
  • Audit and decision record examples.

Not Included

  • Customer-specific topology assumptions.
  • Invented benchmarks or private references.
  • Deployment promises detached from scope.

Best Use

  • Initial engineering fit evaluation.
  • Cross-functional review with security and platform teams.
  • Scoping a first controlled pilot boundary.

Review Angles

Four things this pack should answer quickly.

The material here is designed to remove ambiguity around where control lives, what policy decides, and how evidence survives review.

Boundary Placement

Where AXIS or ARBITER sits relative to the request path, approval services, and downstream systems.

Policy Inputs

The fields used to classify a request, match policy, and decide whether to allow, block, or require approval.

Decision Outcomes

What changes when the outcome is allow, block, or approval-required, and which assumptions stay out of band.

Evidence Fields

The actor, surface, policy, approval, and execution data preserved for review and retention.

Integration Guide

How the control boundary is inserted without hand-waving.

Both products follow the same operating sequence even though the request surface changes: classify the action, evaluate policy, attach approval if needed, then preserve evidence.

Shared insertion checklist

What matters most in the first architecture review is not the total platform diagram. It is the exact point where state-changing work crosses from intent into execution.

  • Identify the risky request path
  • Define the actor and target surface
  • Attach approval without widening overrides
  • Record evidence before release

01. Capture the request

Collect the minimum identity, target, and context fields needed to evaluate the action without guessing.

02. Normalize the surface

Reduce database writes or agent tool calls into a stable shape the policy engine can classify consistently.

03. Decide and gate

Return allow, block, or require-approval using explicit policy objects and narrow approval rules.

04. Emit evidence

Persist the decision record with reason codes, approval lineage, and execution timing before the action is treated as complete.
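The four steps above can be sketched end to end. This is an illustrative shape only, not a product API: `Request`, `gate`, and the simplified matching are assumptions, with field names borrowed from the policy and audit samples later in this pack.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Request:
    # 01 Capture: only the identity, target, and context fields
    # needed to evaluate the action without guessing.
    actor: str
    surface: str              # 02 Normalize: a stable surface string, e.g. "postgres.write"
    target: str
    context: dict = field(default_factory=dict)

def gate(request: Request, policies: list) -> dict:
    # 03 Decide: first matching policy wins; no match falls through to block.
    decision, policy_id, reason = "block", None, "NO_MATCHING_POLICY"
    for p in policies:
        tables = p["match"].get("table")
        if p["surface"] == request.surface and (tables is None or request.target in tables):
            decision = p["decision"]["effect"]
            policy_id = p["id"]
            reason = p["evidence"]["reason_code"]
            break
    # 04 Emit evidence before the action is treated as complete.
    return {
        "actor": request.actor,
        "surface": request.surface,
        "target": request.target,
        "policy_id": policy_id,
        "decision": decision,
        "reason_code": reason,
        "evidence_timing": "pre_exec",
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

Note the no-match branch: an unclassified request is blocked and still produces a decision record, so evidence exists even when nothing executes.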

Policy Model

Shared object shape, surface-specific matching.

AXIS and ARBITER do not pretend every surface is the same. They do share the same decision grammar: scoped match inputs, explicit effect, and reviewable evidence fields.

AXIS Example

Database write-path rule

AXIS policies typically classify statement type, scope, environment, and approval requirements before any write reaches production state.

{
  "id": "axis.write_requires_approval",
  "surface": "postgres.write",
  "match": {
    "statement_class": ["UPDATE", "DELETE", "DDL"],
    "table": ["customers", "accounts"],
    "environment": ["production"]
  },
  "decision": {
    "effect": "require_approval",
    "approval_scope": "customers.balance"
  },
  "evidence": {
    "reason_code": "WRITE_REQUIRES_APPROVAL",
    "log_timing": "pre_exec"
  }
}

ARBITER Example

Agent tool rule

ARBITER policies usually classify agent identity, requested tool, target resource, and whether the action can proceed without a narrow approval path.

{
  "id": "arbiter.tool.requires_human",
  "surface": "agent.tool_call",
  "match": {
    "agent_id": ["ops-assistant"],
    "tool": ["github.merge_pull_request"],
    "environment": ["production"]
  },
  "decision": {
    "effect": "require_approval",
    "approval_scope": "repo.merge"
  },
  "evidence": {
    "reason_code": "TOOL_CALL_REQUIRES_APPROVAL",
    "log_timing": "pre_exec"
  }
}

Shared invariants across both products: explicit default-deny behavior under incomplete context, stable reason codes per decision path, and evidence fields that survive downstream incident review.
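The match semantics the two examples imply can be sketched as a conjunction over all match fields with default-deny on missing context. The function below is illustrative, not the product's evaluator; field names follow the sample policies above.

```python
def matches(policy: dict, request: dict) -> bool:
    """True only when every match field is satisfied: AND across fields,
    OR within each field's allowed list. A field missing from the request
    fails the match, which keeps behavior default-deny under incomplete
    context."""
    if policy["surface"] != request.get("surface"):
        return False
    return all(request.get(key) in allowed
               for key, allowed in policy["match"].items())
```

Because `request.get(key)` returns `None` for absent fields, a request that omits any matched field (say, no `environment`) cannot satisfy the rule, so nothing is silently allowed on partial context.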
Audit Schema

Decision records are part of the product, not an afterthought.

Reviewability breaks when the evidence trail depends on post-event reconstruction. The sample below shows the minimum fields we treat as first-class in a decision record.

Database Example

AXIS decision record

{
  "request_id": "req_92af01",
  "surface": "postgres.write",
  "actor": "app_role",
  "target": "customers.balance",
  "policy_id": "axis.write_requires_approval",
  "decision": "require_approval",
  "reason_code": "WRITE_REQUIRES_APPROVAL",
  "approval_state": "pending",
  "evidence_timing": "pre_exec",
  "trace_id": "8b14e2c1c0a34a4aa1726f5d"
}

  • The decision record exists before the write is released.
  • Approval state is attached to the same record rather than a separate informal note.
  • Reason codes stay stable enough for review and retention policy.

Agent Example

ARBITER action record

{
  "request_id": "req_4b0d3a",
  "surface": "agent.tool_call",
  "agent_id": "ops-assistant",
  "tool": "github.merge_pull_request",
  "user_scope": "release-manager",
  "policy_id": "arbiter.tool.requires_human",
  "decision": "require_approval",
  "reason_code": "TOOL_CALL_REQUIRES_APPROVAL",
  "approval_state": "pending",
  "evidence_timing": "pre_exec",
  "session_id": "sess_f5ac12"
}

  • Agent identity, user scope, and tool request stay in one reviewable object.
  • The approval chain is attached to the action, not stored as an external assumption.
  • Execution outcome can be appended without changing the original decision basis.
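The last point, appending the execution outcome without touching the original decision basis, can be sketched as a copy-on-append helper. `append_outcome` is a hypothetical name for illustration, not a product API.

```python
from datetime import datetime, timezone

def append_outcome(record: dict, outcome: str) -> dict:
    """Return a new record with the execution outcome appended.
    The original decision fields are copied, never rewritten, so the
    decision basis reviewers audit stays intact."""
    if "execution" in record:
        raise ValueError("execution outcome already recorded")
    return {**record, "execution": {
        "outcome": outcome,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }}
```

Returning a new dict rather than mutating the input means the pre-execution record and the post-execution record can both be retained, which is what makes the evidence trail reviewable after the fact.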

Checklist

What to bring into the first review.

The best first conversation is narrow and technical. It should be obvious which boundary matters and what evidence the team expects to keep.

Bring this

  • The risky write path or agent tool action you want to control first.
  • The actor model behind that action: service, human, or agent.
  • The approval or exception path that exists today.

Ask this

  • Where exactly does the control point sit?
  • Which policy inputs are mandatory before release?
  • How is evidence preserved if the action is denied or paused?

Do not assume

  • That one pilot surface automatically means total platform coverage.
  • That approval rules can stay informal if the action is high-risk.
  • That post-event logs are enough for evidence-grade review.

Expect next

  • A scope confirmation around one boundary worth proving.
  • Requests for topology, actor, and approval-path detail.
  • Environment-specific material only after the boundary is real.

Next Step

Need environment-specific detail?

Use the public material here to decide whether a direct technical review is worth the time. If it is, bring one real boundary rather than a generic platform request.

Public review pack only. Direct contact: contact@varuxcyber.com