
Integration Guide

OpenAI Agents SDK

Connect OpenAI Agents SDK to OSuite and get your first governed action into /decisions in under 20 minutes.

Instance URL detected: https://studio.osuite.ai

Governance context

Runtime class

Code-First Single-Agent Runtime

A self-built agent where the customer owns the orchestration code and can add guard calls directly.

Recommended surfaces

Embedded Runtime SDK / Package

Instrument customer-controlled runtimes, tools, and orchestrators from inside the code.

Control Plane

Run policy, approvals, replay, and evidence management as the operator system of record.

Typical governance range

Advisory Governance -> Approval-Orchestrated Governance -> Runtime-Enforced Governance

Self-host and future SaaS are deployment model choices for the OSuite control plane. They do not determine governance level by themselves.

1. Deploy OSuite

Get a running instance. Click the Vercel deploy button or run locally.

Already have an instance? Skip to Step 2.

2. Install the OSuite SDK

Add the OSuite Node.js SDK to your agent project.

Terminal

npm install osuite dotenv

3. Set environment variables

Create a .env file in your agent project root.

.env

OSUITE_BASE_URL=https://studio.osuite.ai
OSUITE_API_KEY=<your-workspace-api-key>
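
A missing variable will otherwise surface later as a confusing API error, so it can help to fail fast at startup. This is a minimal illustrative sketch — the `missingEnv` helper is not part of the OSuite SDK:

```javascript
// Minimal sketch: fail fast when required configuration is missing.
// missingEnv is an illustrative helper, not part of the OSuite SDK.
function missingEnv(required, env = process.env) {
  return required.filter((name) => !env[name]);
}

const missing = missingEnv(['OSUITE_BASE_URL', 'OSUITE_API_KEY']);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(', ')}`);
}
```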

4. Add the governance loop to your agent

Wrap your agent's tool execution in OSuite's guard-record-outcome pattern. This annotated walkthrough shows the complete governance loop inline — each comment explains the purpose of that SDK call.

governed-agent.js

import 'dotenv/config';
import { OSuite } from 'osuite';

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled Rejection:', reason);
  process.exit(1);
});

const osuite = new OSuite({
  baseUrl: process.env.OSUITE_BASE_URL,
  apiKey: process.env.OSUITE_API_KEY,
  agentId: 'my-openai-agent',
});

// 1. GUARD: Check policy before acting
const decision = await osuite.guard({
  action_type: 'data_export',
  declared_goal: 'Export customer report to CSV',
  risk_score: 45,
  systems_touched: ['customer_database'],
});
console.log('Guard decision:', decision.decision);

// Act on the guard result: stop here if policy does not allow the action.
// The exact decision values depend on your policy configuration.
if (decision.decision === 'deny') {
  console.error('Action blocked by policy; nothing will be executed.');
  process.exit(1);
}

// 2. RECORD: Declare intent before executing
const action = await osuite.createAction({
  action_type: 'data_export',
  declared_goal: 'Export customer report to CSV',
  risk_score: 45,
});
const actionId = action.action?.action_id || action.action_id;

// ... perform the actual export here ...

// 3. OUTCOME: Report the result so the ledger entry is complete
await osuite.updateOutcome(actionId, {
  status: 'completed',
  output_summary: 'Exported 150 customer records to report.csv',
});

console.log('Decision recorded:', actionId);

This inline walkthrough covers the complete guard-record-outcome governance loop. For a full example with OpenAI Agents SDK tools, scan, and delete operations, see examples/openai-agents-governed/ in the repo.
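
The walkthrough above shows only the happy path. One way to make sure failed tool calls still land in the ledger is to wrap the record-and-outcome pair around the tool call. This is an illustrative sketch, not an official OSuite helper — it reuses the createAction/updateOutcome calls shown above and assumes the ledger accepts a 'failed' status:

```javascript
// Illustrative wrapper (not an official OSuite helper): record the action,
// run the tool, and report 'completed' or 'failed' back to the ledger.
async function runGoverned(client, params, toolFn) {
  const action = await client.createAction(params);
  const actionId = action.action?.action_id || action.action_id;
  try {
    const result = await toolFn();
    await client.updateOutcome(actionId, {
      status: 'completed',
      output_summary: String(result),
    });
    return result;
  } catch (err) {
    // Assumption: the ledger accepts a 'failed' status; check your SDK version.
    await client.updateOutcome(actionId, {
      status: 'failed',
      output_summary: err.message,
    });
    throw err;
  }
}
```

With this shape, every tool call produces exactly one ledger entry whether it succeeds or throws, and the original error still propagates to your agent's error handling.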

5. Run the governed agent

Execute your agent and watch the governance flow.

Terminal

node --env-file=.env governed-agent.js

6. See the result in OSuite

Open your OSuite dashboard to confirm the action was recorded.

Go to /decisions — you should see your action in the ledger with action_type 'data_export', status 'completed', and the output summary you provided.

What success looks like

Your action should appear in the ledger within seconds of the agent run, recorded against agent_id 'my-openai-agent'.

Governance as Code

Drop a guardrails.yml in your project root to enforce policies without code changes. OSuite evaluates these rules at the guard step before any action executes.

guardrails.yml

version: 1
project: my-openai-agent
description: >
  Governance policy for an OpenAI Agents SDK data agent.
  High-risk deletions require approval. Reads are auto-allowed.

policies:
  - id: approve_deletions
    description: Require human approval for any delete operation
    applies_to:
      tools:
        - delete_records
        - drop_table
    rule:
      require: approval

  - id: auto_allow_reads
    description: Read operations are low risk
    applies_to:
      tools:
        - scan_for_pii
        - list_records
    rule:
      allow: true
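
To build intuition for how these rules map tool calls to decisions, here is an illustrative client-side mirror of the guardrails.yml above. OSuite's real evaluation happens in the control plane at the guard step; the decision strings used here ('require_approval', 'allow', 'no_match') are assumptions for the sketch:

```javascript
// Illustrative only: a client-side mirror of the guardrails.yml rules above.
// OSuite evaluates policies server-side at the guard step; the decision
// strings here are assumptions, not the SDK's actual return values.
const policies = [
  {
    id: 'approve_deletions',
    tools: ['delete_records', 'drop_table'],
    rule: { require: 'approval' },
  },
  {
    id: 'auto_allow_reads',
    tools: ['scan_for_pii', 'list_records'],
    rule: { allow: true },
  },
];

function evaluateTool(toolName) {
  for (const policy of policies) {
    if (!policy.tools.includes(toolName)) continue;
    if (policy.rule.require === 'approval') return 'require_approval';
    if (policy.rule.allow === true) return 'allow';
  }
  return 'no_match'; // fall through to the workspace default policy
}
```

So a call to delete_records or drop_table pauses for human approval, reads proceed immediately, and anything unmatched falls through to your workspace default.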