The webAI SDK is a lightweight package that wraps the platform APIs in a clean, typed interface. Instead of accessing window.OasisHost and window.ApogeeShell by hand, use the SDK: it provides safe host access, streaming helpers, chat memory utilities, and a React hook.
The SDK is located at /apps/webai-sdk/ in the webAI repository. You can copy it into your app or import it directly when building inside the platform.

Quick start

import { useWebai, streamTurn } from 'webai-sdk';

function MyApp() {
  const { oasisState, oasisReady, modelName, host } = useWebai();

  async function ask(prompt) {
    const response = await streamTurn(
      prompt,
      { systemPrompt: 'You are a helpful assistant.' },
      (text) => console.log(text) // accumulated text on each token
    );
    return response;
  }

  return (
    <div>
      <p>Status: {oasisState} | Model: {modelName}</p>
      <button disabled={!oasisReady} onClick={() => ask('Hello!')}>
        Ask AI
      </button>
    </div>
  );
}

Exports

useWebai(options?)

React hook that polls the Oasis runtime and returns the current state. Parameters:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `pollIntervalMs` | `number` | `1200` | How often to poll the runtime state (ms) |
Returns:

| Field | Type | Description |
| --- | --- | --- |
| `oasisState` | `'waiting' \| 'loading' \| 'ready'` | Current runtime state |
| `oasisReady` | `boolean` | `true` when a model is loaded and idle |
| `modelName` | `string` | Display name of the loaded model (last path segment) |
| `host` | `OasisHostManager \| null` | The host instance, or `null` outside the shell |
const { oasisState, oasisReady, modelName, host } = useWebai({ pollIntervalMs: 2000 });

streamTurn(prompt, options, onToken?)

Streams a completion from the Oasis runtime. Handles acquire, request, and release automatically. Parameters:
| Field | Type | Description |
| --- | --- | --- |
| `prompt` | `string` | The user's input |
| `options` | `StreamTurnOptions` | Configuration (see below) |
| `onToken` | `(accumulatedText: string) => void` | Called with the full accumulated text as each token arrives |

Returns: `Promise<string>` — the final accumulated response.

StreamTurnOptions:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `systemPrompt` | `string` | (none) | System prompt guiding the model |
| `maxTokens` | `number` | `2048` | Max tokens to generate |
| `temperature` | `number` | `0.7` | Sampling temperature |
| `timeoutMs` | `number` | `150000` | Request timeout in ms |
| `personaType` | `string` | (none) | Persona to use (triggers permission flow if needed) |
| `appId` | `string` | Auto-detected | App ID for persona permissions and memory |
| `memoryContext` | `boolean \| 'auto'` | `'auto'` | Controls chat memory injection (see Chat memory) |
| `chatSession` | `string` | (none) | Session ID for memory isolation between independent sessions |
const result = await streamTurn(
  'Explain WebGPU in one paragraph.',
  {
    systemPrompt: 'You are a technical writer.',
    maxTokens: 512,
    temperature: 0.5,
    personaType: 'research'
  },
  (text) => updateUI(text)
);

probeOasisState()

Check the current state of the Oasis runtime without side effects. Returns: 'waiting' | 'loading' | 'ready'
| State | Meaning |
| --- | --- |
| `waiting` | No model loaded or host unavailable |
| `loading` | Model is loading or actively generating |
| `ready` | Model loaded and idle |
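To picture how a probe like this maps runtime status to a state, here is a minimal sketch as a pure function. The field names (`hasRuntime`, `loadingModel`, `isGenerating`) come from the `OasisHostStatus` type listed under Types, but the exact precedence below is an illustrative assumption, not the SDK's actual implementation.

```typescript
// Illustrative sketch: deriving an OasisState from status-like fields.
// The precedence (waiting → loading → ready) is assumed for illustration.
type OasisState = 'waiting' | 'loading' | 'ready';

interface StatusLike {
  hasRuntime: boolean;   // a runtime/model host is available
  loadingModel: boolean; // a model is currently loading
  isGenerating: boolean; // a request is in flight
}

function deriveOasisState(status: StatusLike | null): OasisState {
  if (!status || !status.hasRuntime) return 'waiting';
  if (status.loadingModel || status.isGenerating) return 'loading';
  return 'ready';
}

console.log(deriveOasisState(null)); // 'waiting'
console.log(deriveOasisState({ hasRuntime: true, loadingModel: true, isGenerating: false })); // 'loading'
console.log(deriveOasisState({ hasRuntime: true, loadingModel: false, isGenerating: false })); // 'ready'
```

Note that `loading` covers both model load and active generation, matching the table above: a busy runtime is not `ready` even though a model is present.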

getOasisHost()

Returns the OasisHostManager instance or null if running outside the shell.

getApogeeShell()

Returns the ApogeeShellManager instance or null if running outside the shell.

getDefaultAppId()

Returns the app ID injected by the shell when running in an iframe (window.__APOGEE_APP_ID__), or undefined if not set.
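A common pattern is to fall back to a local ID when developing outside the shell. The sketch below reads the documented `__APOGEE_APP_ID__` global from a window-like object; the fallback ID `'dev-local'` is a made-up placeholder, not an SDK convention.

```typescript
// Sketch: shell-injected app ID with a development fallback.
// __APOGEE_APP_ID__ is the documented global; 'dev-local' is hypothetical.
function readAppId(win: { __APOGEE_APP_ID__?: string }, fallback?: string): string | undefined {
  return win.__APOGEE_APP_ID__ ?? fallback;
}

// Inside the shell the injected value wins; outside it, the fallback applies.
console.log(readAppId({ __APOGEE_APP_ID__: 'notes-app' }, 'dev-local')); // 'notes-app'
console.log(readAppId({}, 'dev-local')); // 'dev-local'
console.log(readAppId({})); // undefined
```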

Chat memory

The SDK provides methods to load and clear per-app chat history. Memory is persisted locally and can be injected into prompts automatically.

loadAppChatHistory(appId)

Load the persisted chat memory for an app. Returns: AppChatMemory
{
  summary: string;                        // Rolling summary of conversation
  preferences: Record<string, unknown>;   // Learned user preferences
  recentTurns: Array<{
    role: string;                         // "user" | "assistant"
    text: string;
    at: string;                           // ISO 8601 timestamp
    turnId: string;
    personaId: string | null;
    estimatedTokens: number;
    chatSession: string | null;
  }>;
  toolOutcomes: Array<{
    toolId: string;
    success: boolean;
    error: string | null;
    resultText: string;
    at: string;
    turnId: string;
    personaId: string | null;
  }>;
  updatedAt: string | null;
}
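As a quick illustration of working with this shape, the sketch below keeps the newest entries of `recentTurns` whose `estimatedTokens` fit within a budget. The turn structure matches the documentation above, but the trimming policy itself is an assumption for illustration, not how the SDK manages memory internally.

```typescript
// Turn shape mirroring AppChatMemory.recentTurns entries.
interface RecentTurn {
  role: string;
  text: string;
  at: string;
  turnId: string;
  personaId: string | null;
  estimatedTokens: number;
  chatSession: string | null;
}

// Keep the newest turns that fit within a token budget.
// (Illustrative policy — not the SDK's own trimming logic.)
function fitTurnsToBudget(turns: RecentTurn[], budget: number): RecentTurn[] {
  const kept: RecentTurn[] = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) {
    if (used + turns[i].estimatedTokens > budget) break;
    used += turns[i].estimatedTokens;
    kept.unshift(turns[i]); // preserve chronological order
  }
  return kept;
}

const turns: RecentTurn[] = [
  { role: 'user', text: 'Hi', at: '2024-01-01T00:00:00Z', turnId: 't1', personaId: null, estimatedTokens: 5, chatSession: null },
  { role: 'assistant', text: 'Hello!', at: '2024-01-01T00:00:01Z', turnId: 't2', personaId: null, estimatedTokens: 8, chatSession: null },
  { role: 'user', text: 'Explain WebGPU', at: '2024-01-01T00:00:10Z', turnId: 't3', personaId: null, estimatedTokens: 12, chatSession: null },
];

console.log(fitTurnsToBudget(turns, 20).map((t) => t.turnId)); // ['t2', 't3']
```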

clearAppChatHistory(appId)

Clears all chat memory for an app.

Memory context options

When using streamTurn, the memoryContext option controls how chat history is injected into prompts:
| Value | Behavior |
| --- | --- |
| `'auto'` (default) | Follows the shell's Memory context toggle in settings |
| `true` | Always inject memory context |
| `false` | Never inject memory context for this request |
Chat history is always auto-saved when appId is present, regardless of memoryContext. The chatSession option provides isolation between independent chat sessions within the same app. When set, memory filtering uses the session ID instead of the persona ID.
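The resolution rule and the session-vs-persona filtering described above can be sketched as two pure functions. The `shellToggleEnabled` parameter stands in for the shell's Memory context setting, and the filtering details are an assumption for illustration; the SDK's internals may differ.

```typescript
type MemoryContext = boolean | 'auto';

// 'auto' defers to the shell's Memory context toggle; true/false override it.
function shouldInjectMemory(option: MemoryContext, shellToggleEnabled: boolean): boolean {
  return option === 'auto' ? shellToggleEnabled : option;
}

// When chatSession is set, scope memory by session ID instead of persona ID.
// (Illustrative sketch of the documented rule, not the SDK's exact filter.)
interface TurnRef { personaId: string | null; chatSession: string | null; }

function matchesScope(turn: TurnRef, opts: { personaId?: string | null; chatSession?: string | null }): boolean {
  if (opts.chatSession != null) return turn.chatSession === opts.chatSession;
  return turn.personaId === (opts.personaId ?? null);
}

console.log(shouldInjectMemory('auto', true)); // true — follows the toggle
console.log(shouldInjectMemory(false, true));  // false — explicit override wins
console.log(matchesScope({ personaId: 'research', chatSession: 's1' }, { chatSession: 's1' })); // true
console.log(matchesScope({ personaId: 'research', chatSession: 's1' }, { personaId: 'research' })); // true
```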

Constants

| Constant | Value | Description |
| --- | --- | --- |
| `REQUEST_TIMEOUT_MS` | `150000` | Default request timeout (2.5 minutes) |
| `SAMPLING_DEFAULTS.temperature` | `0.7` | Default sampling temperature |
| `SAMPLING_DEFAULTS.maxTokens` | `2048` | Default max tokens |
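To see how a `timeoutMs`-style deadline like `REQUEST_TIMEOUT_MS` can bound a request, here is a generic promise-timeout wrapper. This is a common pattern sketched under the documented default value, not the SDK's internal implementation of `streamTurn` timeouts.

```typescript
// Mirrors the documented default; the wrapper itself is a generic pattern.
const REQUEST_TIMEOUT_MS = 150_000;

function withTimeout<T>(work: Promise<T>, timeoutMs = REQUEST_TIMEOUT_MS): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${timeoutMs} ms`)),
      timeoutMs,
    );
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// A fast promise resolves before the deadline:
withTimeout(Promise.resolve('done'), 1000).then((v) => console.log(v)); // 'done'
```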

Types

The SDK exports these TypeScript types:
  • OasisState — 'waiting' | 'loading' | 'ready'
  • OasisHostStatus — Runtime status fields (hasRuntime, lastModel, loadingModel, isGenerating, etc.)
  • OasisHostManager — Full host interface
  • OasisRequestOptions — Options for host.request()
  • OasisPersonaInfo — Persona definition
  • OasisModelSelectionResult — Model selection result
  • AppChatMemory — Chat memory structure
  • ApogeeShellManager — Shell manager interface
  • UseWebaiOptions / UseWebaiResult — Hook types
  • StreamTurnOptions — Options for streamTurn()