The webAI SDK is a lightweight package that wraps the platform APIs in a clean, typed interface. Instead of manually accessing `window.OasisHost` and `window.ApogeeShell`, use the SDK for safe access, streaming helpers, chat memory, and a React hook.

The SDK is located at `/apps/webai-sdk/` in the webAI repository. You can copy it into your app or import it directly when building inside the platform.
## Quick start

```tsx
import { useWebai, streamTurn } from 'webai-sdk';

function MyApp() {
  const { oasisState, oasisReady, modelName, host } = useWebai();

  async function ask(prompt) {
    const response = await streamTurn(
      prompt,
      { systemPrompt: 'You are a helpful assistant.' },
      (text) => console.log(text) // accumulated text on each token
    );
    return response;
  }

  return (
    <div>
      <p>Status: {oasisState} | Model: {modelName}</p>
      <button disabled={!oasisReady} onClick={() => ask('Hello!')}>
        Ask AI
      </button>
    </div>
  );
}
```
## Exports

### `useWebai(options?)`

React hook that polls the Oasis runtime and returns the current state.

Parameters:

| Field | Type | Default | Description |
|---|---|---|---|
| pollIntervalMs | number | 1200 | How often to poll the runtime state (ms) |

Returns:

| Field | Type | Description |
|---|---|---|
| oasisState | 'waiting' \| 'loading' \| 'ready' | Current runtime state |
| oasisReady | boolean | true when a model is loaded and idle |
| modelName | string | Display name of the loaded model (last path segment) |
| host | OasisHostManager \| null | The host instance, or null outside the shell |

```ts
const { oasisState, oasisReady, modelName, host } = useWebai({ pollIntervalMs: 2000 });
```
### `streamTurn(prompt, options, onToken?)`

Streams a completion from the Oasis runtime. Handles acquire, request, and release automatically.

Parameters:

| Field | Type | Description |
|---|---|---|
| prompt | string | The user's input |
| options | StreamTurnOptions | Configuration (see below) |
| onToken | (accumulatedText: string) => void | Called with the full accumulated text as each token arrives |

Returns: Promise<string> — the final accumulated response.

StreamTurnOptions:

| Field | Type | Default | Description |
|---|---|---|---|
| systemPrompt | string | — | System prompt guiding the model |
| maxTokens | number | 2048 | Maximum tokens to generate |
| temperature | number | 0.7 | Sampling temperature |
| timeoutMs | number | 150000 | Request timeout in ms |
| personaType | string | — | Persona to use (triggers the permission flow if needed) |
| appId | string | Auto-detected | App ID for persona permissions and memory |
| memoryContext | boolean \| 'auto' | 'auto' | Controls chat memory injection (see Chat memory) |
| chatSession | string | — | Session ID for memory isolation between independent sessions |

```ts
const result = await streamTurn(
  'Explain WebGPU in one paragraph.',
  {
    systemPrompt: 'You are a technical writer.',
    maxTokens: 512,
    temperature: 0.5,
    personaType: 'research'
  },
  (text) => updateUI(text)
);
```
### `probeOasisState()`

Checks the current state of the Oasis runtime without side effects.

Returns: 'waiting' | 'loading' | 'ready'

| State | Meaning |
|---|---|
| waiting | No model loaded or host unavailable |
| loading | Model is loading or actively generating |
| ready | Model loaded and idle |
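Because `probeOasisState()` is side-effect free, it is safe to call in a loop. Here is a minimal wait-for-ready sketch; the one-second interval and 30-second cap are illustrative choices, not part of the SDK:

```ts
import { probeOasisState, streamTurn } from 'webai-sdk';

// Poll until the runtime reports 'ready', then send a prompt.
async function askWhenReady(prompt: string): Promise<string> {
  const deadline = Date.now() + 30_000;
  while (probeOasisState() !== 'ready') {
    if (Date.now() > deadline) {
      throw new Error('Oasis runtime never became ready');
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  return streamTurn(prompt, {});
}
```

In React components, prefer `useWebai()` over manual polling; this pattern is mainly useful in non-React code paths.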
### `getOasisHost()`

Returns the OasisHostManager instance, or null if running outside the shell.

### `getApogeeShell()`

Returns the ApogeeShellManager instance, or null if running outside the shell.

### `getDefaultAppId()`

Returns the app ID injected by the shell when running in an iframe (`window.__APOGEE_APP_ID__`), or undefined if not set.
## Chat memory

The SDK provides methods to load and clear per-app chat history. Memory is persisted locally and can be injected into prompts automatically.

### `loadAppChatHistory(appId)`

Loads the persisted chat memory for an app.

Returns: AppChatMemory

```ts
{
  summary: string;                      // Rolling summary of the conversation
  preferences: Record<string, unknown>; // Learned user preferences
  recentTurns: Array<{
    role: string;                       // "user" | "assistant"
    text: string;
    at: string;                        // ISO 8601 timestamp
    turnId: string;
    personaId: string | null;
    estimatedTokens: number;
    chatSession: string | null;
  }>;
  toolOutcomes: Array<{
    toolId: string;
    success: boolean;
    error: string | null;
    resultText: string;
    at: string;
    turnId: string;
    personaId: string | null;
  }>;
  updatedAt: string | null;
}
```
### `clearAppChatHistory(appId)`

Clears all chat memory for an app.
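A sketch of both calls together, assuming they are synchronous as documented above; the "reset conversation" framing is one possible use, not a prescribed pattern:

```ts
import { loadAppChatHistory, clearAppChatHistory, getDefaultAppId } from 'webai-sdk';

const appId = getDefaultAppId();
if (appId) {
  // Show the stored conversation state.
  const memory = loadAppChatHistory(appId);
  console.log(memory.summary);
  for (const turn of memory.recentTurns) {
    console.log(`[${turn.role}] ${turn.text}`);
  }

  // Wire this to a "reset conversation" action: wipes summary,
  // turns, preferences, and tool outcomes for this app.
  clearAppChatHistory(appId);
}
```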
### Memory context options

When using streamTurn, the memoryContext option controls how chat history is injected into prompts:

| Value | Behavior |
|---|---|
| 'auto' (default) | Follows the shell's Memory context toggle in settings |
| true | Always inject memory context |
| false | Never inject memory context for this request |

Chat history is always auto-saved when appId is present, regardless of memoryContext.

The chatSession option provides isolation between independent chat sessions within the same app. When set, memory filtering uses the session ID instead of the persona ID.
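A sketch of two isolated conversations in one app; the session IDs are arbitrary strings chosen by the app, shown here purely for illustration:

```ts
import { streamTurn } from 'webai-sdk';

// With distinct chatSession IDs, memory from one session is not
// injected into prompts for the other.
const supportReply = await streamTurn('My export failed, what now?', {
  memoryContext: true,
  chatSession: 'support-thread-1',
});

const brainstormReply = await streamTurn('Give me three tagline ideas.', {
  memoryContext: false, // no history injected for this request
  chatSession: 'brainstorm-1',
});
```

Both turns are still auto-saved (tagged with their session IDs) as long as an appId is present.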
## Constants

| Constant | Value | Description |
|---|---|---|
| REQUEST_TIMEOUT_MS | 150000 | Default request timeout (2.5 minutes) |
| SAMPLING_DEFAULTS.temperature | 0.7 | Default sampling temperature |
| SAMPLING_DEFAULTS.maxTokens | 2048 | Default maximum tokens |
## Types

The SDK exports these TypeScript types:

- `OasisState` — 'waiting' | 'loading' | 'ready'
- `OasisHostStatus` — Runtime status fields (hasRuntime, lastModel, loadingModel, isGenerating, etc.)
- `OasisHostManager` — Full host interface
- `OasisRequestOptions` — Options for host.request()
- `OasisPersonaInfo` — Persona definition
- `OasisModelSelectionResult` — Model selection result
- `AppChatMemory` — Chat memory structure
- `ApogeeShellManager` — Shell manager interface
- `UseWebaiOptions` / `UseWebaiResult` — Hook types
- `StreamTurnOptions` — Options for streamTurn()