A step-by-step guide to building, testing, and deploying your own apps inside webAI — from a single HTML file to a fully collaborative, AI-powered tool.
If you can build a web page, you can build a webAI app. Apps are single HTML files that run inside the platform and get access to on-device AI, real-time collaboration, user identity, and more — with zero server infrastructure. This guide walks you through the entire process, starting with the simplest possible app and progressively adding platform features.
You don’t need any special tooling. For the quick-start path, all you need is a text editor. For framework-based apps (React, Vue), you’ll want Node.js installed. Here’s a quick look at the two paths:
| Path | Best for | Build step? |
| --- | --- | --- |
| Vanilla HTML | Prototypes, simple tools, learning the platform | No |
| Framework (React / Vue) | Production apps, complex UIs, team projects | Yes (Vite) |
Start with vanilla HTML to learn the concepts, then move to a framework when your app outgrows a single hand-written file.
A webAI app is just an HTML file. Here’s a complete, working example that demonstrates AI inference, identity, and theme integration — all in a single file:
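A minimal sketch of such a file is below. The OasisHost calls follow the AI section later in this guide; the identity and theme wiring is illustrative only: in particular, the getCurrentUser() method name and the --bg/--fg theme CSS variables are assumptions, not confirmed API.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Hello webAI</title>
  <style>
    /* Assumption: the shell supplies theme colors via CSS variables.
       The fallbacks keep the app usable in a plain browser tab. */
    body { font-family: sans-serif; background: var(--bg, #fff); color: var(--fg, #111); }
    #output { white-space: pre-wrap; margin-top: 1rem; }
  </style>
</head>
<body>
  <h1 id="greeting">Hello!</h1>
  <input id="prompt" placeholder="Ask something..." />
  <button id="ask">Ask</button>
  <div id="output"></div>

  <script>
    // Shell APIs are injected by webAI; they are null in a plain tab.
    const get = (name) => window[name] ?? window.parent?.[name] ?? null;

    // Identity: greet the current user if the API is available.
    // NOTE: getCurrentUser() is an assumed method name for illustration.
    const identity = get('UserIdentityManager');
    if (identity?.getCurrentUser) {
      const user = identity.getCurrentUser();
      if (user?.name) {
        document.getElementById('greeting').textContent = `Hello, ${user.name}!`;
      }
    }

    // AI: stream a completion into #output (see the OasisHost section).
    document.getElementById('ask').addEventListener('click', async () => {
      const host = get('OasisHost');
      const output = document.getElementById('output');
      if (!host) {
        output.textContent = 'AI not available outside webAI.';
        return;
      }
      const release = await host.acquire({ warmRuntime: true });
      try {
        output.textContent = '';
        await host.request(document.getElementById('prompt').value, {
          systemPrompt: 'You are a helpful assistant.',
          maxTokens: 2048,
          temperature: 0.7,
          onToken: (token) => { output.textContent += token; },
        });
      } finally {
        if (release) release();
      }
    });
  </script>
</body>
</html>
```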
Save this as index.html and upload it to webAI — you’ll have a working AI chat app with theme support and user identity. The sections below break down each platform API in detail.
The webAI shell injects JavaScript globals into your app’s window. These give you access to AI inference, collaboration, identity, encryption, and navigation — no imports or installs needed.
All of these return null when your app runs outside the webAI shell (e.g., during local development). Always check for null before calling any methods.
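For example, a small accessor that resolves an injected global and falls back to null keeps every call site honest; this is the pattern used throughout this guide:

```javascript
// Resolve a shell-injected global, checking the parent frame too.
// Returns null when running outside the webAI shell.
function shellAPI(win, name) {
  return win[name] ?? win.parent?.[name] ?? null;
}

// Usage in the browser:
//   const host = shellAPI(window, 'OasisHost');
//   if (host) { /* safe to call host methods */ }
```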
The OasisHost API lets your app run language model inference directly on the user’s device. No cloud, no API keys. The flow is: acquire the runtime, send a prompt, stream tokens, and release the runtime.
```javascript
async function askAI(prompt) {
  const host = window.OasisHost ?? window.parent?.OasisHost;
  if (!host) {
    console.warn('AI not available outside webAI.');
    return null;
  }

  const release = await host.acquire({ warmRuntime: true });
  try {
    let result = '';
    await host.request(prompt, {
      systemPrompt: 'You are a helpful assistant.',
      maxTokens: 2048,
      temperature: 0.7,
      onToken: (token) => {
        result += token;
        document.getElementById('output').textContent = result;
      },
    });
    return result;
  } finally {
    if (release) release();
  }
}
```
Key points:
acquire() locks the AI runtime for your app. Always release it when done.
onToken fires for each generated token, enabling real-time streaming UI.
getStatus() tells you whether a model is loaded (ready), loading (loading), or unavailable (waiting).
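As a sketch, the three states can be derived from the fields of the status object; the field names here (lastModel, loadingModel, isGenerating) match the getOasisState helper shown later in this guide:

```javascript
// Map an OasisHost getStatus() result to a readiness state.
function readinessState(status) {
  if (!status) return 'waiting';
  if (status.lastModel) return 'ready';           // a model is loaded
  if (status.loadingModel || status.isGenerating) return 'loading';
  return 'waiting';                               // no model available yet
}

// Usage: disable the send button until readinessState(host.getStatus()) === 'ready'.
```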
CollaborationManager lets your app create and join real-time peer-to-peer spaces. The platform handles all networking, state sync, persistence, and conflict resolution.
```javascript
const collab = window.CollaborationManager ?? window.parent?.CollaborationManager;

async function startRoom() {
  if (!collab) return;
  const state = await collab.hostRoom({ roomName: 'My Space', password: null });
  console.log('Space created! Code:', state.roomCode);
}

async function enterRoom(code) {
  if (!collab) return;
  return collab.joinRoom(code, null);
}
```
The collaboration flow:
1. One user hosts a space, which generates a space code.
2. Others join using that code.
3. Participants exchange state through the platform.
4. The platform broadcasts changes, persists state, and resolves conflicts.
For anything beyond a simple prototype, a framework with a bundler gives you a better development experience. Here’s how to set up a project that produces the single-file output webAI requires.
Instead of scattering window.OasisHost ?? window.parent?.OasisHost throughout your codebase, create a src/webai.js file as your single integration layer:
```javascript
export const getShellAPI = (name) =>
  window[name] ?? window.parent?.[name] ?? null;

export const getOasisHost = () => getShellAPI('OasisHost');
export const getApogeeShell = () => getShellAPI('ApogeeShell');
export const getCollaborationManager = () => getShellAPI('CollaborationManager');
export const getUserIdentityManager = () => getShellAPI('UserIdentityManager');
export const getE2ECrypto = () => getShellAPI('E2ECrypto');

export function getOasisState() {
  const host = getOasisHost();
  if (!host?.getStatus) return 'waiting';
  const s = host.getStatus();
  if (s?.lastModel) return 'ready';
  if (s?.loadingModel || s?.isGenerating) return 'loading';
  return 'waiting';
}

export async function streamCompletion(prompt, systemPrompt, onToken) {
  const host = getOasisHost();
  if (!host) throw new Error('Oasis AI is not available in this environment.');

  const release = await host.acquire({ warmRuntime: true });
  try {
    return await host.request(prompt, {
      systemPrompt: systemPrompt ?? '',
      maxTokens: 2048,
      temperature: 0.7,
      onToken,
    });
  } finally {
    if (release) release();
  }
}
```
Then import from it anywhere in your app:
```javascript
import { getOasisHost, streamCompletion } from './webai';
```
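For instance, a caller can collect the streamed tokens into a full answer while still reacting to each token as it arrives. The helper below works with any function shaped like streamCompletion; it takes the stream function as a parameter so it can also be exercised outside the shell:

```javascript
// Drive a token-streaming request and accumulate the full answer.
// `stream` has the shape of streamCompletion from src/webai.js:
// (prompt, systemPrompt, onToken) => Promise.
async function collectCompletion(stream, prompt, systemPrompt) {
  let answer = '';
  await stream(prompt, systemPrompt, (token) => {
    answer += token; // also a good place to append the token to the UI
  });
  return answer;
}

// Inside webAI:
//   import { streamCompletion } from './webai';
//   const answer = await collectCompletion(streamCompletion, userPrompt, 'You are helpful.');
```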
Your app opens in a normal browser tab. Shell APIs will be null — this is expected. Design your UI with graceful fallbacks so you can iterate without the full shell running.
Show a subtle banner in dev mode like “Running outside webAI — AI and collaboration unavailable.” This makes it obvious which features need the shell while you focus on building your UI.
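A sketch of that banner, assuming the shell globals listed above (the styling is illustrative):

```javascript
// True when at least one webAI shell global has been injected.
function insideWebAI(win) {
  const names = [
    'OasisHost',
    'ApogeeShell',
    'CollaborationManager',
    'UserIdentityManager',
    'E2ECrypto',
  ];
  return names.some((name) => (win[name] ?? win.parent?.[name]) != null);
}

// In the browser, show a dev-mode banner when the shell is absent.
if (typeof window !== 'undefined' && !insideWebAI(window)) {
  const banner = document.createElement('div');
  banner.textContent = 'Running outside webAI: AI and collaboration unavailable.';
  banner.style.cssText = 'background:#fde68a;color:#111;padding:4px 8px;font-size:12px;';
  document.body.prepend(banner);
}
```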
Every shell API returns null outside of webAI. Wrap every call in a check so your app works during local development and doesn’t crash when a feature isn’t available.
Keep your bundle small
Apps under 1MB load quickly. If your build exceeds 5MB, consider optimizing images, removing unused dependencies, or lazy-loading heavy components.
Release the AI runtime
Always call the release function after acquire(), even if the request fails. Use try/finally to guarantee cleanup — otherwise other apps can’t use the AI runtime.
Handle disconnections in collaborative apps
Peers can drop out at any time. Design your state model to handle partial participation. See the Collaboration API best practices.
Test outside the shell first
Build your UI and core logic so it works in a normal browser tab. Add platform features on top with graceful fallbacks. This makes development much faster.