OpenAI costs too much. You're locked into one vendor. Your prompts hit centralised servers you don't control.
There's a better way.
Phonix is to edge compute what Ethers.js is to EVM chains: one interface, any provider.
```sh
npm install @phonixsdk/inference
```
Your existing OpenAI code works unchanged. Swap baseURL and apiKey, and that's it. Requests route automatically to the cheapest available GPU cluster across io.net, Akash, and Acurast TEE nodes.
Compatible with the openai npm package, LangChain, and LlamaIndex.

```typescript
// Expensive. Centralised. No control.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
```typescript
// Decentralised GPU. Private. Cheaper.
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://<your-app>.vercel.app/v1', // ← your deployed handler
  apiKey: process.env.PHONIX_SECRET_KEY,       // ← new
});

// Everything else is identical ↓
const res = await client.chat.completions.create({
  model: 'phonix-llama-3-70b',
  messages: [{ role: 'user', content: prompt }],
  stream: true,
});
```
Deploy the handler with @phonixsdk/inference (see the setup guide).
Live provider health
Real-time status across all five networks, updated every 5 minutes.
Deploy the same AI workload to any supported network without rewriting your code. Phonix automatically routes to the fastest, cheapest option.
One CLI, one config file, one client API, regardless of which network you deploy to.
Choose a provider and template. Phonix generates your config and entry point.
The phonix auth wizard generates and stores all required keys. Your .env is locked to owner-only permissions and never committed.
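You can verify the lockdown yourself with a few lines of Node. This check is not part of the Phonix CLI; the helper name below is our own:

```typescript
// Sanity-check that a file is owner-only (chmod 600), which is what
// the auth wizard is described to enforce for .env.
import { statSync } from 'node:fs';

export function isOwnerOnly(path: string): boolean {
  const mode = statSync(path).mode & 0o777; // keep only the permission bits
  return mode === 0o600;                    // -rw------- : owner read/write only
}

// Usage: isOwnerOnly('.env') should return true after running `phonix auth`.
```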
Run phonix run-local to simulate the full provider runtime on your machine, with no network required.
One command bundles, uploads to IPFS, and registers on-chain. Then call from any JavaScript environment with @phonixsdk/sdk.
```typescript
// Deploy once, run on edge nodes
import { PhonixClient } from '@phonixsdk/sdk';

const client = new PhonixClient({
  provider: 'ionet',
  secretKey: process.env.PHONIX_SECRET_KEY,
});

await client.connect();

// Listen for results from the TEE
client.onMessage((msg) => {
  const { result } = msg.payload as { result: string };
  console.log('Result:', result);
});

// Send a prompt to a processor node
await client.send('0xproc1...', {
  requestId: 'req-001',
  prompt: 'Summarize: The quick brown fox...',
});

client.disconnect();
```
```sh
# Initialise project
phonix init

# Set up credentials interactively
phonix auth

# Test without deploying
phonix run-local

# Deploy to the network
phonix deploy
# ✓ Deployment live!
#   Processors: 3 matched
#   • 0xproc1...
```
Deploy your edge processors once with the Phonix CLI, then call them directly from your React Native or Expo app. Real-time results, secure key storage, and automatic background/foreground lifecycle management, all in one package.
- usePhonix, useMessages, and useSend hooks for clean component integration
- Wrap once in <PhonixProvider> and access the client anywhere in the tree
- Secure key storage via expo-secure-store, with an in-memory fallback for bare React Native

```sh
npm install @phonixsdk/mobile
```
```tsx
// Wrap your root component once
import { PhonixProvider } from '@phonixsdk/mobile';

export default function App() {
  return (
    <PhonixProvider
      provider="akash"
      secretKey={process.env.PHONIX_SECRET_KEY}
      autoConnect
    >
      <NavigationContainer>
        <MainStack />
      </NavigationContainer>
    </PhonixProvider>
  );
}
```
```tsx
// Access from any screen in the tree
import {
  usePhonixContext,
  useMessages,
  useSend,
} from '@phonixsdk/mobile';

export function ResultsScreen() {
  const { connected, client } = usePhonixContext();
  const messages = useMessages(client);
  const { send, sending } = useSend(client);

  return (
    <View>
      <Text>{connected ? '🟢 Live' : '⚪ Offline'}</Text>
      <Button
        title={sending ? 'Sending...' : 'Run inference'}
        onPress={() => send(leaseUrl, { prompt: 'Summarize the news today' })}
      />
      {messages.map((m, i) => (
        <Text key={i}>{m.payload as string}</Text>
      ))}
    </View>
  );
}
```
```tsx
// Without context, manage the client directly
import { usePhonix, useMessages } from '@phonixsdk/mobile';

export function InferenceScreen() {
  const { client, connected, connect, error } = usePhonix({
    provider: 'akash',
    secretKey: PHONIX_SECRET_KEY,
  });
  const messages = useMessages(client);

  return (
    <View>
      <Button title="Connect" onPress={connect} disabled={connected} />
      {error && <Text>{error.message}</Text>}
      {messages.map((m, i) => (
        <Text key={i}>{JSON.stringify(m.payload)}</Text>
      ))}
    </View>
  );
}
```
Tired of OpenAI pricing? Need private inference? @phonixsdk/inference is a drop-in baseURL replacement, with no code changes required.
| Model | Provider | Notes |
|---|---|---|
| phonix-llama-3-70b | io.net | A100 spot, best for large context |
| phonix-mistral-7b | io.net | GPU, cost-efficient |
| phonix-llama-3-8b | Akash | Container cloud, moderate cost |
| phonix-tee-phi-3-mini | Acurast | TEE, private, lowest cost |
- Works with the openai npm package, LangChain, LlamaIndex, and more
- Routing strategies: cost, latency, or balanced

```sh
npm install @phonixsdk/inference
```
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  // ← swap these two lines
  baseURL: 'https://<your-app>.vercel.app/v1',
  apiKey: process.env.PHONIX_SECRET_KEY,
});

// All your existing code stays identical ↓
const res = await client.chat.completions.create({
  model: 'phonix-llama-3-70b',
  messages: [{ role: 'user', content: prompt }],
  stream: true,
});
```
```typescript
import { PhonixInferenceHandler } from '@phonixsdk/inference';

const handler = new PhonixInferenceHandler({
  apiKey: process.env.PHONIX_SECRET_KEY,
  ionetEndpoint: process.env.IONET_ENDPOINT,
  akashEndpoint: process.env.AKASH_ENDPOINT,
  strategy: 'cost',
});

export const POST = (req: Request) => handler.handleRequest(req);
export const GET = (req: Request) => handler.handleRequest(req);
```
PhonixRouter manages multiple providers simultaneously.
On every request it scores each provider by health, latency, cost, and availability, then sends to the best one. If that provider fails, it retries on the next.
Circuit breakers prevent routing to unhealthy nodes until they recover.
Works on Node.js (SDK) and React Native (mobile). The same routing engine powers both.
```typescript
// Server SDK
import { PhonixRouter } from '@phonixsdk/sdk';

const router = new PhonixRouter({
  providers: ['akash', 'acurast'],
  secretKey: process.env.PHONIX_SECRET_KEY,
  strategy: 'latency',
});

await router.connect();
await router.deploy(config); // deploys to all providers

// Automatically picks the fastest callable provider
await router.send({ prompt: 'Hello' });

// Health snapshot
router.health().forEach((h) => {
  console.log(h.provider, h.latencyMs, h.circuitState);
});
```
```typescript
// Mobile (React Native / Expo)
import { usePhonixRouter } from '@phonixsdk/mobile';

const { router, connected, health } = usePhonixRouter({
  routes: [
    { provider: 'akash', endpoint: AKASH_URL, secretKey },
    { provider: 'acurast', endpoint: ACURAST_WS, secretKey },
  ],
  strategy: 'balanced',
  autoConnect: true,
});

// AppState-aware: pauses on background, resumes on foreground
await router?.send({ prompt: 'Hello from iOS' });
```
Security and reliability are not afterthoughts: they're built into every layer of the SDK and covered by 135 tests across all five providers.
Your code runs inside hardware TEEs. Prompts, responses, and logic are private, even from device owners and network operators.
All HTTP calls validate URLs and resolve hostnames to IPs before opening connections, blocking requests to internal infrastructure.
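The "resolve before connect" guard described above can be sketched as follows. This is an illustrative example of the technique, not Phonix's actual implementation, and the function names are our own:

```typescript
import { lookup } from 'node:dns/promises';

// Reject loopback, link-local, and RFC 1918 private IPv4 ranges.
function isPrivateIPv4(ip: string): boolean {
  const [a, b] = ip.split('.').map(Number);
  return (
    a === 127 ||                         // loopback
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254)             // link-local, incl. cloud metadata
  );
}

// Validate the URL and resolve its hostname BEFORE opening a connection,
// so a request can never be steered at internal infrastructure.
async function assertSafeUrl(raw: string): Promise<URL> {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'https:') throw new Error('HTTPS only');
  const { address, family } = await lookup(url.hostname);
  if (family === 4 && isPrivateIPv4(address)) {
    throw new Error(`Blocked private address: ${address}`);
  }
  return url;
}
```

A production version would also pin the resolved IP for the actual connection to avoid DNS rebinding between the check and the request.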
The auth wizard generates keys locally, writes them with chmod 600, and enforces .gitignore, so secrets never leave your machine accidentally.
esbuild compiles your TypeScript to a single optimised IIFE with env vars injected at build time. No runtime dependencies on the edge node.
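A build step like the one described can be expressed with esbuild's JS API. The option names below are esbuild's documented ones; the entry point, output path, and injected variable are assumptions for illustration:

```typescript
import { build } from 'esbuild';

await build({
  entryPoints: ['src/processor.ts'], // hypothetical entry point
  bundle: true,                      // inline all dependencies
  format: 'iife',                    // one self-executing bundle for the edge node
  platform: 'browser',
  minify: true,
  outfile: 'dist/processor.js',
  define: {
    // Env vars are baked into the bundle at build time,
    // so nothing is read from the environment at runtime.
    'process.env.PHONIX_SECRET_KEY': JSON.stringify(
      process.env.PHONIX_SECRET_KEY ?? '',
    ),
  },
});
```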
The mock runtime simulates the full provider API locally (WebSocket messages, HTTP callbacks, fulfill) so you iterate without touching the network.
Switch from io.net to Akash to Acurast by changing one config field. Your application code stays identical across all five providers.
A100 and H100 spot clusters at ~$0.40/hr. The same SDK interface (deploy, send, receive) with a 60-second timeout and 4 MiB response cap for large model outputs.
Drop @phonixsdk/inference into any Next.js, Express, or Cloudflare Worker. Change baseURL in your existing OpenAI client and you're done.
@phonixsdk/mobile brings React hooks, context, and AppState lifecycle to your React Native and Expo apps: iOS Keychain, Android Keystore, zero native modules.
PhonixRouter scores every provider on latency, availability, and cost in real time. Circuit breakers open on failure and recover automatically, giving zero downtime across DePIN networks.
The full deployment lifecycle in a single tool.
Open source, Apache-2.0 licensed, and live on npm. Switch from OpenAI in two lines.