Open Source · v0.2.0 · GPU + TEE Compute · Now on npm

Deploy AI to the edge.
Any network, one SDK.

OpenAI costs too much. You're locked into one vendor. Your prompts hit centralised servers you don't control.
There's a better way.

Phonix is to edge compute what Ethers.js is to EVM chains —
one interface, any provider.

Switch from OpenAI → View on GitHub
$ npm install @phonixsdk/inference
GitHub stars @phonixsdk/inference on npm @phonixsdk/sdk on npm @phonixsdk/mobile on npm @phonixsdk/cli on npm License

Switch to decentralised GPU
in two lines

Your existing OpenAI code works unchanged. Swap baseURL and apiKey — that's it. Requests route automatically to the cheapest available GPU cluster across io.net, Akash, and Acurast TEE nodes.

  • ✓ Works with the openai npm package, LangChain, LlamaIndex
  • ✓ Streaming SSE, automatic failover, cost / latency routing
  • ✓ Next.js, Express, Cloudflare Workers — any Node.js HTTP server
  • ✓ ~$0.40/hr A100 spot vs $0.06/1K tokens on GPT-4
Quick start View on npm →
Before — OpenAI
// Expensive. Centralised. No control.
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
After — Phonix (2 lines changed)
// Decentralised GPU. Private. Cheaper.
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://<your-app>.vercel.app/v1', // ← your deployed handler
  apiKey:  process.env.PHONIX_SECRET_KEY,           // ← new
});

// Everything else is identical ↓
const res = await client.chat.completions.create({
  model:    'phonix-llama-3-70b',
  messages: [{ role: 'user', content: prompt }],
  stream:   true,
});

→ Deploy the handler with @phonixsdk/inference — see setup guide ↓

Live provider health

Real-time status across all five networks — updated every 5 minutes.

io.net
Akash
Acurast
Fluence
Koii
View dashboard →

One SDK, five networks

Deploy the same AI workload to any supported network without rewriting your code. Phonix automatically routes to the fastest, cheapest option.

io.net
GPU · New
Decentralised GPU clusters — A100, H100, RTX 4090 spot compute at a fraction of AWS pricing. The best option for large model inference.
🖥️ GPU clusters 🔑 API key ⚡ ~$0.40/hr A100
Akash
Supported
Decentralised cloud marketplace where providers bid to run containerised workloads. Deploy bundles via IPFS and pay in AKT — no lock-in, no vendor overhead.
☁️ Docker containers 🔑 BIP-39 mnemonic ⚡ nodejs
Acurast
TEE · Supported
237,000+ smartphone nodes running inside hardware Trusted Execution Environments. Your code and data are private — even from device owners.
📱 TEE-based 🔑 P256 auth ⚡ nodejs / wasm
Fluence
Supported
Decentralised serverless cloud built on libp2p. Deploy spells to a permissionless peer-to-peer network of compute providers.
๐ŸŒ P2P relay ๐Ÿ”‘ Ed25519 โšก nodejs
Koii
Supported
Community-owned compute network with Solana-compatible task nodes. Ideal for recurring data tasks, oracles, and agent workloads.
🔗 Task nodes 🔑 Solana keypair ⚡ nodejs

From zero to deployed in minutes

One CLI, one config file, one client API — regardless of which network you deploy to.

1

Initialise your project

Choose a provider and template. Phonix generates your config and entry point.

2

Configure credentials

The phonix auth wizard generates and stores all required keys. Your .env is locked to owner-only permissions and never committed.

3

Test locally

Run phonix run-local to simulate the full provider runtime on your machine — no network required.

4

Deploy and call

One command bundles, uploads to IPFS, and registers on-chain. Then call from any JavaScript environment with @phonixsdk/sdk.

your-app/index.ts
// Deploy once — run on edge nodes
import { PhonixClient } from '@phonixsdk/sdk';

const client = new PhonixClient({
  provider: 'ionet',
  secretKey: process.env.PHONIX_SECRET_KEY,
});

await client.connect();

// Listen for results from the TEE
client.onMessage((msg) => {
  const { result } = msg.payload as { result: string };
  console.log('Result:', result);
});

// Send a prompt to a processor node
await client.send('0xproc1...', {
  requestId: 'req-001',
  prompt: 'Summarize: The quick brown fox...',
});

// Later, once you're done receiving results:
client.disconnect();
terminal
# Initialise project
phonix init

# Set up credentials interactively
phonix auth

# Test without deploying
phonix run-local

# Deploy to the network
phonix deploy

# ✔ Deployment live!
# Processors: 3 matched
#   • 0xproc1...

Call your processors
from iOS & Android

Deploy your edge processors once with the Phonix CLI, then call them directly from your React Native or Expo app. Real-time results, secure key storage, and automatic background/foreground lifecycle management — all in one package.

iOS
Android
Expo
  • ✓ React hooks — usePhonix, useMessages, useSend for clean component integration
  • ✓ Context provider — wrap your root with <PhonixProvider> and access the client anywhere in the tree
  • ✓ Secure key storage — iOS Keychain & Android Keystore via expo-secure-store, in-memory fallback for bare RN
  • ✓ AppState lifecycle — auto-disconnects when the app backgrounds, auto-reconnects on foreground
  • ✓ SSRF protection — all endpoints validated: https:// only, private IPs blocked
  • ✓ Zero native modules — pure JavaScript, works with Expo Go and bare React Native
$ npm install @phonixsdk/mobile
// Wrap your root component once
import { PhonixProvider } from '@phonixsdk/mobile';

export default function App() {
  return (
    <PhonixProvider
      provider="akash"
      secretKey={process.env.PHONIX_SECRET_KEY}
      autoConnect
    >
      <NavigationContainer>
        <MainStack />
      </NavigationContainer>
    </PhonixProvider>
  );
}
// Access from any screen in the tree
import {
  usePhonixContext,
  useMessages,
  useSend,
} from '@phonixsdk/mobile';

export function ResultsScreen() {
  const { client, connected } = usePhonixContext();
  const messages = useMessages(client);
  const { send, sending } = useSend(client);

  return (
    <View>
      <Text>{connected ? '🟢 Live' : '⚪ Offline'}</Text>
      <Button
        title={sending ? 'Sending...' : 'Run inference'}
        onPress={() => send(leaseUrl, {
          prompt: 'Summarize the news today'
        })}
      />
      {messages.map((m, i) => (
        <Text key={i}>{m.payload as string}</Text>
      ))}
    </View>
  );
}
// Without context — manage the client directly
import { usePhonix, useMessages } from '@phonixsdk/mobile';

export function InferenceScreen() {
  const { client, connected, connect, error }
    = usePhonix({
        provider: 'akash',
        secretKey: PHONIX_SECRET_KEY,
      });
  const messages = useMessages(client);

  return (
    <View>
      <Button
        title="Connect"
        onPress={connect}
        disabled={connected}
      />
      {error && <Text>{error.message}</Text>}
      {messages.map((m, i) => (
        <Text key={i}>
          {JSON.stringify(m.payload)}
        </Text>
      ))}
    </View>
  );
}

Switch to decentralised GPU
in two lines

Tired of OpenAI pricing? Need private inference? @phonixsdk/inference is a drop-in baseURL replacement — swap two lines and the rest of your code runs unchanged.

Model                  Provider  Notes
phonix-llama-3-70b     io.net    A100 spot, best for large context
phonix-mistral-7b      io.net    GPU, cost-efficient
phonix-llama-3-8b      Akash     Container cloud, moderate cost
phonix-tee-phi-3-mini  Acurast   TEE, private, lowest cost
  • ✓ OpenAI wire-compatible — works with the openai npm package, LangChain, LlamaIndex, and more
  • ✓ Streaming SSE — pass-through token streaming, no buffering overhead
  • ✓ Automatic failover — if a provider is unavailable, the next best one is tried automatically
  • ✓ Routing strategies — cost, latency, or balanced
  • ✓ Works everywhere — Next.js App Router, Express, Cloudflare Workers, any Node.js HTTP server
$ npm install @phonixsdk/inference
Before — OpenAI SDK
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
After — Phonix (2 lines changed)
import OpenAI from 'openai';

const client = new OpenAI({
  // โ† swap these two lines
  baseURL: 'https://<your-app>.vercel.app/v1',
  apiKey:  process.env.PHONIX_SECRET_KEY,
});

// All your existing code stays identical ↓
const res = await client.chat.completions.create({
  model:    'phonix-llama-3-70b',
  messages: [{ role: 'user', content: prompt }],
  stream:   true,
});
app/v1/chat/completions/route.ts
import { PhonixInferenceHandler } from '@phonixsdk/inference';

const handler = new PhonixInferenceHandler({
  apiKey:        process.env.PHONIX_SECRET_KEY,
  ionetEndpoint: process.env.IONET_ENDPOINT,
  akashEndpoint: process.env.AKASH_ENDPOINT,
  strategy:      'cost',
});

export const POST = (req: Request) => handler.handleRequest(req);
export const GET  = (req: Request) => handler.handleRequest(req);

Route across DePIN networks — automatically

PhonixRouter manages multiple providers simultaneously. On every request it scores each provider by health, latency, cost, and availability — then sends to the best one. If that provider fails, it retries on the next. Circuit breakers prevent routing to unhealthy nodes until they recover.

  • balanced — Equal weight across availability, latency, and cost — the safe default.
  • latency — Always picks the fastest-responding provider. Good for interactive workloads.
  • availability — Maximises uptime. Prefers providers with the highest recent success rate.
  • cost — Routes to the cheapest option. Ideal for batch and background jobs.
  • round-robin — Distributes load evenly across all callable providers.
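As an illustration of how a balanced strategy could weigh its inputs, here is a sketch of a scorer over the three signals the text names. The weights, the normalisation, and the ProviderHealth shape are assumptions made for this example, not PhonixRouter's actual formula.

```typescript
// Illustrative balanced scoring: combine availability, latency, and cost
// into a single 0..1 score, then pick the highest-scoring provider.
// Field names and weighting are invented for this sketch.
interface ProviderHealth {
  provider: string;
  availability: number;  // recent success rate, 0..1
  latencyMs: number;     // recent average latency
  costPerHour: number;   // price signal, e.g. USD/hr
}

function balancedScore(h: ProviderHealth): number {
  const latencyScore = 1 / (1 + h.latencyMs / 1000); // faster → closer to 1
  const costScore = 1 / (1 + h.costPerHour);         // cheaper → closer to 1
  return (h.availability + latencyScore + costScore) / 3;
}

function pickProvider(all: ProviderHealth[]): ProviderHealth {
  // Assumes a non-empty list of callable providers.
  return all.reduce((best, h) =>
    balancedScore(h) > balancedScore(best) ? h : best,
  );
}
```

A latency or cost strategy would simply drop the other terms; round-robin needs no scoring at all.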

Works on Node.js (SDK) and React Native (mobile). The same routing engine powers both.

// Server SDK
import { PhonixRouter } from '@phonixsdk/sdk';

const router = new PhonixRouter({
  providers: ['akash', 'acurast'],
  secretKey: process.env.PHONIX_SECRET_KEY,
  strategy: 'latency',
});

await router.connect();
await router.deploy(config);  // deploys to all providers

// Automatically picks fastest callable provider
await router.send({ prompt: 'Hello' });

// Health snapshot
router.health().forEach(h => {
  console.log(h.provider, h.latencyMs, h.circuitState);
});
// Mobile (React Native / Expo)
import { usePhonixRouter } from '@phonixsdk/mobile';

const { router, connected, health } = usePhonixRouter({
  routes: [
    { provider: 'akash',   endpoint: AKASH_URL,   secretKey },
    { provider: 'acurast', endpoint: ACURAST_WS, secretKey },
  ],
  strategy: 'balanced',
  autoConnect: true,
});

// AppState-aware: pauses on background, resumes on foreground
await router?.send({ prompt: 'Hello from iOS' });

Built for production

Security and reliability are not afterthoughts — built into every layer of the SDK and covered by 135 tests across all five providers.

🔒

Confidential by default

Your code runs inside hardware TEEs. Prompts, responses, and logic are private — even from device owners and network operators.

🛡️

SSRF & DNS rebinding protection

All HTTP calls validate URLs and resolve hostnames to IPs before opening connections, blocking requests to internal infrastructure.
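The first half of that check (scheme filtering plus rejecting literal private addresses) can be sketched in a few lines. `isAllowedEndpoint` is a hypothetical helper, not the SDK's API, and a real defence against DNS rebinding must also resolve the hostname and re-check the resulting IP before connecting; that resolution step is omitted here.

```typescript
// Simplified endpoint validation sketch: https only, and literal
// loopback / RFC 1918 / link-local hosts rejected. DNS resolution and
// post-resolution IP checks (needed to defeat rebinding) are omitted.
function isAllowedEndpoint(raw: string): boolean {
  try {
    const url = new URL(raw);
    if (url.protocol !== 'https:') return false;

    const host = url.hostname;
    const privatePatterns = [
      /^localhost$/i,
      /^127\./,                      // IPv4 loopback
      /^10\./,                       // RFC 1918
      /^192\.168\./,                 // RFC 1918
      /^172\.(1[6-9]|2\d|3[01])\./,  // RFC 1918 172.16.0.0/12
      /^169\.254\./,                 // link-local
      /^\[?::1\]?$/,                 // IPv6 loopback
    ];
    return !privatePatterns.some((p) => p.test(host));
  } catch {
    return false; // not a parseable URL
  }
}
```

This mirrors the mobile package's stated rule ("https:// only, private IPs blocked") in validator form.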

🔑

Safe credential management

The auth wizard generates keys locally, writes them with chmod 600, and enforces .gitignore — secrets never leave your machine accidentally.

📦

Single-file bundles

esbuild compiles your TypeScript to a single optimised IIFE with env vars injected at build time. No runtime dependencies on the edge node.

🧪

Local testing runtime

The mock runtime simulates the full provider API locally — WebSocket messages, HTTP callbacks, fulfill — so you iterate without touching the network.

🌐

Provider-agnostic API

Switch from io.net to Akash to Acurast by changing one config field. Your application code stays identical across all five providers.

🖥️

GPU compute via io.net

A100 and H100 spot clusters at ~$0.40/hr. The same SDK interface — deploy, send, receive — with a 60-second timeout and 4 MiB response cap for large model outputs.
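The response cap can be pictured as a capped body reader. `readCapped` is a hypothetical helper, not the SDK's internal code; the 4 MiB and 60-second figures come from the text, and the timeout would be handled separately (for example with `AbortSignal.timeout(60_000)` on the fetch call).

```typescript
// Sketch: read a fetch Response body while enforcing a byte cap,
// aborting as soon as the cap is exceeded rather than buffering the
// whole payload. `readCapped` is a hypothetical helper.
async function readCapped(
  res: Response,
  maxBytes = 4 * 1024 * 1024, // the documented 4 MiB cap
): Promise<string> {
  const reader = res.body!.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.length;
    if (total > maxBytes) {
      await reader.cancel();
      throw new Error(`response exceeded ${maxBytes}-byte cap`);
    }
    chunks.push(value);
  }
  return Buffer.concat(chunks).toString('utf8');
}
```

Streaming the chunks instead of buffering keeps memory bounded even when a node misbehaves.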

🔌

OpenAI-compatible endpoint

Drop @phonixsdk/inference into any Next.js, Express, or Cloudflare Worker. Change baseURL in your existing OpenAI client and you're done.

📱

iOS & Android ready

@phonixsdk/mobile brings React hooks, context, and AppState lifecycle to your React Native and Expo apps — iOS Keychain, Android Keystore, zero native modules.

⚡

Intelligent multi-provider routing

PhonixRouter scores every provider on latency, availability, and cost in real time. Circuit breakers open on failure and recover automatically — zero downtime across DePIN networks.

Everything from the terminal

The full deployment lifecycle in a single tool.

phonix init
Interactive setup — generates phonix.json, .env, and template files
phonix auth
Credential wizard — generates and stores keys for your provider
phonix deploy
Bundle, upload to IPFS, and register deployment on-chain
phonix run-local
Run your script locally with a full mock provider runtime
phonix status
List deployments, processor IDs, and live status
phonix send <id> <msg>
Send a test message directly to a processor node

Deploy AI to the edge today

Open source, Apache-2.0 licensed, and live on npm. Switch from OpenAI in two lines.