One simple interface for all top models. No more juggling different SDKs or API signatures.
Built for speed with native streaming support. We optimize routing for the lowest latency.
Stop chasing the latest model releases. We integrate best-in-class models as they ship, so your code never changes.
Switching to Freeference is as easy as changing a single line of code.
Sign up and generate a unified API key from the dashboard.
Point your OpenAI SDK to https://api.freeference.com/v1.
Run your app. We handle the model routing and failover.
<span className="text-pink-400">import</span> OpenAI <span className="text-pink-400">from</span> <span className="text-green-300">'openai'</span>;
<span className="text-gray-500">// Initialize with Freeference</span>
<span className="text-pink-400">const</span> client = <span className="text-pink-400">new</span> OpenAI({
apiKey: <span className="text-green-300">process.env.FREEFERENCE_KEY</span>,
baseURL: <span className="text-green-300">'https://api.freeference.com/v1'</span>
});
<span className="text-pink-400">const</span> response = <span className="text-pink-400">await</span> client.chat.completions.create({
model: <span className="text-green-300">'gpt-4-turbo'</span>, <span className="text-gray-500">// We route this!</span>
messages: [{ role: <span className="text-green-300">'user'</span>, content: <span className="text-green-300">'Hello world!'</span> }]
});
Join thousands of developers building the next generation of AI apps without vendor lock-in.
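Since streaming is supported natively, responses can be consumed chunk by chunk with the same SDK. A minimal sketch of the consumption pattern follows; the `collectStream` helper and the mock generator are illustrative, standing in for the async iterable returned when you pass `stream: true` to `client.chat.completions.create`.

```javascript
// Sketch: consuming a streamed chat completion.
// With stream: true, the OpenAI SDK yields chunks whose partial text
// lives in choices[0].delta.content.

// Accumulate the streamed deltas into one string.
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Mock async generator standing in for a real streamed response,
// so the pattern can be run without an API key.
async function* mockStream() {
  for (const piece of ['Hel', 'lo ', 'world!']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}
```

In production you would replace `mockStream()` with the result of `client.chat.completions.create({ ..., stream: true })`; the loop body is identical either way.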