Call 200+ LLMs with a single OpenAI-compatible format
The most used feature of the LLM proxy is the ability to call 200+ LLMs through a single, OpenAI-compatible API format. You can switch between models by changing a single line of code.
After you integrate the LLM proxy, you can choose a model from the Models page. There you can see each model’s description, pricing, and other metrics, which helps you pick the best model for your use case.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.keywordsai.co/api/",
    api_key="YOUR_KEYWORDSAI_API_KEY",
)

response = client.chat.completions.create(
    model="claude-3-5-haiku-20241022",
    messages=[{"role": "user", "content": "Tell me a long story"}]
)
```
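A minimal sketch of the one-line switch described above: only the `model` argument changes between providers, while the rest of the request stays identical. The model names below are examples; any model listed on the Models page works.

```python
# Sketch: switching providers through the proxy is a one-line change --
# only the `model` argument differs between the two requests.
# (Model names are examples; use any model from the Models page.)

def build_request(model: str) -> dict:
    """Build the kwargs passed to client.chat.completions.create()."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Tell me a long story"}],
    }

anthropic_request = build_request("claude-3-5-haiku-20241022")
openai_request = build_request("gpt-4o-mini")

# Everything except the model name is identical.
changed_keys = {k for k in anthropic_request if anthropic_request[k] != openai_request[k]}
print(changed_keys)  # {'model'}
```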
Here is an example of how to disable logging with the OpenAI TypeScript SDK. Because `disable_log` is not part of the SDK’s type definitions, you should add a `// @ts-expect-error` comment before the `disable_log` field.
```typescript
import { OpenAI } from "openai";

const client = new OpenAI({
  baseURL: "https://api.keywordsai.co/api",
  apiKey: "YOUR_KEYWORDSAI_API_KEY",
});

const response = await client.chat.completions
  .create({
    messages: [{ role: "user", content: "Say this is a test" }],
    model: "claude-3-5-sonnet-20241022",
    // @ts-expect-error disable_log is a Keywords AI field, not part of the SDK types
    disable_log: true,
  })
  .asResponse();

console.log(await response.json());
```
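In the Python SDK no type workaround is needed: a sketch of the equivalent, assuming `disable_log` (the Keywords AI field from the example above) is passed through the OpenAI Python SDK’s `extra_body` parameter, which merges extra keys into the JSON request body.

```python
# Sketch: the OpenAI Python SDK accepts an `extra_body` dict whose keys
# are merged into the JSON request body, so provider-specific fields like
# `disable_log` need no type workaround in Python.

def chat_kwargs(disable_log: bool) -> dict:
    """Build kwargs for client.chat.completions.create()."""
    kwargs = {
        "model": "claude-3-5-sonnet-20241022",
        "messages": [{"role": "user", "content": "Say this is a test"}],
    }
    if disable_log:
        # Forwarded verbatim into the request body by the SDK.
        kwargs["extra_body"] = {"disable_log": True}
    return kwargs

print(chat_kwargs(True)["extra_body"])  # {'disable_log': True}
```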