
πŸš… LiteLLM

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

OpenAI Proxy Server | <a href="https://docs.litellm.ai/docs/enterprise" target="_blank">Enterprise Tier</a>


LiteLLM manages:

- Translating inputs to the provider's completion, embedding, and image_generation endpoints
- Consistent output: text responses are always available at ['choices'][0]['message']['content']
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI)
- Setting budgets & rate limits per project, API key, and model (OpenAI Proxy Server)

Jump to OpenAI Proxy Docs
Jump to Supported LLM Providers

Support for more providers is ongoing. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

[!IMPORTANT] LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here

Open In Colab

pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
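
Because every response comes back in the OpenAI format, the reply text lives in the same place no matter which provider served it. A minimal sketch, reusing the environment variables and messages set above:

# same access pattern for every provider
for model in ["gpt-3.5-turbo", "command-nightly"]:
    response = completion(model=model, messages=messages)
    print(model, "->", response.choices[0].message.content)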

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)
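
Since acompletion is a coroutine, several requests can be dispatched concurrently with standard asyncio tooling. A minimal sketch (the second model is just an example and assumes the corresponding API key is set):

from litellm import acompletion
import asyncio

async def fan_out():
    messages = [{"content": "Hello, how are you?", "role": "user"}]
    # run both requests concurrently and wait for all results
    return await asyncio.gather(
        acompletion(model="gpt-3.5-turbo", messages=messages),
        acompletion(model="command-nightly", messages=messages),
    )

for r in asyncio.run(fan_out()):
    print(r.choices[0].message.content)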

Streaming (Docs)

LiteLLM supports streaming the model response back. Pass stream=True to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2
response = completion(model="claude-2", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")
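
Async streaming combines the two patterns above: await acompletion(..., stream=True), then iterate the chunks with async for. A minimal sketch:

from litellm import acompletion
import asyncio

async def stream_response():
    messages = [{"content": "Hello, how are you?", "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages, stream=True)
    # chunks arrive as an async iterator in the same delta format as above
    async for part in response:
        print(part.choices[0].delta.content or "", end="")

asyncio.run(stream_response())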

Logging Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to Langfuse, DynamoDB, s3 Buckets, LLMonitor, Helicone, Promptlayer, Traceloop, Athina, and Slack.

import litellm
import os
from litellm import completion

## set env variables for logging tools
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LLMONITOR_APP_ID"] = "your-llmonitor-app-id"
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["langfuse", "llmonitor", "athina"] # log input/output to langfuse, llmonitor, athina, etc.

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi πŸ‘‹ - i'm openai"}])

OpenAI Proxy (Docs)

Set Budgets & Rate limits across multiple projects

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

πŸ“– Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:8000

Step 2: Make ChatCompletions Request to Proxy

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)

Proxy Key Management (Docs)

A UI is available at /ui on your proxy server.

Set budgets and rate limits across multiple projects with POST /key/generate.

Request

curl 'http://0.0.0.0:8000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
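
The generated key can then be passed as the api_key when pointing an OpenAI client at the proxy, following the same pattern as the quick start above (the key below is the example value from the response shown):

import openai

client = openai.OpenAI(
    api_key="sk-kdEXbIqZRwEeEiHwdg7sFA",  # virtual key returned by /key/generate
    base_url="http://0.0.0.0:8000",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # must be one of the models allowed for this key
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)
print(response)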

Supported Providers (Docs)

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
|----------|------------|-----------|------------------|-----------------|-----------------|------------------------|
| openai | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… |
| azure | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… |
| aws - sagemaker | βœ… | βœ… | βœ… | βœ… | βœ… | |
| aws - bedrock | βœ… | βœ… | βœ… | βœ… | βœ… | |
| google - vertex_ai [Gemini] | βœ… | βœ… | βœ… | βœ… | | |
| google - palm | βœ… | βœ… | βœ… | βœ… | | |
| google AI Studio - gemini | βœ… | | βœ… | | | |
| mistral ai api | βœ… | βœ… | βœ… | βœ… | βœ… | |
| cloudflare AI Workers | βœ… | βœ… | βœ… | βœ… | | |
| cohere | βœ… | βœ… | βœ… | βœ… | βœ… | |
| anthropic | βœ… | βœ… | βœ… | βœ… | | |
| huggingface | βœ… | βœ… | βœ… | βœ… | βœ… | |
| replicate | βœ… | βœ… | βœ… | βœ… | | |
| together_ai | βœ… | βœ… | βœ… | βœ… | | |
| openrouter | βœ… | βœ… | βœ… | βœ… | | |
| ai21 | βœ… | βœ… | βœ… | βœ… | | |
| baseten | βœ… | βœ… | βœ… | βœ… | | |
| vllm | βœ… | βœ… | βœ… | βœ… | | |
| nlp_cloud | βœ… | βœ… | βœ… | βœ… | | |
| aleph alpha | βœ… | βœ… | βœ… | βœ… | | |
| petals | βœ… | βœ… | βœ… | βœ… | | |
| ollama | βœ… | βœ… | βœ… | βœ… | | |
| deepinfra | βœ… | βœ… | βœ… | βœ… | | |
| perplexity-ai | βœ… | βœ… | βœ… | βœ… | | |
| Groq AI | βœ… | βœ… | βœ… | βœ… | | |
| anyscale | βœ… | βœ… | βœ… | βœ… | | |
| voyage ai | | | | | βœ… | |
| xinference [Xorbits Inference] | | | | | βœ… | |

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here’s how to modify the repo locally: Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! πŸš€

Support / talk with founders

Why did we build this

Contributors