LiteLLM analytics installation

Note: LiteLLM can be used as a Python SDK or as a proxy server. PostHog observability requires LiteLLM version 1.77.3 or higher.

  1. Install LiteLLM

    Required

    Choose your installation method based on how you want to use LiteLLM:

    pip install litellm
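
    If you instead plan to run LiteLLM as a proxy server, install it with the proxy extras:

    pip install 'litellm[proxy]'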
  2. Configure PostHog observability

    Required

    Configure PostHog by setting your project API key and host, and adding posthog to your LiteLLM callback handlers. You can find your API key in your project settings. If your project lives in PostHog's EU region, set the host to https://eu.i.posthog.com instead.

    import os
    import litellm
    # Set environment variables
    os.environ["POSTHOG_API_KEY"] = "<ph_project_api_key>"
    os.environ["POSTHOG_API_URL"] = "https://us.i.posthog.com" # Optional, defaults to https://app.posthog.com
    # Enable PostHog callbacks
    litellm.success_callback = ["posthog"]
    litellm.failure_callback = ["posthog"] # Optional: also log failures
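
    The posthog callback only handles observability; you still need credentials for whichever LLM provider you call. For the gpt-4o-mini example in the next step, that means also setting your OpenAI key (placeholder value shown):

    os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"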
  3. Call LLMs through LiteLLM

    Required

    Now, when you use LiteLLM to call various LLM providers, PostHog automatically captures an $ai_generation event.

    response = litellm.completion(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "Tell me a fun fact about hedgehogs"}
        ],
        metadata={
            "user_id": "user_123",  # Maps to PostHog distinct_id
            "company": "company_id_in_your_db"  # Custom property
        }
    )
    print(response.choices[0].message.content)

    Notes:

    • This also works with streaming responses when you set stream=True; see the sketch after this list.
    • To disable logging for specific requests, add "no-log": True to the metadata dict.
    • If you want to capture LLM events anonymously, don't pass a user_id in metadata. See our docs on anonymous vs identified events to learn more.
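
    For example, a minimal streaming sketch (reusing the same model and placeholder metadata as above):

    stream = litellm.completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
        stream=True,  # PostHog observability still captures the call when streaming
        metadata={"user_id": "user_123"},  # Omit user_id to capture anonymously
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="")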

    You can expect captured $ai_generation events to have the following properties:

    Property            Description
    ------------------  --------------------------------------------------------------
    $ai_model           The specific model, like gpt-5-mini or claude-4-sonnet
    $ai_latency         The latency of the LLM call in seconds
    $ai_tools           Tools and functions available to the LLM
    $ai_input           List of messages sent to the LLM
    $ai_input_tokens    The number of tokens in the input (often found in response.usage)
    $ai_output_choices  List of response choices from the LLM
    $ai_output_tokens   The number of tokens in the output (often found in response.usage)
    $ai_total_cost_usd  The total cost in USD (input + output)

    See the full list of properties for more.
  4. Verify traces and generations

    Checkpoint
    Confirm LLM events are being sent to PostHog

    Let's make sure LLM events are being captured and sent to PostHog. Under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.


    [Screenshot: LLM generations in PostHog]
  5. Capture embeddings

    Optional

    PostHog can also capture embedding generations as $ai_embedding events through LiteLLM:

    response = litellm.embedding(
        input="The quick brown fox",
        model="text-embedding-3-small",
        metadata={
            "user_id": "user_123",  # Maps to PostHog distinct_id
            "company": "company_id_in_your_db"  # Custom property
        }
    )
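
    The returned object follows the OpenAI-style embedding shape, so a quick sanity check might look like this (a sketch; the exact field access is an assumption and can vary by provider):

    vector = response.data[0]["embedding"]
    print(f"Embedding dimensions: {len(vector)}")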
