Hey all, I am using LiteLLM + Langfuse as a proxy for Azure OpenAI and for monitoring my application. Now I want to pass some extra metadata via the request body with each completion call. The LiteLLM documentation on passing additional metadata to Langfuse states:

```python
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "metadata": {
            "generation_name": "ishaan-generation-openai-client",
            "generation_id": "openai-client-gen-id22",
            "trace_id": "openai-client-trace-id22",
            "trace_user_id": "openai-client-user-id2"
        }
    }
)

print(response)
```

So the metadata is passed as an extra field in the request body. How do I pass custom metadata like this to `streamObject`? For example:

```ts
async function main() {
  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    maxTokens: 2000,
    // I want to pass additional metadata here, for example via body: { metadata: { trace_id: "my id" } }
    schema: z.object({
      characters: z.array(
        z.object({
          name: z.string(),
          class: z
            .string()
            .describe('Character class, e.g. warrior, mage, or thief.'),
          description: z.string(),
        }),
      ),
    }),
    prompt:
      'Generate 3 character descriptions for a fantasy role playing game.',
  });
}
```

I am aware of the callback options for LiteLLM, but some customization is missing for me. Any hints appreciated. Thank you.
Replies: 1 comment 1 reply
You can create a custom `openai` provider instance and add headers: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#provider-instance. The request body cannot be modified atm.
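The suggestion above can be sketched roughly as follows. This is a sketch, not a definitive implementation: `createOpenAI` accepting `baseURL`, `apiKey`, and `headers` is the documented provider-instance API, but the specific header name shown (`x-litellm-trace-id`) is a hypothetical placeholder; check which headers your LiteLLM deployment actually reads for Langfuse metadata.

```typescript
// Sketch: point a custom provider instance at the LiteLLM proxy and
// attach extra headers there, since the request body itself cannot be
// modified. Header names below are hypothetical placeholders.
import { createOpenAI } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'http://0.0.0.0:4000', // LiteLLM proxy from the question
  apiKey: 'anything',
  headers: {
    // hypothetical header name -- verify against your LiteLLM config
    'x-litellm-trace-id': 'openai-client-trace-id22',
  },
});

async function main() {
  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    // The AI SDK core functions also accept a per-call `headers` option,
    // so metadata can vary per request instead of per provider instance:
    headers: { 'x-litellm-trace-id': 'my id' },
    schema: z.object({
      characters: z.array(z.object({ name: z.string() })),
    }),
    prompt: 'Generate 3 character descriptions for a fantasy role playing game.',
  });
}
```

Per-call `headers` on `streamObject` may be preferable when each trace needs its own ID; the provider-instance `headers` apply to every request made through that instance.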