1. Problem Statement
Many users have expressed the need for a detailed trace of the entire LLM app process. This encompasses every step, from the moment a question is posed to the point of receiving an answer, ensuring all calls and intermediate steps are transparent and traceable.
2. Proposed Solution
Immediate Step-by-Step Solution:
Modify the LLM app endpoint to return a comprehensive structured response instead of just plain text. This means transitioning from a simple text response to a format like the sketch below.
Add a `verbose` parameter to the generate endpoint: with `verbose=True`, logs should accompany the output; with `verbose=False`, the response should be the plain output alone. This way, when running the endpoint in production, you skip the extra outputs.
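A rough sketch of the two response shapes, reusing the field names from the wrapper described in section 3 (the trace entries and cost figures are illustrative, not a settled schema):

```python
# verbose=True: structured response carrying the trace and cost breakdown.
{
    "message": "The final answer shown to the user",
    "logs": [
        {"step": "retrieve_context", "output": "..."},
        {"step": "llm_call", "prompt": "...", "completion": "..."},
    ],
    "costs": {"prompt_tokens": 512, "completion_tokens": 128, "total_usd": 0.0021},
}

# verbose=False: just the plain text, as today.
"The final answer shown to the user"
```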
Long-term Vision:
Instead of returning the whole trace as part of the endpoint response, we will save it to a db and return a run-id that the frontend can use to fetch that information.
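For illustration, the round trip could look like this; the base URL, the `/runs/{run_id}` path, and the field names are assumptions, not a settled API:

```python
import requests

# 1. Call the app endpoint; the full trace is persisted server-side.
resp = requests.post("https://myapp.example/generate", json={"question": "..."})
run_id = resp.json()["run_id"]  # response: {"message": "...", "run_id": "..."}

# 2. The frontend fetches the stored trace only when the user asks for it.
trace = requests.get(f"https://myapp.example/runs/{run_id}").json()
```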
3. Implementation Details
Users add the line `ag = Agenta()` to their script.
They can then choose specific items to log using `ag.log(output)`. [Here we need to look at other solutions on the market to decide the best way to implement this. We need to look into langsmith, helicone, langfuse, and pezzo, and compare their logging APIs.]
We modify the `@post` wrapper to convert the output of the user function to `{"message": <the output>, "logs": ag.log(), "costs": ag.costs()}` (see the sketch below).
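A minimal sketch of how these pieces could fit together, assuming an in-process buffer on the `Agenta` object; apart from `Agenta`, `ag.log`, `ag.costs`, and `@post` from the steps above, all names are assumptions:

```python
import functools

class Agenta:
    """Collects logged items and cost info for the current call (sketch)."""

    def __init__(self):
        self._logs = []
        self._costs = {}

    def log(self, item=None):
        # ag.log(x) records an intermediate step; ag.log() returns the trace so far.
        if item is not None:
            self._logs.append(item)
        return self._logs

    def costs(self):
        return self._costs


ag = Agenta()


def post(func):
    """The @post wrapper: packages the user's output with logs and costs."""
    @functools.wraps(func)
    def wrapper(*args, verbose=True, **kwargs):
        output = func(*args, **kwargs)
        if not verbose:
            return output  # production mode: skip the extra outputs
        return {"message": output, "logs": ag.log(), "costs": ag.costs()}
    return wrapper


@post
def generate(question):
    context = "...retrieved context..."        # e.g. a retrieval step
    ag.log({"step": "retrieve", "output": context})
    answer = f"Answer to {question!r}"         # e.g. the actual LLM call
    ag.log({"step": "llm_call", "output": answer})
    return answer
```

Calling `generate("What is RAG?")` then returns the `{"message": ..., "logs": ..., "costs": ...}` structure from section 2, while `generate("What is RAG?", verbose=False)` returns only the answer string.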
4. Future Steps
We can later add callback integrations with `langchain` and `llama_index` for automatic logging. The user would just need to add our callbacks, and the logging would happen in the background.
We can add special `@wrapper`s for subfunctions, like `@llm`, which would result in automatic logging (a sketch follows below). [see langsmith-sdk for examples]
Later, when we implement observability, we will change the logic from returning the logs in the API endpoint to saving the logs/traces in the database and returning a run-id that the playground can use to fetch the logs of a given call.
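A minimal sketch of what an `@llm` sub-function wrapper could look like, reusing the hypothetical `ag` buffer from the implementation sketch above; the decorator name comes from the step above, everything else is an assumption:

```python
import functools
import time

def llm(func):
    """Hypothetical @llm wrapper: auto-logs inputs, output, and latency of a sub-call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        output = func(*args, **kwargs)
        ag.log({
            "step": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": output,
            "latency_s": round(time.time() - start, 3),
        })
        return output
    return wrapper


@llm
def summarize(text):
    return "...model completion..."  # the actual LLM call would go here
```

With this in place, every call to `summarize` lands in the trace without any explicit `ag.log` in the user's code.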