Here are several enhancement proposals for the next iteration of our SDK, aimed at increasing its usability and flexibility.
First, we've identified certain limitations in the current version that hinder the developer experience:
Cumbersome multi-parameter calls:
Incorporating multiple parameters in a single call often results in unwieldy function signatures. Take, for example, an application that makes three calls to GPT-3.5 and wants to expose the configuration of each call. Currently, we end up with a lengthy function signature like this:
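A sketch of the kind of signature this produces (the @ag.post entry point, the gpt35 helper, and the parameter names are illustrative assumptions, not the SDK's actual API):

```python
import agenta as ag
import openai

def gpt35(prompt: str, temperature: float, max_tokens: int) -> str:
    """Plain wrapper around the OpenAI chat API (pre-1.0 client)."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return resp.choices[0].message.content

# Every parameter of every call has to be threaded through the app's
# signature so that it can be exposed in the API and the playground.
@ag.post
def app(
    question: str,
    summarize_temperature: float = 0.9,
    summarize_max_tokens: int = 256,
    extract_temperature: float = 0.0,
    extract_max_tokens: int = 128,
    answer_temperature: float = 0.7,
    answer_max_tokens: int = 512,
) -> str:
    summary = gpt35(f"Summarize:\n{question}", summarize_temperature, summarize_max_tokens)
    facts = gpt35(f"List the key facts:\n{summary}", extract_temperature, extract_max_tokens)
    return gpt35(f"Answer using these facts:\n{facts}", answer_temperature, answer_max_tokens)
```

A more streamlined approach would be to group each call's settings into a single object (ag.ModelConfig is a hypothetical name for such a grouped-config type):

```python
import agenta as ag

@ag.post
def app(
    question: str,
    summarize: ag.ModelConfig = ag.ModelConfig(temperature=0.9, max_tokens=256),
    extract: ag.ModelConfig = ag.ModelConfig(temperature=0.0, max_tokens=128),
    answer: ag.ModelConfig = ag.ModelConfig(temperature=0.7, max_tokens=512),
) -> str:
    ...  # same three calls as above, reading settings from the config objects
```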
With this design, all the parameters are added directly to the API and the playground. An even simpler, more user-friendly approach might look like this:
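A sketch of that shape, assuming a hypothetical @ag.llm_call decorator and ag.completion wrapper (both are proposed names, explained below):

```python
import agenta as ag

@ag.llm_call  # proposed decorator: records the latency of each function
def summarize(text: str) -> str:
    # ag.completion (proposed wrapper) would record cost, tokens, and the
    # prompt, and expose its model parameters in the playground by default.
    return ag.completion(
        model="gpt-3.5-turbo",
        prompt=f"Summarize the following text:\n{text}",
    )

@ag.llm_call
def answer(summary: str, question: str) -> str:
    return ag.completion(
        model="gpt-3.5-turbo",
        prompt=f"Using this summary:\n{summary}\nAnswer the question: {question}",
    )

@ag.post
def app(text: str, question: str) -> str:
    return answer(summarize(text), question)
```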
In this setup, we could use the @llm_call decorator to track the latency of each function and the ag.completion call to track costs, tokens, and prompts. This would also allow us to display, by default, all the parameters needed to configure the LLM.
Flexibility in number of inputs:
Users might wish to adjust the number of inputs in a single prompt application.
We could redesign the SDK such that the user code looks like this:
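For example (treating ag.Dict as a proposed marker type that tells the platform the input keys are user-defined; the template helper is illustrative):

```python
import agenta as ag

default_prompt = "Write a short story about {subject} set in {place}."

@ag.post
def app(inputs: ag.Dict, prompt_template: str = default_prompt) -> str:
    # The keys of `inputs` (here: subject, place) are chosen by the user
    # in the playground rather than fixed in the function signature.
    return ag.completion(
        model="gpt-3.5-turbo",
        prompt=prompt_template.format(**inputs),
    )
```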
However, this poses a challenge: the configuration of expected inputs would now need to be stored in the backend instead of in openapi.json. Essentially, when the frontend encounters an ag.Dict in the inputs, it should automatically add a new parameter (the list of input names) and save it in the backend without using it in the call. To keep things simple, we can add that parameter to the FastAPI endpoint but not use it, thus maintaining the same interaction between the frontend and backend.
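Concretely, the generated FastAPI endpoint could accept the extra parameter without consuming it. A minimal sketch, with made-up route and parameter names:

```python
from typing import Dict, List, Optional

from fastapi import FastAPI

api = FastAPI()

@api.post("/generate")
def generate(
    inputs: Dict[str, str],
    input_keys: Optional[List[str]] = None,  # the new "list of inputs" parameter
) -> Dict[str, str]:
    # The frontend sends `input_keys` and the backend persists it as
    # configuration, but the call itself never reads it, so the
    # frontend/backend interaction stays exactly the same.
    prompt = "Write a short story about {subject} set in {place}.".format(**inputs)
    return {"output": prompt}  # placeholder for the actual LLM call
```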