
How to do both text output and FunctionCall in the same request? #232

Open
jaromiru opened this issue Jun 6, 2024 · 3 comments

Comments

@jaromiru

jaromiru commented Jun 6, 2024

Hello, I want the model to reason before making a function call in one request, which is possible in other libraries. However, I haven't found a way to do this in magentic. The usual signature of a method is:

def do_something(param: str) -> FunctionCall: ...

which does not allow any output other than the function call.

I'd be happy to do something like:

def do_something(param: str) -> tuple[Annotated[str, "reasoning"], FunctionCall]: ...

but this results in a pydantic error. The same applies to ParallelFunctionCall.
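
For context, here is a minimal sketch of the standard magentic function-calling pattern that the signatures above abbreviate (the prompt text, the run_command tool, and its body are illustrative, not taken from the library docs):

from magentic import prompt, FunctionCall

def run_command(command: str) -> str:
    """Illustrative tool the model may choose to call."""
    return f"ran {command}"

@prompt(
    "Decide which command to run to accomplish: {param}",
    functions=[run_command],
)
def do_something(param: str) -> FunctionCall[str]: ...

# The model returns a FunctionCall object; calling it executes the tool.
call = do_something("list the files in the current directory")
result = call()

The question is how to also get free-text reasoning from the model in that same single request, alongside (or before) the FunctionCall.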

@jackmpcollins
Owner

Hi @jaromiru

The OpenAI API returns either a message or tool call(s), while the Anthropic API can return "thoughts" before making a tool call (see related issue #220). Because of this, a return type that allows "thinking" before making a function call would only work with some LLM providers, but it might still be useful to have.
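
For reference, a rough sketch of how those "thoughts" surface when calling the Anthropic Messages API directly: the response content is a list of blocks, and a text block can precede a tool_use block. The model name and tool schema below are illustrative, and field names may differ slightly between SDK versions.

import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-opus-20240229",  # illustrative model name
    max_tokens=1024,
    tools=[
        {
            "name": "do_something",
            "description": "Do the thing",
            "input_schema": {
                "type": "object",
                "properties": {"param": {"type": "string"}},
                "required": ["param"],
            },
        }
    ],
    messages=[{"role": "user", "content": "Think step by step, then do the thing."}],
)

# A "text" block (the thoughts) may be followed by a "tool_use" block.
for block in response.content:
    if block.type == "text":
        print("thoughts:", block.text)
    elif block.type == "tool_use":
        print("tool call:", block.name, block.input)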

One approach you could try is having separate steps for the thinking and the function call. Something like

@prompt(...)
def think_about_doing_something(param: str) -> str: ...

@prompt("Based on thoughts {thoughts}. Do thing", functions=[...])
def do_something(thoughts: str, param: str) -> FunctionCall: ...

thoughts = think_about_doing_something(param)
function_call = do_something(thoughts, param)

Another option, which would reduce this to a single query, is to set the return type to a pydantic model with an "explanation" field plus fields for the function parameters. You would need to define such a model for every available function and union them in the return type (see the sketch after the example below). This is similar to chain-of-thought prompting: https://magentic.dev/structured-outputs/#chain-of-thought-prompting

from magentic import prompt
from pydantic import BaseModel, Field

class ExplainedFunctionParams(BaseModel):
    explanation: str = Field(description="reasoning for the following choice of parameters")
    param: str

@prompt(...)  # No `functions` provided here
def do_something(param: str) -> ExplainedFunctionParams: ...

func_params = do_something(param)
my_function(func_params.param)
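
As a sketch of the "union them in the return type" part mentioned above, building on ExplainedFunctionParams and adding a made-up second function send_email with its own params model:

class ExplainedSendEmailParams(BaseModel):
    explanation: str = Field(description="reasoning for the following choice of parameters")
    recipient: str
    body: str

@prompt(...)  # Still no `functions` provided here
def choose_action(param: str) -> ExplainedFunctionParams | ExplainedSendEmailParams: ...

result = choose_action(param)
if isinstance(result, ExplainedFunctionParams):
    my_function(result.param)
elif isinstance(result, ExplainedSendEmailParams):
    send_email(result.recipient, result.body)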

Please let me know if either of these approaches would work for you. Thanks for using magentic.

@jaromiru
Author

Hi @jackmpcollins, thanks for the answer and the proposed workarounds. Obviously, both of them have their drawbacks.
My main motivation was to reduce costs by making a single LLM call (especially when the context is large), and the first method does not help with that.
The second proposal is complicated, brittle, hand-engineered, and error-prone, and it forgoes the ease of use of the magentic library.

I currently have no solution of my own, so we can close the ticket or keep it open for follow-up discussion.

@jackmpcollins
Owner

jackmpcollins commented Jun 27, 2024

@jaromiru Let's leave this open because it would be great for magentic to support this for the Anthropic API. Please let me know if/how you solve this for OpenAI using another library or their API directly because that could inform how to solve it generally for magentic.
