Multiple Function Calls in One Turn Fails to Accept Response #14022
Comments
I couldn't figure out how to label this issue, so I've labeled it for a human to triage. Hang tight.
Hi @TiVoShane, it looks like in parallel function calling the backend requires that all of the function responses be in the same ModelContent (i.e., a single "function" turn with multiple parts). I see some potentially related, not yet released, validation changes in the backend but haven't looked in detail as to whether they loosen this specific requirement.

This modification to your code resolved the issue in my own testing (starting from // there should be two function calls and ending at // END METHOD 1):

var apiResponse = makeAPIRequest(currencyFrom: "USD", currencyTo: "SEK")
let functionResponse1 = FunctionResponsePart(name: "getExchangeRate", response: apiResponse)
apiResponse = makeAPIRequest(currencyFrom: "USD", currencyTo: "EUR")
let functionResponse2 = FunctionResponsePart(name: "getExchangeRate", response: apiResponse)

let result = try! chat?.sendMessageStream([
    ModelContent(role: "function", parts: functionResponse1, functionResponse2)
])
await processResult(response: result!)

Let me know if this unblocks you.

By the way, it was interesting to see how your prompt "What can you do?" seems to make the model more likely to accurately respond with a function call. I'd be curious to try adding its own response wording:

I can access and process information from the available tools. Currently, I have access to a default_api which allows me to get exchange rates between different currencies. I can answer questions about exchange rates using this API. For example, you could ask me "What is the exchange rate from USD to EUR?".

or something similar to the system prompt to see if that accomplishes the same thing. Thanks for the tip!
Interesting. Ok, then the example code in FunctionCallingViewModel would need to be altered.
private extension [FunctionResponsePart] {
    func modelContent() -> [ModelContent] {
        return self.map { ModelContent(role: "function", parts: [$0]) }
    }
}
This code creates multiple ModelContent values, one per response. I altered it to the following, and it's now working:
private extension Array where Element == FunctionResponsePart {
    func modelContent() -> [ModelContent] {
        [ModelContent(role: "function", parts: self as [any Part])]
    }
}
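For reference, here is a quick sketch of how the altered extension slots into the same flow (a sketch only; it reuses makeAPIRequest, getExchangeRate, chat, and processResult from the test code in this thread, and assumes the preceding model turn contained both function calls):

// Build one FunctionResponsePart per function call from the model's turn...
let responses = [
    FunctionResponsePart(name: "getExchangeRate",
                         response: makeAPIRequest(currencyFrom: "USD", currencyTo: "SEK")),
    FunctionResponsePart(name: "getExchangeRate",
                         response: makeAPIRequest(currencyFrom: "USD", currencyTo: "EUR"))
]

// ...then send them back as a single "function" ModelContent built by the helper.
let result = try! chat?.sendMessageStream(responses.modelContent())
await processResult(response: result!)

Sending one ModelContent keeps the number of function response parts equal to the number of function call parts in the previous turn, which is exactly what the backend error message asks for.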
Description
When using a model that can make multiple function calls in one turn (e.g., gemini-1.5-flash-002), there is no way to provide multiple function responses without a failure occurring.
In my example code, when asking about one exchange rate, I return one FunctionResponse and it succeeds. However, when I ask about two exchange rates and two FunctionCalls appear in one turn, I can't send a response without getting an error. I tried sending them together as [FunctionResponse1, FunctionResponse2] and also separately, as two separate responses. Each time I get the following error:
11.4.0 - [FirebaseVertexAI][I-VTX002004] Response payload: {
"error": {
"code": 400,
"message": "Please ensure that function response turn comes immediately after a function call turn. And the number of function response parts should be equal to number of function call parts of the function call turn.",
"status": "INVALID_ARGUMENT"
}
}
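For illustration, the two shapes described above look roughly like this (a sketch only, where functionResponse1 and functionResponse2 are the two getExchangeRate FunctionResponseParts and processResult consumes the stream); both end in the 400 error shown above:

// Shape 1: both function responses passed together in a single send.
let together = try! chat?.sendMessageStream([functionResponse1, functionResponse2])
await processResult(response: together!)

// Shape 2: each function response sent separately, one turn after the other.
let first = try! chat?.sendMessageStream([functionResponse1])
await processResult(response: first!)
let second = try! chat?.sendMessageStream([functionResponse2])
await processResult(response: second!)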
Reproducing the issue
import Foundation
import FirebaseVertexAI

// Creating a TestAgent() will execute the testTwoFunctionCallsStreaming function.
class TestAgent {
    var systemInstructions = """
    Users will ask you information about exchange rates. Use your tools to answer them.
    """

    private var modelName = "gemini-1.5-flash-002" // "gemini-1.5-pro-exp-0801", "gemini-1.5-pro", "gemini-1.5-flash"
    private var chat: Chat?

    // Initialize the Vertex AI service
    let vertex = VertexAI.vertexAI()

    let config = GenerationConfig(
        temperature: 0,
        topP: 0.95,
        topK: 40,
        maxOutputTokens: 8192,
        responseMIMEType: "text/plain"
    )
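    // … (function declarations and the test method body, up through // END METHOD 1, omitted here) …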
    /*
    let result = try! chat?.sendMessageStream([functionResponse1])
    await processResult(response: result!)
    */
    // END METHOD 2

    print(chat?.history.debugDescription ?? "No History")
    print("END testTwoFunctionCalls")
}
private extension [FunctionResponsePart] {
    func modelContent() -> [ModelContent] {
        return self.map { ModelContent(role: "function", parts: [$0]) }
    }
}
Firebase SDK Version
11.4.0
Xcode Version
16.1
Installation Method
Swift Package Manager
Firebase Product(s)
VertexAI
Targeted Platforms
iOS