
[Feature]: Support more configs for openAI models #237

Open
xiaofan-luan opened this issue Apr 18, 2023 · 3 comments
Comments

@xiaofan-luan
Collaborator

Is your feature request related to a problem? Please describe.

Per the OpenAI documentation, we are missing some major parameters of the Completions API; see:

https://platform.openai.com/docs/api-reference/completions/create

  1. max_tokens: just bypass it to GPT for now.
  2. temperature: there are a couple of things we can do:
    1. randomly pick an answer from the returned results if they are all very similar;
    2. edit the answer with another small model; for instance, for images: https://huggingface.co/lambdalabs/sd-image-variations-diffusers
  3. n: if there are not enough cached results, we will need to generate the rest from OpenAI anyway.
  4. best_of: controls the top-k number of results we want to retrieve from the cache. (All four parameters are sketched in the code below.)
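
A minimal sketch of how these four parameters could map onto cache behavior, using the pre-1.0 `openai` Python SDK. The `cached_answers` argument and the dispatch logic are hypothetical, not GPTCache's actual API:

```python
import random

import openai


def cached_completion(prompt, cached_answers, model="text-davinci-003",
                      temperature=0.0, n=1, best_of=1, **openai_kwargs):
    """Hypothetical dispatch: serve from cache when possible, else call OpenAI.

    cached_answers: answers already retrieved for this prompt, ordered by
    similarity (how they were retrieved is out of scope for this sketch).
    """
    # best_of (item 4): cap how many cached answers we consider.
    candidates = cached_answers[:best_of]
    if len(candidates) >= n:
        if temperature > 0:
            # temperature (item 2a): randomly pick among similar cached answers.
            return random.sample(candidates, n)
        return candidates[:n]
    # n (item 3): not enough cached results, so generate the rest from OpenAI,
    # passing max_tokens etc. straight through (item 1).
    resp = openai.Completion.create(model=model, prompt=prompt,
                                    n=n - len(candidates),
                                    temperature=temperature, **openai_kwargs)
    return candidates + [choice.text for choice in resp.choices]
```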

Describe the solution you'd like.

No response

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

xiaofan-luan added the help wanted label on Apr 18, 2023
@jaelgu
Collaborator

jaelgu commented Apr 25, 2023

working on temperature

@tmquan

tmquan commented Aug 1, 2023

Will function_call="auto" in openai.ChatCompletion.create(...) be supported?
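
For context, the call in question looks like this (pre-1.0 `openai` SDK; the weather function schema here is only illustrative):

```python
import openai

# Illustrative tool definition; the model may choose to call it.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a given city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What is the weather in Boston?"}],
    functions=functions,
    function_call="auto",  # the model decides whether, and which function, to call
)
```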

@SimFG
Collaborator

SimFG commented Aug 1, 2023

@tmquan
I haven't had time to experiment with this feature yet, but I'm a little confused. According to OpenAI's definition of function_call: "auto":

Note that the default behavior (function_call: "auto") is for the model to decide on its own whether to call a function and if so which function to call.

If the question asks about the weather, the answers should differ from day to day. That is to say, if the result of the function execution keeps changing, caching seems meaningless. If the execution result of the function stays the same, then this parameter does not seem to require any extra handling, and the current cache seems to work normally.
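
One way to express that reasoning in code (purely a sketch; should_use_cache is a hypothetical guard, not an existing GPTCache API):

```python
def should_use_cache(request_kwargs: dict) -> bool:
    # If the request can trigger a function call, the answer may depend on
    # external state (e.g. today's weather), so serving a cached reply risks
    # returning stale data; fall through to OpenAI instead.
    return "functions" not in request_kwargs
```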

Not sure what other thoughts you have on this parameter.
