Hi, first of all, I want to express my appreciation for your amazing work on this project. It has been incredibly helpful.
I encountered an issue while trying to configure the top_p parameter. I included the setting in my config2.yaml file, but it doesn't seem to take effect. Could you please guide me on the correct way to set this parameter?
My config2.yaml snippet looks like:

```yaml
llm:
  api_type: "open_llm"
  model: "xxx"
  base_url: "http://xxx/v1"
  api_key: "none"
  max_token: 8192
  temperature: 0.9
  top_p: 0.5
```

I debugged to check the configured values: max_token and temperature take effect, but top_p does not.
Am I missing something, or is there another place I should configure this?
Thank you in advance for your help, and I’m looking forward to your response!
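The top_p value from config2.yaml is parsed into the LLM config, but as far as I can tell it is never copied into the request kwargs, so it silently has no effect. As a workaround you can pass it through in `_cons_kwargs` (in my checkout this lives in `metagpt/provider/openai_api.py`); the line with the `# Add the top_p parameter here` comment is the only addition: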
```python
def _cons_kwargs(self, messages: list[dict], timeout=USE_CONFIG_TIMEOUT, **extra_kwargs) -> dict:
    kwargs = {
        "messages": messages,
        "max_tokens": self._get_max_tokens(messages),
        # "n": 1,  # Some services do not provide this parameter, such as mistral
        # "stop": None,  # Default is None, and gpt4-v does not support this parameter
        "temperature": self.config.temperature,
        "model": self.model,
        "timeout": self.get_timeout(timeout),
        "top_p": self.config.top_p,  # Add the top_p parameter here
    }
    return kwargs
```
That said, I'm not sure whether top_p was intentionally omitted for compatibility reasons. After making this change, please test it to confirm that top_p is actually applied in your scenario.
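For a quick check that the value actually makes it into the request, something like the sketch below should work. It only inspects the kwargs the provider builds; the import path and the message content are assumptions based on my setup, so adjust them if yours differs:

```python
from metagpt.llm import LLM  # assumes MetaGPT's default LLM factory

llm = LLM()  # picks up config2.yaml
# _cons_kwargs is the internal helper patched above; we only inspect its output here.
kwargs = llm._cons_kwargs([{"role": "user", "content": "ping"}])
print(kwargs.get("temperature"), kwargs.get("top_p"))  # expect 0.9 and 0.5 from the config
```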