
How to set top_p parameter properly? #1642

Open

sanexodus opened this issue Dec 16, 2024 · 1 comment

@sanexodus commented Dec 16, 2024

Hi, first of all, I want to express my appreciation for your amazing work on this project. It has been incredibly helpful.
I encountered an issue while trying to configure the top_p parameter. I included the setting in my config2.yaml file, but it doesn't seem to take effect. Could you please guide me on the correct way to set this parameter?
My config2.yaml snippet looks like this:

```yaml
llm:
  api_type: "open_llm"
  model: "xxx"
  base_url: "http://xxx/v1"
  api_key: "none"
  max_token: 8192
  temperature: 0.9
  top_p: 0.5
```

I debugged to check the configured values: max_token and temperature are applied correctly, but top_p is not.

Am I missing something, or is there another place I should configure this?

Thank you in advance for your help, and I’m looking forward to your response!

@shenchucheng
Collaborator

It seems to be a bug where the top_p configuration is not being used. Looking at the code [here](https://github.com/geekan/MetaGPT/blob/main/metagpt/provider/openai_api.py#L128), the top_p parameter is not included in the request. You can try modifying the code as follows:

```python
def _cons_kwargs(self, messages: list[dict], timeout=USE_CONFIG_TIMEOUT, **extra_kwargs) -> dict:
    kwargs = {
        "messages": messages,
        "max_tokens": self._get_max_tokens(messages),
        # "n": 1,  # Some services do not provide this parameter, such as mistral
        # "stop": None,  # Default is None, and gpt4-v does not support this parameter
        "temperature": self.config.temperature,
        "model": self.model,
        "timeout": self.get_timeout(timeout),
        "top_p": self.config.top_p,  # Add the top_p parameter here
    }
    if extra_kwargs:
        kwargs.update(extra_kwargs)  # keep per-call overrides working
    return kwargs
```
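Since `_cons_kwargs` already accepts `**extra_kwargs`, merging them into `kwargs` (the last lines of the snippet) also lets call sites override `top_p` per request, provided they actually forward it.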

However, I’m not sure if the top_p parameter was intentionally omitted due to compatibility issues. After making this change, please test it to ensure that top_p is correctly applied in your scenario.
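If you want to confirm the change without running a full pipeline, a minimal sketch like the one below may help. Note that the no-argument `LLM()` constructor reading your config2.yaml is an assumption about your installed MetaGPT version, and `_cons_kwargs` is a private method, so treat this purely as a debugging aid:

```python
# Minimal sanity check (assumption: metagpt.llm.LLM() builds the provider
# from config2.yaml; _cons_kwargs is private and may change between versions).
from metagpt.llm import LLM

llm = LLM()
kwargs = llm._cons_kwargs(messages=[{"role": "user", "content": "ping"}])
print(kwargs.get("top_p"))  # expect 0.5 (per the config above) with the patch; None without it
```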
