
The prompt has 13284 tokens and the max_tokens is 4096 tokens. #1612

Open
happytravelingskysheep opened this issue Nov 22, 2024 · 2 comments

@happytravelingskysheep

Bug description

MetaGPT exited partway through a run with an exception caused by the max_tokens setting: the prompt plus the requested completion budget exceeded the model's 16000-token limit.

Bug solved method

Environment information

  • LLM type and model name: yi-lightning
  • System version: ubuntu 22.04
  • Python version: 3.11
  • MetaGPT version or branch: main
  • packages version: 0.8.1
  • installation method: pip

Screenshots or logs

2024-11-22 17:18:08.597 | INFO | metagpt.actions.write_code_review:run:185 - Code review and rewrite main.py: 1/2 | len(iterative_code)=12485, len(self.i_context.code_doc.content)=12485
2024-11-22 17:18:31.001 | WARNING | metagpt.utils.common:wrapper:673 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.
2024-11-22 17:18:31.002 | ERROR | metagpt.utils.common:wrapper:655 - Exception occurs, start to serialize the project, exp:
Traceback (most recent call last):
File "/root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/common.py", line 650, in wrapper
result = await func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/team.py", line 134, in run
await self.env.run()
openai.BadRequestError: Error code: 400 - {'error': {'code': 'bad_request', 'message': 'The total number of tokens in the prompt and the max_tokens must be less than or equal to 16000. The prompt has 13284 tokens and the max_tokens is 4096 tokens.', 'type': 'invalid_request_error', 'param': None}}
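Since the provider enforces prompt_tokens + max_tokens <= 16000, one possible workaround outside of MetaGPT's config is to clamp the completion budget per request. Below is a minimal sketch, assuming an OpenAI-compatible endpoint and using tiktoken's cl100k_base encoding as a rough stand-in for yi-lightning's tokenizer; the base_url, api_key, and clamped_max_tokens helper are placeholders for illustration, not MetaGPT APIs:

# Hypothetical helper, not part of MetaGPT: cap max_tokens so that
# prompt_tokens + max_tokens stays within the provider's 16000-token limit.
import tiktoken
from openai import OpenAI

CONTEXT_LIMIT = 16000   # limit quoted in the 400 error above
SAFETY_MARGIN = 256     # slack, since cl100k_base only approximates the real tokenizer

enc = tiktoken.get_encoding("cl100k_base")
client = OpenAI(base_url="https://your-provider/v1", api_key="sk-xxx")  # placeholders

def clamped_max_tokens(messages, requested=4096):
    # Count prompt tokens approximately, then spend only the remaining budget.
    prompt_tokens = sum(len(enc.encode(m["content"])) for m in messages)
    return max(1, min(requested, CONTEXT_LIMIT - prompt_tokens - SAFETY_MARGIN))

messages = [{"role": "user", "content": "..."}]
resp = client.chat.completions.create(
    model="yi-lightning",
    messages=messages,
    max_tokens=clamped_max_tokens(messages),
)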

@voidking
Collaborator

Try setting max_token in config2.yaml:

llm:
  api_type: "openai"
  model: "gpt-3.5-turbo-16k" 
  base_url: "https://api.openai.com/v1" 
  api_key: "sk-xxx"
  max_token: 16000
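
Note the arithmetic the provider enforces: prompt_tokens + max_tokens must stay at or below 16000. With the 13284-token prompt from the log above, the completion budget can be at most 16000 - 13284 = 2716 tokens, so if the configured max_token is passed through to the request unchanged, a smaller value (e.g. max_token: 2048) or a model with a larger context window may be needed.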


This issue has no activity in the past 30 days. Please comment on the issue if you have anything to add.
