Model launch command:
python -m vllm.entrypoints.openai.api_server --served-model-name qwen2-7b-instruct --model /app/Qwen2-7B-Instruct --gpu-memory-utilization 0.9

Evaluation command:
swift eval --eval_url http://127.0.0.1:8000/v1 --eval_is_chat_model true --model_type qwen2-7b-instruct --eval_dataset no --custom_eval_config /app/swift/examples/pytorch/llm/eval_example/custom_config.json

Config file:
[
  {
    "name": "custom_general_qa",
    "pattern": "general_qa",
    "dataset": "/app/swift/examples/pytorch/llm/eval_example/custom_general_qa",
    "subset_list": ["default"]
  },
  {
    "name": "custom_ceval",
    "pattern": "ceval",
    "dataset": "/app/swift/examples/pytorch/llm/eval_example/custom_ceval",
    "subset_list": ["default"]
  }
]
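For reference, the custom eval config is a JSON list of dataset entries. A small standalone script (illustrative only, not part of swift) can sanity-check the file before passing it to --custom_eval_config; the required key names are taken from the config shown above:

```python
import json

# Keys each entry in the custom eval config carries, as seen in the
# config above (illustrative check, not the swift API).
REQUIRED_KEYS = {"name", "pattern", "dataset", "subset_list"}

def check_eval_config(path: str) -> list:
    """Load a custom eval config file and verify every entry has the expected keys."""
    with open(path) as f:
        entries = json.load(f)
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('name', '?')} is missing keys: {missing}")
    return entries
```

Running it against custom_config.json raises a ValueError on a malformed entry and otherwise returns the parsed list.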
Error:
future: <Task finished name='Task-26' coro=<EvalModel._call_openai() done, defined at /app/swift/swift/llm/eval.py:44> exception=HTTPError("[{'type': 'extra_forbidden', 'loc': ('body', 'num_beams'), 'msg': 'Extra inputs are not permitted', 'input': 1}]")>
Traceback (most recent call last):
  File "/app/swift/swift/llm/eval.py", line 48, in _call_openai
    resp = await inference_client_async(
  File "/app/swift/swift/llm/utils/client_utils.py", line 309, in inference_client_async
    raise HTTPError(resp_obj['message'])
requests.exceptions.HTTPError: [{'type': 'extra_forbidden', 'loc': ('body', 'num_beams'), 'msg': 'Extra inputs are not permitted', 'input': 1}]
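The 'extra_forbidden' error means the client sent a num_beams field in the request body, and vLLM's OpenAI-compatible server rejects request-body fields it does not recognize. As a hedged client-side workaround sketch (not an official fix, and the field set is an assumption based only on the error above), one could strip the rejected parameter from the payload before posting it:

```python
# Sketch of a client-side workaround: strip request-body fields that an
# OpenAI-compatible server rejects as "extra inputs". The field set is
# an assumption inferred from the error above, not an exhaustive list.
UNSUPPORTED_FIELDS = {"num_beams"}

def sanitize_request(body: dict) -> dict:
    """Return a copy of the request body without the rejected fields."""
    return {k: v for k, v in body.items() if k not in UNSUPPORTED_FIELDS}

request = {
    "model": "qwen2-7b-instruct",
    "messages": [{"role": "user", "content": "hello"}],
    "num_beams": 1,  # the field vLLM rejected with 'extra_forbidden'
}
clean = sanitize_request(request)  # no 'num_beams' key in the result
```

Whether this belongs in swift's client code or in the request hook is a design question for the maintainers; the sketch only illustrates the shape of the fix.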
With the same model launch method, evaluation with ARC-e works fine:
swift eval --eval_url http://127.0.0.1:8000/v1 --eval_is_chat_model true --model_type qwen2-7b-instruct --eval_dataset ARC_e