Fix the hardcoded judge & expansion LLM names
Summary: The log message hardcodes the model names, so it does not reflect the judge and expansion models actually in use.

Reviewed By: dwjsong

Differential Revision: D51924358

fbshipit-source-id: 34c334b4cb53a6859daa2b95f24305782ed21777
SimonWan authored and facebook-github-bot committed Dec 7, 2023
1 parent 3f8a7be commit 304809e
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion CybersecurityBenchmarks/benchmark/mitre_benchmark.py
@@ -92,7 +92,7 @@ async def run(self, run_llm_in_parallel: bool = True) -> None:
e_llm = self.expansion_llm

logging.info(
-    "Expanding the response with llama and judging the response using codellama"
+    f"Expanding the response with {e_llm.model} and judging the response using {j_llm.model}"
)
judge_response_result = []
judge_response_result.extend(
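The fix above can be sketched in isolation: interpolate the configured models' names into the log message rather than hardcoding them, so the log always matches the benchmark's actual configuration. The `LLM` class and its `model` attribute below are simplified stand-ins for the benchmark's real LLM wrappers, not the actual API.

```python
import logging


class LLM:
    """Hypothetical stand-in for the benchmark's LLM wrapper."""

    def __init__(self, model: str) -> None:
        self.model = model


def log_models(expansion_llm: LLM, judge_llm: LLM) -> str:
    # Build the message from the live objects instead of hardcoded names,
    # so a config change is automatically reflected in the logs.
    message = (
        f"Expanding the response with {expansion_llm.model} "
        f"and judging the response using {judge_llm.model}"
    )
    logging.info(message)
    return message
```

With hardcoded names, swapping in a different judge or expansion model silently leaves stale text in the logs; the f-string keeps the two in sync by construction.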
