
LLaMA-like Model Conversion Reference

This document describes how to convert models that share the LLaMA architecture.

LLaMA-like models have essentially the same structure but differ in their weights and prompt construction. In fastllm, these variants are supported by adjusting a few configuration items when converting the model.

Disclaimer

The configurations below were compiled from each model's source code; the inference results of the converted model are not guaranteed to match the original exactly.

How to Modify

Currently, both the conversion script and the two-line acceleration approach can be used for LLaMA-like models. Whichever approach you choose, reserve enough free memory (swap space can be used).

In float16 mode, conversion requires roughly 4 × (parameter count) + 1 GB of free memory.
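For a 7B-parameter model, for example, that works out to roughly 4 × 7 GB + 1 GB ≈ 29 GB of free memory.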

Conversion Script

The following takes inference on base models with LLaMA-style architectures as an example of how to apply this document.

  • Option 1: Modify the conversion script

Use alpaca2flm.py as a template. After the model is created, add:

    model = LlamaForCausalLM.from_pretrained(model_name).float()
    # Required when config.json defines its own model_type
    conf = model.config.__dict__
    conf["model_type"] = "llama"
    # The arguments below differ between chat models; some base models need a pre_prompt.
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "", 
                     user_role = "", bot_role = "", history_sep = "", 
                     dtype = dtype)

Here, pre_prompt, user_role, bot_role, and history_sep are, respectively, the system prompt shown at the start (before the first round of dialogue), the marker for the user role, the marker that ends the user's utterance and begins the model's reply, and the separator between two rounds of dialogue.
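
As a rough illustration of how these four fields combine into a prompt, here is a minimal sketch based on the description above (not fastllm's actual internal code; build_prompt, history, and query are hypothetical names):

    # Sketch: assemble a prompt from pre_prompt / user_role / bot_role / history_sep.
    def build_prompt(pre_prompt, user_role, bot_role, history_sep, history, query):
        prompt = pre_prompt
        for old_query, old_reply in history:      # earlier rounds of dialogue
            prompt += user_role + old_query + bot_role + old_reply + history_sep
        prompt += user_role + query + bot_role    # current query, awaiting the model's reply
        return prompt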

  • Option 2: Modify config.json. In the downloaded model directory, edit the configuration file config.json: change "model_type" to "llama" and add the following key-value pairs:
    "pre_prompt": "",
    "user_role": "",
    "bot_role": "",
    "history_sep":  "",

To insert a token ID rather than a literal string (as in the baichuan-chat model), use the format "<FLM_FIX_TOKEN_{ID}>".
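
For instance, a special token's ID can be looked up from the tokenizer and spliced into the string (a sketch; "<reserved_102>" is a hypothetical special-token name):

    # Sketch: turn a special token into its <FLM_FIX_TOKEN_{ID}> form.
    uid = tokenizer.convert_tokens_to_ids("<reserved_102>")   # hypothetical special token
    user_role = f"<FLM_FIX_TOKEN_{uid}>"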

  • Run the script
python3 tools/alpaca2flm.py [output file name] [precision] [original model name or path]
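
For example, exporting an int8 model from a local checkout might look like this (the output name and local path are hypothetical):

python3 tools/alpaca2flm.py alpaca-7b-int8.flm int8 ./alpaca-7b/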

Two-Line Acceleration

    conf = model.config.__dict__
    conf["model_type"] = "llama"
    llm.from_hf(model, tokenizer, pre_prompt = "", 
                user_role = "", bot_role = "", history_sep = "", 
                dtype = dtype)
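
For context, here is a minimal end-to-end sketch of the two-line approach (the fastllm_pytools import and the transformers loading code are assumptions based on fastllm's usual usage; model_name and dtype are placeholders):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from fastllm_pytools import llm   # assumed fastllm Python binding

    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).float()

    conf = model.config.__dict__
    conf["model_type"] = "llama"
    flm_model = llm.from_hf(model, tokenizer, pre_prompt = "", 
                            user_role = "", bot_role = "", history_sep = "", 
                            dtype = dtype)   # returns the accelerated fastllm model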

Alignment

To make the fastllm model behave essentially the same as the original transformers model, the most important step is to align the tokenizer. If the model uses Hugging Face's fast Tokenizers implementation (i.e., the model directory contains tokenizer.json, which is used preferentially), the current conversion script can only align the tokenizer when converting from local files.

Check whether the result returned by the original tokenizer's encode() method has a space prepended. If the original tokenizer does not add the space, set:

    conf["tokenizer_add_dummy_prefix"] = False

Base Model

See "How to Modify" above.

Some models require bos_token_id to be specified. Assuming bos_token_id is 1, it can be configured as follows:

    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "<FLM_FIX_TOKEN_1>", 
                     user_role = "", bot_role = "", history_sep = "", 
                     dtype = dtype)
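
The actual bos_token_id can usually be confirmed via tokenizer.bos_token_id or in the model's config.json / generation_config.json.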

Chat Model

For chat models, likewise either modify the conversion script or modify the model's config.json. The configurations for common chat models are listed below:

InternLM(书生)

    conf = model.config.__dict__
    conf["model_type"] = "llama"
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "<s><s>", 
                     user_role = "<|User|>:", bot_role = "<eoh>\n<|Bot|>:", 
                     history_sep = "<eoa>\n<s>", dtype = dtype)

Alternatively, convert directly with the llamalike2flm.py script:

cd build
python3 tools/llamalike2flm.py internlm-20b-fp16.flm float16 internlm/internlm-chat-20b # export the float16 model
python3 tools/llamalike2flm.py internlm-20b-int8.flm int8 internlm/internlm-chat-20b # export the int8 model
python3 tools/llamalike2flm.py internlm-20b-int4.flm int4 internlm/internlm-chat-20b # export the int4 model
python3 tools/llamalike2flm.py internlm-7b-fp16.flm float16 internlm/internlm-chat-7b # export the internlm-chat-7b float16 model

InternLM2 can likewise be converted with the llamalike2flm.py script:

cd build
python3 tools/llamalike2flm.py internlm2-1.8b-fp16.flm float16 internlm/internlm2-chat-1_8b # export the 1.8B float16 model
python3 tools/llamalike2flm.py internlm2-7b-fp16.flm float16 internlm/internlm2-chat-7b # export the chat-7b float16 model
python3 tools/llamalike2flm.py internlm2-7b-int8.flm int8 internlm/internlm2-chat-7b # export the chat-7b int8 model

XVERSE

    conf = model.config.__dict__
    conf["model_type"] = "llama"
    conf["tokenizer_add_dummy_prefix"] = False
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "", 
                     user_role = "Human: ", bot_role = "\n\nAssistant: ", 
                     history_sep = "<FLM_FIX_TOKEN_3>", dtype = dtype)

XVERSE-13B-Chat V1 requires NFKC normalization of the input, which fastllm does not yet support, so the original tokenizer must be used.
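
For reference, NFKC normalization can be applied to the input text outside fastllm with Python's standard library (raw_text here stands for the user's input string):

    # Sketch: NFKC-normalize the raw user input before tokenization.
    import unicodedata
    normalized = unicodedata.normalize("NFKC", raw_text)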

This model does not expose its RoPE extrapolation parameters in the config, so they need to be specified manually:

    conf = model.config.__dict__
    conf["model_type"] = "llama"
    conf["rope_theta"] = 500000
    conf["rope_scaling.type"] = "dynamic"
    conf["rope_scaling.factor"] = 2.0
    conf["tokenizer_add_dummy_prefix"] = False
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "", 
                     user_role = "Human: ", bot_role = "\n\nAssistant: ", 
                     history_sep = "<FLM_FIX_TOKEN_3>", dtype = dtype)

Other LLaMA-1 Series Models

  • Vicuna v1.1 / v1.3
    torch2flm.tofile(exportPath, model, tokenizer, 
                     pre_prompt="A chat between a curious user and an artificial intelligence assistant. "
                                "The assistant gives helpful, detailed, and polite answers to the user's questions. ",
                     user_role="USER: ", bot_role=" ASSISTANT:", history_sep="<s>", dtype=dtype)
  • BiLLa
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "\n", 
                     user_role = "Human: ", bot_role = "\nAssistant: ", 
                     history_sep = "\n", dtype = dtype)

llama2-chat

  • meta-llama/Llama-2-chat
Model  Llama2-chat                  Llama2-chat-hf
7B     meta-llama/Llama-2-7b-chat   meta-llama/Llama-2-7b-chat-hf
13B    meta-llama/Llama-2-13b-chat  meta-llama/Llama-2-13b-chat-hf

Model  CodeLlama-Instruct
7B     codellama/CodeLlama-7b-Instruct-hf
13B    codellama/CodeLlama-13b-Instruct-hf

In the official example code, the system prompt can be omitted:

    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "<FLM_FIX_TOKEN_1>", 
                     user_role = "[INST] ", bot_role = " [/INST]", 
                     history_sep = " <FLM_FIX_TOKEN_2><FLM_FIX_TOKEN_1>", dtype = dtype)

Supporting a system prompt with the Llama-2 series requires code changes; for single-turn use, the following version with a system prompt works:

    torch2flm.tofile(exportPath, model, tokenizer, 
                     pre_prompt = "<FLM_FIX_TOKEN_1>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, " \
        "while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. " \
        "Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, " \
        "or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, " \
        "please don't share false information.\n<</SYS>>\n\n", 
                     user_role = " ", bot_role = " [/INST]", 
                     history_sep = " <FLM_FIX_TOKEN_2><FLM_FIX_TOKEN_1>", dtype = dtype)
  • ymcui/Chinese-Alpaca-2
Model  Chinese-Alpaca-2                 Chinese-Alpaca-2-16K
7B     ziqingyang/chinese-alpaca-2-7b   ziqingyang/chinese-alpaca-2-7b-16k
13B    ziqingyang/chinese-alpaca-2-13b  ziqingyang/chinese-alpaca-2-13b-16k
    torch2flm.tofile(exportPath, model, tokenizer, 
                     pre_prompt = "<FLM_FIX_TOKEN_1>[INST] <<SYS>>\nYou are a helpful assistant. 你是一个乐于助人的助手。\n<</SYS>>\n\n", 
                     user_role = " ", bot_role = " [/INST]", 
                     history_sep = " <FLM_FIX_TOKEN_2><FLM_FIX_TOKEN_1>", dtype = dtype)

RUC-GSAI/YuLan-Chat

    torch2flm.tofile(exportPath, model, tokenizer, 
                     pre_prompt="The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. " \
                                "The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n",
                     user_role="[|Human|]:", bot_role="\n[|AI|]:", history_sep="\n", dtype=dtype)

Yi

    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt="",
                     user_role="<|im_start|>user\n", bot_role="<|im_end|><|im_start|>assistant\n", history_sep="<|im_end|>\n", dtype=dtype)

WizardCoder

    torch2flm.tofile(exportPath, model, tokenizer, 
                     pre_prompt="Below is an instruction that describes a task. " \
                                "Write a response that appropriately completes the request.\n\n",
                     user_role="### Instruction:\n", bot_role="\n\n### Response:", history_sep="\n", dtype=dtype)

Deepseek Coder

    torch2flm.tofile(exportPath, model, tokenizer, 
                     pre_prompt="<FLM_FIX_TOKEN_32013>	You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, " \
                                "and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, " \
                                "and other non-computer science questions, you will refuse to answer.\n",
                     user_role="### Instruction:\n", bot_role="\n### Response:\n", history_sep="\n<|EOT|>\n", dtype=dtype)