Merging a Base Model with LoRA Weights Using the PEFT Library

Published: 2025-05-16


The steps are as follows:
  • Load the base model: use the same model configuration as during LoRA training
  • merge_and_unload(): this method merges the weights into the base model and removes the LoRA layers
  • Save format: the merged model is saved in the standard HuggingFace format and can be used directly for inference
  • The code is as follows:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

def merge_and_save_model(base_model_path, lora_path, output_path):
    # 1. Load the base model and tokenizer
    print(f"Loading base model from {base_model_path}")
    base_model = AutoModelForCausalLM.from_pretrained(
        base_model_path,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True
    )
    tokenizer = AutoTokenizer.from_pretrained(
        base_model_path,
        trust_remote_code=True
    )

    # 2. Load the LoRA adapter (use the same dtype as the base model
    # to avoid mixed-precision issues during the merge)
    print(f"Loading LoRA adapter from {lora_path}")
    lora_model = PeftModel.from_pretrained(
        base_model,
        lora_path,
        torch_dtype=torch.bfloat16
    )

    # 3. Merge the weights and unload the adapter
    print("Merging LoRA weights with base model")
    merged_model = lora_model.merge_and_unload()

    # 4. Save the merged model
    print(f"Saving merged model to {output_path}")
    merged_model.save_pretrained(output_path)
    tokenizer.save_pretrained(output_path)
    print("Merge completed successfully!")

# Usage example
if __name__ == "__main__":
    merge_and_save_model(
        base_model_path="path/to/base_model",    # path to the base model
        lora_path="path/to/lora_adapter",        # path to the trained LoRA weights
        output_path="path/to/merged_model"       # output path for the merged model
    )

