Fine-Tuning Qwen-14B-Chat Without Quantization

First, download the model from ModelScope and the official demo from GitHub.
  1. Model link:
    Note: the large weight files require the git-lfs plugin (run git lfs install before cloning).

    git clone https://www.modelscope.cn/qwen/Qwen-14B-Chat.git

  2. Official demo link:

    git clone https://github.com/QwenLM/Qwen.git
    
The demo repo offers several fine-tuning methods.

I went with the most common, general-purpose setup: LoRA + DeepSpeed ZeRO-3 + bf16 on a single machine with multiple GPUs.
The changes to the configuration file are as follows:

Qwen/finetune/finetune_lora_ds.sh
#!/bin/bash
export CUDA_DEVICE_MAX_CONNECTIONS=1
DIR=`pwd`

# Guide:
# This script supports distributed training on multi-gpu workers (as well as single-worker training).
# Please set the options below according to the comments.
# For multi-gpu workers training, these options should be manually set for each worker.
# After setting the options, please run the script on each worker.

# Number of GPUs per GPU worker
GPUS_PER_NODE=$(python -c 'import torch; print(torch.cuda.device_count())')

# Number of GPU workers, for single-worker training, please set to 1
NNODES=1

# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
NODE_RANK=${NODE_RANK:-0}

# The ip address of the rank-0 worker, for single-worker training, please set to localhost
MASTER_ADDR=${MASTER_ADDR:-localhost}

# The port for communication
MASTER_PORT=${MASTER_PORT:-6001}

MODEL="/sys/fs/cgroup/Qwen-14B-Chat" # Set the path if you do not want to load from huggingface directly
# ATTENTION: specify the path to your training data, which should be a json file consisting of a list of conversations.
# See the section for finetuning in README for more information.
DATA="/root/t1.json"
DS_CONFIG_PATH="finetune/ds_config_zero3.json"

function usage() {
    echo '
Usage: bash finetune/finetune_lora_ds.sh [-m MODEL_PATH] [-d DATA_PATH] [--deepspeed DS_CONFIG_PATH]
'
}

while [[ "$1" != "" ]]; do
    case $1 in
        -m | --model )
            shift
            MODEL=$1
            ;;
        -d | --data )
            shift
            DATA=$1
            ;;
        --deepspeed )
            shift
            DS_CONFIG_PATH=$1
            ;;
        -h | --help )
            usage
            exit 0
            ;;
        * )
            echo "Unknown argument ${1}"
            exit 1
            ;;
    esac
    shift
done

DISTRIBUTED_ARGS="
    --nproc_per_node $GPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

torchrun $DISTRIBUTED_ARGS finetune.py \
    --model_name_or_path $MODEL \
    --data_path $DATA \
    --bf16 True \
    --output_dir output_qwen2 \
    --num_train_epochs 5 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 10 \
    --learning_rate 3e-4 \
    --weight_decay 0.1 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --report_to "none" \
    --model_max_length 512 \
    --lazy_preprocess True \
    --use_lora \
    --gradient_checkpointing \
    --deepspeed ${DS_CONFIG_PATH}

Prepare the training data yourself; anything that follows the conversation format Qwen expects will do (see the sketch below).
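For illustration, here is a minimal Python sketch that writes a tiny dataset in the conversation schema described in the Qwen fine-tuning docs: a JSON list of records, each with an id and a conversations array of from/value turns. The sample dialogue is made up and only illustrates the format; it targets the /root/t1.json path that DATA points to in the script above.

import json

# A hypothetical single-record dataset; the schema (id + conversations with
# from/value turns) follows the conversation format used by the Qwen
# fine-tuning docs. Replace the dialogue with your own data.
samples = [
    {
        "id": "identity_0",
        "conversations": [
            {"from": "user", "value": "What does your product do?"},
            {"from": "assistant", "value": "This is a placeholder answer used only to illustrate the format."},
        ],
    }
]

# Write the dataset to the path referenced by DATA in finetune_lora_ds.sh.
with open("/root/t1.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)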

Start training:
bash finetune/finetune_lora_ds.sh 

Then locate the output directory and merge the LoRA adapter directly into the base model to produce a new model.

Model merging:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the original base model
model = AutoModelForCausalLM.from_pretrained("/sys/fs/cgroup/Qwen-14B-Chat", torch_dtype=torch.float16, device_map="auto", trust_remote_code=True)
# Apply the trained LoRA adapter from the fine-tuning output directory
model = PeftModel.from_pretrained(model, "/root/Qwen/output_qwen2")
# Merge the LoRA weights into the base weights and drop the adapter wrappers
merged_model = model.merge_and_unload()
# Save the merged model as safetensors shards of at most 2 GB each
merged_model.save_pretrained("/sys/fs/cgroup/qwen14b_new", max_shard_size="2048MB", safe_serialization=True)
# Save the tokenizer alongside the merged weights so the new model directory is self-contained
tokenizer = AutoTokenizer.from_pretrained("/sys/fs/cgroup/Qwen-14B-Chat", trust_remote_code=True)
tokenizer.save_pretrained("/sys/fs/cgroup/qwen14b_new")

GPU memory usage: across the four cards, each card used roughly 20 GB.
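To sanity-check the merged model, a quick test along the following lines can be run. This is only a sketch: it assumes the chat() helper that Qwen's trust_remote_code modeling code attaches to the model class, and the prompt is just an example.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and tokenizer from the new directory.
tokenizer = AutoTokenizer.from_pretrained("/sys/fs/cgroup/qwen14b_new", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "/sys/fs/cgroup/qwen14b_new", device_map="auto", trust_remote_code=True
).eval()

# model.chat() is the helper provided by Qwen's custom modeling code; the prompt is just an example.
response, history = model.chat(tokenizer, "你好", history=None)
print(response)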