Fine-tuning the large model Qwen2.5-3B with DeepSpeed under the Trainer

Published: 2025-07-18

As model parameter counts keep growing, a GPU with relatively little memory can no longer hold a full-parameter fine-tune directly. In that case we can bring in DeepSpeed to make the fine-tuning fit.
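As a rough back-of-envelope estimate (the standard mixed-precision Adam accounting, not a measurement of this exact setup): each parameter needs about 2 bytes for the fp16 weights, 2 bytes for the fp16 gradients, and roughly 12 bytes for the fp32 master weights plus Adam momentum and variance, i.e. around 16 bytes per parameter. For a 3B-parameter model that is already about 48 GB before activations, which is why a single 24 GB or 40 GB card cannot do plain full-parameter training. ZeRO reduces the per-GPU footprint by partitioning the optimizer states (stage 1), gradients (stage 2), and weights (stage 3) across GPUs.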

1. The DeepSpeed configuration file:

$ more deepspeed.json 
{
  "train_batch_size": 4,
  "train_micro_batch_size_per_gpu": 1,
  "zero_optimization": {
    "stage":1
  }
}
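For DeepSpeed to accept this config, train_batch_size must equal train_micro_batch_size_per_gpu × gradient_accumulation_steps × number of GPUs: here 4 = 1 × 1 × 4, matching the four cards exposed in the launch script and per_device_train_batch_size=1 in train.py. "stage": 1 enables ZeRO stage 1, which partitions only the optimizer states across GPUs. If stage 1 still runs out of memory, a stage 2 config with optimizer offload to CPU is a common next step; the keys below are standard DeepSpeed options, but the values are only an illustration, not something verified on this setup:

{
  "train_batch_size": 4,
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 1,
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    }
  }
}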

2. The launch script run_deepspeed

$ more run_deepspeed 
export TRANSFORMERS_OFFLINE=1                  # use only local model files, do not hit the Hub
export HF_DATASETS_OFFLINE=1                   # use only local dataset files
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
export CUDA_VISIBLE_DEVICES=0,1,2,3            # expose 4 GPUs to the deepspeed launcher
export CUDA_DEVICE_ORDER=PCI_BUS_ID
# Note: this second export overrides the max_split_size_mb setting above,
# so only expandable_segments:True is actually in effect.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export DS_SKIP_CUDA_CHECK=1                    # skip DeepSpeed's CUDA toolkit version check
export TF_ENABLE_ONEDNN_OPTS=0
export CUDA_HOME="/usr/local/cuda-12.2"
export LIBRARY_PATH="/usr/local/cuda-12.2/lib64:$LIBRARY_PATH"
nohup deepspeed train.py > logd.txt 2>&1 &     # launch on all visible GPUs, log to logd.txt
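If you only want to train on a subset of cards, the deepspeed launcher can also select them itself; the flags below are standard launcher options shown as an example, not part of the script above:

# run on GPUs 0 and 1 only
deepspeed --include localhost:0,1 train.py
# or simply cap the number of GPUs
deepspeed --num_gpus 2 train.py
# follow the training log written by nohup
tail -f logd.txt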

3. The actual training script: train.py

$ more train.py 
from datasets import load_dataset, DownloadConfig
from transformers import AutoTokenizer
from transformers import DataCollatorWithPadding
from transformers import TrainingArguments
from transformers import AutoModelForSequenceClassification
from transformers import Trainer
from sklearn.metrics import precision_score

# Load the dataset from local disk only (offline mode, nothing is downloaded)
download_config = DownloadConfig(local_files_only=True)
cache_dir = '/data1/dataset_cache_dir'
path = '/data1/data_0616'
raw_datasets = load_dataset(path=path, download_config=download_config, cache_dir=cache_dir)

print(raw_datasets)

model_name = "/data1/model/Qwen2.5-3B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Silence the "Asking to pad a fast tokenizer" warning triggered by padding inside map()
tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
print(tokenizer.pad_token)

# Tokenize title/text pairs; anything longer than 512 tokens is truncated
def tokenize_function(batch):
    return tokenizer(batch["title"], batch["text"], truncation=True, padding=True, max_length=512)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

# Pad every batch to a fixed length of 512 at collation time
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding='max_length', max_length=512)
output_dir = "/data1/result_0704"
# per_device_train_batch_size must stay consistent with deepspeed.json;
# the deepspeed launcher supplies the real local rank through the environment.
training_args = TrainingArguments(output_dir=output_dir, evaluation_strategy="steps", num_train_epochs=100, learning_rate=5e-6,
                                  save_strategy="steps", greater_is_better=True, metric_for_best_model="precision",
                                  per_device_train_batch_size=1, per_device_eval_batch_size=1, deepspeed="deepspeed.json",
                                  load_best_model_at_end=True, local_rank=0, save_total_limit=10)

# Binary classification head on top of Qwen2.5-3B; the model pools the last
# non-padding token, so its config needs a pad_token_id (reuse the EOS id).
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
print(model.config.eos_token_id)
model.config.pad_token_id = model.config.eos_token_id

def compute_metrics(pred):
    labels = pred.label_ids
    # Argmax over the two logits gives the predicted class
    preds = pred.predictions.argmax(-1)
    # Precision for class 0 only; this is the metric used to pick the best checkpoint
    precision = precision_score(labels, preds, labels=[0], average='macro', zero_division=0.0)
    print('precision:', precision)
    return {"precision": precision}

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

trainer.train()
print("train end")
# Because load_best_model_at_end=True, this final evaluation runs on the
# checkpoint with the best precision, not on the last training step.
results = trainer.evaluate()
print(results)
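Checkpoints are written under /data1/result_0704, and the one with the highest precision is what is in memory when training finishes. Below is a minimal sketch of loading one of those checkpoints for prediction; the checkpoint-500 directory name is only a placeholder for whatever checkpoint the run actually wrote:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "/data1/result_0704/checkpoint-500"   # placeholder: substitute the real checkpoint directory
tok = AutoTokenizer.from_pretrained(ckpt)
clf = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()

# Same title/text pairing and truncation as in tokenize_function above
inputs = tok("some title", "some body text", truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = clf(**inputs).logits
print(logits.softmax(-1))        # class probabilities
print(int(logits.argmax(-1)))    # predicted label, 0 or 1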

