Downloading LLMs

Published: 2024-06-21

0. Install the required dependencies

# Upgrade pip
python -m pip install --upgrade pip

# If downloads are slow, consider switching to a mirror index, e.g. append:
# -i https://pypi.tuna.tsinghua.edu.cn/simple

pip install modelscope==1.9.5
pip install transformers==4.35.2
pip install streamlit==1.24.0
pip install sentencepiece==0.1.99
pip install accelerate==0.24.1
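
Optionally, confirm from Python that the pinned versions are the ones actually imported:

import modelscope, transformers, streamlit, sentencepiece, accelerate

for pkg in (modelscope, transformers, streamlit, sentencepiece, accelerate):
    print(pkg.__name__, pkg.__version__)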

1. Via the snapshot_download function provided by ModelScope

import torch
from modelscope import snapshot_download, AutoTokenizer, AutoModelForCausalLM

# Download the model snapshot (skipped if already cached) and get its local path.
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-20b', cache_dir='/root/zyc/internlm', revision='v1.0.2')
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
# Load the weights in bfloat16 to halve memory versus fp32, then move them to the GPU.
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model = model.eval()
# Tokenize the prompt ("Arriving in beautiful nature, we find...") and move it to the GPU too.
inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
for k, v in inputs.items():
    inputs[k] = v.cuda()
# Nucleus sampling (top_p) with a mild repetition penalty.
gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.05}
output = model.generate(**inputs, **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
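
Note that internlm-20b in bfloat16 takes roughly 40 GB of GPU memory, so a single card may not hold it. Since accelerate is installed in step 0, from_pretrained can instead shard the weights across all visible GPUs; a sketch of that alternative load, using the same model_dir as above:

# Alternative to .cuda(): let accelerate place layers across the available GPUs.
# "auto" picks a device map based on free memory on each card.
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
).eval()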

2. Downloading via Hugging Face (hosted outside China, hence the mirror below)

① Install the huggingface_hub package (it provides the huggingface-cli tool)

pip install -U huggingface_hub

② Optionally install the transfer-acceleration plugin

pip install -U hf-transfer
# hf-transfer is only picked up when this variable is set:
export HF_HUB_ENABLE_HF_TRANSFER=1

③ Point the client at a mirror (must be set again in every new shell session)

export HF_ENDPOINT="https://hf-mirror.com"

④ Log in before using huggingface-cli (needed for gated or private models)

huggingface-cli login
# paste your access token when prompted

⑤ Download the model

huggingface-cli download --resume-download <model-id-or-name> --cache-dir /path/to/cache
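
The same flow can also be scripted with the huggingface_hub Python API instead of the CLI. A minimal sketch, where the repo id, token, and cache path are placeholders; note that HF_ENDPOINT must be set before huggingface_hub is imported, since the endpoint is read at import time:

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # must come before the import below

from huggingface_hub import login, snapshot_download

login(token="hf_...")  # same effect as `huggingface-cli login`; token is a placeholder
local_dir = snapshot_download(repo_id="<model-id-or-name>",
                              cache_dir="/path/to/cache",
                              resume_download=True)  # mirrors --resume-download
print(local_dir)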

3. Wrapping the model in a Flask Q&A endpoint

from flask import Flask, request, jsonify
import torch
from modelscope import AutoTokenizer, AutoModelForCausalLM

app = Flask(__name__)


# Path to the local model directory (e.g. one returned by snapshot_download above).
model_dir = 'model_path'
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model = model.eval()

@app.route('/generate_text', methods=['POST'])
def generate_text():
    data = request.get_json()  # expects a JSON request body from the client
    if not data or 'prompt' not in data:
        return jsonify({"error": "missing 'prompt' parameter"}), 400

    prompt = data['prompt']
    inputs = tokenizer([prompt], return_tensors="pt")
    for k, v in inputs.items():
        inputs[k] = v.cuda()

    gen_kwargs = {
        "max_length": 128,
        "top_p": 0.8,
        "temperature": 0.8,
        "do_sample": True,
        "repetition_penalty": 1.05
    }
    output = model.generate(**inputs, **gen_kwargs)
    generated_text = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)

    return jsonify({"generated_text": generated_text})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8848, debug=True)  # debug=True is for development only
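
A quick way to exercise the endpoint, assuming the server is running locally on port 8848:

import requests

resp = requests.post("http://127.0.0.1:8848/generate_text",
                     json={"prompt": "来到美丽的大自然,我们发现"})
print(resp.json()["generated_text"])

For anything beyond local testing, run the app under a production WSGI server (e.g. gunicorn) rather than the Flask debug server.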

4. Configuring conda mirrors

# Show all configured channels
conda config --show channels
# Remove a channel
conda config --remove channels xxx
# Add the Tsinghua mirrors
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes
# Specify a channel for a single install
conda install -c <channel-url> <package>
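
These commands persist to ~/.condarc; after running them the file should look roughly like this (--add prepends, so pkgs/free/ ends up with the highest priority):

channels:
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/
  - defaults
show_channel_urls: true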

