Finetune Llama 3.1 on Your Own Infrastructure

On July 23, 2024, Meta released the Llama 3.1 model family, which includes a 405B-parameter model in both base and instruction-tuned variants. Llama 3.1 405B became the first open LLM that rivals top proprietary models such as GPT-4o and Claude 3.5 Sonnet.

This guide shows how to finetune Llama 3.1 on your own data and infrastructure with SkyPilot and torchtune. Everything is packaged in a simple SkyPilot YAML that you can launch with one command on your infrastructure (see the sketch right after this list):

- a local GPU workstation
- a Kubernetes cluster
- a cloud account (12 clouds supported)
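As a rough sketch of what that one command looks like, the same YAML can be pointed at different infrastructure without edits, assuming the corresponding infra has already been enabled via sky check (see the appendix); lora.yaml is the finetuning recipe introduced below:

# Sketch only: the same recipe targeting different infra.
sky launch -c llama31 lora.yaml --env HF_TOKEN --cloud kubernetes   # your Kubernetes cluster
sky launch -c llama31 lora.yaml --env HF_TOKEN --cloud aws          # a cloud account, e.g. AWS
sky launch -c llama31 lora.yaml --env HF_TOKEN                      # let SkyPilot pick the cheapest enabled option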

Let's Finetune Llama 3.1
We will use torchtune to finetune Llama 3.1. The example below uses the yahma/alpaca-cleaned dataset, which you can later replace with your own dataset.

To set up the environment for launching the finetuning job, first complete the Appendix: Preparation section.

The finetuning job is packaged in a SkyPilot YAML file. You can launch it with the same interface on any of your own infrastructure, e.g., Kubernetes or any cloud.

SkyPilot YAML for finetuning Llama 3.1: lora.yaml
# LoRA finetuning Meta Llama 3.1 on any of your own infra.
#
# Usage:
#
# HF_TOKEN=xxx sky launch lora.yaml -c llama31 --env HF_TOKEN
#
# To finetune a 70B model:
#
# HF_TOKEN=xxx sky launch lora.yaml -c llama31-70 --env HF_TOKEN --env MODEL_SIZE=70B
envs:
  MODEL_SIZE: 8B
  HF_TOKEN:
  DATASET: "yahma/alpaca-cleaned"
  # Change this to your own checkpoint bucket
  CHECKPOINT_BUCKET_NAME: sky-llama-31-checkpoints

resources:
  accelerators: A100:8
  disk_tier: best
  use_spot: true

file_mounts:
  /configs: ./configs
  /output:
    name: $CHECKPOINT_BUCKET_NAME
    mode: MOUNT
    # Optionally, specify the store to enforce to use one of the stores below:
    #   r2/azure/gcs/s3/cos
    # store: r2
setup: |
  pip install torch torchvision

  # Install torchtune from source for the latest Llama 3.1 model
  pip install git+https://github.com/pytorch/torchtune.git@58255001bd0b1e3a81a6302201024e472af05379
  # pip install torchtune

  tune download meta-llama/Meta-Llama-3.1-${MODEL_SIZE}-Instruct \
    --hf-token $HF_TOKEN \
    --output-dir /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct \
    --ignore-patterns "original/consolidated*"

run: |
  tune run --nproc_per_node $SKYPILOT_NUM_GPUS_PER_NODE \
    lora_finetune_distributed \
    --config /configs/${MODEL_SIZE}-lora.yaml \
    dataset.source=$DATASET

  # Remove the checkpoint files to save space, LoRA serving only needs the
  # adapter files.
  rm /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct/*.pt
  rm /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct/*.safetensors

  mkdir -p /output/$MODEL_SIZE-lora
  rsync -Pavz /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct /output/$MODEL_SIZE-lora
  cp -r /tmp/lora_finetune_output /output/$MODEL_SIZE-lora/
Run the following on your local machine:
# Download the files for Llama 3.1 finetuning
git clone https://github.com/skypilot-org/skypilot
cd skypilot/llm/llama-3.1
export HF_TOKEN=xxxx
# It takes about 40 mins on 8 A100 GPUs to finetune an 8B
# Llama 3.1 model with LoRA on the Alpaca dataset.
sky launch -c llama31 lora.yaml \
  --env HF_TOKEN --env MODEL_SIZE=8B \
  --env CHECKPOINT_BUCKET_NAME="your-own-bucket-name"
To finetune a larger model with 70B parameters, simply change the parameters as follows:
sky launch -c llama31-70 lora.yaml \
  --env HF_TOKEN --env MODEL_SIZE=70B \
  --env CHECKPOINT_BUCKET_NAME="your-own-bucket-name"
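While the job is running, you can monitor it with the usual SkyPilot commands (a quick sketch; llama31 is the cluster name used in the launch commands above):

# Stream the logs of the most recent job on the cluster
# (Ctrl-C only detaches; the job keeps running on the cluster):
sky logs llama31

# Show the job queue on the cluster and the overall cluster status:
sky queue llama31
sky status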
Finetuning Llama 3.1 405B: work in progress! If you want to follow this effort, join the SkyPilot community Slack for discussions.
Use Your Custom Data
The example above finetunes Llama 3.1 on the Alpaca dataset (yahma/alpaca-cleaned), but in real use cases you will likely want to finetune it on your own dataset.

You can do so by specifying the Hugging Face path of your own dataset, as shown below (we use gbharti/finance-alpaca as an example):
# It takes about 1 hour on 8 A100 GPUs to finetune an 8B
# Llama 3.1 model with LoRA on the finance dataset.
sky launch -c llama31 lora.yaml \
  --env HF_TOKEN --env MODEL_SIZE=8B \
  --env CHECKPOINT_BUCKET_NAME="your-own-bucket-name" \
  --env DATASET="gbharti/finance-alpaca"
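The configs in this example use torchtune's Alpaca-style dataset components, which expect instruction, input, and output columns. Before launching, you can quickly check that your own dataset matches that schema; a minimal sketch, assuming the datasets package is installed locally:

pip install datasets
# Print the first record to confirm it has instruction / input / output fields:
python -c "from datasets import load_dataset; print(load_dataset('gbharti/finance-alpaca', split='train')[0])"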

Serve the Finetuned Model
With the Llama 3.1 finetuned on your dataset, you can now serve the finetuned model with a single command.

Note: CHECKPOINT_BUCKET_NAME should be the bucket you used for storing checkpoints in the finetuning step above.
sky launch -c serve-llama31 serve.yaml \
  --env LORA_NAME="my-finance-lora" \
  --env CHECKPOINT_BUCKET_NAME="your-own-bucket-name"
You can interact with the model in a terminal:
ENDPOINT=$(sky status --endpoint 8081 serve-llama31)
curl http://$ENDPOINT/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-finance-lora",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "For a car, what scams can be plotted with 0% financing vs rebate?"
      }
    ]
  }' | jq .
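You can also list the models the endpoint exposes; with --enable-lora and --lora-modules, recent vLLM versions report the LoRA adapter name alongside the base model (a sketch; verify against the vLLM version pinned in serve.yaml):

ENDPOINT=$(sky status --endpoint 8081 serve-llama31)
# The adapter name (my-finance-lora) should appear next to the base model:
curl http://$ENDPOINT/v1/models | jq .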
🎉 Congratulations! You now have a finetuned Llama 3.1 8B model that is well versed in finance topics. And importantly, all model checkpoints and replicas stay in your own private infrastructure.
SkyPilot YAML for serving the finetuned model: serve.yaml
# Serve a LoRA finetuned Meta Llama 3.1.
#
# Usage:
#
# HF_TOKEN=xxx sky launch serve.yaml -c llama31-serve --env HF_TOKEN
envs:
  MODEL_SIZE: 8B
  HF_TOKEN:
  # Change this to your checkpoint bucket created in lora.yaml
  CHECKPOINT_BUCKET_NAME: your-checkpoint-bucket
  LORA_NAME: my-finance-lora

resources:
  accelerators: L4
  ports: 8081
  cpus: 32+

file_mounts:
  /checkpoints:
    name: $CHECKPOINT_BUCKET_NAME
    mode: MOUNT

setup: |
  pip install vllm==0.5.3post1
  pip install vllm-flash-attn==2.5.9.post1
  pip install openai

run: |
  vllm serve meta-llama/Meta-Llama-3.1-${MODEL_SIZE}-Instruct \
    --tensor-parallel-size $SKYPILOT_NUM_GPUS_PER_NODE --enable-lora \
    --lora-modules $LORA_NAME=/checkpoints/${MODEL_SIZE}-lora/Meta-Llama-3.1-${MODEL_SIZE}-Instruct/ \
    --max-model-len=2048 --port 8081
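When you are done experimenting, you can tear down both clusters to stop incurring costs. The checkpoint bucket is a separate SkyPilot storage object and persists until you delete it; a sketch of the relevant commands:

# Tear down the finetuning and serving clusters:
sky down llama31 serve-llama31

# The checkpoint bucket persists; list it, and delete it only when you no
# longer need the LoRA adapters:
sky storage ls
# sky storage delete your-own-bucket-name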
Appendix: Preparation
Request access to the Llama 3.1 weights on Hugging Face (click the blue box and follow the steps).
Get your Hugging Face access token.
Add the Hugging Face token to your environment variables:
export HF_TOKEN="xxxx"
Install SkyPilot for launching the finetuning:
pip install skypilot-nightly[aws,gcp,kubernetes]
# or other clouds (12 clouds + kubernetes supported) you have setup
# See: https://docs.skypilot.org.cn/en/latest/getting-started/installation.html
Check your infrastructure setup:
sky check
🎉 Enabled clouds 🎉
✔ AWS
✔ GCP
✔ Azure
✔ OCI
✔ Lambda
✔ RunPod
✔ Paperspace
✔ Fluidstack
✔ Cudo
✔ IBM
✔ SCP
✔ vSphere
✔ Cloudflare (for R2 object store)
✔ Kubernetes
What's Next
Included Files
configs/70B-lora.yaml
# Config for multi-device LoRA in lora_finetune_distributed.py
# using a Llama3.1 70B model
#
# This config assumes that you've run the following command before launching
# this run:
# tune download meta-llama/Meta-Llama-3.1-70B-Instruct --output-dir /tmp/Meta-Llama-3.1-70B-Instruct --ignore-patterns "original/consolidated*"
#
# This config needs 8 GPUs to run
# tune run --nproc_per_node 8 lora_finetune_distributed --config llama3_1/70B_lora
# Model Arguments
model:
  _component_: torchtune.models.llama3_1.lora_llama3_1_70b
  lora_attn_modules: ['q_proj', 'k_proj', 'v_proj']
  apply_lora_to_mlp: False
  apply_lora_to_output: False
  lora_rank: 16
  lora_alpha: 32

tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: /tmp/Meta-Llama-3.1-70B-Instruct/original/tokenizer.model

checkpointer:
  _component_: torchtune.utils.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Meta-Llama-3.1-70B-Instruct/
  checkpoint_files: [
    model-00001-of-00030.safetensors,
    model-00002-of-00030.safetensors,
    model-00003-of-00030.safetensors,
    model-00004-of-00030.safetensors,
    model-00005-of-00030.safetensors,
    model-00006-of-00030.safetensors,
    model-00007-of-00030.safetensors,
    model-00008-of-00030.safetensors,
    model-00009-of-00030.safetensors,
    model-00010-of-00030.safetensors,
    model-00011-of-00030.safetensors,
    model-00012-of-00030.safetensors,
    model-00013-of-00030.safetensors,
    model-00014-of-00030.safetensors,
    model-00015-of-00030.safetensors,
    model-00016-of-00030.safetensors,
    model-00017-of-00030.safetensors,
    model-00018-of-00030.safetensors,
    model-00019-of-00030.safetensors,
    model-00020-of-00030.safetensors,
    model-00021-of-00030.safetensors,
    model-00022-of-00030.safetensors,
    model-00023-of-00030.safetensors,
    model-00024-of-00030.safetensors,
    model-00025-of-00030.safetensors,
    model-00026-of-00030.safetensors,
    model-00027-of-00030.safetensors,
    model-00028-of-00030.safetensors,
    model-00029-of-00030.safetensors,
    model-00030-of-00030.safetensors,
  ]
  recipe_checkpoint: null
  output_dir: /tmp/Meta-Llama-3.1-70B-Instruct/
  model_type: LLAMA3
resume_from_checkpoint: False

# Dataset and Sampler
dataset:
  _component_: torchtune.datasets.alpaca_dataset
seed: null
shuffle: True
batch_size: 2

# Optimizer and Scheduler
optimizer:
  _component_: torch.optim.AdamW
  weight_decay: 0.01
  lr: 3e-4
lr_scheduler:
  _component_: torchtune.modules.get_cosine_schedule_with_warmup
  num_warmup_steps: 100

loss:
  _component_: torch.nn.CrossEntropyLoss

# Training
epochs: 1
max_steps_per_epoch: null
gradient_accumulation_steps: 1

# Logging
output_dir: /tmp/lora_finetune_output
metric_logger:
  _component_: torchtune.utils.metric_logging.DiskLogger
  log_dir: ${output_dir}
log_every_n_steps: 1
log_peak_memory_stats: False

# Environment
device: cuda
dtype: bf16
enable_activation_checkpointing: True
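Any field in these configs can also be overridden from the command line using torchtune's key=value syntax, which is exactly how lora.yaml injects dataset.source above. A hypothetical sketch (the override values below are for illustration only, not recommendations):

# Run the 70B recipe with a few config fields overridden on the CLI:
tune run --nproc_per_node 8 lora_finetune_distributed \
  --config /configs/70B-lora.yaml \
  batch_size=4 \
  model.lora_rank=32 \
  optimizer.lr=1e-4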
configs/8B-lora.yaml
# Config for multi-device LoRA finetuning in lora_finetune_distributed.py
# using a Llama3.1 8B Instruct model
#
# This config assumes that you've run the following command before launching
# this run:
# tune download meta-llama/Meta-Llama-3.1-8B-Instruct --output-dir /tmp/Meta-Llama-3.1-8B-Instruct --ignore-patterns "original/consolidated.00.pth"
#
# To launch on 2 devices, run the following command from root:
# tune run --nproc_per_node 2 lora_finetune_distributed --config llama3_1/8B_lora
#
# You can add specific overrides through the command line. For example
# to override the checkpointer directory while launching training
# you can run:
# tune run --nproc_per_node 2 lora_finetune_distributed --config llama3_1/8B_lora checkpointer.checkpoint_dir=<YOUR_CHECKPOINT_DIR>
#
# This config works best when the model is being fine-tuned on 2+ GPUs.
# For single device LoRA finetuning please use 8B_lora_single_device.yaml
# or 8B_qlora_single_device.yaml
# Tokenizer
tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: /tmp/Meta-Llama-3.1-8B-Instruct/original/tokenizer.model

# Model Arguments
model:
  _component_: torchtune.models.llama3_1.lora_llama3_1_8b
  lora_attn_modules: ['q_proj', 'v_proj']
  apply_lora_to_mlp: False
  apply_lora_to_output: False
  lora_rank: 8
  lora_alpha: 16

checkpointer:
  _component_: torchtune.utils.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Meta-Llama-3.1-8B-Instruct/
  checkpoint_files: [
    model-00001-of-00004.safetensors,
    model-00002-of-00004.safetensors,
    model-00003-of-00004.safetensors,
    model-00004-of-00004.safetensors
  ]
  recipe_checkpoint: null
  output_dir: /tmp/Meta-Llama-3.1-8B-Instruct/
  model_type: LLAMA3
resume_from_checkpoint: False

# Dataset and Sampler
dataset:
  _component_: torchtune.datasets.alpaca_cleaned_dataset
seed: null
shuffle: True
batch_size: 2

# Optimizer and Scheduler
optimizer:
  _component_: torch.optim.AdamW
  weight_decay: 0.01
  lr: 3e-4
lr_scheduler:
  _component_: torchtune.modules.get_cosine_schedule_with_warmup
  num_warmup_steps: 100

loss:
  _component_: torch.nn.CrossEntropyLoss

# Training
epochs: 1
max_steps_per_epoch: null
gradient_accumulation_steps: 32

# Logging
output_dir: /tmp/lora_finetune_output
metric_logger:
  _component_: torchtune.utils.metric_logging.DiskLogger
  log_dir: ${output_dir}
log_every_n_steps: 1
log_peak_memory_stats: False

# Environment
device: cuda
dtype: bf16
enable_activation_checkpointing: False
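As the comments in this config note, it targets 2+ GPUs. If you only have a single GPU, torchtune ships single-device LoRA/QLoRA configs you can run directly; a sketch, assuming a torchtune release that includes the llama3_1 single-device configs:

# Download the 8B weights, then run QLoRA finetuning on one GPU using
# torchtune's built-in single-device config:
tune download meta-llama/Meta-Llama-3.1-8B-Instruct \
  --hf-token $HF_TOKEN \
  --output-dir /tmp/Meta-Llama-3.1-8B-Instruct \
  --ignore-patterns "original/consolidated.00.pth"
tune run lora_finetune_single_device --config llama3_1/8B_qlora_single_device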
lora.yaml
# LoRA finetuning Meta Llama-3.1 on any of your own infra.
#
# Usage:
#
# HF_TOKEN=xxx sky launch lora.yaml -c llama31 --env HF_TOKEN
#
# To finetune a 70B model:
#
# HF_TOKEN=xxx sky launch lora.yaml -c llama31-70 --env HF_TOKEN --env MODEL_SIZE=70B
envs:
  MODEL_SIZE: 8B
  HF_TOKEN:
  DATASET: "yahma/alpaca-cleaned"
  # Change this to your own checkpoint bucket
  CHECKPOINT_BUCKET_NAME: sky-llama-31-checkpoints

resources:
  accelerators: A100:8
  disk_tier: best
  use_spot: true

file_mounts:
  /configs: ./configs
  /output:
    name: $CHECKPOINT_BUCKET_NAME
    mode: MOUNT
    # Optionally, specify the store to enforce to use one of the stores below:
    #   r2/azure/gcs/s3/cos
    # store: r2
setup: |
  pip install torch==2.4.0 torchvision

  # Install torchtune from source for the latest Llama-3.1 model
  pip install git+https://github.com/pytorch/torchtune.git@58255001bd0b1e3a81a6302201024e472af05379
  # pip install torchtune

  tune download meta-llama/Meta-Llama-3.1-${MODEL_SIZE}-Instruct \
    --hf-token $HF_TOKEN \
    --output-dir /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct \
    --ignore-patterns "original/consolidated*"

run: |
  tune run --nproc_per_node $SKYPILOT_NUM_GPUS_PER_NODE \
    lora_finetune_distributed \
    --config /configs/${MODEL_SIZE}-lora.yaml \
    dataset.source=$DATASET

  # Remove the checkpoint files to save space, LoRA serving only needs the
  # adapter files.
  rm /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct/*.pt
  rm /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct/*.safetensors

  mkdir -p /output/$MODEL_SIZE-lora
  rsync -Pavz /tmp/Meta-Llama-3.1-${MODEL_SIZE}-Instruct /output/$MODEL_SIZE-lora
  cp -r /tmp/lora_finetune_output /output/$MODEL_SIZE-lora/
serve.yaml
# Serve a LoRA finetuned Meta Llama-3.1.
#
# Usage:
#
# HF_TOKEN=xxx sky launch serve.yaml -c llama31-serve --env HF_TOKEN
envs:
  MODEL_SIZE: 8B
  HF_TOKEN:
  # Change this to your checkpoint bucket created in lora.yaml
  CHECKPOINT_BUCKET_NAME: your-checkpoint-bucket
  LORA_NAME: my-finance-lora

resources:
  accelerators: L4
  ports: 8081
  cpus: 32+

file_mounts:
  /checkpoints:
    name: $CHECKPOINT_BUCKET_NAME
    mode: MOUNT

setup: |
  pip install vllm==0.5.3post1
  pip install vllm-flash-attn==2.5.9.post1
  pip install openai

run: |
  vllm serve meta-llama/Meta-Llama-3.1-${MODEL_SIZE}-Instruct \
    --tensor-parallel-size $SKYPILOT_NUM_GPUS_PER_NODE --enable-lora \
    --lora-modules $LORA_NAME=/checkpoints/${MODEL_SIZE}-lora/Meta-Llama-3.1-${MODEL_SIZE}-Instruct/ \
    --max-model-len=2048 --port 8081