Llama 3: Open LLM from Meta#

Llama-3 x SkyPilot

Llama-3 is the latest top open LLM from Meta, released with a license that authorizes commercial use. You can deploy a private Llama-3 chatbot on your own cloud with SkyPilot, using a single command.

Why use SkyPilot vs. commercial hosted solutions?#

  • No lock-in: run on any supported cloud (AWS, Azure, GCP, Lambda Cloud, IBM, Samsung, OCI)

  • Everything stays in your cloud account (your VMs and buckets)

  • No one else sees your chat history

  • Pay the absolute minimum: no markup from managed solutions

  • Freely choose your own model size, GPU type, number of GPUs, etc., based on your scale and budget

…and you get all of this with one click: let SkyPilot automate the infrastructure.

Prerequisites#

  • Go to the HuggingFace model page and request access to the model meta-llama/Meta-Llama-3-70B-Instruct.

  • Check that you have installed SkyPilot (docs).

  • Check that sky check shows clouds or Kubernetes are enabled.

SkyPilot YAML#

Click to see the full recipe YAML
envs:
  MODEL_NAME: meta-llama/Meta-Llama-3-70B-Instruct
  # MODEL_NAME: meta-llama/Meta-Llama-3-8B-Instruct
  HF_TOKEN: # TODO: Fill with your own huggingface token, or use --env to pass.

service:
  replicas: 2
  # An actual request for readiness probe.
  readiness_probe:
    path: /v1/chat/completions
    post_data:
      model: $MODEL_NAME
      messages:
        - role: user
          content: Hello! What is your name?
      max_tokens: 1

resources:
  accelerators: {L4:8, A10g:8, A10:8, A100:4, A100:8, A100-80GB:2, A100-80GB:4, A100-80GB:8}
  # accelerators: {L4, A10g, A10, L40, A40, A100, A100-80GB} # We can use cheaper accelerators for 8B model.
  cpus: 32+
  use_spot: True
  disk_size: 512  # Ensure model checkpoints can fit.
  disk_tier: best
  ports: 8081  # Expose to internet traffic.

setup: |
  conda activate vllm
  if [ $? -ne 0 ]; then
    conda create -n vllm python=3.10 -y
    conda activate vllm
  fi

  pip install vllm==0.4.2
  # Install Gradio for web UI.
  pip install gradio openai
  pip install flash-attn==2.5.9.post1


run: |
  conda activate vllm
  echo 'Starting vllm api server...'

  # https://github.com/vllm-project/vllm/issues/3098
  export PATH=$PATH:/sbin

  # NOTE: --gpu-memory-utilization 0.95 needed for 4-GPU nodes.
  python -u -m vllm.entrypoints.openai.api_server \
    --port 8081 \
    --model $MODEL_NAME \
    --trust-remote-code --tensor-parallel-size $SKYPILOT_NUM_GPUS_PER_NODE \
    --gpu-memory-utilization 0.95 \
    --max-num-seqs 64 \
    2>&1 | tee api_server.log &

  while ! grep -q 'Uvicorn running on' api_server.log; do
    echo 'Waiting for vllm api server to start...'
    sleep 5
  done

  echo 'Starting gradio server...'
  git clone https://github.com/vllm-project/vllm.git || true
  python vllm/examples/gradio_openai_chatbot_webserver.py \
    -m $MODEL_NAME \
    --port 8811 \
    --model-url http://localhost:8081/v1 \
    --stop-token-ids 128009,128001
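As a rough sanity check on the accelerator table above: the 70B model needs about 140 GB just for fp16 weights, which is why the smallest listed config is 2x A100-80GB, while the 8B model needs about 16 GB and fits on a single L4/A10. A back-of-the-envelope sketch (the ~10% headroom factor is an assumption for KV cache and CUDA overhead, not vLLM's exact accounting):

```python
# Back-of-the-envelope GPU memory check for fp16 serving.
# headroom=1.1 is an assumed ~10% extra for KV cache, activations, and
# CUDA overhead; vLLM's real accounting (--gpu-memory-utilization) differs.

def fits(params_billion: float, num_gpus: int, gpu_mem_gb: float,
         bytes_per_param: int = 2, headroom: float = 1.1) -> bool:
    """Return True if the model weights (plus headroom) fit across the GPUs."""
    needed_gb = params_billion * bytes_per_param * headroom
    return needed_gb <= num_gpus * gpu_mem_gb

# 70B in fp16: ~140 GB of weights alone.
assert not fits(70, 1, 80)   # a single A100-80GB is not enough
assert fits(70, 2, 80)       # 2x A100-80GB works (the smallest config above)
assert fits(70, 8, 24)       # so does 8x L4 (24 GB each)

# 8B in fp16: ~16 GB of weights, fits on one 24 GB GPU (L4/A10/A10G).
assert fits(8, 1, 24)
```

This is why the 8B variant can use the cheaper single-GPU accelerator set shown in the commented-out line of the YAML.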

You can also get the full YAML file here.

Serving Llama-3: single instance#

Launch a single spot instance to serve Llama-3 on your infrastructure:

HF_TOKEN=xxx sky launch llama3.yaml -c llama3 --env HF_TOKEN
Example outputs:
...
I 04-18 16:31:30 optimizer.py:693] == Optimizer ==
I 04-18 16:31:30 optimizer.py:704] Target: minimizing cost
I 04-18 16:31:30 optimizer.py:716] Estimated cost: $1.2 / hour
I 04-18 16:31:30 optimizer.py:716]
I 04-18 16:31:30 optimizer.py:839] Considered resources (1 node):
I 04-18 16:31:30 optimizer.py:909] -----------------------------------------------------------------------------------------------------------------
I 04-18 16:31:30 optimizer.py:909]  CLOUD   INSTANCE                          vCPUs   Mem(GB)   ACCELERATORS   REGION/ZONE      COST ($)   CHOSEN
I 04-18 16:31:30 optimizer.py:909] -----------------------------------------------------------------------------------------------------------------
I 04-18 16:31:30 optimizer.py:909]  Azure   Standard_NC48ads_A100_v4[Spot]    48      440       A100-80GB:2    eastus           1.22          ✔
I 04-18 16:31:30 optimizer.py:909]  AWS     g6.48xlarge[Spot]                 192     768       L4:8           us-east-1b       1.43
I 04-18 16:31:30 optimizer.py:909]  Azure   Standard_NC96ads_A100_v4[Spot]    96      880       A100-80GB:4    eastus           2.44
I 04-18 16:31:30 optimizer.py:909]  AWS     g5.48xlarge[Spot]                 192     768       A10G:8         us-east-2b       2.45
I 04-18 16:31:30 optimizer.py:909]  GCP     g2-standard-96[Spot]              96      384       L4:8           asia-east1-a     2.49
I 04-18 16:31:30 optimizer.py:909]  Azure   Standard_ND96asr_v4[Spot]         96      900       A100:8         eastus           4.82
I 04-18 16:31:30 optimizer.py:909]  GCP     a2-highgpu-4g[Spot]               48      340       A100:4         europe-west4-a   4.82
I 04-18 16:31:30 optimizer.py:909]  AWS     p4d.24xlarge[Spot]                96      1152      A100:8         us-east-2b       4.90
I 04-18 16:31:30 optimizer.py:909]  Azure   Standard_ND96amsr_A100_v4[Spot]   96      1924      A100-80GB:8    southcentralus   5.17
I 04-18 16:31:30 optimizer.py:909]  GCP     a2-ultragpu-4g[Spot]              48      680       A100-80GB:4    us-east4-c       7.39
I 04-18 16:31:30 optimizer.py:909]  GCP     a2-highgpu-8g[Spot]               96      680       A100:8         europe-west4-a   9.65
I 04-18 16:31:30 optimizer.py:909]  GCP     a2-ultragpu-8g[Spot]              96      1360      A100-80GB:8    us-east4-c       14.79
I 04-18 16:31:30 optimizer.py:909] -----------------------------------------------------------------------------------------------------------------
I 04-18 16:31:30 optimizer.py:909]
...

To run on Kubernetes or use on-demand instances, pass --no-use-spot to the above command.

Example outputs for Kubernetes / on-demand instances:
$ HF_TOKEN=xxx sky launch llama3.yaml -c llama3 --env HF_TOKEN --no-use-spot
...
I 04-18 16:34:13 optimizer.py:693] == Optimizer ==
I 04-18 16:34:13 optimizer.py:704] Target: minimizing cost
I 04-18 16:34:13 optimizer.py:716] Estimated cost: $5.0 / hour
I 04-18 16:34:13 optimizer.py:716]
I 04-18 16:34:13 optimizer.py:839] Considered resources (1 node):
I 04-18 16:34:13 optimizer.py:909] ------------------------------------------------------------------------------------------------------------------
I 04-18 16:34:13 optimizer.py:909]  CLOUD        INSTANCE                    vCPUs   Mem(GB)   ACCELERATORS   REGION/ZONE        COST ($)   CHOSEN
I 04-18 16:34:13 optimizer.py:909] ------------------------------------------------------------------------------------------------------------------
I 04-18 16:34:13 optimizer.py:909]  Kubernetes   32CPU--512GB--8A100         32      512       A100:8         kubernetes         0.00          ✔
I 04-18 16:34:13 optimizer.py:909]  Fluidstack   recE2ZDQmqR9HBKYs5xSnjtPw   64      240       A100-80GB:2    generic_1_canada   4.96
I 04-18 16:34:13 optimizer.py:909]  Fluidstack   recUiB2e6s3XDxwE9           60      440       A100:4         calgary_1_canada   5.88
I 04-18 16:34:13 optimizer.py:909]  Azure        Standard_NC48ads_A100_v4    48      440       A100-80GB:2    eastus             7.35
I 04-18 16:34:13 optimizer.py:909]  GCP          g2-standard-96              96      384       L4:8           us-east4-a         7.98
I 04-18 16:34:13 optimizer.py:909]  Fluidstack   recWGm4oJ9AB3XVPxzRaujgbx   126     480       A100-80GB:4    generic_1_canada   9.89
I 04-18 16:34:13 optimizer.py:909]  Paperspace   A100-80Gx4                  46      320       A100-80GB:4    East Coast (NY2)   12.72
I 04-18 16:34:13 optimizer.py:909]  AWS          g6.48xlarge                 192     768       L4:8           us-east-1          13.35
I 04-18 16:34:13 optimizer.py:909]  GCP          a2-highgpu-4g               48      340       A100:4         us-central1-a      14.69
I 04-18 16:34:13 optimizer.py:909]  Azure        Standard_NC96ads_A100_v4    96      880       A100-80GB:4    eastus             14.69
I 04-18 16:34:13 optimizer.py:909]  AWS          g5.48xlarge                 192     768       A10G:8         us-east-1          16.29
I 04-18 16:34:13 optimizer.py:909]  Fluidstack   recUYj6oGJCvAvCXC7KQo5Fc7   252     960       A100-80GB:8    generic_1_canada   19.79
I 04-18 16:34:13 optimizer.py:909]  GCP          a2-ultragpu-4g              48      680       A100-80GB:4    us-central1-a      20.11
I 04-18 16:34:13 optimizer.py:909]  Paperspace   A100-80Gx8                  96      640       A100-80GB:8    East Coast (NY2)   25.44
I 04-18 16:34:13 optimizer.py:909]  Azure        Standard_ND96asr_v4         96      900       A100:8         eastus             27.20
I 04-18 16:34:13 optimizer.py:909]  GCP          a2-highgpu-8g               96      680       A100:8         us-central1-a      29.39
I 04-18 16:34:13 optimizer.py:909]  Azure        Standard_ND96amsr_A100_v4   96      1924      A100-80GB:8    eastus             32.77
I 04-18 16:34:13 optimizer.py:909]  AWS          p4d.24xlarge                96      1152      A100:8         us-east-1          32.77
I 04-18 16:34:13 optimizer.py:909]  GCP          a2-ultragpu-8g              96      1360      A100-80GB:8    us-central1-a      40.22
I 04-18 16:34:13 optimizer.py:909]  AWS          p4de.24xlarge               96      1152      A100-80GB:8    us-east-1          40.97
I 04-18 16:34:13 optimizer.py:909] ------------------------------------------------------------------------------------------------------------------
...

Wait until the model is ready (this can take 10+ minutes), as indicated by these lines:

...
(task, pid=17433) Waiting for vllm api server to start...
...
(task, pid=17433) INFO:     Started server process [20621]
(task, pid=17433) INFO:     Waiting for application startup.
(task, pid=17433) INFO:     Application startup complete.
(task, pid=17433) INFO:     Uvicorn running on http://0.0.0.0:8081 (Press CTRL+C to quit)
...
(task, pid=17433) Running on local URL:  http://127.0.0.1:8811
(task, pid=17433) Running on public URL: https://xxxxxxxxxx.gradio.live
...
(task, pid=17433) INFO 03-28 04:32:50 metrics.py:218] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%

🎉 Congratulations! 🎉 You have now launched the Llama-3 Instruct LLM on your infrastructure.

You can play with the model via:

  • Standard OpenAI-compatible endpoints (e.g., /v1/chat/completions)

  • Gradio UI (automatically launched)

To curl /v1/chat/completions:

ENDPOINT=$(sky status --endpoint 8081 llama3)

# We need to manually specify the stop_token_ids to make sure the model
# finishes on <|eot_id|>.
curl http://$ENDPOINT/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Who are you?"
      }
    ],
    "stop_token_ids": [128009,  128001]
  }'
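The same request can be sent from Python. A minimal sketch using only the standard library (the helper names here are illustrative, not part of SkyPilot or vLLM; the payload mirrors the curl call above, including stop_token_ids for <|eot_id|> and <|end_of_text|>):

```python
import json
import urllib.request

def build_chat_request(model: str, messages: list, stop_token_ids=None) -> dict:
    """Build a /v1/chat/completions payload mirroring the curl example."""
    payload = {"model": model, "messages": messages}
    if stop_token_ids:
        # vLLM's OpenAI-compatible server accepts stop_token_ids as an extra field.
        payload["stop_token_ids"] = stop_token_ids
    return payload

def chat(endpoint: str, payload: dict) -> dict:
    """POST the payload to the server; endpoint is host:port, as returned by sky."""
    req = urllib.request.Request(
        f"http://{endpoint}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"},
    ],
    stop_token_ids=[128009, 128001],  # <|eot_id|>, <|end_of_text|>
)
# Usage: chat(endpoint, payload), with endpoint from
# `sky status --endpoint 8081 llama3`.
```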

To use the Gradio UI, open the URL shown in the logs:

(task, pid=17433) Running on public URL: https://xxxxxxxxxx.gradio.live

Gradio UI serving Llama-3

To stop the instance:

sky stop llama3

To shut down all resources:

sky down llama3

Note: if you would like to try the 8B model, you can use the following accelerators:

resources:
  accelerators: {L4, A10g, A10, L40, A40, A100, A100-80GB}

Serving Llama-3: scaling up with SkyServe#

After playing with the model, you can deploy it with autoscaling and load balancing using SkyServe.

With no change to the YAML, launch a fully managed service on your infrastructure:

HF_TOKEN=xxx sky serve up llama3.yaml -n llama3 --env HF_TOKEN

Wait until the service is ready:

watch -n10 sky serve status llama3
Example outputs:
Services
NAME  VERSION  UPTIME  STATUS  REPLICAS  ENDPOINT
llama3  1        35s     READY   2/2       xx.yy.zz.100:30001

Service Replicas
SERVICE_NAME  ID  VERSION  IP            LAUNCHED     RESOURCES                       STATUS  REGION
llama3          1   1        xx.yy.zz.121  18 mins ago  1x GCP([Spot]{'A100-80GB': 4})  READY   us-east4
llama3          2   1        xx.yy.zz.245  18 mins ago  1x GCP([Spot]{'A100-80GB': 4})  READY   us-east4

Get a single endpoint that load-balances across the replicas:

ENDPOINT=$(sky serve status --endpoint llama3)

Tip: SkyServe fully manages the lifecycle of your replicas. For example, if a spot replica is preempted, the controller will automatically replace it. This significantly reduces the operational burden while saving costs.

To curl the endpoint:

curl -L $ENDPOINT/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Who are you?"
      }
    ]
  }'
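The server replies in the standard OpenAI chat-completions format, so the assistant's text lives at choices[0].message.content. A tiny extraction helper, shown against a hand-written illustrative response (the field values are made up; a real reply will differ):

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat completion."""
    return response["choices"][0]["message"]["content"]

# Illustrative response shape only; values here are invented for the example.
sample = {
    "id": "chatcmpl-xxxx",
    "object": "chat.completion",
    "model": "meta-llama/Meta-Llama-3-70B-Instruct",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "I am an AI assistant..."},
            "finish_reason": "stop",
        }
    ],
}

print(extract_reply(sample))  # -> I am an AI assistant...
```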

To shut down all resources:

sky serve down llama3

See more details in the SkyServe docs.

Optional: Connect a GUI to your Llama-3 endpoint#

It is also possible to access the Llama-3 service with a separate GUI frontend, so that user requests sent to the GUI are load-balanced across replicas.

  1. Start the chat web UI:

sky launch -c llama3-gui ./gui.yaml --env ENDPOINT=$(sky serve status --endpoint llama3)
  2. Then, we can access the GUI at the returned gradio link:

| INFO | stdout | Running on public URL: https://6141e84201ce0bb4ed.gradio.live

Finetuning Llama-3#

You can finetune Llama-3 on your own data. We have a tutorial for finetuning Llama-2 for Vicuna with SkyPilot, which can be adapted for Llama-3. You can find the tutorial here and a detailed blog post here.

Included files#