
Serve Multiple Fine-Tuned LoRA Adapters with DJL Serving

This notebook demonstrates how to deploy multiple fine-tuned LoRA adapters on top of a single copy of a base model on SageMaker using the DJL Serving Large Model Inference (LMI) DLC. LoRA (Low-Rank Adaptation) is a powerful technique for fine-tuning large language models. It significantly reduces the number of trainable parameters compared to traditional fine-tuning while achieving comparable or superior performance. You can learn more about the technique in the LoRA paper: https://arxiv.org/abs/2106.09685.

A major benefit of LoRA is that fine-tuned adapters can easily be added to or removed from the base model, which makes switching adapters at runtime cheap and practical. In this notebook we show how to deploy a SageMaker endpoint with a single base model and multiple LoRA adapters, and how to select a different adapter for each request.

Because LoRA adapters are much smaller than the base model (often 100x-1000x smaller), an endpoint serving a single base model plus multiple LoRA adapters requires far less hardware than deploying an equivalent number of fully fine-tuned models.
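To make the size argument concrete, here is a small illustrative sketch (the dimensions and rank are hypothetical, not taken from the models used below) of how LoRA shrinks the number of trainable parameters for a single weight matrix:

# Illustrative only: LoRA replaces a full d x k weight update with two
# low-rank factors B (d x r) and A (r x k), so trainable parameters drop
# from d * k to r * (d + k).
d, k, r = 4096, 4096, 16                 # hypothetical hidden sizes and LoRA rank
full_update_params = d * k               # 16,777,216 parameters
lora_params = r * (d + k)                # 131,072 parameters
print(full_update_params / lora_params)  # 128.0 -> ~128x fewer trainable parameters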

The example we will work through in this notebook is guided by the multi adapter example in HuggingFace's PEFT library: https://github.com/huggingface/peft/blob/main/examples/multi_adapter_examples/PEFT_Multi_LoRA_Inference.ipynb.

This notebook demonstrates the simpler approach using the built-in handler. For custom handlers, see the advanced notebook.

Install Packages and Import Dependencies

!pip install huggingface_hub sagemaker boto3 awscli --upgrade --quiet
import sagemaker
from sagemaker import image_uris
import boto3
import os
import time
import json
from pathlib import Path
from sagemaker.utils import name_from_base
from huggingface_hub import snapshot_download

Download Model Artifacts and Upload to S3

We will be deploying an endpoint with 2 LoRA adapters. These are the models we will be using:

- Base Model: https://huggingface.co/huggyllama/llama-7b
- LoRA Fine-Tuned Adapter 1: https://huggingface.co/tloen/alpaca-lora-7b
- LoRA Fine-Tuned Adapter 2: https://huggingface.co/22h/cabrita-lora-v0-1

!rm -rf lora-multi-adapter
!mkdir -p lora-multi-adapter/adapters
snapshot_download("tloen/alpaca-lora-7b", local_dir="lora-multi-adapter/adapters/eng_alpaca", local_dir_use_symlinks=False)
snapshot_download("22h/cabrita-lora-v0-1", local_dir="lora-multi-adapter/adapters/portuguese_alpaca", local_dir_use_symlinks=False)

Creating Inference Handler and DJL Serving Configuration

The following file covers the model server configuration (serving.properties). Many of the built-in handlers, such as vLLM and LMI-Dist, support adapters automatically; check the backend's user guide to confirm. You can also write a custom model.py by following the instructions in the advanced adapter notebook.

The core structure to cover here is the model directory. We include both the base model and LoRA adapters in the model directory like this:

|- model_dir
    |- adapters/
        |- <adapter_1>/
        |- <adapter_2>/
        |- ...
        |- <adapter_n>/
    |- serving.properties
    |- model.py (optional)

It is also possible to store the model files in a separate S3 bucket by setting option.model_id in serving.properties to an S3 URI (e.g. option.model_id=s3://<bucket>/<model-prefix>/). In that case, the adapters directory can live either alongside serving.properties or alongside the model files in S3.

Each subdirectory of adapters contains the LoRA adapter artifacts. Typically there are two files: adapter_model.bin (adapter_model.safetensors in newer PEFT versions) and adapter_config.json, which hold the adapter weights and the adapter configuration respectively. These are usually produced by the PEFT library via the PeftModel.save_pretrained() method.
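For reference, here is a minimal, hedged sketch of how such an adapter directory is produced with PEFT. It is not run in this notebook (the adapters used here were already published to the Hugging Face Hub), and the target modules and hyperparameters are hypothetical.

# Illustrative sketch: create a LoRA adapter with PEFT and save its artifacts.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # hypothetical choice of target modules
    lora_dropout=0.05,
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
# ... fine-tune peft_model here ...
peft_model.save_pretrained("my_adapter")  # writes adapter_config.json and the adapter weights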

%%writefile lora-multi-adapter/serving.properties
# Hugging Face model id (or S3/local path) of the base model
option.model_id=huggyllama/llama-7b
# Use the Python engine with the vLLM rolling batch backend
option.engine=Python
option.rolling_batch=vllm
option.tensor_parallel_degree=1
# Enable LoRA adapter support in vLLM and leave GPU memory headroom for adapter weights
option.enable_lora=true
option.gpu_memory_utilization=0.8
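Before packaging the directory, a quick optional sanity check (using the Path import from above) confirms that the local layout matches the structure described earlier:

# Optional: list the local model directory to confirm it contains
# adapters/<adapter_name>/... and serving.properties before packaging.
for path in sorted(Path("lora-multi-adapter").rglob("*")):
    print(path.relative_to("lora-multi-adapter"))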
!rm -f model.tar.gz
!rm -rf lora-multi-adapter/.ipynb_checkpoints
!tar czvf model.tar.gz -C lora-multi-adapter .

Create SageMaker Model and Endpoint

role = "arn:aws:iam::125045733377:role/AmazonSageMaker-ExecutionRole-djl"  # execution role for the endpoint
sess = sagemaker.session.Session()  # sagemaker session for interacting with different AWS APIs
bucket = sess.default_bucket()  # bucket to house artifacts
model_bucket = sess.default_bucket()  # bucket to house artifacts
s3_code_prefix = "hf-large-model-djl/lora-multi-adapter"  # folder within bucket where code artifact will go

region = sess.boto_region_name
account_id = sess.account_id()

s3_client = boto3.client("s3")
sm_client = boto3.client("sagemaker")
smr_client = boto3.client("sagemaker-runtime")
s3_code_artifact_accelerate = sess.upload_data("model.tar.gz", bucket, s3_code_prefix)
inference_image_uri = image_uris.retrieve(
    framework="djl-deepspeed",
    region=region,
    version="0.27.0",
)
model_name_acc = name_from_base("lora-multi-adapter")

# The LoRA adapters feature is a preview feature; the ENABLE_ADAPTERS_PREVIEW
# environment variable must be set to use it
create_model_response = sm_client.create_model(
    ModelName=model_name_acc,
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": inference_image_uri,
        "ModelDataUrl": s3_code_artifact_accelerate,
        "Environment": {"ENABLE_ADAPTERS_PREVIEW": "true"},
    },
)
model_arn = create_model_response["ModelArn"]
endpoint_config_name = f"{model_name_acc}-config"
endpoint_name = f"{model_name_acc}-endpoint"

endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": model_name_acc,
            "InstanceType": "ml.g5.12xlarge",
            "InitialInstanceCount": 1,
            "ModelDataDownloadTimeoutInSeconds": 1800,
            "ContainerStartupHealthCheckTimeoutInSeconds": 1800,
        },
    ],
)
print(f"endpoint_name: {endpoint_name}")
create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}", EndpointConfigName=endpoint_config_name
)
import time

resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)

while status == "Creating":
    time.sleep(60)
    resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
    status = resp["EndpointStatus"]
    print("Status: " + status)

print("Arn: " + resp["EndpointArn"])
print("Status: " + status)

Make Inference Requests

%%time

response_model = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps({"inputs": ["Tell me about Alpacas", "Invente uma desculpa criativa pra dizer que não preciso ir à festa.", "Tell me about AWS"],
                     "adapters": ["eng_alpaca", "portuguese_alpaca", "eng_alpaca"]}),
    ContentType="application/json",
)

response_model["Body"].read().decode("utf8")

Inference Request targeting the base model without any adapters
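A minimal sketch, assuming the handler falls back to the base model when the request omits the "adapters" field (check your backend's user guide for the exact behavior):

# Hedged example: invoke the same endpoint without specifying adapters so the
# request is served by the base model (assumed fallback behavior).
response_base = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps({"inputs": ["Tell me about Alpacas"]}),
    ContentType="application/json",
)
print(response_base["Body"].read().decode("utf8"))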

Clean up Resources

sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
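For a complete cleanup, also delete the SageMaker model object created earlier:

# Delete the SageMaker model resource as well
sm_client.delete_model(ModelName=model_name_acc)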