Llama2-70B seq-scheduler rolling-batch deployment guide

In this notebook, we deploy the Llama-2-70B model across GPUs on an ml.p4d.24xlarge instance.

Import the relevant libraries and configure several global variables using boto3.

!pip install sagemaker boto3 --upgrade
import sagemaker
import jinja2
from sagemaker import image_uris
import boto3
import os
import time
import json
from pathlib import Path
role = sagemaker.get_execution_role()  # execution role for the endpoint
sess = sagemaker.session.Session()  # sagemaker session for interacting with different AWS APIs
bucket = sess.default_bucket()  # bucket to house artifacts
model_bucket = sess.default_bucket()  # bucket to house artifacts
s3_code_prefix = "hf-large-model-djl/code_llama2"  # folder within bucket where code artifact will go

region = sess._region_name
account_id = sess.account_id()

s3_client = boto3.client("s3")
sm_client = boto3.client("sagemaker")
smr_client = boto3.client("sagemaker-runtime")

jinja_env = jinja2.Environment()

Create a SageMaker-compatible model artifact and upload it to S3

SageMaker Large Model Inference containers can be used to host models without providing your own inference code. This is extremely useful when there is no custom pre-processing of the input data or postprocessing of the model's predictions.

SageMaker expects the model artifact to be in tarball format. In this example, we provide a single file: serving.properties.

The tarball is in the following format:

code_llama2
└── serving.properties

  • serving.properties is the configuration file that can be used to configure the model server.
!mkdir -p code_llama2

Create serving.properties

This is a configuration file to indicate to DJL Serving which model parallelization and inference optimization libraries you would like to use. Depending on your need, you can set the appropriate configuration.

Here is a list of settings that we use in this configuration file:

  • engine: The engine for DJL to use. In this case, we intend to use Accelerate and hence set it to Python.
  • option.model_id: The model id of a pretrained model hosted inside a model repository on huggingface.co (https://huggingface.co/models). The container uses this model id to download the corresponding model repository from huggingface.co.
  • option.tensor_parallel_degree: Set to the number of GPU devices over which Accelerate needs to partition the model. This parameter also controls the number of workers per model that will be started when DJL Serving runs. As an example, if we have an 8 GPU machine and we create 8 partitions, then we will have 1 worker per model to serve the requests (see the quick calculation below).

For more details on the configuration options and an exhaustive list, you can refer to the documentation - https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-configuration.html.
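
For example, the relationship between the GPU count, option.tensor_parallel_degree, and the number of workers per model can be checked with a quick calculation (a minimal sketch; the GPU count of 8 corresponds to the ml.p4d.24xlarge used in this notebook):

# ml.p4d.24xlarge has 8 GPUs; with tensor_parallel_degree=8 each model copy
# spans all 8 GPUs, so DJL Serving starts a single worker per model.
num_gpus = 8
tensor_parallel_degree = 8
workers_per_model = num_gpus // tensor_parallel_degree
print(f"Workers per model: {workers_per_model}")  # -> 1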

# define a variable to contain the s3url of the location that has the model
pretrained_model_location = f"s3://sagemaker-example-files-prod-{region}/models/llama2/"
print(f"Pretrained model will be downloaded from ---- > {pretrained_model_location}")

In the cell below, we write serving.properties and run it through a Jinja template. The template lets us parameterize option.s3url so that it can be changed based on the pretrained model location; because this configuration downloads the model directly from the Hugging Face Hub via option.model_id, the render here is effectively a pass-through.

import os

os.environ['MODEL_ID'] = "TheBloke/Llama-2-70B-fp16"
os.environ['HF_TRUST_REMOTE_CODE'] = "TRUE"
model_id = os.getenv('MODEL_ID')
with open('code_llama2/serving.properties', 'w') as f:
    f.write(f"""engine=Python
option.model_id={model_id}
option.tensor_parallel_degree=8
option.dtype=fp16
option.model_loading_timeout=3600
option.trust_remote_code=true

# rolling-batch parameters
option.max_rolling_batch_size=4
option.rolling_batch=scheduler

# seq-scheduler parameters
# limits the max_sparsity in the token sequence caused by padding
option.max_sparsity=0.33
# limits the max number of batch splits, where each split has its own inference call
option.max_splits=3
# other options: contrastive, sample
option.decoding_strategy=greedy
# default: true
option.disable_flash_attn=true
""")
# render the serving.properties template; with option.model_id set above this is a pass-through, but an option.s3url={{s3url}} entry could be plugged in here instead
template = jinja_env.from_string(Path("code_llama2/serving.properties").open().read())
Path("code_llama2/serving.properties").open("w").write(
    template.render(s3url=pretrained_model_location)
)
!pygmentize code_llama2/serving.properties | cat -n
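
As an optional sanity check (not part of the original flow), the rendered serving.properties can be parsed into a dictionary to confirm the rolling-batch settings took effect:

# Parse serving.properties as key=value pairs, skipping blanks and comments.
config = {}
with open("code_llama2/serving.properties") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            config[key] = value
print(config.get("option.rolling_batch"), config.get("option.max_rolling_batch_size"))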

Retrieve the image URI for the DJL container

inference_image_uri = image_uris.retrieve(
    framework="djl-deepspeed",
    region=sess.boto_session.region_name,
    version="0.27.0",
)
print(f"Image going to be used is ---- > {inference_image_uri}")

Create the Tarball and then upload to S3 location

!rm -f model.tar.gz
!tar czvf model.tar.gz -C code_llama2 .
s3_code_artifact = sess.upload_data("model.tar.gz", bucket, s3_code_prefix)
print(f"S3 Code or Model tar ball uploaded to --- > {s3_code_artifact}")

To create the endpoint, the steps are:

  1. Create the Model using the image container and the model tarball uploaded earlier
  2. Create the endpoint config using the following key parameters

    a) Instance Type is ml.p4d.24xlarge

    b) ContainerStartupHealthCheckTimeoutInSeconds is 3600 to ensure the health check starts after the model is ready

  3. Create the endpoint using the endpoint config created above

Create the Model

Use the image URI for the DJL container and the S3 location to which the tarball was uploaded.

The container downloads the model into /tmp on the container because SageMaker maps /tmp to the Amazon Elastic Block Store (Amazon EBS) volume that is mounted when we specify the endpoint creation parameter VolumeSizeInGB.

For instances like ml.p4d.24xlarge, which come with local instance storage attached, we can continue to leverage /tmp on the container. This mount is large enough to hold the model.

from sagemaker.utils import name_from_base

model_name = name_from_base("llama2-djl")
print(model_name)

create_model_response = sm_client.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={"Image": inference_image_uri,
                      "ModelDataUrl": s3_code_artifact,
                      'Environment': {'MODEL_LOADING_TIMEOUT': '3600'}
                     }
)
model_arn = create_model_response["ModelArn"]

print(f"Created Model: {model_arn}")
endpoint_config_name = f"{model_name}-config"
endpoint_name = f"{model_name}-endpoint"

endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": model_name,
            "InstanceType": "ml.p4d.24xlarge",
            "InitialInstanceCount": 1,
            "ModelDataDownloadTimeoutInSeconds": 3600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
        },
    ],
)
endpoint_config_response
create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}", EndpointConfigName=endpoint_config_name
)
print(f"Created Endpoint: {create_endpoint_response['EndpointArn']}")

This step can take ~20 minutes or longer, so please be patient.

import time

resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)

while status == "Creating":
    time.sleep(60)
    resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
    status = resp["EndpointStatus"]
    print("Status: " + status)

print("Arn: " + resp["EndpointArn"])
print("Status: " + status)

Leverage Boto3 to invoke the endpoint.

This is a generative model, so we pass in a text prompt and the model completes the sentence and returns the results.

You can pass a prompt as input to the model. This is done by setting inputs to a prompt. The model then returns a result for each prompt. The text generation can be configured using appropriate parameters. These parameters need to be passed to the endpoint as a dictionary of kwargs. Refer to this documentation - https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig - for more details.

The code sample below illustrates the invocation of the endpoint with a text prompt and also sets some generation parameters.

%%time
prompt = "Amazon.com is the best"
response_model = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps(
        {
            "inputs": prompt,
            "parameters": {
                "max_new_tokens": 50,
                "do_sample": True,
            },
        }
    ),
    ContentType="application/json",
)

response_model["Body"].read().decode("utf8")

Conclusion

In this notebook, we demonstrated how to use SageMaker large model inference containers to host Llama2-70B. For more details about Amazon SageMaker and its large model inference capabilities, refer to the following:

  • Model parallelism and large model inference on SageMaker (https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-inference.html)

Clean Up

# Delete the endpoint
sm_client.delete_endpoint(EndpointName=endpoint_name)
# In case the endpoint failed, we still want to delete the endpoint config and the model
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)