Llama-2-7B-Chat rolling batch deployment guide

In this tutorial, you will deploy a Large Model Inference (LMI) container from the AWS Deep Learning Containers (DLC) collection to SageMaker and run inference with it.

Please make sure the following permissions are granted before running the notebook:

  • SageMaker access

Step 1: Let's bump up SageMaker and import dependencies

%pip install sagemaker --upgrade  --quiet
import sagemaker
from sagemaker.djl_inference.model import DJLModel

role = sagemaker.get_execution_role()  # execution role for the endpoint
session = sagemaker.session.Session()  # sagemaker session for interacting with different AWS APIs
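
Optionally, print the role and region to confirm the session points at the account and region you expect. This is just a sanity check and is not required for the rest of the notebook:

print(f"role: {role}")                      # execution role the endpoint will assume
print(f"region: {session.boto_region_name}")  # region the endpoint will be created in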

Step 2: Start building SageMaker endpoint

In this step, we will build a SageMaker endpoint from scratch.

Getting the container image URI (optional)

Check out available images: Large Model Inference available DLC

# Choose a specific version of LMI image directly:
# image_uri = "763104351884.dkr.ecr.us-west-2.amazonaws.com/djl-inference:0.28.0-lmi10.0.0-cu124"
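
Alternatively, you can resolve the image URI programmatically with the SageMaker SDK. A minimal sketch, assuming your SDK version supports the djl-lmi framework key; the version string below is illustrative, so check the DLC list above for valid values:

# Resolve the LMI image URI for the current region (sketch; framework key
# and version are assumptions -- see the available DLC list for valid values)
# image_uri = sagemaker.image_uris.retrieve(
#     framework="djl-lmi", region=session.boto_region_name, version="0.28.0"
# )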

Create SageMaker model

Here we are using the SageMaker Python SDK's DJLModel to create the model.

Check out more configuration options.

model_id = "TheBloke/Llama-2-7B-Chat-fp16"  # the model will be downloaded from the Hugging Face Hub

env = {
    "TENSOR_PARALLEL_DEGREE": "1",            # use 1 GPU, set to "max" to use all GPUs on the instance
    "OPTION_ROLLING_BATCH": "lmi-dist",       # enable rolling batch with lmi-dist
}
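
Other commonly used LMI settings go in the same env dict. A sketch with a few illustrative options (the specific values are assumptions; tune them for your model and instance):

# env = {
#     "TENSOR_PARALLEL_DEGREE": "max",        # shard across all GPUs on the instance
#     "OPTION_ROLLING_BATCH": "lmi-dist",     # rolling batch backend
#     "OPTION_MAX_ROLLING_BATCH_SIZE": "32",  # cap concurrent requests per batch
#     "OPTION_DTYPE": "fp16",                 # weight/activation precision
# }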

model = DJLModel(
    model_id=model_id,
    env=env,
    role=role,
)

Create SageMaker endpoint

You need to specify the instance type to use and the endpoint name.

instance_type = "ml.g5.2xlarge"
endpoint_name = sagemaker.utils.name_from_base("lmi-model")

predictor = model.deploy(
    initial_instance_count=1,
    instance_type=instance_type,
    endpoint_name=endpoint_name,
)
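
Deployment takes several minutes. If your notebook kernel restarts after the endpoint is up, you can reattach to it instead of redeploying. A minimal sketch using the generic Predictor with JSON serialization (an assumption that matches the payloads used below):

from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# Reattach to an already-running endpoint without redeploying (sketch)
predictor = Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=session,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)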

Step 3: Test and benchmark the inference

chat = [
  {"role": "user", "content": "Hello, how are you?"},
  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
  {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

result = predictor.predict(
    {"messages": chat, "parameters": {"do_sample": True, "details": True}}
)

result
response = result["choices"][0]["message"]["content"]
print(f"assistant: {response}")

# next conversation
chat.append({"role": "assistant", "content": response})
chat.append({"role": "user", "content": "Tell me a joke."})
print(f"user: Tell me a joke.")

result = predictor.predict(
    {"messages": chat, "parameters": {"do_sample": True, "details": True, "max_new_tokens": 256}}
)
response = result["choices"][0]["message"]["content"]
print(f"assistant: {response}")

Clean up the environment

# Delete the endpoint, its configuration, and the model to stop incurring charges
session.delete_endpoint(endpoint_name)
session.delete_endpoint_config(endpoint_name)
model.delete_model()