Running Llama 3.2 on AWS Lambda

Llama 3.2 1B is a lightweight AI model, which makes it interesting for serverless applications: it can run reasonably quickly without GPU acceleration.

We’ll use a model from Hugging Face to demonstrate this, and Nitric to manage the surrounding infrastructure, such as API routes and deployments.

Prerequisites

To follow along you’ll need the Nitric CLI, Python 3.11, a container runtime such as Docker (used by nitric run later in the guide) and an AWS account with credentials configured locally if you plan to deploy.

Project Setup

Let’s start by creating a new project using Nitric’s Python starter template.
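
For example, assuming the Python starter is still published under the py-starter template name (running nitric new with no arguments lets you pick the template interactively) and using llama-lambda as a placeholder project name:

nitric new llama-lambda py-starter
cd llama-lambda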

Next, let’s install the base dependencies, then add the extra dependencies we need specifically for loading the language model.
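
The exact commands depend on the dependency manager your copy of the template ships with; assuming it uses uv, they look roughly like this (with pipenv, the equivalents are pipenv install --dev and pipenv install llama-cpp-python):

uv sync
uv add llama-cpp-python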

Choose a Llama Model

Llama 3.2 is available in different sizes and configurations, each with its own trade-offs in terms of performance, accuracy and resource requirements. For serverless applications without GPU acceleration, such as AWS Lambda, it’s important to choose a model that is lightweight and efficient to ensure it runs within the constraints of that environment.

We’ll use a quantized version of the lightweight Llama 3.2 1B model, specifically Llama-3.2-1B-Instruct-Q4_K_M.gguf.

If you’re not familiar with quantization, it’s a technique that reduces a model’s size and resource requirements, which, in our case, makes it suitable for serverless applications but may affect the accuracy of the model.

The LM Studio team provides several quantized versions of Llama 3.2 1B on Hugging Face. Consider trying different versions to find one that best fits your needs, such as Q5_K_M, which is slightly larger but of higher quality.

Let’s download the chosen model and save it in a models directory in your project.

Download link for Llama-3.2-1B-Instruct-Q4_K_M.gguf:
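
For example, from the project root and assuming curl is available, substitute that download link for the placeholder below:

mkdir -p models
curl -L -o models/Llama-3.2-1B-Instruct-Q4_K_M.gguf "{model download URL}"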

Your folder structure should look like this:
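
Roughly the following; the starter template also generates a few other files (a Dockerfile, dependency manifests and so on) whose exact names depend on the template version, so they’re collapsed into “...” here:

├── models
│   └── Llama-3.2-1B-Instruct-Q4_K_M.gguf
├── services
│   └── api.py
├── nitric.yaml
└── ...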

Create a Service to Run the Model

Next, we’ll use Nitric to create an HTTP API that allows you to send prompts to the Llama model and receive the output in a response. The API will return the raw output from the model, but you can adjust this as you see fit.

Replace the contents of services/api.py with the following code, which loads the Llama model and implements the prompt functionality. Take a little time to understand the code. It defines an API with a single endpoint /prompt that accepts a POST request with a prompt in the body. The process_prompt function sends the prompt to the Llama model and returns the response.
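
The listing below is a minimal sketch of such a service, assuming Nitric’s Python SDK and llama-cpp-python; the prompt template, generation parameters and handler names are illustrative, so adjust them to taste:

import json

from nitric.resources import api
from nitric.application import Nitric
from nitric.context import HttpContext
from llama_cpp import Llama

# Load the quantized model once, outside the handler, so it is reused across requests.
llama = Llama(model_path="./models/Llama-3.2-1B-Instruct-Q4_K_M.gguf")

main = api("main")


def process_prompt(user_prompt: str) -> dict:
    # Wrap the user's text in the Llama 3.2 instruct chat format.
    prompt = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        "You are a helpful assistant.<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n"
        f"{user_prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    )
    # Run inference; the returned dict's 'choices' array holds the generated text.
    return llama(prompt=prompt, max_tokens=200, temperature=0.7)


@main.post("/prompt")
async def handle_prompt(ctx: HttpContext) -> HttpContext:
    # The raw request body is treated as the prompt text.
    user_prompt = ctx.req.data.decode("utf-8")
    ctx.res.body = json.dumps(process_prompt(user_prompt))
    return ctx


Nitric.run()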

OK, Let’s Run This Thing!

Now that you have an API defined, we can test it locally. The Python starter template uses python3.11-bookworm-slim as its base container image, which doesn’t include the dependencies needed to build and load the Llama model, so let’s update the Dockerfile to use python3.11-bookworm (the non-slim version) instead.

Update line 2:

FROM python:3.11-bookworm

Update line 19:

FROM python:3.11-bookworm

Now we can run our services locally:

nitric run

nitric run starts your application in a container that includes the dependencies needed by llama_cpp. If you’d rather use nitric start, you’ll need to install the build dependencies for llama-cpp-python yourself, such as CMake and LLVM.

Once it starts, you can test it with the Nitric Dashboard.

You can find the URL to the dashboard in the terminal running the Nitric CLI. By default it’s http://localhost:49152. Add a prompt to the body of the request and send it to the /prompt endpoint.

[Screenshot: the Nitric local dashboard]

Deploying to AWS

When you’re ready to deploy the project, create a new Nitric stack file that targets AWS:

nitric stack new dev aws

Update the stack file nitric.dev.yaml with the appropriate AWS region and memory allocation to handle the model.
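
For example, a stack file along these lines gives the Lambda function enough memory to load the model; keep the provider line that nitric stack new generated, and treat the region and the 6 GB memory figure as illustrative values:

provider: nitric/aws@<version generated by the CLI>
region: us-east-1
config:
  default:
    lambda:
      memory: 6144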

Since we’ll use Nitric’s default Pulumi AWS Provider, make sure you’re set up to deploy using it. You can find more information on how to set up the AWS Provider in the Nitric AWS Provider documentation.

If you’d like to deploy with Terraform or to another cloud provider, that’s also possible. You can find more information about how Nitric can deploy to other platforms in the Nitric Providers documentation.

You can then deploy using the following command:

nitric up

Take note of the API endpoint URL that is output after the deployment is complete.

If you’re done with the project later, tear it down with nitric down.

Testing on AWS

To test the service, you can use any API testing tool you like, such as cURL, Postman, etc. Here’s an example using cURL:

curl -X POST {your endpoint URL here}/prompt -d "Hello, how are you?"

Example Response

The response will include the results, plus other metadata. The output can be found in the choices array.
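
For instance, llama-cpp-python completions come back in roughly this shape (the values below are made up for illustration):

{
  "id": "cmpl-...",
  "object": "text_completion",
  "created": 1727000000,
  "model": "./models/Llama-3.2-1B-Instruct-Q4_K_M.gguf",
  "choices": [
    {
      "text": "I'm doing well, thanks for asking! How can I help you today?",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 26,
    "completion_tokens": 16,
    "total_tokens": 42
  }
}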

Summary

As you’ve seen in the code example, we’ve set up a fairly basic prompt structure, but you can expand on this to include more complex prompts, including system prompts that help restrict/guide the model’s responses or even more complex interactions with the model. Also, in this example, we expose the model directly as an API, but this limits the response time to 30 seconds on AWS with API Gateway.

See also  Bringing the AWS Serverless Strategy to Azure

In future guides, we’ll show how you can go beyond simple one-shot responses to more complex interactions, such as maintaining context between requests. We could also use WebSockets and streamed responses to provide a better user experience for larger responses.
