BENTOVLLM-LLAMA3.1-8B-INSTRUCT-SERVICE

 34msbjf6wcksrq3q 

OpenAPI spec (OAS 3.0): ./docs.json


SELF-HOST LLAMA 3.1 8B WITH VLLM AND BENTOML

This is a BentoML example project, showing you how to serve and deploy Llama 3.1
8B using vLLM, a high-throughput and memory-efficient inference engine.

See here for a full list of BentoML example projects.

💡 This example serves as a basis for advanced code customization, such as a
custom model, inference logic, or vLLM options. For simple LLM hosting with an
OpenAI-compatible endpoint without writing any code, see OpenLLM.


PREREQUISITES

 * You have gained access to Llama 3.1 8B on its official website and Hugging
   Face.
 * If you want to test the Service locally, we recommend an Nvidia GPU with at
   least 16 GB of VRAM (a quick way to check is shown after this list).
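
You can verify the GPU model and available memory with nvidia-smi, for example:

nvidia-smi --query-gpu=name,memory.total --format=csv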


INSTALL DEPENDENCIES

git clone https://github.com/bentoml/BentoVLLM.git
cd BentoVLLM/llama3.1-8b-instruct

# Python 3.11 is recommended
pip install -r requirements.txt

export HF_TOKEN=<your-hugging-face-token>
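
As an alternative to exporting the variable, you can authenticate with the
Hugging Face CLI if huggingface_hub is installed:

huggingface-cli login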



RUN THE BENTOML SERVICE

We have defined a BentoML Service in service.py. Run bentoml serve in your
project directory to start the Service.

$ bentoml serve .

2024-01-18T07:51:30+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:VLLM" listening on http://localhost:3000 (Press CTRL+C to quit)
INFO 01-18 07:51:40 model_runner.py:501] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 01-18 07:51:40 model_runner.py:505] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode.
INFO 01-18 07:51:46 model_runner.py:547] Graph capturing finished in 6 secs.


The server is now active at http://localhost:3000. You can interact with it
using the Swagger UI or in the other ways shown below.
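
For reference, the Service defined in service.py roughly follows the pattern
below. This is a simplified sketch assuming BentoML 1.2+ and vLLM's
AsyncLLMEngine; the parameter names mirror the /generate endpoint (prompt,
tokens), while the actual service.py in the repository adds extra engine
options and the OpenAI-compatible endpoint setup.

import uuid
from typing import AsyncGenerator, Optional

import bentoml

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"


@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 300})
class VLLM:
    def __init__(self) -> None:
        from vllm import AsyncEngineArgs, AsyncLLMEngine

        # Load the model into the vLLM engine at startup.
        self.engine = AsyncLLMEngine.from_engine_args(
            AsyncEngineArgs(model=MODEL_ID)
        )

    @bentoml.api
    async def generate(
        self, prompt: str, tokens: Optional[int] = None
    ) -> AsyncGenerator[str, None]:
        from vllm import SamplingParams

        # Stream newly generated text back to the client as it is produced.
        params = SamplingParams(max_tokens=tokens or 1024)
        stream = self.engine.generate(prompt, params, request_id=uuid.uuid4().hex)
        cursor = 0
        async for output in stream:
            text = output.outputs[0].text
            yield text[cursor:]
            cursor = len(text)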







CURL



curl -X 'POST' \
  'http://localhost:3000/generate' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "Explain superconductors like I'\''m five years old",
  "tokens": null
}'












Python client



import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    response_generator = client.generate(
        prompt="Explain superconductors like I'm five years old",
        tokens=None
    )
    for response in response_generator:
        print(response)












OpenAI-compatible endpoints



This Service uses the @openai_endpoints decorator to set up OpenAI-compatible
endpoints (chat/completions and completions). This means your client can
interact with the backend Service (in this case, the VLLM class) as if it were
communicating directly with OpenAI's API. This utility does not affect your
BentoML Service code, and you can use it for other LLMs as well.

from openai import OpenAI

client = OpenAI(base_url='http://localhost:3000/v1', api_key='na')

# List the available models
client.models.list()

chat_completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[
        {
            "role": "user",
            "content": "Explain superconductors like I'm five years old"
        }
    ],
    stream=True,
)
for chunk in chat_completion:
    # Extract and print the content of the model's reply
    print(chunk.choices[0].delta.content or "", end="")


These OpenAI-compatible endpoints also support vLLM extra parameters. For
example, you can force the chat completion to output a JSON object by using the
guided_json parameter:

from openai import OpenAI

client = OpenAI(base_url='http://localhost:3000/v1', api_key='na')

# List the available models
client.models.list()

json_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"}
    }
}

chat_completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
    extra_body=dict(guided_json=json_schema),
)
print(chat_completion.choices[0].message.content)  # will return something like: {"city": "Paris"}


All supported extra parameters are listed in the vLLM documentation.

Note: If your Service is deployed with protected endpoints on BentoCloud, you
need to set the environment variable OPENAI_API_KEY to your BentoCloud API key
first.

export OPENAI_API_KEY={YOUR_BENTOCLOUD_API_TOKEN}


You can then replace the client in the above code snippets with the following
line. Refer to Obtain the endpoint URL to retrieve your Deployment's endpoint URL.

client = OpenAI(base_url='your_bentocloud_deployment_endpoint_url/v1')
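
Alternatively, you can pass the key to the client explicitly; the endpoint URL
below is a placeholder:

import os
from openai import OpenAI

client = OpenAI(
    base_url='https://my-deployment.bentoml.ai/v1',  # replace with your Deployment endpoint URL
    api_key=os.environ['OPENAI_API_KEY'],  # your BentoCloud API token
)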






For detailed explanations of the Service code, see vLLM inference.


DEPLOY TO BENTOCLOUD

After the Service is ready, you can deploy the application to BentoCloud for
better management and scalability. Sign up if you don't have a BentoCloud
account.

Make sure you have logged in to BentoCloud.

bentoml cloud login


Create a BentoCloud secret to store the required environment variable and
reference it for deployment.

bentoml secret create huggingface HF_TOKEN=$HF_TOKEN

bentoml deploy . --secret huggingface


Once the application is up and running on BentoCloud, you can access it via the
exposed URL.
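
Calling the deployed Service works the same way as calling it locally; point
the client at the Deployment URL and, for protected endpoints, pass your
BentoCloud API token. The URL and token below are placeholders:

import bentoml

with bentoml.SyncHTTPClient(
    "https://my-deployment.bentoml.ai",  # replace with your Deployment URL
    token="<your-bentocloud-api-token>",
) as client:
    for chunk in client.generate(
        prompt="Explain superconductors like I'm five years old",
        tokens=None,
    ):
        print(chunk, end="")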

Note: For custom deployment in your own infrastructure, use BentoML to generate
an OCI-compliant image.
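
For example, you can build the Bento and then containerize it; pass bentoml
containerize the tag printed by bentoml build (the tag below is a placeholder):

bentoml build
bentoml containerize <bento_name:version>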



SERVICE APIS

BentoML Service API endpoints for inference.

 * POST /generate




INFRASTRUCTURE

Common infrastructure endpoints for observability.

 * GET /healthz
 * GET /livez
 * GET /readyz
 * GET /metrics
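
For example, a readiness check is a plain GET request:

curl http://localhost:3000/readyz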




DEFAULT

 * GET /v1/models: Show Available Models
 * POST /v1/chat/completions: Create Chat Completion
 * POST /v1/completions: Create Completion


SCHEMAS

BaseModel
ChatCompletionAssistantMessageParam
ChatCompletionContentPartImageParam
ChatCompletionContentPartRefusalParam
ChatCompletionContentPartTextParam
ChatCompletionFunctionMessageParam
ChatCompletionMessageToolCallParam
ChatCompletionNamedFunction
ChatCompletionNamedToolChoiceParam
ChatCompletionRequest
ChatCompletionSystemMessageParam
ChatCompletionToolMessageParam
ChatCompletionToolsParam
ChatCompletionUserMessageParam
CompletionRequest
CustomChatCompletionMessageParam
Function
FunctionCall
FunctionDefinition
HTTPValidationError
ImageURL
JsonSchemaResponseFormat
ResponseFormat
StreamOptions
ValidationError
generate__Input
TaskStatusResponse
InvalidArgument
NotFound
InternalServerError