



GETTING STARTED

This page will help you get started with Mystic. You'll be up and running in a
jiffy!


To run your projects on the Mystic suite of software you need to use our Python
SDK. You can install this by running:

shell
pip install pipeline-ai



> 👍
> 
> You'll also need Docker running on your system. See:
> https://docs.docker.com/desktop/

You can use the Mystic SDK to create "Pipelines" which can run locally or be
uploaded and run remotely on Mystic or your PCore deployment. Pipelines are
specially prepared Docker containers that run your inference code. In the next
step, we'll initialise a Pipeline and take a closer look at how they work.

In an empty directory, run the following command and follow the prompts:

shell
pipeline container init


This will create two files, pipeline.yaml and new_pipeline.py. Out of the box,
these files are ready to get a Pipeline up and running. Let's take a look at
that first, and then dive into what these files actually do.

Run the following command to build the Pipeline:

shell
pipeline container build



> 🚧
> 
> Building a Pipeline will generate a Dockerfile which will be used by the build
> process. If you try to edit this file, your changes will be overwritten, so
> remember to edit your pipeline.yaml file instead!

You should see build logs similar to what you would get from a Docker image
build. A successful build will end with something like the following:

shell
Pipeline 11:33:05 - [SUCCESS]: Built container a8fab0143dba
Pipeline 11:33:05 - [SUCCESS]: Created tag my_user/my_pipeline
Pipeline 11:33:05 - [SUCCESS]: Created tag my_user/my_pipeline:a8fab0143dba


We can now run the pipeline locally and test it out!

shell
pipeline container up


Pipelines come with their own web UI for testing, and API docs. Both should be
accessible with the above command. You can run the Pipeline directly in the web
UI or from an API tool like curl.

Let's take a closer look now at the Pipeline files themselves. Here's
pipeline.yaml:

yaml
runtime:
  container_commands:
  - apt-get update
  - apt-get install -y git
  python:
    version: '3.10'
    requirements:
    - pipeline-ai
    cuda_version: '11.4'
accelerators: []
accelerator_memory: null
pipeline_graph: new_pipeline:my_new_pipeline
pipeline_name: my_user/my_pipeline


This file tells the Pipeline library how to configure and build your container.
You can add Python dependencies, specify GPU requirements (for Mystic
deployments) and add Dockerfile build commands.
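For instance, a pipeline.yaml that adds torch as a Python dependency and requests a GPU might look like the sketch below. Note that the accelerator identifier (`nvidia_a100`) is illustrative only — check the GPUs and accelerators page for the exact names your deployment supports.

```yaml
runtime:
  container_commands:
  - apt-get update
  - apt-get install -y git
  python:
    version: '3.10'
    requirements:
    - pipeline-ai
    - torch
    cuda_version: '11.4'
accelerators:
- nvidia_a100        # illustrative accelerator name
accelerator_memory: null
pipeline_graph: new_pipeline:my_new_pipeline
pipeline_name: my_user/my_pipeline
```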


> 🚧
> 
> You should replace my_user with your own username. If you try to upload a
> Pipeline with a different username it will fail.

The pipeline_graph entry specifies the Python object that houses your inference
code (<.py file name>:<pipeline object>). We'll see more of this when we look at
the other file new_pipeline.py:

Python
from pipeline import Pipeline, Variable, entity, pipe


# Put your model inside of the below entity class
@entity
class MyModelClass:
    @pipe(run_once=True, on_startup=True)
    def load(self) -> None:
        # Perform any operations needed to load your model here
        print("Loading model...")

        ...

        print("Model loaded!")

    @pipe
    def predict(self, output_number: int) -> str:
        # Perform any operations needed to predict with your model here
        print("Predicting...")

        ...

        print("Prediction complete!")

        return f"Your number: {output_number}"


with Pipeline() as builder:
    input_var = Variable(
        int,
        description="A basic input number to do things with",
        title="Input number",
    )

    my_model = MyModelClass()
    my_model.load()

    output_var = my_model.predict(input_var)

    builder.output(output_var)

my_new_pipeline = builder.get_pipeline()



Here we can see a Pipeline object being created, with inputs and outputs defined
through the Mystic SDK. Inside the Pipeline itself is code that runs at startup,
and code that runs every time inference happens. You can read more about how
inputs and outputs work in the Inputs and outputs section. Also note that all
files in the current working directory will be copied into the container, so you
can use Python modules and other files as normal in your Pipeline.
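The `@pipe(run_once=True, on_startup=True)` pattern above is what keeps model loading out of the per-request path: the load step runs once when the container starts, while the predict step runs on every request. As a rough plain-Python sketch of that idea (illustrative only — this is not the real SDK's internals):

```python
import functools


def pipe(func=None, *, run_once=False, on_startup=False):
    """Toy stand-in for the SDK's @pipe: optionally run a stage only once."""
    def decorate(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            if run_once and getattr(self, f"_ran_{f.__name__}", False):
                return None  # already executed, e.g. model weights loaded
            setattr(self, f"_ran_{f.__name__}", True)
            return f(self, *args, **kwargs)
        wrapper._on_startup = on_startup  # stages flagged to run at startup
        return wrapper
    return decorate(func) if func is not None else decorate


class ToyModel:
    @pipe(run_once=True, on_startup=True)
    def load(self):
        # Expensive setup happens once, at container startup
        return "loaded"

    @pipe
    def predict(self, n: int) -> str:
        # Cheap per-request work
        return f"Your number: {n}"


model = ToyModel()
print(model.load())      # runs: "loaded"
print(model.load())      # skipped (run_once=True): None
print(model.predict(5))  # "Your number: 5"
```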

And that's it! You're now ready to build your own Pipeline, using any AI library
you want. In the next section we'll take a look at uploading and running a
Pipeline on Mystic.


UPLOADING A PIPELINE

The Mystic SDK allows you to authenticate with Mystic by running:

shell
pipeline cluster login mystic-api API_TOKEN -u https://www.mystic.ai -a


If you don't have an API token yet, you can create one in your Mystic account.

[Note: you can also authenticate using environment variables if this is more
convenient. You just need to set PIPELINE_API_TOKEN=<YOUR_TOKEN>]
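For example, in a shell session (YOUR_TOKEN is a placeholder for a real token from your account):

```shell
# Authenticate via environment variable instead of `pipeline cluster login`.
export PIPELINE_API_TOKEN="YOUR_TOKEN"
```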

Uploading your Pipeline is then as simple as running:

shell
pipeline container push


The last few lines of the deployment output will look something like:

shell
Pipeline 14:35:25 - [SUCCESS]: Created new pipeline deployment for my_user/my_pipeline -> pipeline_76873283cece44e6bd04d91cfdb2b632 (image=registry:5000/my_user/my_pipeline:a8fab0143dba)


Notice that your Pipeline now has an associated Pipeline ID, which you can use
to run inference. You should also be able to find your Pipeline in your Mystic
account's Pipelines page, where you can run inference through the web UI.


RUNNING A PIPELINE

The SDK provides a way to run your pipeline directly in Python:

Python
from pipeline.cloud.pipelines import run_pipeline

pointer = "my_user/my_pipeline:v1"

result = run_pipeline(pointer, 1)

print(result.outputs_formatted())


You can also run your API directly with a tool like curl:

shell
curl -X POST 'https://www.mystic.ai/v4/runs' \
--header 'Authorization: Bearer YOUR_TOKEN' \
--header 'Content-Type: application/json' \
--data '{
	"pipeline": "my_user/my_pipeline:v1",
	"inputs": 
		[
			{"type":"integer","value":5}
		]
	}
'


Keep in mind that if you've changed your input types in your Pipeline, you'll
need to change this command. You can find an auto-generated schema on your
pipeline page on www.mystic.ai.
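The same request can be made from Python. The sketch below assumes the v4/runs endpoint and payload shape shown in the curl example above; YOUR_TOKEN is a placeholder, and the `build_run_payload` helper is purely illustrative.

```python
import json

API_URL = "https://www.mystic.ai/v4/runs"


def build_run_payload(pipeline: str, value: int) -> dict:
    """Build the JSON body for a run with a single integer input."""
    return {
        "pipeline": pipeline,
        "inputs": [{"type": "integer", "value": value}],
    }


payload = build_run_payload("my_user/my_pipeline:v1", 5)
print(json.dumps(payload))

# To actually send it (requires the `requests` package and a valid token):
#   import requests
#   resp = requests.post(
#       API_URL,
#       headers={"Authorization": "Bearer YOUR_TOKEN"},
#       json=payload,
#   )
```

As with the curl command, if you change your Pipeline's input types you'll need to adjust the payload accordingly.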

Congratulations! You are now ready to deploy AI models at scale with Mystic AI.

Updated about 1 month ago

--------------------------------------------------------------------------------

What’s Next

We recommend reading the other pages in this Overview section to gain a deeper
understanding of the Pipeline SDK and the Mystic product. You may also want to
dive into a tutorial to get a feel for building a real AI Pipeline.

 * Inputs and outputs
 * Mistral AI 7B vLLM inference guide
