mystic.ai
34.95.76.235
Submitted URL: http://mystic.ai/
Effective URL: https://mystic.ai/
Submission: On November 17 via api from US — Scanned from DE
Looking to run AI models on your cloud or on-prem? Check out Enterprise

* Enterprise * About * Pricing * Discord * Explore AI models * Sign up / Log in

RUN ANY AI MODEL AS AN API WITHIN SECONDS
Low latency serverless API to run and deploy ML models

* STABILITYAI/STABLE-DIFFUSION-XL-REFINER-1.0: SD-XL 1.0-base (updated 3 months ago, 181.25K runs)
* PAULH/OPEN-JOURNEY-XL: OpenJourney XL, a finetuned SDXL on the Midjourney v5 dataset (updated 2 months ago, 43.29K runs)
* META/LLAMA2-7B-CHAT: Llama 2 7B for chat applications with vLLM (updated about 2 months ago, 20.63K runs)
* META/CODELLAMA-13B-INSTRUCT: CodeLlama 13B instruct (updated 28 days ago, 18.69K runs)
* AINZOIL/SD-XL: SD-XL with more parameters (new: Batch, Seeds) (updated about 1 month ago, 8.5K runs)
* AINZOIL/DREAMSHAPER_8: "trust me, you'll get mind blown with this one ;)" (updated 2 months ago, 7.39K runs)
* META/LLAMA2-13B-CHAT: (updated about 2 months ago, 3.81K runs)
* AINZOIL/ANIMAGINE-XL: "if you want some anime waifu but with the power of XL" (updated 2 months ago, 3.3K runs)
* STABILITYAI/SDXL-IMG2IMG: SDXL-based img2img (updated about 1 month ago, 1.49K runs)
* MISTRALAI/MISTRAL-7B-CHAT-V0.1: A 7B LLM by Mistral (chat) (updated about 2 months ago, 1.48K runs)
* MISTRALAI/MISTRAL-7B-INSTRUCT-V0.1: A 7B LLM by Mistral (instruct) (updated about 2 months ago, 962 runs)
* RUNWAYML/STABLE-DIFFUSION-V1-5: Stable Diffusion v1.5 for text -> image (updated 3 months ago, 949 runs)
* MATTHEW/E5_LARGE-V2: Embedding model by Microsoft (updated 2 months ago, 515 runs)
* PAULH/WAIFU-DIFFUSION: A waifu just for you (updated 3 months ago, 465 runs)
* PAULH/BLIP-2: BLIP-2 from Salesforce (image -> text) (updated 2 months ago, 381 runs)
* META/CODELLAMA-34B-INSTRUCT: (updated about 2 months ago, 346 runs)

* Company: SensusFuturis * Company: Seelab * Company: Vellum * Company: Renovate AI * Company: Charisma AI * Company: Hypotenuse AI

OUR PRODUCT IS CRAFTED THROUGH MILLIONS OF ML RUNS
* 5,000+ developers using our API
* 9,000+ AI models deployed

THE EASIEST WAY TO GET AN API ENDPOINT FROM ANY ML MODEL
All the infrastructure required to run AI models with a simple API call:

```shell
curl -X POST 'https://www.mystic.ai/v3/runs' \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"pipeline_id_or_pointer": "meta/llama2-70B-chat:latest", "input_data": [{"type": "string", "value": "Hello World!"}]}'
```

* ONLY PAY FOR INFERENCE TIME: Pay per second with serverless pricing on our shared cluster. Pay only for the inference you use.
* INFERENCE WITHIN 0.035S: Within a few milliseconds, our scheduler decides the optimal strategy for queuing, routing, and scaling.
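The curl call above maps directly onto Python's standard library; the following is a minimal sketch that builds the same POST request (YOUR_TOKEN is a placeholder, and the payload shape is taken from the curl example; the request is constructed but not sent here):

```python
# Sketch of calling the Mystic /v3/runs endpoint from Python, mirroring
# the curl example above. YOUR_TOKEN is a placeholder.
import json
import urllib.request

API_URL = "https://www.mystic.ai/v3/runs"

def build_run_request(token: str, pipeline: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) the POST request for a serverless run."""
    payload = {
        "pipeline_id_or_pointer": pipeline,
        "input_data": [{"type": "string", "value": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_run_request("YOUR_TOKEN", "meta/llama2-70B-chat:latest", "Hello World!")
# With a real token, urllib.request.urlopen(req) would submit the run;
# the JSON response carries the run's outputs once inference completes.
```

The same request can of course be issued with `requests.post` if that library is available; only the endpoint URL, bearer token header, and JSON body shape matter.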
* API FIRST AND PYTHON LOVERS: RESTful API to call your model from anywhere. Python SDK to upload your own models.

RUN AND DEPLOY ML MODELS WITH SPEED, SCALE AND HIGH THROUGHPUT
Our ingredients for reliable and scalable ML infrastructure:

* NO DOCKER, NO KUBERNETES, NO DEVOPS: Use the tools you are familiar with. You only need to interact with our API and Python SDK to run AI.
* P75 SYSTEM LATENCY OF 35MS: With 35 ms and 75 ms of p75 and p95 system latency, respectively, experience low-latency inference.
* DYNAMIC SCALING: We dynamically scale resources up and down as the number of requests for your model varies.
* COLD START REDUCTION: Our smart system with preemptive and multi-GPU caching helps us reduce the frequency of cold starts.
* INTELLIGENT ROUTING: The most optimal resource for your requests is selected via several runtime metrics recorded over time.
* CI/CD, ALERTING AND MONITORING: Integrate AI into your CI/CD workflow, set up alerts, and monitor your deployments with ease.
* PIPELINE AND MODEL VERSIONING: Create a new named version each time you upload a new pipeline. Keep track of all deployed pipelines and models.
* GPU SHARING & FRACTIONALIZATION: Our system automatically caches multiple models on the same GPU to increase efficiency and reduce costs.
* MULTI-GPU INFERENCE: Seamlessly run very large models in multi-GPU environments to fit their memory requirements.

HOW TO GET STARTED
Run any model built by the community, dive into one of our tutorials, or start uploading your own models.

Beginner friendly: EXPLORE AI MODELS BUILT BY THE COMMUNITY
Our community uploads AI models and makes them available for everyone to use. They are ready to try and use as an API. Explore AI models:

* STABILITYAI/STABLE-DIFFUSION-XL-REFINER-1.0: SD-XL 1.0-base (updated 3 months ago, 181.25K runs)
* PAULH/OPEN-JOURNEY-XL: OpenJourney XL, a finetuned SDXL on the Midjourney v5 dataset (updated 2 months ago, 43.29K runs)
* META/LLAMA2-7B-CHAT: Llama 2 7B for chat applications with vLLM (updated about 2 months ago, 20.63K runs)

Intermediate: TUTORIALS AND EXAMPLES OF WHAT YOU CAN BUILD ON MYSTIC
* Llama 2 with vLLM (7B, 13B & multi-GPU 70B)
* Build your own fast chatbot with Llama 2 and vLLM
* Mistral AI 7B inference with vLLM
View docs

Intermediate: UPLOAD YOUR OWN AI PIPELINE
A pipeline contains all the code required to run your AI model as an endpoint. You can define the inputs to the endpoint, any pre-processing code, the inference pass, post-processing code, and the outputs returned from the endpoint. Learn how to leverage our cold-start optimizations, create custom environments, enable debugging mode & logging, load a model from file, and much more. View docs
```python
# Import path is assumed from the pipeline-ai SDK; the original snippet
# elided its imports.
from pipeline import Pipeline, Variable, pipe

@pipe
def foo(bar: str) -> str:
    return f"Input string: {bar}"

with Pipeline() as builder:
    bar = Variable(str)
    output_1 = foo(bar)
    builder.output(output_1)

my_pipeline = builder.get_pipeline()
my_pipeline.run("test")
```

PAY PER SECOND
Start from as little as $0.10/h. $50 free credits every month. Run your models on our shared cluster and pay only for the inference time. View pricing

Enterprise: LOOKING TO RUN AI ON YOUR OWN INFRASTRUCTURE?
Our enterprise solution offers maximum privacy and scale. Run AI models as an API within your own cloud or infrastructure of choice. Learn about our Enterprise solution

RESOURCES
* Github
* Documentation
* Blog
* Join our Discord

COMPANY
* About
* Contact us

Copyright © 2023 Mystic AI, Inc. All rights reserved. * Terms of Service * Privacy Policy