



RUN AI WITH AN API.

Run and fine-tune open-source models. Deploy custom models at scale. All with
one line of code.

With Replicate you can:

 * Generate images
 * Generate text
 * Generate videos
 * Generate music
 * Generate speech
 * Fine-tune models
 * Restore images

stability-ai/sdxl

A text-to-image generative AI model that creates beautiful images

24M runs

ai-forever/kandinsky-2.2

multilingual text2image latent diffusion model

6M runs

stability-ai/stable-diffusion

A latent text-to-image diffusion model capable of generating photo-realistic
images given any text input

105M runs

fofr/latent-consistency-model

Super-fast, 0.6s per image. LCM with img2img, large batching and canny
controlnet

128K runs

meta/llama-2-70b-chat

A 70 billion parameter language model from Meta, fine-tuned for chat completions

3M runs

mistralai/mistral-7b-instruct-v0.1

An instruction-tuned 7 billion parameter language model from Mistral

281K runs

meta/codellama-13b

A 13 billion parameter Llama tuned for code completion

73K runs

stability-ai/stable-video-diffusion

SVD is a research-only image to video model

216K runs

anotherjesse/zeroscope-v2-xl

Zeroscope V2 XL & 576w

181K runs

lucataco/animate-diff

Animate Your Personalized Text-to-Image Diffusion Models

113K runs

meta/musicgen

Generate music from a prompt or melody

866K runs

riffusion/riffusion

Stable diffusion for real-time music generation

828K runs

adirik/styletts2

Generates speech from text

3K runs

lucataco/xtts-v2

Coqui XTTS-v2: Multilingual Text To Speech Voice Cloning

8K runs

suno-ai/bark

🔊 Text-Prompted Generative Audio Model

174K runs

fofr/sdxl-emoji

An SDXL fine-tune based on Apple Emojis

2M runs

doriandarko/sdxl-hiroshinagai

SDXL model trained on Hiroshi Nagai's illustrations.

5K runs

fofr/musicgen-choral

MusicGen fine-tuned on chamber choir music

286 runs

tencentarc/gfpgan

Practical face restoration algorithm for *old photos* or *AI-generated faces*

60M runs

nightmareai/real-esrgan

Real-ESRGAN with optional face correction and adjustable upscale

30M runs

Python:

import replicate

output = replicate.run(
  "anotherjesse/zeroscope-v2-xl:9f747673945c62801b13b84701c783929c0ee784e4748ec062204894dda1a351",
  input={
    "prompt": "Clown fish swimming in a coral reef, beautiful, 8k, perfect, award winning, national geographic"
  }
)

print(output)

JavaScript:

import Replicate from "replicate";

const replicate = new Replicate();

const output = await replicate.run(
  "anotherjesse/zeroscope-v2-xl:9f747673945c62801b13b84701c783929c0ee784e4748ec062204894dda1a351",
  {
    input: {
      prompt: "Clown fish swimming in a coral reef, beautiful, 8k, perfect, award winning, national geographic"
    }
  }
);
console.log(output);

cURL:

curl -s -X POST \
  -H "Authorization: Token $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d $'{
    "version": "9f747673945c62801b13b84701c783929c0ee784e4748ec062204894dda1a351",
    "input": {
      "prompt": "Clown fish swimming in a coral reef, beautiful, 8k, perfect, award winning, national geographic"
    }
  }' \
  https://api.replicate.com/v1/predictions
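
The cURL request returns immediately with the prediction in a "starting" state; you then poll the prediction's URL until it finishes. The Python client wraps the same flow. A minimal sketch, assuming the replicate package is installed and REPLICATE_API_TOKEN is set in your environment:

import replicate

# Create the prediction without waiting for it to finish,
# mirroring the raw POST request above
prediction = replicate.predictions.create(
    version="9f747673945c62801b13b84701c783929c0ee784e4748ec062204894dda1a351",
    input={
        "prompt": "Clown fish swimming in a coral reef, beautiful, 8k, perfect, award winning, national geographic"
    },
)

# Block until the prediction reaches a terminal state
prediction.wait()

print(prediction.status)  # "succeeded" or "failed"
print(prediction.output)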

Run anotherjesse/zeroscope-v2-xl with an API



THOUSANDS OF MODELS CONTRIBUTED BY OUR COMMUNITY

All the latest open-source models are on Replicate. They’re not just demos —
they all actually work and have production-ready APIs.

AI shouldn’t be locked up inside academic papers and demos. Make it real by
pushing it to Replicate.

Explore models
Push a model

meta/llama-2-7b-chat

A 7 billion parameter language model from Meta, fine-tuned for chat completions

2M runs

stability-ai/stable-diffusion-inpainting

Fill in masked parts of images with Stable Diffusion

16M runs

microsoft/bringing-old-photos-back-to-life

Bringing Old Photos Back to Life

756K runs

google-research/maxim

Multi-Axis MLP for Image Processing

318K runs

salesforce/blip

Bootstrapping Language-Image Pre-training

57M runs

mistralai/mistral-7b-v0.1

A 7 billion parameter language model from Mistral.

77K runs

laion-ai/erlich

Generate a logo using text.

331K runs

batouresearch/photorealistic-fx

RunDiffusion FX Photorealistic model, developed by RunDiffusion.

38K runs

pollinations/3d-photo-inpainting

3D Photography using Context-aware Layered Depth Inpainting

5K runs

pollinations/modnet

A deep learning approach to removing the background and adding a new background image

466K runs

prompthero/dreamshaper

Generate a new image given any input text with Dreamshaper v7

160K runs


HOW IT WORKS

You can get started with any open-source model with just one line of code. As
your needs grow more complex, you can fine-tune models or deploy your own
custom code.


RUN OPEN-SOURCE MODELS

Our community has already published thousands of models that are ready to use in
production. You can run these with one line of code.

Explore models

import replicate

output = replicate.run(
  "stability-ai/sdxl:39ed52f2a78e934b3ba6e2a89f5b1c712de7dfea535525255b1aa35c5565e08b",
  input={
    "width": 768,
    "height": 768,
    "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
    "refine": "expert_ensemble_refiner",
    "scheduler": "K_EULER",
  }
)

print(output)
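
The output of this model is a list of URLs pointing at the generated images. A small follow-up sketch for saving the first one to disk (the filename is illustrative):

import urllib.request

# SDXL returns a list of image URLs; fetch the first
# image and write it to a local file
urllib.request.urlretrieve(output[0], "astronaut.png")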


FINE-TUNE MODELS WITH YOUR OWN DATA

You can improve open-source models with your own data to create new models that
are better suited to specific tasks.

Image models like SDXL can generate images of a particular person, object, or
style.

Fine-tune image models

Language models like Llama 2 can generate text in a specific style or get
better at a particular task.

Fine-tune language models

Train a model:

import replicate

training = replicate.trainings.create(
    version="stability-ai/sdxl:c221b2b8ef527988fb59bf24a8b97c4561f1c671f73bd389f866bfb27c061316",
    input={
        "input_images": "https://my-domain/my-input-images.zip",
    },
    destination="mattrothenberg/sdxl-fine-tuned"
)

print(training)
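
Training runs asynchronously, so the object returned here starts out in a "starting" state. A minimal sketch for polling it until it finishes, assuming the same client session as above:

import time

# Poll until the training reaches a terminal state; possible
# statuses include "starting", "processing", "succeeded",
# "failed", and "canceled"
while training.status not in ("succeeded", "failed", "canceled"):
    time.sleep(60)
    training = replicate.trainings.get(training.id)

print(training.status)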

This will result in a new model:

mattrothenberg/sdxl-fine-tuned

A very special, fine-tuned version of SDXL

0 runs

Then, you can run it with one line of code:

output = replicate.run(
    "mattrothenberg/sdxl-fine-tuned:abcde1234...",
    input={"prompt": "a photo of TOK riding a rainbow unicorn"},
)


DEPLOY CUSTOM MODELS

You aren’t limited to the models on Replicate: you can deploy your own custom
models using Cog, our open-source tool for packaging machine learning models.

Cog takes care of generating an API server and deploying it on a big cluster in
the cloud. We scale up and down to handle demand, and you only pay for the
compute that you use.

Learn more

First, define the environment your model runs in with cog.yaml:

build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "torch==1.13.1"
predict: "predict.py:Predictor"

Next, define how predictions are run on your model with predict.py:

from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
          image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
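
Once both files exist, cog build packages the model into a container that serves an HTTP prediction API. A minimal sketch of calling that API from Python, assuming the container is running locally on port 5000 and using an illustrative input filename:

import base64
import json
import urllib.request

# Cog accepts data URLs for Path inputs, so read the image
# and encode it inline
with open("input.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

# POST the input to the container's prediction endpoint
req = urllib.request.Request(
    "http://localhost:5000/predictions",
    data=json.dumps({"input": {"image": data_url}}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["output"])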


SCALE ON REPLICATE

Thousands of businesses are building their AI products on Replicate. Your team
can deploy an AI feature in a day and scale to millions of users, without having
to be machine learning experts.



AUTOMATIC SCALE

If you get a ton of traffic, Replicate scales up automatically to handle the
demand. If you don't get any traffic, we scale down to zero and don't charge you
a thing.

 * CPU $0.000100/sec
 * Nvidia T4 GPU $0.000225/sec
 * Nvidia A40 GPU $0.000575/sec
 * Nvidia A40 (Large) GPU $0.000725/sec
 * Nvidia A100 (40GB) GPU $0.001150/sec
 * Nvidia A100 (80GB) GPU $0.001400/sec
 * 8x Nvidia A40 (Large) GPU $0.005800/sec
 * Learn more about pricing
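
To make that concrete: at these rates, a prediction that runs for 10 seconds on an Nvidia T4 costs 10 × $0.000225 = $0.00225, and a full minute on an A100 (40GB) costs 60 × $0.001150 = $0.069.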

PAY FOR WHAT YOU USE

Replicate only bills you for how long your code is running. You don't pay for
expensive GPUs when you're not using them.

FORGET ABOUT INFRASTRUCTURE

Deploying machine learning models at scale is hard. If you've tried, you know.
API servers, weird dependencies, enormous model weights, CUDA, GPUs, batching.

[Chart: Prediction throughput (requests per second)]

LOGGING & MONITORING

Metrics let you keep an eye on how your models are performing, and logs let you
zoom in on particular predictions to debug how your model is behaving.
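
Logs are available in the web dashboard and through the API. A minimal sketch of pulling the logs for a single prediction with the Python client (the prediction ID is a made-up placeholder):

import replicate

# Look up an existing prediction by its ID and inspect it
prediction = replicate.predictions.get("rrr4z55ocneqzikepnug6xezpe")

print(prediction.status)
print(prediction.logs)  # raw log output captured during the run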


IMAGINE WHAT YOU CAN BUILD

 * Autonomous Robots: Zero-shot autonomous robots with open source models
 * Paint with AI: An iPad app that lets you paint with AI
 * emojis.sh: AI Emojis
 * Replicover: Find the hottest AI models on Replicate
 * Language Model CLI: Language model command line interface


With Replicate and tools like Next.js and Vercel, you can wake up with an idea
and watch it hit the front page of Hacker News by the time you go to bed.

Get started

Machine learning doesn’t need to be so hard.

Product

 * Explore
 * Pricing
 * Docs
 * Blog
 * Changelog

Community

 * Discord
 * X
 * GitHub

Company

 * About
 * Jobs
 * Privacy
 * Terms

