
Read how Civitai powers 10 Million AI images per day on Salad's distributed
cloud
Products

Salad Container Engine (SCE)

Fully managed & massively scalable

Salad Gateway Service (SGS)

Dedicated proxies in ~200 countries

Virtual Kubelets

K8s pods as container deployments

Pricing
Use Cases

Image Generation

Up to 20,000 images per dollar

Voice AI

Up to 80% savings compared to big clouds

Computer Vision

Save 50% or more on cloud cost

Data Collection

10X better data on 1000s of residential IPs

Batch Processing

Scale easily at 90% less cost than incumbents

Language Models

Custom LLMs at low cost without sharing compute

Transcription Service

The lowest priced transcription service in the market

Dreambooth API

The lowest priced Stable Diffusion finetuning API in the market

About

About Salad

A mission to democratize the cloud

Charity

Giving back to the community

Press

Salad in the News

Looking for a new career? 
Get in touch
Blog
Resources

Docs

SaladCloud documentation

Security

Running securely on our distributed cloud

Models

Deploy popular models in a few clicks

Earn with your GPU
Deploy on Salad



Save up to 90% on your cloud cost.


THE MOST AFFORDABLE CLOUD FOR AI/ML INFERENCE AT SCALE

Deploy AI/ML production models without headaches on the lowest priced GPUs
(starting from $0.02/hr) in the market. Get 10X-100X more inferences per dollar
compared to managed services and hyperscalers.

Deploy on Salad Get a Demo


Have questions about SaladCloud for your workload?


BOOK A 15 MIN CALL WITH OUR TEAM.
GET $50 IN TESTING CREDITS.

Discuss my use case



USED BY DEVELOPERS FROM:




SCALE WITHOUT OVERSPENDING ON CLOUD

Struggling with high cloud costs, AI-focused GPU shortages & infrastructure
management? SaladCloud offers a fully-managed container service opening up
access to thousands of consumer GPUs on the world’s largest distributed network.

Watch Demo Deploy on Salad


~90%

LESS CLOUD COST


10K+

WORLDWIDE GPUS


$0.02/HR

GPU STARTING PRICE


10X+

INFERENCES PER $


GPUs starting from $0.02/hr


USE OUR CALCULATOR TO SEE HOW MUCH YOU SAVE ON YOUR CURRENT CLOUD COST

Try Pricing Calculator Now
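
For a rough sense of how such a savings estimate can be computed, here is a minimal sketch in Python. The function name and the example hourly rates and fleet size are illustrative assumptions, not SaladCloud's actual calculator logic or published prices for any specific GPU.

    def monthly_cost(hourly_rate_usd: float, gpus: int,
                     hours_per_day: float = 24, days: int = 30) -> float:
        """Estimate a monthly GPU bill from an hourly rate and fleet size."""
        return hourly_rate_usd * gpus * hours_per_day * days

    # Hypothetical comparison: a $1.20/hr GPU on a big cloud vs. a $0.10/hr
    # consumer GPU on a distributed cloud (both rates are assumptions).
    current = monthly_cost(1.20, gpus=10)
    salad = monthly_cost(0.10, gpus=10)

    savings_pct = (current - salad) / current * 100
    print(f"Current: ${current:,.0f}/mo  Salad: ${salad:,.0f}/mo  Savings: {savings_pct:.0f}%")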



STOP OVERPAYING FOR CLOUD TODAY!

See how other AI/ML teams save big on cloud cost with SaladCloud.

Read the case study
“By switching to Salad, Civitai is now serving inference on over 600 consumer
GPUs to deliver 10 Million images per day and training more than 15,000 LoRAs
per month. Salad not only had the lowest prices in the market for image
generation but also offered us incredible scalability.”

Justin Maier

Founder - Civitai





BUILT FOR INFERENCE AT SCALE

Scale easily to thousands of GPU instances worldwide without the need to manage
VMs or individual instances, all with a simple usage-based price structure.


REDUCE BUDGET BLOAT

Save up to 50% on orchestration services from big box providers, plus discounts
on recurring plans.


GPU-DRIVEN PROCESSING

Distribute data batch jobs, HPC workloads, and rendering queues to thousands of
3D-accelerated GPUs.


GLOBAL EDGE NETWORK

Bring workloads to the edge on low-latency nodes located in nearly every
corner of the planet.


MULTI-CLOUD COMPATIBLE

Deploy Salad Container Engine workloads alongside your existing hybrid or
multicloud configurations.


ON-DEMAND ELASTICITY

Scale up (or down) easily as demand changes, with no pre-paid contracts or
commitments.


OPTIMIZED USAGE FEES

Pay only for what you use with simple, transparent, usage-based pricing.

Trusted by 100s of machine learning and data science teams



WELCOME TO THE COMPUTE-SHARING ECONOMY!
90% OF THE WORLD'S COMPUTE RESOURCES (OVER 400 MILLION CONSUMER GPUS) SIT IDLE
FOR 20-22 HRS A DAY.

AT SALAD, WE HAVE ACTIVATED THIS LATENT RESOURCE TO POWER THE WORLD'S GREENEST,
MOST AFFORDABLE CLOUD.





PERFECT FOR GPU-HEAVY WORKLOADS OF ANY TYPE

Scale easily to thousands of GPU instances worldwide without the need to manage
VMs or individual instances, all with a simple usage-based price structure.

Text-to-Image

Text-to-Speech

Speech-to-Text

Computer Vision

Language Models


TEXT-TO-IMAGE

Serve text-to-image inference on Salad's consumer GPUs at the lowest prices in
the market for image generation.

Get more images per dollar than any other cloud
1000 images/$ for SDXL
~10000 images/$ for Stable Diffusion 1.5
See use case Deploy on Salad
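
As a back-of-the-envelope check on an images-per-dollar figure, the sketch below converts a GPU's hourly price and per-image generation time into images per dollar. The hourly price and generation time used here are assumptions chosen for illustration, not measured SaladCloud benchmarks.

    def images_per_dollar(gpu_hourly_usd: float, seconds_per_image: float) -> float:
        """Images per dollar = images generated per hour / dollars per hour."""
        images_per_hour = 3600 / seconds_per_image
        return images_per_hour / gpu_hourly_usd

    # Assumed example: a $0.10/hr consumer GPU producing one SD 1.5 image every 3.5 s.
    print(f"{images_per_dollar(0.10, 3.5):,.0f} images per dollar")  # ~10,286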



TEXT-TO-SPEECH

You are overpaying for managed services and APIs. Serve TTS inference on Salad's
consumer GPUs and get 10X-2000X more inferences per dollar.

Convert 4.7 Million words/$ with OpenVoice
Convert 39,000 words/$ with Bark TTS
Convert 23,300 words/$ with MetaVoice
See use case Deploy on Salad



SPEECH-TO-TEXT

If you are serving AI transcription, translation, captioning, etc. at scale, you
are overpaying by thousands of dollars today. Serve speech-to-text inference on
Salad for up to 90% less cost.

Transcribe 47,638 mins/$ with Parakeet TDT 1.1B
Transcribe ~30,000 mins/$ with Distil-Whisper
Transcribe 11,700 mins/$ with Whisper
See use case Deploy on Salad
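
To translate a minutes-per-dollar figure into a cost for a given backlog of audio, a simple conversion suffices. The sketch below is illustrative arithmetic only; the backlog size is an assumption, while the rate is the Whisper figure quoted above.

    def transcription_cost(audio_minutes: float, minutes_per_dollar: float) -> float:
        """Dollar cost to transcribe audio at a given minutes-per-dollar rate."""
        return audio_minutes / minutes_per_dollar

    # Assumed example: 500 hours of recordings at the quoted ~11,700 mins/$ Whisper rate.
    backlog_minutes = 500 * 60
    print(f"${transcription_cost(backlog_minutes, 11_700):.2f}")  # ~$2.56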



COMPUTER VISION

Simplify and automate the deployment of computer vision models like YOLOv8 on
10,000+ consumer GPUs on the edge. Save 50% or more on your cloud cost compared
to managed services/APIs.

Tag 309,000 images/$ with RAM++  
Segment 50,000 images/$ with SAM
73% less cost than Azure for object detection
See use case Deploy on Salad



LANGUAGE MODELS

Running Large Language Models (LLMs) on Salad is a convenient, cost-effective
way to deploy applications without managing infrastructure or sharing compute.

$0.12 per Million tokens avg. for TGI
$0.04/hr starting price to deploy own LLM
$0.22/hr to run 7 Billion parameter models
See use case Deploy on Salad
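
A per-million-token price like the one quoted above follows directly from an hourly GPU price and a sustained generation throughput. In the sketch below the $0.22/hr figure is the one quoted above, while the 500 tokens/s throughput is an assumption chosen to show the arithmetic, not a measured TGI benchmark.

    def usd_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
        """Cost per million generated tokens given an hourly price and throughput."""
        tokens_per_hour = tokens_per_second * 3600
        return gpu_hourly_usd / tokens_per_hour * 1_000_000

    # Assumed example: a 7B model at $0.22/hr sustaining ~500 tokens/s across batched requests.
    print(f"${usd_per_million_tokens(0.22, 500):.2f} per million tokens")  # ~$0.12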



READ OUR BLOG

Benchmarks, tutorials, product updates and more.

Go to Salad Blog


STABLE DIFFUSION XL (SDXL) BENCHMARK - 769 IMAGES/$ ON SALAD




WHISPER LARGE INFERENCE BENCHMARK: 137 DAYS OF AUDIO TRANSCRIBED IN 15 HOURS FOR
JUST $117




YOLOV8 OBJECT DETECTION ON SALAD’S GPUS


View all
Distributed & Sustainable


BREAK FREE FROM THE BIG CLOUD MONOPOLY

We can’t print our way out of the chip shortage. Run your workloads on the edge
with already available resources. Democratization of cloud computing is the key
to a sustainable future, after all.

Take advantage of geo-distributed nodes
Save your deployments from outages & shortages with 1 Million+ distributed nodes
across 180+ countries.
A sustainable way to compute for the future
Deploying on unused, latent GPUs lessens the environmental impact, safeguards
against tech monopolies and democratizes access & profits from computing.
Affordable & Scalable


LOWER YOUR TOTAL COST OF OWNERSHIP (TCO) ON CLOUD

High TCO on popular clouds is an open secret. With SaladCloud, you just
containerize your application, choose your resources and we manage the rest,
lowering your TCO and getting you to market quickly.
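
In practice that workflow is: package your model server in a container image, then tell SaladCloud how many vCPUs, how much memory, and which GPU class each replica needs. The snippet below is only a hedged sketch of what such a request could look like; the endpoint path, header, field names and resource values are illustrative assumptions, not a reference for SaladCloud's actual public API (see the documentation for the real schema).

    import requests

    # Hypothetical container-group request; all names and values are assumptions.
    payload = {
        "name": "sdxl-inference",
        "container": {"image": "myregistry.example/sdxl-server:latest"},
        "resources": {"cpu": 4, "memory_gb": 16, "gpu_class": "rtx_4090"},
        "replicas": 10,
    }

    resp = requests.post(
        "https://api.salad.com/api/public/organizations/my-org/projects/my-project/containers",  # illustrative URL
        json=payload,
        headers={"Salad-Api-Key": "<api-key>"},  # assumed auth header
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())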

Unmatched inference. Unbeatable prices.
Get 10X more inferences per dollar compared to other clouds. If you find a lower
price, we will beat it.
Scale as you grow without breaking the bank
Scale up (or down) easily with no pre-paid contracts, no commitments and
transparent, usage-based pricing.

Secure & Reliable


DEPLOY SECURELY TO GEO-DISTRIBUTED NODES WITH HIGH AVAILABILITY

Over 1 million individual nodes and 100s of customers trust Salad with their
resources and applications.

Redundant security and compliance
SaladCloud is SOC2 certified and our patented approach isolates customer
environments and data across our network.
Plenty of reliable nodes available
Don’t get tied into expensive contracts & pre-payments just to get a shocking
cloud bill as you scale. Access GPUs when you need them at the lowest cost, not
when ‘they’ can provide them.


RUN POPULAR MODELS OR BRING YOUR OWN

Bark Whisper Bert Stable Diffusion Falcon Llama 7B


A FULLY MANAGED CONTAINER SERVICE

SaladCloud's fully managed container service opens up access to thousands of
consumer GPUs on the world's largest distributed network.

No VM Management

You don’t have to manage any Virtual Machines (VMs).

Lower Data Costs

No ingress/egress costs on SaladCloud. No surprises.

Less DevOps

Save time & resources with minimal DevOps work.

Infinite Scalability

Scale without worrying about access to GPUs.
