



THE IN-BROWSER AI STACK

Create web apps that run AI directly in your users' browsers.

Increase user privacy and decrease your inference costs.

Docs · Get Started for free!


WHY OFFLOAD?




Many people are concerned about data privacy when using AI features, as these
typically send their data to third-party inference APIs.

With the Offload SDK, your users can opt in to local AI execution, without any
extra effort on your part.

This increases user privacy and also reduces your infrastructure and inference
costs, since a significant amount of computation happens directly on the user's
device.

If you build AI applications or agents for healthcare, legal, finance, document
processing, or any field that processes sensitive user information, Offload is
for you.


FEATURES

Supported and planned Offload SDK features

 * Text generation
 * Text streaming
 * Structured object generation
 * Automatic GPU detection and API fallback
 * Dynamic model serving depending on device resources
 * Prompt customization per model
 * Prompt version control
 * In-browser RAG pipeline
 * Custom fine-tuned model support
 * Advanced analytics


Feature status: supported · in development · planned



THE OFFLOAD WIDGET

When you integrate Offload, our widget automatically appears to users whose
devices have enough resources to perform inference locally.



Easy to add to any project



Offload replaces any SDK you are currently using: just change the inference
calls.
AI tasks are processed on the user's device when possible, with automatic
fallback to any API you configure in the dashboard.


HOW TO INSTALL



<!-- Include the Offload library on your app -->
<script src="//unpkg.com/offload-ai" defer></script>

Simply add the library from a CDN script tag, as above, or import it from npm.
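
For the npm route, here is a minimal sketch, assuming the package name matches
the unpkg path above and that it provides a default export (check the docs for
the exact entry point):

// npm install offload-ai
// Assumption: the package ships the same Offload object as a default export;
// the exact export shape may differ -- see the official docs.
import Offload from "offload-ai";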


HOW TO RUN INFERENCE



// Configure the Offload instance, just once in your app
Offload.config({
    appUuid: "your-app-uuid-from-dashboard",
    promptUuids: {
        user_text: "your-prompt-uuid-from-dashboard"
    }
});

// Run inference. You can use streams, force JSON output, etc.
const { text } = await Offload.offload({
    promptKey: "user_text",
});

And you are done!
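
As the comment above notes, you can also stream output. The sketch below shows
how a streaming call might look; the option names stream and onChunk are
illustrative assumptions, not confirmed Offload API, so check the docs for the
real signatures.

// Hypothetical streaming call. "stream" and "onChunk" are assumed option
// names for illustration only; the actual Offload API may differ.
const output = document.getElementById("output");

await Offload.offload({
    promptKey: "user_text",
    stream: true,
    onChunk: (chunk) => {
        // Append each generated text chunk to the page as it arrives.
        output.textContent += chunk;
    },
});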

FAQ


WHAT IF A USER'S DEVICE DOESN'T HAVE ENOUGH RESOURCES?

Offload detects the device's resources automatically; if local inference isn't
possible, requests fall back to the API you configure in the dashboard, so
every user still gets a response.

START OFFLOADING RIGHT NOW!

Get Started for free!

Offload © 2024

miguel@offload.fyi