www.exafunction.com (34.111.99.89) · Public Scan

Submitted URL: http://www.exafunction.com/
Effective URL: https://www.exafunction.com/
Submission: On October 22 via api from US — Scanned from DE

Form analysis 1 forms found in the DOM

<form class="flex flex-col sm:flex-row">
  <div><label class="sr-only" for="email-input">Email address</label><input type="email" autocomplete="email" class="w-72 rounded-md px-4 focus:border-transparent focus:outline-none focus:ring-2 focus:ring-primary-600 dark:bg-black" id="email-input"
      name="email" placeholder="Enter your email" required=""></div>
  <div class="mt-2 flex w-full rounded-md shadow-sm sm:mt-0 sm:ml-3"><button
      class="w-full rounded-md bg-primary-500 py-2 px-4 font-black text-white sm:py-0 hover:bg-primary-700 dark:hover:bg-primary-400 focus:outline-none focus:ring-2 focus:ring-primary-600 focus:ring-offset-2 dark:ring-offset-black" type="submit">Sign
      up</button></div>
</form>

Text Content

Home
Blog
Docs
About
Careers
Contact
We've publicly released our docs


EFFICIENT DEEP LEARNING AT SCALE.

Exafunction optimizes your deep learning inference workload, delivering up to a
10x improvement in resource utilization and cost efficiency. Focus on building
your deep learning application, not on managing clusters and fine-tuning
performance.

Get Started · Demo


WITHOUT EXAFUNCTION

In most deep learning applications, CPU, I/O, and network bottlenecks lead to
poor utilization of GPU hardware.




WITH EXAFUNCTION

Exafunction moves any GPU code to highly utilized remote resources, even spot
instances. Your core logic remains on inexpensive CPU instances.





TRUSTED BY THE MOST DEMANDING APPLICATIONS

Exafunction is battle-tested on applications like large-scale autonomous vehicle
simulation. These workloads have complex custom models, require numerical
reproducibility, and use thousands of GPUs concurrently.




HOW IT WORKS


REGISTER ANY MODEL

Exafunction supports models from major deep learning frameworks and inference
runtimes. Models and dependencies like custom operators are versioned so you can
always be confident you’re getting the right results.

Tensorflow · PyTorch · ONNX · TensorRT

with exa.ModuleRepository("repo") as repo:
    uid = repo.register_tf_savedmodel(
        "TFModel:v1.0",
        "/tf_model.savedmodel",
    )
    print(uid)
    # -> @jsGUAJrNjwp9I9sc7uPR


Python · C++

with exa.Session("exa-cluster") as sess:
    model = sess.NewModule("Detector:v1.0")
    image = sess.from_numpy(...)
    outputs = model.run(image=image)
    print(outputs["boxes"].numpy())



INTEGRATE YOUR APPLICATION

Integration is as simple as replacing your framework’s model inference functions
with a few lines of code.
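As a rough sketch of what that swap looks like: the `Session`/`NewModule`/`run` call shape below mirrors the example above, but the `Session` stub and the placeholder compute inside it are hypothetical stand-ins so the snippet runs without the real exa client or a cluster.

```python
import numpy as np


class _Module:
    """Hypothetical stand-in for a remotely executed, versioned model."""

    def __init__(self, name):
        self.name = name

    def run(self, image):
        # Placeholder compute standing in for remote GPU inference; the
        # real call would run the registered model on the cluster.
        return {"boxes": image.sum(axis=-1)}


class Session:
    """Minimal stub mirroring the exa.Session call shape shown above."""

    def __init__(self, cluster):
        self.cluster = cluster

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def NewModule(self, name):
        return _Module(name)

    def from_numpy(self, arr):
        # The real client would wrap the array for transport; the stub
        # just passes it through.
        return arr


# Before: outputs = detector(image), with a locally loaded framework model.
# After: the same call shape, dispatched through a session instead.
with Session("exa-cluster") as sess:
    model = sess.NewModule("Detector:v1.0")
    image = sess.from_numpy(np.zeros((4, 4, 3)))
    outputs = model.run(image=image)
    print(outputs["boxes"].shape)  # (4, 4)
```

In the real API the result tensors expose `.numpy()`, as in the example above; the stub returns NumPy arrays directly.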

Exafunction Team • © 2022 Exafunction • Privacy Policy