dabs.su.domains
159.89.149.97  Public Scan

URL: https://dabs.su.domains/
Submission: On November 13 via api from US — Scanned from CA

Form analysis 1 forms found in the DOM

POST #

<form method="post" action="#">
  <div class="fields">
    <div class="field half">
      <label for="name">Name</label>
      <input type="text" name="name" id="name">
    </div>
    <div class="field half">
      <label for="email">Email</label>
      <input type="text" name="email" id="email">
    </div>
    <div class="field">
      <label for="message">Message</label>
      <textarea name="message" id="message" rows="5"></textarea>
    </div>
  </div>
  <ul class="actions">
    <li><a href="" class="button submit">Send Message</a></li>
  </ul>
</form>

Text Content

 * Overview
 * How it works
 * Datasets
 * Get in touch


DABS

The Domain-Agnostic Benchmark for Self-Supervised Learning.

 * Learn more
 * Read the Paper
 * See the Code


EMBEDDING

First, data from the pretraining datasets are embedded into vectors of uniform
shape, allowing for a domain-agnostic model architecture that does not depend
on the shape of the data within each domain. We encourage the use of our
provided embedding module, but participants may also create their own.
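
As a rough illustration of this step, the sketch below maps inputs from two
different domains (images and text) to a common (batch, sequence, d_model)
shape so that a single shared encoder can consume either. The module names,
dimensions, and library choice (PyTorch) are illustrative assumptions, not the
benchmark's actual embedding API.

# Illustrative sketch only: two domain-specific embedding modules that produce
# tensors of the same (batch, sequence, d_model) shape. Names and sizes are
# hypothetical, not the benchmark's provided embedding module.
import torch
import torch.nn as nn


class ImagePatchEmbedding(nn.Module):
    """Embed an image as a sequence of patch vectors of width d_model."""

    def __init__(self, patch_size=16, in_channels=3, d_model=256):
        super().__init__()
        # A strided convolution cuts the image into non-overlapping patches and
        # projects each patch to d_model dimensions in one step.
        self.proj = nn.Conv2d(in_channels, d_model,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, images):                      # (B, C, H, W)
        x = self.proj(images)                       # (B, d_model, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)         # (B, num_patches, d_model)


class TextTokenEmbedding(nn.Module):
    """Embed a batch of token ids as a sequence of vectors of the same width."""

    def __init__(self, vocab_size=30522, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)

    def forward(self, token_ids):                   # (B, seq_len)
        return self.embed(token_ids)                # (B, seq_len, d_model)


# Both domains now yield tensors of shape (batch, sequence, d_model), so a
# single downstream encoder can consume either one unchanged.
images = torch.randn(4, 3, 224, 224)
tokens = torch.randint(0, 30522, (4, 128))
print(ImagePatchEmbedding()(images).shape)          # torch.Size([4, 196, 256])
print(TextTokenEmbedding()(tokens).shape)           # torch.Size([4, 128, 256])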


PRETRAINING

Participants have agency over both the pretraining objective and the entire
architecture of their model. The goal is to use the pretraining datasets to
condition a model that is performant across the transfer datasets within a
domain, and ultimately to create an architecture and pretraining objective that
is performant in this way across all six domains.
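
As one concrete, purely illustrative possibility, the sketch below pairs a
shared Transformer encoder with a simple masked-reconstruction objective over
the embedded sequences. The objective, depth, and hyperparameters are
assumptions chosen for brevity; they are not the baseline described in the
DABS paper.

# Illustrative sketch of one possible pretraining setup: a shared Transformer
# encoder trained to reconstruct randomly masked positions of the embedded
# sequence. This is an assumption for illustration, not the benchmark's baseline.
import torch
import torch.nn as nn


class MaskedReconstructionPretrainer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4, mask_prob=0.15):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, d_model)   # predicts the original embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        self.mask_prob = mask_prob

    def forward(self, embedded):                   # (B, seq, d_model), any domain
        # Hide a random subset of positions behind a learned mask token.
        mask = torch.rand(embedded.shape[:2], device=embedded.device) < self.mask_prob
        corrupted = embedded.clone()
        corrupted[mask] = self.mask_token
        hidden = self.encoder(corrupted)
        pred = self.head(hidden)
        # The reconstruction loss is computed only at the masked positions.
        return nn.functional.mse_loss(pred[mask], embedded[mask])


# One training step on embeddings produced by any domain's embedding module.
model = MaskedReconstructionPretrainer()
loss = model(torch.randn(4, 128, 256))
loss.backward()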


TRANSFER LEARNING

In the adaptation stage, the model is given labeled data from the same domain
as the pretraining data, but possibly from a different dataset and with a
different end task. A linear classifier is provided as the adaptation layer in
our baseline model, but participants may choose their own adaptation method so
long as it is in the spirit of the benchmark.
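
A minimal sketch of this kind of linear evaluation is shown below, assuming a
frozen pretrained encoder and a trainable linear head; the function and
variable names are hypothetical, not the benchmark's provided implementation.

# Illustrative sketch of linear-probe adaptation: the pretrained encoder is
# frozen and only a linear classifier is trained on the labeled transfer data.
import torch
import torch.nn as nn


def linear_probe_step(encoder, classifier, optimizer, embedded, labels):
    """One adaptation step: frozen features -> linear classifier -> cross-entropy."""
    encoder.eval()
    with torch.no_grad():                           # keep pretrained weights fixed
        features = encoder(embedded).mean(dim=1)    # (B, d_model), pooled over sequence
    logits = classifier(features)                   # (B, num_classes)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()                                 # gradients reach only the classifier
    optimizer.step()
    return loss.item()


# Example wiring with a frozen Transformer encoder and a 10-way linear head.
d_model, num_classes = 256, 10
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, 4, batch_first=True), 2)
classifier = nn.Linear(d_model, num_classes)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss = linear_probe_step(encoder, classifier, optimizer,
                         torch.randn(8, 64, d_model),
                         torch.randint(0, num_classes, (8,)))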


DOMAINS

We use six domains in order to capture performance over both traditional ML
tasks with extensive research communities (e.g., computer vision and NLP) and
less studied or emerging focal points of the field (e.g., sensor and x-ray
data). More information about the selection criteria and specific datasets can
be found in the DABS paper.


IMAGES



SPEECH



TEXT



SENSOR



CHEST X-RAY



TEXT-IMAGE PAIRING


 * Learn more


GET IN TOUCH


Name
Email
Message
 * Send Message



 * SOCIAL
   
   * Twitter
   * Facebook
   * GitHub
   * Instagram
   * LinkedIn

 * © Untitled. All rights reserved.
 * Design: HTML5 UP