forefront.ai




New: Forefront Beta is now live! 🎉


BUILD WITH
OPEN-SOURCE AI.

A better way to run & fine-tune open-source models on your data.
Your data, your models, your AI.

Start for free

Fine-tune models

Evaluate performance

Run with an API

Forefront enables developers to build on open-source AI with the familiar
experience of leading closed-source platforms.

Forget deprecated models, inconsistent performance, arbitrary usage policies,
and lack of control & transparency.

Don’t settle for AI you don’t own. The future is open.

Try Forefront for free


MODELS DESIGNED TO BE
YOUR OWN.

Start fine-tuning models on your data in minutes.
Fine-tune models for any use case.

Choose your model. Customize leading open-source models with your private data.



Achieve higher accuracy. Optimize your model performance on validation sets and
evals.



Deploy with confidence. Test your model in the Playground then integrate the
API.



No data? No problem. Start with the best model for your use case. Use our API to
store the responses. Then seamlessly fine-tune a model when you’re ready.

from openai import OpenAI
from forefront import ff

openai = OpenAI(api_key="OPENAI_API_KEY")
pipe = ff.pipelines.get_by_id("PIPELINE_ID")

messages = [{
    "role": "user",
    "content": "What is the meaning of 42?"
}]

completion = openai.chat.completions.create(
    model="gpt-4",
    messages=messages
)

messages.append({
    "role": "assistant",
    "content": completion.choices[0].message.content
})

pipe.add(messages)

Validate model performance. Assess how your fine-tuned model performs on a
validation set.

Validation results — one sample of 10:

User

What's a creative way to serve dinner?

Validation

Try a 'make-your-own-pizza' night where guests can choose their toppings. It's
fun, interactive, and caters to different tastes.

Your model

Organize a themed dinner party, like a 'Taco Tuesday', and decorate your space
accordingly. It makes the meal more exciting and memorable.

Watch your model learn. Analyze built-in loss charts as your model trains.

Training loss: 0.132

[Training-loss chart: loss per step with a running average, over 4 epochs]
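The "running average" curve in a chart like this can be sketched as a simple cumulative mean over per-step losses (an assumption — the exact smoothing Forefront applies is not documented here):

```python
def running_average(losses):
    """Cumulative mean of the loss at each training step."""
    averages = []
    total = 0.0
    for step, loss in enumerate(losses, start=1):
        total += loss
        averages.append(total / step)
    return averages

# Illustrative per-step losses trending down toward the 0.132 readout above.
steps = [1.8, 1.2, 0.9, 0.6, 0.45, 0.3, 0.2, 0.132]
print(running_average(steps)[-1])
```

The raw loss curve is noisy step to step; the cumulative mean gives the smoother trend line you watch for convergence.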

Evaluations made easy. Choose from a variety of evals to automatically run your
model on.

Evals

MMLU — 58.0%
TruthfulQA — 56.2%
MT-Bench — 62.3%
ARC — 75.6%
HumanEval — 75.6%
AGIEval — 75.6%


RUN AI WITH AN API.

Inference with serverless endpoints for every model.
Run models in a few lines of code or experiment in the Playground.

Chat or completion endpoints. Choose the prompt syntax best for your task.

import Forefront from "forefront";

const ff = new Forefront(process.env.FOREFRONT_API_KEY);

try {
    const response = await ff.chat.completions.create({
      model: "team-name/fine-tuned-llm",
      messages: [
        {
          role: "system",
          content: "You are Deep Thought."
        },
        {
          role: "user",
          content: "What is the meaning of life?",
        },
      ],
      max_tokens: 64,
      temperature: 0.5,
      stop: ["\n"],
      stream: false
    });
    const completion = response.choices[0].message.content;
} catch (e) {
    console.log(e);
}

Integration made simple. Three lines of code and you’re good to go.



Take your model and run. Prefer self-hosting or hosting with another provider?
Export your models and host them where you want.

Export model

Import from HuggingFace. Forget loading models into Colab. Just paste the model string into Forefront and run inference in minutes.




YOUR AI DATA WAREHOUSE.

Bring your training, validation, and evaluation data.
Start storing your production data in ready-to-fine-tune datasets in a few lines of code.

All your data in a single place. Forefront gives you a single source of truth
for all your AI data.

File name                        Purpose
email_summaries.jsonl            Training
validate_email_summaries.jsonl   Validation
enrich_company.jsonl             Training
validate_enrich_company.jsonl    Validation
enrich_contact.jsonl             Training
validate_enrich_contact.jsonl    Validation
email_hooks.jsonl                Training
validate_email_hooks.jsonl       Validation
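Each line of a .jsonl file like these typically holds one complete chat sample. The exact schema Forefront expects isn't shown on this page; the sketch below uses the "role"/"content" message shape that appears elsewhere in these examples:

```python
import json

# One hypothetical line of a ready-to-fine-tune JSONL dataset
# (field names are an assumption based on the chat format used above).
sample = {
    "messages": [
        {"role": "user", "content": "Summarize this email: ..."},
        {"role": "assistant", "content": "The sender confirms the meeting for Friday."},
    ]
}

line = json.dumps(sample)          # one sample per line in the .jsonl file
assert json.loads(line) == sample  # round-trips cleanly
```

Training and validation files share the same format; only their purpose differs.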

Build your data moat. Pipe your production data to Forefront in a few lines of code to store it in ready-to-fine-tune datasets.

from openai import OpenAI
from forefront import ff

openai = OpenAI(api_key="OPENAI_API_KEY")
pipe = ff.pipelines.get_by_id("PIPELINE_ID")

messages = [{
    "role": "user",
    "content": "What is the meaning of 42?"
}]

completion = openai.chat.completions.create(
    model="gpt-4",
    messages=messages
)

messages.append({
    "role": "assistant",
    "content": completion.choices[0].message.content
})

pipe.add(messages)

Become one with your data. Navigate your data in the Inspector—built to help you
thoroughly and quickly inspect your samples.

One sample of 12:

User

Divide 129 by 42

Assistant

To divide 129 by 42, you can use long division or a calculator. Here's the long division method:

```
  __3_
42|129
  -126
  ____
     3
```

So, 129 divided by 42 is equal to 3 with a remainder of 3. If you want to express it as a mixed number, it would be 3 and 3/42. To simplify the fraction, you can divide both the numerator and the denominator by their greatest common divisor, which is 3. So, the simplified fraction would be 1/14. Thus, the final answer is 3 and 1/14, or 3.07142857 when expressed as a decimal (rounded to eight decimal places).

Instant insights. Get a sense of your data’s distribution and patterns. Discover
imbalances and biases without painstaking effort.

[Histogram: tokens per sample]

[Histogram: tokens by label per sample — System / User / Assistant]
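The "tokens by label" insight amounts to counting tokens per message and bucketing by role. A minimal sketch — a whitespace split stands in for a real tokenizer, which is an assumption; Forefront's actual token counting is not specified here:

```python
# Count approximate tokens per role across one chat sample.
def tokens_by_label(messages):
    counts = {}
    for m in messages:
        # Whitespace split is a stand-in for a real tokenizer (assumption).
        counts[m["role"]] = counts.get(m["role"], 0) + len(m["content"].split())
    return counts

sample = [
    {"role": "user", "content": "Divide 129 by 42"},
    {"role": "assistant", "content": "129 divided by 42 is 3 remainder 3."},
]
print(tokens_by_label(sample))  # → {'user': 4, 'assistant': 8}
```

Aggregating these counts across a dataset is what surfaces imbalances, e.g. assistant turns dwarfing user turns.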


FROM ZERO TO IPO.

Designed for every stage of your journey.
From research to startups to enterprises.

Forget about infrastructure. API servers, GPUs, out of memory errors, dependency
hell, CUDA, batching? Don’t bother.

Don't sweat scaling. Lots of traffic? Forefront scales automatically to meet
demand. No traffic? You don’t pay a thing.

Only pay for what you use. Don’t pay for expensive GPUs when you’re not using
them.

Phi-2

$0.0006 / 1k tokens

Mistral-7B

$0.001 / 1k tokens

Mixtral-7Bx8

$0.004 / 1k tokens

Explore pricing
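At per-1k-token rates like these, estimating a bill is linear arithmetic — cost = (tokens / 1000) × price per 1k:

```python
# Rates as listed above, in dollars per 1k tokens.
PRICES_PER_1K = {"Phi-2": 0.0006, "Mistral-7B": 0.001, "Mixtral-7Bx8": 0.004}

def cost(model, tokens):
    """Dollar cost of running `tokens` tokens through `model`."""
    return tokens / 1000 * PRICES_PER_1K[model]

print(cost("Mistral-7B", 2_000_000))  # two million tokens → 2.0 dollars
```

Because you only pay per token, idle time costs nothing — zero traffic means a zero bill.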


SERIOUSLY SECURE. PRIVATE BY DESIGN.

We don’t log any requests and never use your data to train models.
For enterprise customers, Forefront offers flexibility to deploy in a variety of
secure clouds.

Start for free


YOUR QUESTIONS, ANSWERED.

Have more questions?

Forefront is constantly evolving and we’re here to help along the way. If you
have additional questions, feel free to reach out.

Talk to an engineer

Can I try Forefront for free?

Can I export my models?

Does Forefront have usage policies?

What does Forefront do with my datasets?


YOUR PATH TO OPEN AI IS READY. ARE YOU?

Start for free

See pricing

© Forefront 2024

All rights reserved

Product

Pricing · Documentation · Blog

Legal

Terms of service · Privacy policy



