www.comet.com
3.92.5.163
Public Scan
Submitted URL: http://go.comet.ml/
Effective URL: https://www.comet.com/site/?utm_source=mkto-fallback
Submission: On October 16 via manual from PT — Scanned from PT
Form analysis
3 forms found in the DOM
GET /site/
<form method="get" class="searchform" action="/site/" __bizdiag="115" __biza="WJ__">
<label>
<span class="screen-reader-text">Search</span>
<input type="search" class="field" name="s" placeholder="Search" autocomplete="off">
</label>
<button type="submit" class="searchform-submit"><span class="ticon ticon-search" aria-hidden="true"></span><span class="screen-reader-text">Submit</span></button>
</form>
<form id="mktoForm_1206" __bizdiag="196353513" __biza="WJ__" novalidate="novalidate" class="mktoForm mktoHasWidth mktoLayoutLeft" style="font-family: Helvetica, Arial, sans-serif; font-size: 13px; color: rgb(51, 51, 51); width: 271px;">
<style type="text/css">
.mktoForm .mktoButtonWrap.mktoSimple .mktoButton {
color: #fff;
border: 1px solid #75ae4c;
padding: 0.4em 1em;
font-size: 1em;
background-color: #99c47c;
background-image: -webkit-gradient(linear, left top, left bottom, from(#99c47c), to(#75ae4c));
background-image: -webkit-linear-gradient(top, #99c47c, #75ae4c);
background-image: -moz-linear-gradient(top, #99c47c, #75ae4c);
background-image: linear-gradient(to bottom, #99c47c, #75ae4c);
}
.mktoForm .mktoButtonWrap.mktoSimple .mktoButton:hover {
border: 1px solid #447f19;
}
.mktoForm .mktoButtonWrap.mktoSimple .mktoButton:focus {
outline: none;
border: 1px solid #447f19;
}
.mktoForm .mktoButtonWrap.mktoSimple .mktoButton:active {
background-color: #75ae4c;
background-image: -webkit-gradient(linear, left top, left bottom, from(#75ae4c), to(#99c47c));
background-image: -webkit-linear-gradient(top, #75ae4c, #99c47c);
background-image: -moz-linear-gradient(top, #75ae4c, #99c47c);
background-image: linear-gradient(to bottom, #75ae4c, #99c47c);
}
#mktoForm_1206 {
width: 100% !important;
}
</style>
<div class="mktoFormRow">
<div class="mktoFieldDescriptor mktoFormCol" style="margin-bottom: 10px;">
<div class="mktoOffset" style="width: 10px;"></div>
<div class="mktoFieldWrap mktoRequiredField"><label id="LblFirstName" for="FirstName" class="mktoLabel mktoHasWidth" style="width: 100px;">
<div class="mktoAsterix">*</div>First Name:
</label>
<div class="mktoGutter mktoHasWidth" style="width: 10px;"></div><input id="FirstName" name="FirstName" maxlength="255" aria-labelledby="LblFirstName InstructFirstName" type="text" class="mktoField mktoTextField mktoHasWidth mktoRequired"
aria-required="true" style="width: 150px;"><span id="InstructFirstName" tabindex="-1" class="mktoInstruction"></span>
<div class="mktoClear"></div>
</div>
<div class="mktoClear"></div>
</div>
<div class="mktoClear"></div>
</div>
<div class="mktoFormRow">
<div class="mktoFieldDescriptor mktoFormCol" style="margin-bottom: 10px;">
<div class="mktoOffset" style="width: 10px;"></div>
<div class="mktoFieldWrap mktoRequiredField"><label id="LblLastName" for="LastName" class="mktoLabel mktoHasWidth" style="width: 100px;">
<div class="mktoAsterix">*</div>Last Name:
</label>
<div class="mktoGutter mktoHasWidth" style="width: 10px;"></div><input id="LastName" name="LastName" maxlength="255" aria-labelledby="LblLastName InstructLastName" type="text" class="mktoField mktoTextField mktoHasWidth mktoRequired"
aria-required="true" style="width: 150px;"><span id="InstructLastName" tabindex="-1" class="mktoInstruction"></span>
<div class="mktoClear"></div>
</div>
<div class="mktoClear"></div>
</div>
<div class="mktoClear"></div>
</div>
<div class="mktoFormRow">
<div class="mktoFieldDescriptor mktoFormCol" style="margin-bottom: 10px;">
<div class="mktoOffset" style="width: 10px;"></div>
<div class="mktoFieldWrap mktoRequiredField"><label id="LblEmail" for="Email" class="mktoLabel mktoHasWidth" style="width: 100px;">
<div class="mktoAsterix">*</div>Email Address:
</label>
<div class="mktoGutter mktoHasWidth" style="width: 10px;"></div><input id="Email" name="Email" maxlength="255" aria-labelledby="LblEmail InstructEmail" type="email" class="mktoField mktoEmailField mktoHasWidth mktoRequired"
aria-required="true" style="width: 150px;"><span id="InstructEmail" tabindex="-1" class="mktoInstruction"></span>
<div class="mktoClear"></div>
</div>
<div class="mktoClear"></div>
</div>
<div class="mktoClear"></div>
</div>
<div class="mktoFormRow hidden-el-row"><input type="hidden" name="utm_campaign__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="" style="margin-bottom: 10px;"><input type="hidden" name="utm_term__c"
class="mktoField mktoFieldDescriptor mktoFormCol" value="" style="margin-bottom: 10px;"><input type="hidden" name="utm_source__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="mkto-fallback" style="margin-bottom: 10px;"><input
type="hidden" name="utm_medium__c" class="mktoField mktoFieldDescriptor mktoFormCol" value="" style="margin-bottom: 10px;"><input type="hidden" name="utm_content__c" class="mktoField mktoFieldDescriptor mktoFormCol" value=""
style="margin-bottom: 10px;">
<div class="mktoClear"></div>
</div>
<div class="mktoButtonRow"><span class="mktoButtonWrap mktoSimple" style="margin-left: 120px;"><button type="submit" class="mktoButton"><span class="theme-button-inner"><span class="theme-button-inner"><span class="theme-button-inner">Get
Updates</span></span></span></button></span></div><input type="hidden" name="formid" class="mktoField mktoFieldDescriptor" value="1206"><input type="hidden" name="munchkinId" class="mktoField mktoFieldDescriptor" value="912-JJP-445">
</form>
<form __bizdiag="584362894" __biza="WJ__" novalidate="novalidate" class="mktoForm mktoHasWidth mktoLayoutLeft"
style="font-family: Helvetica, Arial, sans-serif; font-size: 13px; color: rgb(51, 51, 51); visibility: hidden; position: absolute; top: -500px; left: -1000px; width: 1600px;"></form>
Text Content
WHERE AI DEVELOPERS BUILD

Comet provides an end-to-end model evaluation platform for AI developers, with best-in-class LLM evaluations, experiment tracking, and production monitoring. Try Comet Free.

TRUSTED BY THE MOST INNOVATIVE ML TEAMS

MANAGE ANY ML OR LLM LIFECYCLE, FROM TRAINING THROUGH PRODUCTION

DEBUG AND EVALUATE YOUR LLM APPLICATIONS WITH OPIK

Automatically track all your prompt engineering work. Run automated evaluations on your LLM responses to optimize your applications before and after they hit production.
Opik – Open Source LLM Evaluation

TRACK AND VISUALIZE YOUR MODEL TRAINING RUNS WITH EXPERIMENT MANAGEMENT

Log all your machine learning iterations to a single system of record. Make it easy to reproduce a previous experiment and compare the performance of training runs.
Comet Experiment Management

MONITOR ML MODEL PERFORMANCE IN PRODUCTION WITH COMET MPM

Track data drift on your input and output features after your model is deployed to production. Set customized alerts to capture model performance degradation in real time.
Model Production Monitoring

STORE AND MANAGE YOUR MODELS WITH MODEL REGISTRY

Create a centralized repository of all your model versions with immediate access to how they were trained. Promote models to downstream production systems with webhooks.
Comet Model Registry

CREATE AND VERSION DATASETS WITH ARTIFACTS

Know exactly which dataset version a model was trained on for auditing and governance purposes. Leverage remote pointers to reference data already stored in the cloud.
Comet Artifacts
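To make the Model Registry workflow described above concrete, here is a minimal sketch assuming the comet_ml SDK's Experiment.log_model and Experiment.register_model calls; the project name and model file are illustrative placeholders, not values from this page.

    # Minimal sketch: attach a trained model file to an experiment and register it.
    # Assumes comet_ml's Experiment.log_model / Experiment.register_model;
    # "my-project" and "churn-model.pkl" are placeholder names.
    from comet_ml import Experiment

    experiment = Experiment(project_name="my-project")

    # ... train your model and save it to disk as churn-model.pkl ...

    experiment.log_model("churn-model", "churn-model.pkl")  # store the file with the experiment
    experiment.register_model("churn-model")                # promote it to the Model Registry
    experiment.end()

Once a model version is in the registry, promotion to downstream systems (for example via the webhooks mentioned above) can be driven from the registry entry rather than from individual experiments.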
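Likewise, the Artifacts workflow, including the remote pointers mentioned above, can be sketched with the comet_ml Artifact API; the file path and S3 URI below are illustrative placeholders.

    # Minimal sketch: create and log a versioned dataset Artifact.
    # Assumes comet_ml's Artifact API; paths and the S3 URI are placeholders.
    from comet_ml import Artifact, Experiment

    experiment = Experiment(project_name="my-project")

    artifact = Artifact(name="training-data", artifact_type="dataset")
    artifact.add("data/train.csv")                            # upload a local file
    artifact.add_remote("s3://my-bucket/train-2024.parquet")  # remote pointer to data already in the cloud

    experiment.log_artifact(artifact)  # records a new artifact version linked to this experiment
    experiment.end()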
EASY INTEGRATION

Add just a few lines of code to your notebook or script and automatically start tracking LLM traces, code, hyperparameters, metrics, model predictions, and more. Try Comet Free.

OPIK LLM EVALUATION

OPENAI

    from openai import OpenAI
    from opik.integrations.openai import track_openai

    openai_client = OpenAI()
    openai_client = track_openai(openai_client)

    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, world!"}],
    )

LANGCHAIN

    from langchain_openai import ChatOpenAI
    from opik.integrations.langchain import OpikTracer

    # Initialize the tracer
    opik_tracer = OpikTracer()

    # Create the LLM chain using LangChain
    llm = ChatOpenAI(temperature=0)

    # Configure the Opik integration
    llm = llm.with_config({"callbacks": [opik_tracer]})
    llm.invoke("Hello, how are you?")

LLAMAINDEX

    from llama_index.core import VectorStoreIndex, global_handler, set_global_handler
    from llama_index.core.schema import TextNode

    # Configure the Opik integration
    set_global_handler("opik")
    opik_callback_handler = global_handler

    node1 = TextNode(text="The cat sat on the mat.", id_="1")
    node2 = TextNode(text="The dog chased the cat.", id_="2")
    index = VectorStoreIndex([node1, node2])

    # Create a LlamaIndex query engine
    query_engine = index.as_query_engine()

    # Query the documents
    response = query_engine.query("What did the dog do?")
    print(response)

ANY LLM

    from opik import track

    @track
    def llm_chain(user_question):
        context = get_context(user_question)
        response = call_llm(user_question, context)
        return response

    @track
    def get_context(user_question):
        # Logic that fetches the context, hard-coded here
        return ["The dog chased the cat.", "The cat was called Luky."]

    @track
    def call_llm(user_question, context):
        # LLM call, can be combined with any Opik integration
        return "The dog chased the cat Luky."

    response = llm_chain("What did the dog do?")
    print(response)

ML EXPERIMENT MANAGEMENT

PYTORCH

    from comet_ml import Experiment
    import torch.nn as nn

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Create your model class
    class RNN(nn.Module):
        ...  # Define your class

    # 3. Train and test your model while logging everything to Comet
    with experiment.train():
        # ...Train your model and log metrics
        experiment.log_metric("accuracy", correct / total, step=step)

    # 4. View real-time metrics in Comet

PYTORCH LIGHTNING

    import pytorch_lightning as pl
    from pytorch_lightning.loggers import CometLogger

    # 1. Create your model
    # 2. Initialize CometLogger
    comet_logger = CometLogger()

    # 3. Train your model
    trainer = pl.Trainer(
        logger=[comet_logger],
        # ...configs
    )
    trainer.fit(model)

    # 4. View real-time metrics in Comet

HUGGING FACE

    from comet_ml import Experiment
    from transformers import Trainer

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Train your model
    trainer = Trainer(
        model=model,
        # ...configs
    )
    trainer.train()

    # 3. View real-time metrics in Comet

KERAS

    from comet_ml import Experiment
    from tensorflow import keras

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Define your model
    model = keras.Model(
        # ...configs
    )

    # 3. Train your model
    model.fit(
        x_train,
        y_train,
        validation_data=(x_test, y_test),
    )

    # 4. Track real-time metrics in Comet

TENSORFLOW

    from comet_ml import Experiment
    import tensorflow as tf

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Define and train your model
    model.fit(...)

    # 3. Log additional model metrics and params
    experiment.log_parameters({"custom_params": True})
    experiment.log_metric("custom_metric", 0.95)

    # 4. Track real-time metrics in Comet

SCIKIT-LEARN

    from comet_ml import Experiment
    from sklearn import tree

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Build your model and fit
    clf = tree.DecisionTreeClassifier(
        # ...configs
    )
    clf.fit(X_train_scaled, y_train)
    params = {...}
    metrics = {...}

    # 3. Log additional metrics and params
    experiment.log_parameters(params)
    experiment.log_metrics(metrics)

    # 4. Track model performance in Comet

XGBOOST

    from comet_ml import Experiment
    import xgboost as xgb

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Define your model and fit
    xg_reg = xgb.XGBRegressor(
        # ...configs
    )
    xg_reg.fit(
        X_train,
        y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric="rmse",
    )

    # 3. Track model performance in Comet

ANY FRAMEWORK

    # Utilize Comet in any environment
    from comet_ml import Experiment

    # 1. Define a new experiment
    experiment = Experiment(project_name="YOUR PROJECT")

    # 2. Model training here

    # 3. Log metrics or params over time
    experiment.log_metrics(metrics)

    # 4. Track real-time metrics in Comet

ML MODEL MONITORING

ANY FRAMEWORK

    # Utilize Comet in any environment
    from comet_mpm import CometMPM

    # 1. Create the MPM logger
    MPM = CometMPM()

    # 2. Add your inference logic here

    # 3. Log metrics or params over time
    MPM.log_event(
        prediction_id="...",
        input_features=input_features,
        output_value=prediction,
        output_probability=probability,
    )

AN END-TO-END MODEL EVALUATION PLATFORM

Comet’s end-to-end model evaluation platform is built for developers focused on shipping AI features, and includes open source LLM tracing, ML unit testing, evaluations, experiment tracking, and production monitoring. Track and compare your training runs, log and evaluate your LLM responses, version your models and training data, and monitor your models in production – all in one platform.
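As a concrete example of the "evaluate your LLM responses" step, here is a minimal sketch assuming Opik's heuristic evaluation metrics in opik.evaluation.metrics; the metric choice, score method signature, and strings are assumptions for illustration, not code taken from this page.

    # Minimal sketch: score a single LLM response with an Opik heuristic metric.
    # Assumes opik.evaluation.metrics.Equals with a score(output=..., reference=...) method;
    # the strings below are illustrative.
    from opik.evaluation.metrics import Equals

    metric = Equals()
    result = metric.score(
        output="The dog chased the cat Luky.",     # response produced by your LLM chain
        reference="The dog chased the cat Luky.",  # expected answer
    )
    print(result.value)  # 1.0 on an exact match, 0.0 otherwise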
WHERE AI DEVELOPERS BUILD

Run Comet’s end-to-end evaluation platform on any infrastructure and see firsthand how Comet reshapes your workflow. Bring your existing software and data stack. Use code panels to create visualizations in your preferred user interface.

Before Comet | After Comet | Infrastructure

AN AI PLATFORM BUILT FOR ENTERPRISE, DRIVEN BY COMMUNITY

Comet’s end-to-end evaluation platform is trusted by innovative data scientists, ML practitioners, and engineers in the most demanding enterprise environments.

"Comet has aided our success with ML and serves to further ML development within Zappos."
KYLE ANDERSON, Director of Software Engineering

"Comet offers the most complete experiment tracking solution on the market. It's brought significant value to our business."
OLCAY CIRIT, Staff Research and Tech Lead

"Comet enables us to speed up research cycles and reliably reproduce and collaborate on our modeling projects. It has become an indispensable part of our ML workflow."
VICTOR SANH, Machine Learning Scientist

"None of the other products have the simplicity, ease of use, and feature set that Comet has."
RONNY HUANG, Research Scientist

"After discovering Comet, our deep learning team's productivity went up. Comet is easy to set up and allows us to move research faster."
GURU RAO, Head of AI

"We can seamlessly compare and share experiments, debug and stop underperforming models. Comet has improved our efficiency."
CAROL ANDERSON, Staff Data Scientist

GET STARTED TODAY, FREE

No credit card required. Try Comet with no risk and no commitment. Create Free Account

SUBSCRIBE TO COMET

Sign up to receive the Comet newsletter and never miss out on the latest ML updates, news, and events!
* First Name: * Last Name: * Email Address:
Get Updates
Thank you for subscribing to Comet’s newsletter!

PRODUCTS: Experiment Management * Artifacts * Model Registry * Model Production Monitoring * LLMOps
LEARN: Documentation * Resources * Comet Blog * Deep Learning Weekly * Heartbeat * LLM Course
COMPANY: About Us * News and Events * Careers * Contact Us
PRICING: Pricing * Create a Free Account * Contact Sales
FOLLOW US: LinkedIn * Twitter * YouTube * Facebook

© 2024 Comet ML, Inc. - All Rights Reserved
* Terms of Service * Privacy Policy * CCPA Privacy Notice * Cookie Settings