
URL: https://openai-status.llm-utils.org/



Unofficial OpenAI Status


WHY DOES THIS EXIST?

I've found myself checking the official OpenAI status dashboard whenever I hit a slow
model or model errors, but that dashboard normally shows all green even while I'm
experiencing API, Playground, or ChatGPT issues. The official status doesn't seem to
capture non-catastrophic but still elevated rates of errors or slowness. This
unofficial OpenAI status page fixes that.


AT A GLANCE

Compare current model performance with previous data. Colors reflect two-day
percentiles: lower values and greener shades indicate better performance.

Last updated 15 minutes ago
12/6/2024, 6:00:37 AM GMT+1

Model           Tokens / Second (Perf, %ile)               Difference      2 Day Average   3 Hour Error Rate (Status)
gpt-4o          63.95 tok/s (Slightly Faster, 80th)        +13.13 tok/s    50.83 tok/s     0.00 / hr (Low)
gpt-4o-mini     57.87 tok/s (Normal, 46th)                 -1.71 tok/s     59.58 tok/s     0.00 / hr (Low)
gpt-4           30.61 tok/s (Significantly Faster, 95th)   +8.62 tok/s     21.99 tok/s     0.00 / hr (Low)
gpt-4-turbo     39.83 tok/s (Significantly Faster, 99th)   +13.29 tok/s    26.54 tok/s     0.00 / hr (Low)
gpt-3.5-turbo   105.88 tok/s (Significantly Faster, 98th)  +26.42 tok/s    79.46 tok/s     0.00 / hr (Low)
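As a rough illustration of how a summary like the table above could be computed, here
is a minimal Python sketch. It assumes a list of tokens-per-second samples covering the
last two days; the sampling interval, data source, and field names are illustrative
assumptions, not the page's actual implementation.

from statistics import mean

def summarize(samples_2d: list[float], current: float) -> dict:
    # samples_2d: tokens/second measurements collected over the last two days
    ranked = sorted(samples_2d)
    below = sum(1 for s in ranked if s < current)
    percentile = round(100 * below / len(ranked))  # higher = faster than usual
    two_day_avg = mean(samples_2d)
    return {
        "current_tok_s": current,
        "percentile": percentile,
        "difference_tok_s": round(current - two_day_avg, 2),
        "two_day_avg_tok_s": round(two_day_avg, 2),
    }

# Example with a made-up two-day history; a current reading in a high
# percentile would render as a greener "faster" cell in the table above.
print(summarize([48.0, 52.5, 55.1, 49.7, 61.2, 50.3], 63.95))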


TOKENS PER SECOND

The number of tokens per second the model generates, indicating model speed.


[Chart: tokens per second, last 2 days]
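One way to approximate this number yourself is to time a streaming completion and
divide the streamed content chunks by the elapsed time. The sketch below uses the
official openai Python client; counting chunks is only a rough proxy for tokens, and
the model, prompt, and methodology here are assumptions rather than what this page
actually runs.

import time
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()

def tokens_per_second(model: str = "gpt-4o-mini") -> float:
    # Stream a fixed prompt and time how quickly content chunks arrive.
    start = time.monotonic()
    chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Write a 200-word story."}],
        stream=True,
    )
    for chunk in stream:
        # Each streamed chunk carries roughly one token of content.
        if chunk.choices and chunk.choices[0].delta.content:
            chunks += 1
    return chunks / (time.monotonic() - start)

print(f"{tokens_per_second():.2f} tok/s (approximate)")

A single call gives only a point estimate; repeated sampling over time, as the two-day
chart implies, smooths out run-to-run noise.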


ERRORS


[Chart: errors, last 2 days]


LATENCY

The response time for a call to the OpenAI API's models endpoint, indicating base
latency independent of the model's response time, as measured from US-West. Shorter
durations are preferable.


[Chart: latency, last 2 days]
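A base-latency measurement like this can be approximated by timing a cheap,
non-generating request. The sketch below uses the openai Python client and times the
models-list endpoint; the choice of endpoint, the best-of-three aggregation, and the
region you run it from are assumptions, not the page's documented method.

import time
from openai import OpenAI  # expects OPENAI_API_KEY in the environment

client = OpenAI()

def base_latency_ms(trials: int = 3) -> float:
    # Time a lightweight request that involves no model generation.
    timings = []
    for _ in range(trials):
        start = time.monotonic()
        client.models.list()
        timings.append((time.monotonic() - start) * 1000)
    return min(timings)  # best-of-N discards transient network hiccups

print(f"Base API latency: {base_latency_ms():.0f} ms")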