openrouter.ai
2606:4700:10::6816:30bd (Public Scan)

URL: https://openrouter.ai/models
Submission: On September 25 via manual from JP — Scanned from JP

Form analysis: 0 forms found in the DOM

Text Content

OpenRouter
MODALITY: Text to Text; Text & Image to Text
CONTEXT LENGTH: 4K to 1M (64K midpoint)
PROMPT PRICING: Free to $10+ ($0.5 midpoint)
SERIES: GPT, Claude, Gemini, More…
CATEGORY: Roleplay, Programming, Programming/Scripting, More…
SUPPORTED PARAMETERS: tools, temperature, top_p, More…


MODELS

232 models, sorted by Newest (other options: Top Weekly, Pricing: Low to High, Pricing: High to Low, Context: High to Low)
 * Qwen2.5 72B Instruct
   1.15B tokens
   
   Science (#10)
   
   
   Qwen2.5 72B is the latest series of Qwen large language models. Qwen2.5
   brings the following improvements over Qwen2:
   - Significantly more knowledge and greatly improved capabilities in coding
     and mathematics, thanks to specialized expert models in these domains.
   - Significant improvements in instruction following, generating long texts
     (over 8K tokens), understanding structured data (e.g., tables), and
     generating structured outputs, especially JSON. More resilient to diverse
     system prompts, enhancing role-play implementation and condition-setting
     for chatbots.
   - Long-context support up to 128K tokens, with generation of up to 8K
     tokens.
   - Multilingual support for over 29 languages, including Chinese, English,
     French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean,
     Vietnamese, Thai, Arabic, and more.
   Usage of this model is subject to the Tongyi Qianwen LICENSE AGREEMENT.
   
   by qwen | 131K context | $0.35/M input tokens | $0.4/M output tokens
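As an illustration of how the per-million-token rates in these listings translate into a per-request cost, here is a small sketch using the rates shown for this entry ($0.35/M input, $0.4/M output); the token counts are made up:

```python
# Hypothetical cost check for the Qwen2.5 72B rates listed above
# ($0.35 per million input tokens, $0.40 per million output tokens).

def request_cost(input_tokens, output_tokens,
                 input_price_per_m, output_price_per_m):
    """Return the dollar cost of one request at per-million-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a 10,000-token prompt with a 2,000-token completion.
cost = request_cost(10_000, 2_000, 0.35, 0.40)
print(f"${cost:.4f}")  # $0.0043
```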
   
   
 * Qwen2-VL 72B Instruct
   52.7M tokens
   
   Qwen2 VL 72B is a multimodal LLM from the Qwen Team with the following key
   enhancements:
   - SoTA understanding of images of various resolutions and ratios: Qwen2-VL
     achieves state-of-the-art performance on visual understanding benchmarks,
     including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
   - Understanding videos of 20 min+: Qwen2-VL can understand videos over 20
     minutes long for high-quality video-based question answering, dialog,
     content creation, etc.
   - Agent that can operate your mobile devices, robots, etc.: with complex
     reasoning and decision-making abilities, Qwen2-VL can be integrated with
     devices like mobile phones and robots for automatic operation based on
     the visual environment and text instructions.
   - Multilingual support: to serve global users, besides English and Chinese,
     Qwen2-VL now supports understanding text in different languages inside
     images, including most European languages, Japanese, Korean, Arabic,
     Vietnamese, etc.
   For more details, see this blog post and GitHub repo. Usage of this model
   is subject to the Tongyi Qianwen LICENSE AGREEMENT.
   
   by qwen | 33K context | $0.4/M input tokens | $0.4/M output tokens | $0.578/K input imgs
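Multimodal entries like this one add a per-thousand-images rate on top of the token rates. A hedged sketch of the combined arithmetic, using the rates shown above ($0.40/M input tokens, $0.40/M output tokens, $0.578 per 1K input images) and made-up request sizes:

```python
# Hypothetical multimodal cost sketch using the Qwen2-VL rates listed above.

def multimodal_cost(input_tokens, output_tokens, input_images,
                    token_price_in, token_price_out, image_price_per_k):
    """Dollar cost of one request with text tokens plus image inputs."""
    text_cost = (input_tokens / 1_000_000) * token_price_in \
              + (output_tokens / 1_000_000) * token_price_out
    image_cost = (input_images / 1_000) * image_price_per_k
    return text_cost + image_cost

# Example: 5,000 input tokens, 1,000 output tokens, 2 images.
cost = multimodal_cost(5_000, 1_000, 2, 0.40, 0.40, 0.578)
print(f"${cost:.6f}")  # $0.003556
```

Note that at these rates a single image ($0.000578) costs about as much as 1,400 input tokens, so image-heavy requests are dominated by the per-image charge.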
   
   
 * Lumimaid v0.2 8B
   34.5M tokens
   
   Lumimaid v0.2 8B is a finetune of Llama 3.1 8B with a "HUGE step up dataset
   wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged. Usage of
   this model is subject to Meta's Acceptable Use Policy.
   
   by neversleep | 131K context | $0.1875/M input tokens | $1.125/M output tokens
   
   
 * OpenAI: o1-mini (2024-09-12)
   89.8M tokens
   
   The latest and strongest model family from OpenAI, o1 is designed to spend
   more time thinking before responding. The o1 models are optimized for math,
   science, programming, and other STEM-related tasks. They consistently exhibit
   PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn
   more in the launch announcement. Note: This model is currently experimental
   and not suitable for production use-cases, and may be heavily rate-limited.
   
   by openai | 128K context | $3/M input tokens | $12/M output tokens
   
   
 * OpenAI: o1-mini
   314M tokens
   
   The latest and strongest model family from OpenAI, o1 is designed to spend
   more time thinking before responding. The o1 models are optimized for math,
   science, programming, and other STEM-related tasks. They consistently exhibit
   PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn
   more in the launch announcement. Note: This model is currently experimental
   and not suitable for production use-cases, and may be heavily rate-limited.
   
   by openai | 128K context | $3/M input tokens | $12/M output tokens
   
   
 * OpenAI: o1-preview (2024-09-12)
   98.7M tokens
   
   The latest and strongest model family from OpenAI, o1 is designed to spend
   more time thinking before responding. The o1 models are optimized for math,
   science, programming, and other STEM-related tasks. They consistently exhibit
   PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn
   more in the launch announcement. Note: This model is currently experimental
   and not suitable for production use-cases, and may be heavily rate-limited.
   
   by openai | 128K context | $15/M input tokens | $60/M output tokens
   
   
 * OpenAI: o1-preview
   344M tokens
   
   The latest and strongest model family from OpenAI, o1 is designed to spend
   more time thinking before responding. The o1 models are optimized for math,
   science, programming, and other STEM-related tasks. They consistently exhibit
   PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn
   more in the launch announcement. Note: This model is currently experimental
   and not suitable for production use-cases, and may be heavily rate-limited.
   
   by openai | 128K context | $15/M input tokens | $60/M output tokens
   
   
 * Mistral: Pixtral 12B (free)
   31.9M tokens
   
   The first image-to-text model from Mistral AI. Its weights were launched
   via torrent, per their tradition:
   https://x.com/mistralai/status/1833758285167722836 These are free,
   rate-limited endpoints for Pixtral 12B. Outputs may be cached. Read about
   rate limits here.
   
   by mistralai | 4K context | $0/M input tokens | $0/M output tokens | $0/K input imgs
   
   
 * Mistral: Pixtral 12B
   6.34M tokens
   
   The first image-to-text model from Mistral AI. Its weights were launched
   via torrent, per their tradition:
   https://x.com/mistralai/status/1833758285167722836
   
   by mistralai | 4K context | $0.1/M input tokens | $0.1/M output tokens | $0.1445/K input imgs
   
   
 * Cohere: Command R (03-2024)
   28.2M tokens
   
   Command-R is a 35B parameter model that performs conversational language
   tasks at a higher quality, more reliably, and with a longer context than
   previous models. It can be used for complex workflows like code generation,
   retrieval augmented generation (RAG), tool use, and agents. Read the launch
   post here. Use of this model is subject to Cohere's Acceptable Use Policy.
   
   by cohere | 128K context | $0.5/M input tokens | $1.5/M output tokens
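Every model in this catalog is served behind OpenRouter's OpenAI-compatible chat completions API (see the Docs link above for the authoritative request shape). As a hedged sketch, a single-turn request to one of the listed models might be assembled like this; the model slug and API key are placeholders, and the endpoint path is an assumption to verify against the Docs:

```python
import json

# Sketch of an OpenRouter chat-completions request. Assumptions: the
# OpenAI-compatible endpoint path below and the "qwen/qwen-2.5-72b-instruct"
# model slug; verify both against the official Docs before relying on them.

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model, prompt, api_key):
    """Return (headers, body) for a single-turn chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("qwen/qwen-2.5-72b-instruct",
                              "Summarize RAG in one sentence.",
                              "<YOUR_API_KEY>")
# The pair can then be POSTed with any HTTP client, e.g.:
#   requests.post(API_URL, headers=headers, data=body)
```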
   
   

© 2023 - 2024 OpenRouter, LLC


Status | Pricing | Privacy | Terms