docs.aimlapi.com
AI/ML API QUICKSTART

👋 Welcome to the AI/ML API docs! AI/ML API makes integrating state-of-the-art AI models into your applications effortless, offering seamless compatibility with OpenAI-like interfaces.

Features include:

* Inference: Easily evaluate models for text, images, and more using our API.
* API Key Management: Securely manage your API keys for controlled access.
* Broad Model Selection: Access a diverse range of models for various AI tasks.

Get Started with Inference

* Register for an Account: Sign up to generate your API key and get started with free trial tokens.
* Run Your First Model:

```python
import openai

system_content = "You are a travel agent. Be descriptive and helpful."
user_content = "Tell me about San Francisco"

client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.aimlapi.com/",
)

chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ],
    temperature=0.7,
    max_tokens=1024,
)

response = chat_completion.choices[0].message.content
print("Response:\n", response)
```

Next Steps

* Explore the AI/ML API Playground to experiment with various models.
* Learn how to integrate real-time streaming responses into your applications.
* Check out code examples for inspiration on leveraging our API for different use cases.
* Discover integrations for a seamless development experience with leading AI frameworks.

Dive into the world of AI/ML API and explore the limitless possibilities of AI applications in your projects!

OPENAI API COMPATIBILITY

AI/ML API fully supports the OpenAI API structure, ensuring easy integration into systems already using OpenAI's standards.
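Whichever SDK you use, avoid hardcoding the API key as the inline snippets on this page do. A minimal sketch of reading it from the environment instead (the `AIML_API_KEY` variable name is an assumption for illustration, not part of the API):

```python
import os


def load_api_key(env_var: str = "AIML_API_KEY") -> str:
    """Read the API key from an environment variable.

    The variable name is illustrative; use whatever your deployment defines.
    Raises instead of silently sending an empty key to the API.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before calling the API.")
    return key


# The returned value can then be passed as api_key=load_api_key()
# when constructing openai.OpenAI(...).
```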
PYTHON SDK

Integrate with AI/ML API by updating your API key and base_url in the OpenAI Python SDK. Here's an example with our API's chat model endpoint:

```python
import openai

system_content = "You are a travel agent. Be descriptive and helpful."
user_content = "Tell me about San Francisco"

client = openai.OpenAI(
    api_key="YOUR_AI_ML_API_KEY",
    base_url="https://api.aimlapi.com",
)

chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ],
    temperature=0.7,
    max_tokens=128,
)

response = chat_completion.choices[0].message.content
print("AI/ML API:\n", response)
```

STREAMING WITH PYTHON SDK

To enable streaming of responses, simply add stream=True to the chat completions create call. Note that streamed chunks expose their text on delta.content rather than message.content:

```python
# ... (previous code setup)
stream = client.chat.completions.create(
    model="your-model-string-here",
    messages=[
        # ... (your messages)
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in stream:
    # Streamed chunks carry incremental text in `delta`, not a full `message`.
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```

NODE.JS SDK

If you're using Node.js, you can similarly switch to AI/ML API by updating the apiKey and baseURL in the OpenAI Node.js SDK (v4 client shown here):

```javascript
const OpenAI = require("openai");

const openai = new OpenAI({
  apiKey: "YOUR_AI_ML_API_KEY",
  baseURL: "https://api.aimlapi.com",
});

// ... (your async function to run the code)
```

STREAMING IN NODE.JS

For streaming in Node.js, set stream: true in the chat completions create call and iterate over the resulting async stream:

```javascript
// ... (previous code setup)
const stream = await openai.chat.completions.create({
  model: "your-model-string-here",
  messages: [
    // ... (your messages)
  ],
  max_tokens: 1024,
  stream: true,
});

for await (const chunk of stream) {
  // Each chunk carries an incremental `delta`, not a full `message`.
  process.stdout.write(chunk.choices[0].delta?.content || "");
}
```

RESPONSE STRUCTURE

The response from AI/ML API's chat completion has a similar structure to the OpenAI API, allowing for a seamless transition.

```python
# Assuming 'response' is the JSON object received from the API:
assistant_response = response['choices'][0]['message']['content']
print(assistant_response)
```

This snippet prints the content of the message whose role is 'assistant', navigating the nested JSON structure to reach the desired data.

EXAMPLES

The AI/ML API is a versatile tool that can be applied to tasks such as text generation, summarization, conversation, code generation, and image creation. The examples below demonstrate how to use the API for each purpose.

TEXT GENERATION EXAMPLE

Generate creative text based on a prompt using the AI/ML API.

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/chat/completions",
    json={
        "model": "zero-one-ai/Yi-34B-Chat",
        "prompt": "Once upon a time in a virtual world,",
        "max_tokens": 100,
        "temperature": 0.7,
    },
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
    },
)
print(response.json())
```

SUMMARIZATION EXAMPLE

Summarize a lengthy text to its essential points.

```python
response = requests.post(
    "https://api.aimlapi.com/chat/completions",
    json={
        "model": "zero-one-ai/Yi-34B-Chat",
        "prompt": "Summarize the following text: <long text here>",
        "max_tokens": 50,
        "stop": ["\n"],
    },
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
    },
)
print(response.json())
```

CONVERSATIONAL AI EXAMPLE

Create an interactive chatbot that responds to user input.
```python
response = requests.post(
    "https://api.aimlapi.com/chat/completions",
    json={
        "model": "openchat/openchat-3.5-1210",
        "prompt": [
            {"role": "system", "content": "You are a witty AI."},
            {"role": "user", "content": "Tell me a joke!"},
        ],
        "max_tokens": 30,
    },
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
    },
)
print(response.json())
```

CODE GENERATION EXAMPLE

Automatically generate code snippets from a description.

```python
response = requests.post(
    "https://api.aimlapi.com/chat/completions",
    json={
        "model": "codellama/CodeLlama-70b-Python-hf",
        "prompt": "Write a Python function to calculate the factorial of a number:",
        "max_tokens": 50,
    },
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
    },
)
print(response.json())
```

IMAGE CREATION EXAMPLE

Generate an image based on a descriptive prompt.

```python
response = requests.post(
    "https://api.aimlapi.com/images/generations",
    json={
        "model": "stabilityai/stable-diffusion-2-1",
        "prompt": "A futuristic city skyline at sunset",
        "n": 1,
        "size": "1024x1024",
    },
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
    },
)
print(response.json())
```

ADDING YOUR OWN MODEL

If there's a specific model you'd like to see integrated into our API, join our Discord community and suggest it there. We're continually expanding our offerings based on user feedback.
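The image creation example above only prints the raw JSON. Assuming an OpenAI-style images response with a `data` list of objects carrying a `url` field (an assumption — check the actual AI/ML API response schema), a small helper can pull the image URLs out for downloading:

```python
def extract_image_urls(response_json: dict) -> list:
    """Collect image URLs from an OpenAI-style images response.

    Assumes a {"data": [{"url": ...}, ...]} shape; adjust if the
    actual AI/ML API schema differs.
    """
    return [item["url"] for item in response_json.get("data", []) if "url" in item]
```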
POST CHAT COMPLETION

api.aimlapi.com/chat/completions

AUTHORIZATION: Bearer Token <token>

HEADERS
Content-Type: application/json

Body: raw (json)

```json
{
  "model": "Qwen/Qwen1.5-0.5B",
  "messages": [
    {
      "role": "assistant",
      "content": "<string>"
    },
    {
      "role": "assistant",
      "content": "<string>"
    }
  ],
  "max_tokens": "<number>",
  "temperature": "<number>",
  "top_p": "<number>",
  "repetition_penalty": "<number>",
  "top_k": "<number>",
  "stream": "<boolean>"
}
```

Example Request: Chat Completion

```shell
curl --location 'api.aimlapi.com/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "Qwen/Qwen1.5-0.5B",
    "messages": [
      { "role": "assistant", "content": "<string>" },
      { "role": "assistant", "content": "<string>" }
    ],
    "max_tokens": "<number>",
    "temperature": "<number>",
    "top_p": "<number>",
    "repetition_penalty": "<number>",
    "top_k": "<number>",
    "stream": "<boolean>"
  }'
```

Example Response

No response body or response headers are documented for this request.
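The reference body above uses placeholder strings such as "<number>"; in a real request those parameters are numbers and booleans. A sketch of assembling a concrete payload and pulling the assistant's reply out of an OpenAI-style response (field names follow the Response Structure section earlier on this page; the helper names are illustrative):

```python
def build_chat_payload(model: str, messages: list, **params) -> dict:
    """Assemble a chat-completion request body with real JSON types
    (numbers and booleans, not the "<number>" placeholders above)."""
    payload = {"model": model, "messages": messages}
    payload.update(params)  # e.g. max_tokens=128, temperature=0.7, stream=False
    return payload


def extract_reply(response_json: dict) -> str:
    """Return the first choice's message content, per the OpenAI-style
    structure described in the Response Structure section."""
    return response_json["choices"][0]["message"]["content"]


payload = build_chat_payload(
    "Qwen/Qwen1.5-0.5B",
    [{"role": "user", "content": "Hello!"}],
    max_tokens=128,
    temperature=0.7,
    stream=False,
)
```

The payload can then be sent with requests.post as in the examples above, and the reply read with extract_reply(response.json()).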