AI Inference is currently available to select customers. Contact support to request access.
Features
- API Keys: Generate and manage keys to authenticate your API requests
- Models: Browse available models with pricing, context length, and capabilities
- Playground: Test models interactively before integrating them
- Overview: Monitor usage including requests, tokens, and costs
Managing API keys
Access API keys
Log in to the dashboard, select a project, and navigate to AI > API Keys.
API keys are prefixed with lat_ and can be deleted at any time from the API Keys page.
Browsing models
Access the models page
Log in to the dashboard, select a project, and navigate to AI > Models.
Filter models
Use the search bar to find models by name. Filter by provider or capability (text, vision, code, reasoning).
Using the playground
Access the playground
Log in to the dashboard, select a project, and navigate to AI > Playground.
Configure your request
Select a model from the dropdown in the chat header. In the sidebar, enter your API key and optionally adjust the system prompt, temperature, and max tokens.
Making API requests
The AI Inference API is available at https://api.lsh.ai and is fully compatible with the OpenAI SDK. You can also make direct HTTP requests.
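As a minimal sketch, a direct HTTP request can be built with Python's standard library. This assumes the API follows the OpenAI chat-completions convention (a /v1/chat/completions path and Bearer authentication); the endpoint path and the model ID "example-model" are illustrative placeholders, not confirmed values.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"          # replace with a key from the API Keys page
BASE_URL = "https://api.lsh.ai"   # AI Inference API base URL


def build_chat_request(model, messages):
    """Build a chat-completion request (OpenAI-compatible payload shape)."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",  # assumed OpenAI-style path
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )


req = build_chat_request(
    "example-model",  # placeholder; use a model ID from the Models page
    [{"role": "user", "content": "Hello"}],
)
# To actually send the request (requires a valid key):
# response = urllib.request.urlopen(req)
```

Because the API is OpenAI-compatible, the same request can also be made with the OpenAI SDK by pointing its base URL at https://api.lsh.ai.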
Replace YOUR_API_KEY with your API key. You can find available model IDs on the Models page.
Viewing metrics
The Overview page shows your AI Inference usage for the last 30 days:
- Total Requests: Number of API requests made
- Tokens Used: Total input and output tokens consumed
- Total Cost: Cumulative cost of all API usage