Remedy AI API
Add AI to your applications with a single endpoint. Server-managed conversations, file processing, and web search — all built in.
curl -X POST https://sickliest.com/api/v1/chat \
-H "Authorization: Bearer sk_live_your_key" \
-H "Content-Type: application/json" \
-d '{"message": "Hello, Remedy!"}'
Why Remedy
One Endpoint, One Call
No message arrays to construct. No roles to manage. Send a message, get a response. The API you wish every AI provider offered.
Server-Managed History
Pass a conversationId to continue any conversation. The server remembers the full context — no need to resend prior messages on every request.
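The two-call pattern above can be sketched client-side. `build_chat_payload` is an illustrative helper, not part of any Remedy SDK; only the `message` and `conversationId` fields come from the API described here:

```python
def build_chat_payload(message, conversation_id=None):
    """Build the JSON body for POST /api/v1/chat.

    The first turn omits conversationId; later turns pass the id
    returned in the previous response, so no history is resent.
    """
    payload = {"message": message}
    if conversation_id is not None:
        payload["conversationId"] = conversation_id
    return payload

# First turn: no id yet -- the server creates one and returns it.
first = build_chat_payload("What is a qubit?")

# Follow-up: reuse the id from the first response.
followup = build_chat_payload("Give me an analogy", conversation_id="abc123def456")
```

Because the server holds the history, the follow-up body stays two fields no matter how long the conversation gets.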
Built-In File Processing
Attach images, PDFs, and GIFs. Text-based and scanned documents are processed with high accuracy. No preprocessing pipeline to build.
Live Web Search
Certain models can search the web for current information. Get grounded answers without building RAG infrastructure or managing a search API.
Per-Request Instructions
Customize AI behavior on every call. Set a persona, enforce formatting, or inject reference material — up to 10,000 characters of instructions per request.
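Since the limit is 10,000 characters per request, it can be worth enforcing it client-side before sending. This guard is a sketch; the helper name and the choice to raise rather than truncate are assumptions, not API behavior:

```python
MAX_INSTRUCTIONS_CHARS = 10_000  # per-request limit stated above

def with_instructions(payload, instructions):
    """Attach per-request instructions, rejecting oversized ones
    client-side instead of letting the request fail server-side."""
    if len(instructions) > MAX_INSTRUCTIONS_CHARS:
        raise ValueError(
            f"instructions is {len(instructions)} chars; "
            f"limit is {MAX_INSTRUCTIONS_CHARS}"
        )
    return {**payload, "instructions": instructions}

payload = with_instructions(
    {"message": "Summarize this quarter's numbers"},
    "Respond as a formal financial analyst. Use bullet points.",
)
```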
Simple Token Budget
No surprise bills. API usage counts against your plan's monthly token budget. Monitor consumption in every response with the tokenUsage field.
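Because every response carries `tokenUsage`, a simple threshold check is enough to watch the budget. The helper below is a sketch; the 80% threshold and function name are arbitrary, while the `percentUsed` and `tokensRemaining` fields match the sample response shown later in this page:

```python
def budget_alert(response_json, threshold=80):
    """Read the tokenUsage block returned with every response and
    report whether monthly consumption has crossed a threshold
    percentage, plus how many tokens remain."""
    usage = response_json.get("tokenUsage", {})
    over = usage.get("percentUsed", 0) >= threshold
    return over, usage.get("tokensRemaining")

sample = {
    "success": True,
    "tokenUsage": {"percentUsed": 12, "tokensRemaining": 880000, "dailyPercentUsed": 3},
}
over, remaining = budget_alert(sample)
# At 12% used, over is False and 880,000 tokens remain.
```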
Multi-Turn Conversations: Remedy vs. Others
Most AI APIs require you to resend the entire conversation history on every request. With Remedy, just pass a conversation ID.
Typical AI API
// Must resend full history every time
{
"messages": [
{"role": "system", "content": "..."},
{"role": "user", "content": "First msg"},
{"role": "assistant", "content": "..."},
{"role": "user", "content": "Second msg"},
{"role": "assistant", "content": "..."},
{"role": "user", "content": "Third msg"}
]
}
Remedy API
// Server remembers everything
{
"message": "Third msg",
"conversationId": "abc123"
}
// That's it. Full context
// is maintained automatically.
Use Cases
Website Chat Widget
Embed an AI assistant on your site. Store one conversation ID per session — the server handles the rest. Set your brand voice with per-request instructions.
Server-side history makes the frontend trivial.
Internal Knowledge Base
Employees ask questions about SOPs, handbooks, or policies. Attach the PDF — the AI reads it and answers. No vector database or embedding pipeline needed.
Built-in document processing eliminates RAG complexity.
Invoice & Receipt Processing
Upload a photo of an invoice or receipt. Get structured JSON back: vendor, amount, date, line items. Instructions tell the AI exactly what format you need.
Image + PDF processing with per-request formatting rules.
Multi-Step AI Wizard
Tax prep, business plan generators, intake forms — each step builds on the last. Just pass the conversation ID. No growing message arrays to manage.
Server-managed history was built for this pattern.
Content with Current Data
Generate blog posts, reports, or competitive analysis grounded in today's information. Models with web search fetch current data automatically.
Built-in search means current answers, not stale training data.
Document Review
Upload a contract, employee handbook, or technical spec. Ask questions, drill down with follow-ups. The conversation retains the full document context.
PDF processing + persistent conversations = zero re-uploading.
Customer Support Triage
Classify incoming tickets, draft responses, and handle follow-ups. Use instructions to set tone and policy per product line or department.
Conversation IDs tie follow-up tickets to their history.
Batch Document Classification
Process folders of contracts, invoices, or reports. Classify each document by type and extract key metadata as JSON. A script, not a project.
One endpoint + file attachments = no infrastructure.
CI/CD & Automation
Code review bots, log analysis, PR summaries. A GitHub Action is ~20 lines of curl. Per-request instructions customize the review focus per repository.
Single endpoint means minimal integration code.
API Reference
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/chat | Send a message, get a complete JSON response |
| POST | /api/v1/chat/stream | Send a message, get a streaming SSE response |
| GET | /api/v1/models | List models available on your plan |
Base URL: https://sickliest.com
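For the streaming endpoint, the exact event schema lives in the Portal docs. As a sketch only, assuming `/api/v1/chat/stream` emits conventional SSE `data:` lines carrying JSON fragments, a minimal parser could look like this (the `delta` field and `[DONE]` sentinel are assumptions for illustration):

```python
import json

def iter_sse_json(raw_stream):
    """Yield the JSON payload of each standard SSE 'data:' line.

    Assumes conventional server-sent events; the field names inside
    each payload are defined by the streaming docs, not here.
    """
    for line in raw_stream.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            body = line[len("data:"):].strip()
            if body and body != "[DONE]":  # '[DONE]' sentinel is an assumption
                yield json.loads(body)

chunks = list(iter_sse_json('data: {"delta": "Hel"}\n\ndata: {"delta": "lo"}\n\n'))
```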
Authentication
All requests require an API key in the Authorization header:
Authorization: Bearer sk_live_your_key_here
Send a Message
curl -X POST https://sickliest.com/api/v1/chat \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"message": "Explain quantum computing in simple terms",
"model": "℞ (Smart)",
"instructions": "Keep your response under 100 words"
}'
const response = await fetch('https://sickliest.com/api/v1/chat', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({
message: 'Explain quantum computing in simple terms',
model: '℞ (Smart)',
instructions: 'Keep your response under 100 words'
})
});
const data = await response.json();
console.log(data.message);
import requests
response = requests.post(
'https://sickliest.com/api/v1/chat',
headers={
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json'
},
json={
'message': 'Explain quantum computing in simple terms',
'model': '℞ (Smart)',
'instructions': 'Keep your response under 100 words'
}
)
data = response.json()
print(data['message'])
Response
{
"success": true,
"message": "Quantum computing uses qubits that can exist in multiple states...",
"model": "℞ (Smart)",
"conversationId": "abc123def456",
"usage": { "input": 24, "output": 156, "total": 180 },
"tokenUsage": { "percentUsed": 12, "tokensRemaining": 880000, "dailyPercentUsed": 3 }
}
Full documentation — including file attachments, conversation history, error codes, and rate limits — is available in your Patient Portal Developer tab after signing up.
Ready to Build?
API access is included with Plus, Pro, and Pro Max plans. Generate your key and make your first request in minutes. Need higher limits? Contact us about dedicated API tiers.