POST /chat/completions

Chat completions
curl --request POST \
  --url https://api.agentserp.com/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "<string>",
  "messages": [
    {
      "role": "system",
      "content": "<string>"
    }
  ],
  "search": {
    "domain_filter": [
      "<string>"
    ],
    "recency": "day"
  },
  "stream": true,
  "user_context": {}
}'
{
  "request_id": "<string>",
  "data": {
    "choices": [
      {
        "index": 123,
        "message": {
          "role": "system",
          "content": "<string>"
        }
      }
    ]
  },
  "meta": {
    "api_version": "<string>",
    "processing_ms": 123,
    "tokens_used": 123,
    "cost": {
      "total": 1,
      "breakdown": {}
    }
  }
}
Generate chat completions with optional search augmentation.

Example Request

curl -X POST https://api.agentserp.com/chat/completions \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello, who are you?"}]}'
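The same request can be assembled from Python. A minimal sketch using only the standard library, assuming the base URL and Bearer-token scheme shown in the curl example above (the token value is a placeholder):

```python
import json
import urllib.request

API_URL = "https://api.agentserp.com/chat/completions"  # base URL from the curl example

def build_chat_request(token, model, messages, stream=False):
    """Assemble (but do not send) the POST request for /chat/completions."""
    body = json.dumps({"model": model, "messages": messages, "stream": stream}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "<token>", "gpt-4o-mini",
    [{"role": "user", "content": "Hello, who are you?"}],
)
# urllib.request.urlopen(req) would perform the call; omitted here.
```

Sending is left out so the sketch stays self-contained; `urlopen` returns a file-like object whose body is the JSON wrapper documented under Response below.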

Body

application/json

Input for POST /chat/completions

model
string
required

LLM engine to use

messages
object[]
required
Minimum length: 1

search
object

Optional search-control block for chat
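The `search` block carries the `domain_filter` and `recency` fields shown in the request example above. A sketch of a request body that enables search augmentation (the domain and question are illustrative, not part of this page):

```python
import json

# Request body with the optional search-control block.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What changed today?"}],
    "search": {
        "domain_filter": ["example.com"],  # illustrative domain
        "recency": "day",                  # value taken from the request example
    },
}
body = json.dumps(payload)
```

Omitting `search` entirely yields a plain, non-augmented completion.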

stream
boolean

If true, SSE stream; else aggregate JSON
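When `stream` is true, the response arrives as Server-Sent Events rather than one aggregate JSON document. Assuming the stream follows the common `data: {...}` framing with a `[DONE]` sentinel (a convention of SSE chat APIs, not confirmed by this page), a hypothetical line parser might look like:

```python
import json

def iter_sse_payloads(lines):
    """Yield decoded JSON payloads from an iterable of raw SSE lines.

    Assumes the common `data: {...}` framing and a `[DONE]` sentinel;
    both are conventions, not guarantees of this API.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, event names
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Example over a canned stream:
raw = [
    'data: {"delta": "Hel"}',
    '',
    'data: {"delta": "lo"}',
    'data: [DONE]',
]
chunks = [p["delta"] for p in iter_sse_payloads(raw)]
# chunks == ["Hel", "lo"]
```

With `stream` false (the default behavior implied by "else aggregate JSON"), the whole wrapper documented under Response arrives in one body.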

user_context
object

Opaque dict preserved across turns

Response

Chat response

Universal response wrapper

data
object
required

Endpoint-specific payload

request_id
string

Echo of client-supplied ID or server-assigned UUID

meta
object

Metadata block attached to every response
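Putting the wrapper together: the assistant text lives at `data.choices[0].message.content`, beside `request_id` and `meta`. A small helper, with a sample shaped like the response example above (the concrete values are illustrative):

```python
def extract_reply(resp: dict) -> str:
    """Return the first choice's message content from the response wrapper."""
    return resp["data"]["choices"][0]["message"]["content"]

sample = {
    "request_id": "req-123",
    "data": {
        "choices": [
            {"index": 0, "message": {"role": "assistant", "content": "Hi there"}}
        ]
    },
    "meta": {"api_version": "v1", "processing_ms": 42, "tokens_used": 17},
}
print(extract_reply(sample))
```

`request_id` and `meta` sit at the top level of every response, so logging them alongside the extracted text gives a complete audit trail per call.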