POST /anthropic/v1/messages
curl --request POST \
  --url https://api.minimax.io/anthropic/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "MiniMax-M2.7",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ]
}
'
{
  "id": "06379fa1dfdd9047604b8abc088ea75c",
  "type": "message",
  "role": "assistant",
  "model": "MiniMax-M2.7",
  "content": [
    {
      "thinking": "The user says \"Hello\". This is a simple greeting. We should respond politely, greet them back, maybe ask how we can help.\n",
      "signature": "1c3a0ae890922669e9815a201f9b645abdaafe8d8b5a65a5e48f90830c6e0750",
      "type": "thinking"
    },
    {
      "text": "Hello! How can I help you today?",
      "type": "text"
    }
  ],
  "usage": {
    "input_tokens": 39,
    "output_tokens": 40,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  },
  "stop_reason": "end_turn",
  "base_resp": {
    "status_code": 0,
    "status_msg": "success"
  }
}
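A response like the sample above can be unpacked by iterating over the content blocks, keeping only the user-visible text (thinking blocks carry the reasoning trace), and totaling token usage. A minimal Python sketch, using a trimmed copy of the sample response:

```python
import json

# Trimmed copy of the sample response above.
response = json.loads("""
{
  "id": "06379fa1dfdd9047604b8abc088ea75c",
  "type": "message",
  "role": "assistant",
  "model": "MiniMax-M2.7",
  "content": [
    {"thinking": "The user says \\"Hello\\"...", "signature": "1c3a...", "type": "thinking"},
    {"text": "Hello! How can I help you today?", "type": "text"}
  ],
  "usage": {"input_tokens": 39, "output_tokens": 40,
            "cache_creation_input_tokens": 0, "cache_read_input_tokens": 0},
  "stop_reason": "end_turn",
  "base_resp": {"status_code": 0, "status_msg": "success"}
}
""")

# Keep only "text" blocks; "thinking" blocks hold the model's reasoning.
reply = "".join(b["text"] for b in response["content"] if b["type"] == "text")
total_tokens = response["usage"]["input_tokens"] + response["usage"]["output_tokens"]
```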

Authorizations

Authorization
string
header
required

HTTP: Bearer Auth

  • Security Scheme Type: http
  • HTTP Authorization Scheme: Bearer. The API key is used for account verification and can be viewed in Account Management > API Keys

Headers

Content-Type
enum<string>
default:application/json
required

Media type of the request body; must be set to application/json

Available options:
application/json

Body

application/json
model
enum<string>
required

Model ID

Available options:
MiniMax-M2.7,
MiniMax-M2.7-highspeed,
MiniMax-M2.5,
MiniMax-M2.1
messages
object[]
required

A list of messages containing the conversation history

system

Set the role and behavior of the model

stream
boolean
default:false

Whether to use streaming output, defaults to false. When set to true, the response will be returned in chunks
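When stream is true, the chunks typically arrive as SSE-style `data:` lines. The exact streamed event shape is not shown on this page, so the `delta` payload below is an assumption for illustration only; the line-framing logic is the reusable part:

```python
import json

# Hypothetical streamed payload: the delta event shape is an assumption,
# since this page does not document the streaming chunk format.
raw_stream = (
    'data: {"type": "delta", "text": "Hello! "}\n'
    'data: {"type": "delta", "text": "How can I help you today?"}\n'
    'data: [DONE]\n'
)

pieces = []
for line in raw_stream.splitlines():
    if not line.startswith("data: "):
        continue  # ignore comments / blank keep-alive lines
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break     # end-of-stream sentinel (assumed)
    event = json.loads(payload)
    if event.get("type") == "delta":
        pieces.append(event["text"])

full_text = "".join(pieces)
```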

max_tokens
integer<int64>

Specifies the upper limit on generated content length in tokens; the maximum is 2048. Content exceeding the limit is truncated. If generation stops due to length, try increasing this value

Required range: x >= 1
temperature
number<double>
default:1

Temperature coefficient, affects output randomness, value range (0, 1], default value for MiniMax model is 1.0. Higher values produce more random output; lower values produce more deterministic output

Required range: 0 < x <= 1
top_p
number<double>
default:0.95

Sampling strategy, affects output randomness, value range (0, 1], default value for MiniMax model is 0.95

Required range: 0 < x <= 1
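Putting the body fields together, a request payload can be checked against the documented ranges before sending. A sketch; the bounds mirror the parameter descriptions above, and the helper name is illustrative:

```python
def build_payload(model, messages, system=None, stream=False,
                  max_tokens=None, temperature=1.0, top_p=0.95):
    """Assemble a /anthropic/v1/messages body, enforcing the documented ranges."""
    if max_tokens is not None and max_tokens < 1:
        raise ValueError("max_tokens must be >= 1")
    if not 0 < temperature <= 1:
        raise ValueError("temperature must be in (0, 1]")
    if not 0 < top_p <= 1:
        raise ValueError("top_p must be in (0, 1]")
    payload = {"model": model, "messages": messages, "stream": stream,
               "temperature": temperature, "top_p": top_p}
    if system is not None:
        payload["system"] = system       # optional role/behavior instruction
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload

payload = build_payload(
    "MiniMax-M2.7",
    [{"role": "user", "content": "Hello"}],
    max_tokens=1024,
    temperature=0.7,
)
```

The payload can then be POSTed to https://api.minimax.io/anthropic/v1/messages with the Authorization and Content-Type headers shown in the curl example.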

Response

id
string

Unique ID of this response

type
enum<string>

Object type, fixed as message

Available options:
message
role
enum<string>

Role, fixed as assistant

Available options:
assistant
model
string

Model ID used for this request

content
object[]

List of response content blocks

stop_reason
enum<string>

Reason for stopping generation:

  • end_turn: Model ended naturally
  • max_tokens: Reached max_tokens limit
  • stop_sequence: Hit a stop sequence
Available options:
end_turn,
max_tokens,
stop_sequence
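The stop_reason values above suggest a simple dispatch: retry with a larger max_tokens when output was truncated, otherwise treat the turn as complete. A minimal sketch (the action names are illustrative):

```python
def handle_stop(stop_reason):
    """Map a stop_reason to a follow-up action, per the reference above."""
    if stop_reason == "end_turn":
        return "done"                     # model ended naturally
    if stop_reason == "max_tokens":
        return "retry_with_higher_limit"  # truncated: raise max_tokens and retry
    if stop_reason == "stop_sequence":
        return "done"                     # hit a caller-supplied stop sequence
    raise ValueError(f"unexpected stop_reason: {stop_reason}")
```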
usage
object

Token usage for this request

base_resp
object

Error status code and details
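Because base_resp carries the status alongside an otherwise normal-looking body, it is worth checking explicitly. In the sample response a status_code of 0 with status_msg "success" indicates success; treating any nonzero code as an error is an assumption based on that sample:

```python
def check_base_resp(response):
    """Raise if base_resp signals an error; 0/"success" as in the sample above."""
    base = response.get("base_resp", {})
    code = base.get("status_code", 0)
    if code != 0:
        raise RuntimeError(f"API error {code}: {base.get('status_msg')}")
    return response

ok = check_base_resp({"base_resp": {"status_code": 0, "status_msg": "success"}})
```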