POST /v1/text/chatcompletion_v2
curl --request POST \
  --url https://api.minimax.io/v1/text/chatcompletion_v2 \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "M2-her",
  "messages": [
    {
      "role": "system",
      "name": "MiniMax AI"
    },
    {
      "role": "user",
      "name": "User",
      "content": "Hello"
    }
  ]
}
'
{
  "id": "05b81ca0cde9e60c3ae4ce7f60103250",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Hello! I'm M2-her, an AI assistant developed by MiniMax. Nice to meet you! How can I help you?",
        "role": "assistant",
        "name": "MiniMax AI",
        "audio_content": ""
      }
    }
  ],
  "created": 1768483232,
  "model": "M2-her",
  "object": "chat.completion",
  "usage": {
    "total_tokens": 199,
    "total_characters": 0,
    "prompt_tokens": 176,
    "completion_tokens": 23
  },
  "input_sensitive": false,
  "output_sensitive": false,
  "input_sensitive_type": 0,
  "output_sensitive_type": 0,
  "output_sensitive_int": 0,
  "base_resp": {
    "status_code": 0,
    "status_msg": ""
  }
}
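The response body above can be unpacked in a few lines. A minimal Python sketch, using only field names shown in the sample response (the abridged JSON below is illustrative):

```python
import json

# Abridged sample response from /v1/text/chatcompletion_v2
raw = '''{
  "id": "05b81ca0cde9e60c3ae4ce7f60103250",
  "choices": [
    {"finish_reason": "stop", "index": 0,
     "message": {"content": "Hello!", "role": "assistant", "name": "MiniMax AI"}}
  ],
  "model": "M2-her",
  "object": "chat.completion",
  "usage": {"total_tokens": 199, "prompt_tokens": 176, "completion_tokens": 23},
  "base_resp": {"status_code": 0, "status_msg": ""}
}'''

resp = json.loads(raw)

# Check the business-level status before reading choices:
# base_resp.status_code 0 means success (see the Response section).
assert resp["base_resp"]["status_code"] == 0, resp["base_resp"]["status_msg"]

finish = resp["choices"][0]["finish_reason"]
reply = resp["choices"][0]["message"]["content"]
print(finish, "->", reply)
```

Note that `usage.prompt_tokens` plus `usage.completion_tokens` adds up to `usage.total_tokens` (176 + 23 = 199 in the sample).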

Authorizations

Authorization
string
header
required

HTTP: Bearer Auth

  • Security Scheme Type: http
  • HTTP Authorization Scheme: Bearer. The API key is used for account verification and can be viewed in Account Management > API Keys.

Headers

Content-Type
enum<string>
default:application/json
required

Media type of the request body; must be set to application/json so the body is parsed as JSON

Available options:
application/json

Body

application/json
model
enum<string>
required

Model ID. Available value: M2-her

Available options:
M2-her
messages
object[]
required

List of conversation messages making up the dialogue history. For more details on message parameters, refer to the Text Chat Guide

stream
boolean
default:false

Whether to use streaming transmission; defaults to false. When set to true, the response is returned incrementally in chunks
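When `stream` is true, each chunk arrives as a `chat.completion.chunk` object (see the `object` field in the Response section). A minimal sketch of consuming such a stream, assuming SSE-style `data:` lines carrying delta-style messages; the exact wire format of the chunks is an assumption here, not taken from this page:

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from SSE lines of a streaming response.

    Assumes each event is a 'data: {...}' line carrying a
    chat.completion.chunk object whose choices hold a delta message;
    verify the actual chunk schema against the streaming docs.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if "content" in delta:
                yield delta["content"]

# Hypothetical sample chunks, for illustration only
sample = [
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": "Hel"}}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": "lo"}}]}',
]
print("".join(iter_stream_content(sample)))  # prints "Hello" for this sample
```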

max_completion_tokens
integer<int64>

Specifies the maximum length of generated content in tokens, with an upper limit of 2048. Content exceeding the limit is truncated. If generation stops with the length finish reason, try increasing this value.

Required range: x >= 1
temperature
number<double>
default:1

Temperature coefficient affecting output randomness, range (0, 1]; the default for the M2-her model is 1.0. Higher values produce more random output; lower values produce more deterministic output

Required range: 0 < x <= 1
top_p
number<double>
default:0.95

Nucleus sampling parameter affecting output randomness, range (0, 1]; the default for the M2-her model is 0.95

Required range: 0 < x <= 1
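The documented ranges for the sampling parameters can be enforced client-side before sending a request. A small sketch (the helper name is ours, not part of the API):

```python
def validate_sampling(temperature=1.0, top_p=0.95, max_completion_tokens=None):
    """Check request parameters against the documented ranges:
    temperature in (0, 1], top_p in (0, 1],
    max_completion_tokens in [1, 2048] when provided."""
    if not (0 < temperature <= 1):
        raise ValueError("temperature must be in (0, 1]")
    if not (0 < top_p <= 1):
        raise ValueError("top_p must be in (0, 1]")
    if max_completion_tokens is not None and not (1 <= max_completion_tokens <= 2048):
        raise ValueError("max_completion_tokens must be in [1, 2048]")
    return True

print(validate_sampling(temperature=0.7, top_p=0.95, max_completion_tokens=512))
```

Rejecting out-of-range values locally gives a clearer error than a round trip to the API.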

Response

id
string

Unique ID of this response

choices
object[]

List of response choices

created
integer<int64>

Unix timestamp (seconds) of response creation

model
string

Model ID used for this request

object
enum<string>

Object type. chat.completion for non-streaming, chat.completion.chunk for streaming

Available options:
chat.completion,
chat.completion.chunk
usage
object

Token usage statistics for this request

input_sensitive
boolean

Whether the input content hits sensitive words. If the input content seriously violates regulations, the API returns a content violation error message with empty reply content

input_sensitive_type
integer<int64>

Type of sensitive word hit in input, returned when input_sensitive is true. Values: 1 Serious violation; 2 Pornography; 3 Advertisement; 4 Prohibited; 5 Abuse; 6 Violence/Terror; 7 Other
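The numeric codes above map directly to labels, which is convenient when logging moderation outcomes. A small sketch using the code list as given (the helper name is ours):

```python
# input_sensitive_type / output_sensitive_type codes, per the list above
SENSITIVE_TYPES = {
    1: "Serious violation",
    2: "Pornography",
    3: "Advertisement",
    4: "Prohibited",
    5: "Abuse",
    6: "Violence/Terror",
    7: "Other",
}

def describe_sensitive(code):
    """Return a human-readable label for a sensitive-type code."""
    return SENSITIVE_TYPES.get(code, "Unknown")

print(describe_sensitive(3))  # prints "Advertisement"
```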

output_sensitive
boolean

Whether the output content hits sensitive words. If the output content seriously violates regulations, the API returns a content violation error message with empty reply content

output_sensitive_type
integer<int64>

Type of sensitive word hit in the output, returned when output_sensitive is true

base_resp
object

Error status code and details