MiniMax provides an official community provider for the AI SDK ecosystem. With a simple configuration, you can integrate MiniMax capabilities into any application built on the AI SDK.

Quick Start

1. Install AI SDK and MiniMax Provider

npm install ai vercel-minimax-ai-provider

2. Configure Environment Variables

export MINIMAX_API_KEY=${YOUR_API_KEY}

3. Call API

TypeScript
import { minimax } from 'vercel-minimax-ai-provider';
import { generateText } from 'ai';

const { text, reasoning } = await generateText({
  model: minimax('MiniMax-M2.7'),
  system: 'You are a helpful assistant.',
  prompt: 'Hi, how are you?',
});

if (reasoning) {
  console.log(`Thinking:\n${reasoning}\n`);
}
console.log(`Text:\n${text}\n`);

4. Important Note

In multi-turn function call conversations, the complete model response (i.e., the assistant message) must be appended to the conversation history to maintain the continuity of the reasoning chain.
  • Append the full result.response.messages to the message history (includes all assistant and tool messages)

Supported Models

When using the AI SDK, the following models are supported: MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed, MiniMax-M2.1, MiniMax-M2.1-highspeed, and MiniMax-M2.
| Model Name | Context Window | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 204,800 | Beginning the journey of recursive self-improvement (output speed approximately 60 tps) |
| MiniMax-M2.7-highspeed | 204,800 | Same performance as M2.7, faster and more agile (output speed approximately 100 tps) |
| MiniMax-M2.5 | 204,800 | Peak performance and ultimate value for complex tasks (output speed approximately 60 tps) |
| MiniMax-M2.5-highspeed | 204,800 | Same performance as M2.5, faster and more agile (output speed approximately 100 tps) |
| MiniMax-M2.1 | 204,800 | Powerful multi-language programming capabilities with a comprehensively enhanced programming experience (output speed approximately 60 tps) |
| MiniMax-M2.1-highspeed | 204,800 | Faster and more agile (output speed approximately 100 tps) |
| MiniMax-M2 | 204,800 | Agentic capabilities, advanced reasoning |
For details on how tps (Tokens Per Second) is calculated, please refer to FAQ > About APIs.
The AI SDK compatibility interface currently supports only the models listed above. For other models, please use the standard MiniMax API interface.

Compatibility

Supported Parameters

When using the AI SDK, we support the following input parameters:
| Parameter | Support Status | Description |
| --- | --- | --- |
| model | Fully supported | Supports the MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed, MiniMax-M2.1, MiniMax-M2.1-highspeed, and MiniMax-M2 models |
| messages | Partial support | Supports text and tool calls; no image/document input |
| maxTokens | Fully supported | Maximum number of tokens to generate |
| system | Fully supported | System prompt |
| temperature | Fully supported | Range (0.0, 1.0], controls output randomness; recommended value: 1 |
| toolChoice | Fully supported | Tool selection strategy |
| tools | Fully supported | Tool definitions |
| topP | Fully supported | Nucleus sampling parameter |

Messages Field Support

| Field Type | Support Status | Description |
| --- | --- | --- |
| role="user" | Fully supported | User text messages |
| role="assistant" | Fully supported | Assistant responses |
| role="tool" | Fully supported | Tool call results |
| type="text" | Fully supported | Text content |
| type="tool-call" | Fully supported | Tool calls |
| type="tool-result" | Fully supported | Tool call results |
| type="image" | Not supported | Image input not supported yet |
| type="file" | Not supported | File input not supported yet |
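The supported shapes can be illustrated with a plain message array. Field names here follow AI SDK v5 conventions (`input`/`output` on tool parts), and the `calc` tool and its IDs are hypothetical:

```typescript
// A multi-turn history exercising every supported role and part type.
const messages = [
  {
    role: 'user',
    content: [{ type: 'text', text: 'What is 2 + 2?' }],
  },
  {
    role: 'assistant',
    content: [
      { type: 'tool-call', toolCallId: 'call_1', toolName: 'calc', input: { expr: '2 + 2' } },
    ],
  },
  {
    role: 'tool',
    content: [
      { type: 'tool-result', toolCallId: 'call_1', toolName: 'calc', output: { type: 'json', value: 4 } },
    ],
  },
  {
    role: 'assistant',
    content: [{ type: 'text', text: '2 + 2 = 4.' }],
  },
];

// Note: { type: 'image' } and { type: 'file' } parts are not supported
// and would be rejected by the provider.
console.log(messages.length); // 4
```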

Examples

Streaming Response

TypeScript
import { minimax } from 'vercel-minimax-ai-provider';
import { streamText } from 'ai';

console.log("Starting stream response...\n");
console.log("=".repeat(60));
console.log("Thinking Process:");
console.log("=".repeat(60));

const result = streamText({
  model: minimax('MiniMax-M2.7'),
  system: 'You are a helpful assistant.',
  prompt: 'Hi, how are you?',
  onError({ error }) {
    console.error(error);
  },
});

let inText = false;

for await (const part of result.fullStream) {
  if (part.type === 'reasoning') {
    // Stream output thinking process
    process.stdout.write(part.text);
  } else if (part.type === 'text') {
    if (!inText) {
      inText = true;
      console.log("\n" + "=".repeat(60));
      console.log("Response Content:");
      console.log("=".repeat(60));
    }
    // Stream output text content
    process.stdout.write(part.text);
  }
}

console.log("\n");

Important Notes

  1. The AI SDK compatibility interface currently supports only the following models: MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed, MiniMax-M2.1, MiniMax-M2.1-highspeed, and MiniMax-M2
  2. The temperature parameter range is (0.0, 1.0], values outside this range will return an error
  3. Image and document type inputs are not currently supported
  4. The default minimax provider instance uses the Anthropic-compatible API format. Use minimaxOpenAI if you need the OpenAI-compatible format.
  5. For more information, see the MiniMax AI Provider on AI SDK and the GitHub repository