Common use cases include weather queries, web searches, database lookups, and similar tasks.
Using vLLM for Function Calling (Recommended)
Make sure vLLM is successfully deployed and the service starts normally. MiniMax-M1 integrates a custom tool_call_parser, so you do not need to parse model outputs manually. When starting vLLM, simply add the following parameters to enable function calling:

- --tool-call-parser minimax: key parameter that enables the MiniMax-M1 custom parser.
- --enable-auto-tool-choice: enables automatic tool selection.
- --chat-template: template file adapted for tool calling. You can find the template here:
https://github.com/vllm-project/vllm/blob/main/examples/tool_chat_template_minimax_m1.jinja
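Putting the three flags together, a launch command might look like the following (the model path is an assumption; substitute your local checkpoint or Hugging Face model id):

```
vllm serve MiniMaxAI/MiniMax-M1-40k \
    --enable-auto-tool-choice \
    --tool-call-parser minimax \
    --chat-template examples/tool_chat_template_minimax_m1.jinja
```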
Using the OpenAI SDK for Function Calling
The following example demonstrates how to implement a weather query with the OpenAI SDK.

Function Call Definition Format
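A minimal sketch of the weather query with the OpenAI SDK, which also illustrates the definition format described below. The server address, API key, and model name are assumptions about your vLLM deployment:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                },
                "required": ["location"],
            },
        },
    }
]


def ask_weather(city: str):
    """Send a user query; the server-side parser returns structured tool_calls."""
    from openai import OpenAI  # requires the openai package

    # Assumption: vLLM is serving locally; adjust the address and model name.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    response = client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M1-40k",
        messages=[{"role": "user", "content": f"What's the weather in {city}?"}],
        tools=tools,
        tool_choice="auto",
    )
    return response.choices[0].message.tool_calls
```

With the vLLM parser enabled, `tool_calls` comes back already parsed; no manual extraction is needed.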
Function Definition
Function calls must be defined in the tools field of the request body. Each function definition includes the following parts:

- name: function name.
- description: function purpose.
- parameters: function parameter definitions.
- properties: parameter attributes, where each key is a parameter name and its value contains the details.
- required: list of required parameters.
- type: parameter type (usually "object").
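The fields above can be mapped onto a complete definition; this hypothetical example annotates where each one goes:

```python
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",                        # name
        "description": "Get the current weather for a city",  # description
        "parameters": {                                       # parameters
            "type": "object",                                 # type: usually "object"
            "properties": {                                   # properties: name -> details
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],                         # required
        },
    },
}
```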
Model Internal Processing Format
During internal processing, function definitions are converted into a special format and appended to the input text; developers do not need to construct this format manually.

Model Output Format
The model outputs function calls wrapped in <tool_calls> tags.

Manual Parsing of Model Output
We recommend using the OpenAI Chat Completions API, which automatically applies the chat template on the server side and is supported by major inference frameworks. If your framework does not support tool calling, if you are not using vLLM's built-in parser, or if you use another inference framework (e.g., Transformers, TGI), you can manually parse the model output as shown below.
Applying Chat Template Manually
Example with the transformers library:
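A sketch of rendering the chat template (including tool definitions) into a raw prompt string with transformers; the model id is an assumption, and tool support in apply_chat_template requires a recent transformers version:

```python
def render_prompt(messages, tools, model_id="MiniMaxAI/MiniMax-M1-40k"):
    """Render the chat template, with tool definitions, into the raw prompt string."""
    from transformers import AutoTokenizer  # requires the transformers package

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    return tokenizer.apply_chat_template(
        messages,
        tools=tools,                 # tool definitions are injected by the template
        add_generation_prompt=True,  # append the assistant turn marker
        tokenize=False,              # return the rendered string, not token ids
    )


messages = [{"role": "user", "content": "What's the weather in Shanghai?"}]
```

The returned string can then be passed to your own generation loop.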
Parsing Function Calls
When parsing manually, extract the content of the <tool_calls> tag:
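A minimal parser sketch, assuming the output carries one JSON object per line inside the <tool_calls> tag (the sample raw text is illustrative):

```python
import json
import re

TOOL_CALLS_RE = re.compile(r"<tool_calls>\s*(.*?)\s*</tool_calls>", re.DOTALL)


def parse_tool_calls(text: str):
    """Extract tool calls from <tool_calls> blocks, one JSON object per line."""
    calls = []
    for block in TOOL_CALLS_RE.findall(text):
        for line in block.splitlines():
            line = line.strip()
            if line:
                calls.append(json.loads(line))
    return calls


# Illustrative raw model output:
raw = (
    "Let me check the weather.\n"
    "<tool_calls>\n"
    '{"name": "get_current_weather", "arguments": {"location": "Shanghai"}}\n'
    "</tool_calls>"
)
print(parse_tool_calls(raw))
# → [{'name': 'get_current_weather', 'arguments': {'location': 'Shanghai'}}]
```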
Returning Function Results to the Model
After successful parsing, add the execution results back into the conversation history so the model can use them in follow-up interactions.

- Single result: if the model calls search_web, return one result entry for that call.
- Multiple results: if the model calls both search_web and get_current_weather, return one result entry per call.

Each result entry carries name and text values.
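A hedged sketch of packing results into a message appended to the history. The exact wrapper is an assumption; the point from the doc is that each entry carries the function name and its text output, and that multiple calls get one entry each:

```python
import json


def tool_result_message(results):
    """Pack function results into one tool message; each entry has name and text."""
    return {
        "role": "tool",
        "content": json.dumps([{"name": n, "text": t} for n, t in results]),
    }


# Single result for a search_web call (payload text is made up for illustration):
single = tool_result_message(
    [("search_web", "MiniMax-M1 supports function calling.")]
)

# Multiple results, one entry per call:
multi = tool_result_message([
    ("search_web", "MiniMax-M1 supports function calling."),
    ("get_current_weather", '{"location": "Shanghai", "temperature": "26C"}'),
])
```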
FAQ
Q: The tool call response is not valid JSON.
A: Check your request. If the tool-calling guidance in the request is not valid JSON, the wrong chat template is being applied. Use tool_chat_template_minimax_m1.jinja.
Getting Support
If you encounter issues deploying the MiniMax model:

- Contact our support team via email at api@minimax.io
- Submit an Issue on our GitHub repository