---
base_model:
- Qwen/Qwen3-4B-Instruct-2507
language:
- en
license: apache-2.0
tags:
- agent
- Agentic Learning
- tool use
- BFCL
task_categories:
- question-answering
- text-generation
pipeline_tag: text-generation
library_name: transformers
---

# FunReason-MT Technical Report: Advanced Data Synthesis Solution for Real-world Multi-Turn Tool-use

[![arXiv](https://img.shields.io/badge/arXiv-2510.24645-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2510.24645)
[![Paper](https://img.shields.io/badge/Hugging%20Face-Paper-yellow?logo=huggingface)](https://huggingface.co/papers/2510.24645)
[![Model](https://img.shields.io/badge/Hugging%20Face-Model-yellow?logo=huggingface)](https://huggingface.co/Bingguang/FunReason-MT)
[![Dataset](https://img.shields.io/badge/Hugging%20Face-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/Bingguang/FunReason-MT)
[![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/inclusionAI/AWorld-RL)
[![Project Page](https://img.shields.io/badge/Project-AWorld-green)](https://github.com/inclusionAI/AWorld)

## Model Overview

The **FunReason-MT-4B** model is a high-performance **Large Language Model (LLM)** fine-tuned for complex, multi-turn **Function Calling (FC)** and agentic tool-use tasks. Built upon the **Qwen3-4B-Instruct-2507** base model, it has been trained using the novel **FunReason-MT data synthesis framework**.

FunReason-MT-4B achieves superior results on the **Berkeley Function-Calling Leaderboard (BFCLv3)** Multi-Turn and Agentic Evaluation benchmarks. This performance demonstrates that high-quality synthesized data can effectively overcome the complexity barrier in multi-turn FC data generation.

- **Base Model:** Qwen3-4B-Instruct-2507
- **Size:** 4 billion parameters
- **Key Capability:** Advanced multi-turn function calling and agentic tool use

The full usage of the model is in our [BFCL PR](https://github.com/ShishirPatil/gorilla/pull/1229).
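For quick experimentation outside the BFCL harness, the model can also be loaded with the standard `transformers` chat API. The snippet below is a minimal sketch, assuming the Hugging Face repo id `Bingguang/FunReason-MT` linked above and a Qwen3-style chat template with `tools` support; the `get_weather` tool is a hypothetical example, not part of the model.

```python
# Minimal usage sketch (assumptions: repo id Bingguang/FunReason-MT and the
# Qwen3 chat template's `tools` support; get_weather is a hypothetical tool).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bingguang/FunReason-MT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# One JSON-schema tool definition, rendered into the prompt by the chat template.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is the weather in Berlin right now?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.7
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

`top_p=0.7` mirrors the default used by the BFCL handler shown further below.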
## 📊 Evaluation Results

The model was rigorously evaluated on the Berkeley Function-Calling Leaderboard (BFCL).

### BFCLv3 Multi-Turn and Single-Turn Performance

| Model (4B–235B) | Multi-Turn (Overall) | Single-Turn (Overall) |
| :--- | :---: | :---: |
| Qwen3-4B-Instruct (Base) | 15.75 | 78.19 |
| **Qwen3-4B + FunReason-MT (RL)** | **57.75** | **85.47** |
| Claude-Sonnet-4-20250514 | 54.75 | 84.72 |
| DeepSeek-R1-0528 | 44.50 | 78.22 |
| GPT-4o-2024-11-20 | 42.50 | 77.21 |

### BFCL Agentic Evaluation (BFCLv4 OOD)

The FunReason-MT-trained model leads in out-of-distribution agentic tasks (Web Search and Memory).

| Model | BFCLv4 Overall Score |
| :--- | :---: |
| **FunReason-MT-4B (RL)** | **15.10** |
| ToolACE-2-8B | 14.83 |
| BitAgent-8B | 8.24 |
| XLAM-2-3b-fc-r | 7.42 |
| watt-tool-8B | 6.30 |

-----

## 💻 Training Data and Framework

### FunReason-MT Dataset

The training set comprises **16,000 high-quality multi-turn samples**. This dataset was generated using the three-phase FunReason-MT data synthesis framework, which focuses on generating complex trajectories that require:

1. **Environment-API Graph Interactions** for collecting goal-directed, correct execution traces.
2. **Advanced Tool-Query Synthesis** for creating logical-jump queries that abstract multi-step actions.
3. **Guided Iterative Chain** for enforcing reliable, consistent Chain-of-Thought (CoT) generation through self-correction.

### Training Details

The model was fine-tuned on function-calling data from APIGen together with the FunReason-MT dataset.

- **Training Libraries:** LLaMA-Factory and verl.
- **Methodology:** Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL).
- **Hardware:** 32 NVIDIA H20 GPUs.

### Usage

Below is the BFCL handler for FunReason-MT (see the PR linked above for the full integration):

```python
import time
from typing import Any

from overrides import override

# `OSSHandler` is BFCL's base class for self-hosted open-source models; see the
# PR linked above for the exact import path inside the gorilla repository.


class FunReasonMTHandler(OSSHandler):
    def __init__(self, model_name, temperature) -> None:
        super().__init__(model_name, temperature)
        self.is_fc_model = False
        self.top_p = 0.7
        self.max_output_len = 20000
        self.max_context_length = 247000

    @override
    def _query_prompting(self, inference_data: dict):
        # We use the OpenAI Completions API
        function: list[dict] = inference_data["function"]
        message: list[dict] = inference_data["message"]

        formatted_prompt: str = self._format_prompt(message, function)
        inference_data["inference_input_log"] = {"formatted_prompt": formatted_prompt}

        # Tokenize the formatted prompt to get the input token count
        input_token_count = len(self.tokenizer.tokenize(formatted_prompt))

        # Determine the number of tokens to request
        if self.max_context_length < input_token_count + 2:
            # If the prompt is already at the max length, just request 1000
            # tokens; we will get an error anyway
            leftover_tokens_count = 1000
        else:
            leftover_tokens_count = min(
                self.max_output_len,
                self.max_context_length - input_token_count - 2,
            )

        extra_body = {}
        if hasattr(self, "stop_token_ids"):
            extra_body["stop_token_ids"] = self.stop_token_ids
        if hasattr(self, "skip_special_tokens"):
            extra_body["skip_special_tokens"] = self.skip_special_tokens

        start_time = time.time()
        if len(extra_body) > 0:
            api_response = self.client.completions.create(
                model=self.model_path_or_id,
                temperature=self.temperature,
                top_p=self.top_p,
                prompt=formatted_prompt,
                max_tokens=leftover_tokens_count,
                extra_body=extra_body,
                timeout=72000,  # Avoid timeout errors
            )
        else:
            api_response = self.client.completions.create(
                model=self.model_path_or_id,
                temperature=self.temperature,
                top_p=self.top_p,
                prompt=formatted_prompt,
                max_tokens=leftover_tokens_count,
                timeout=72000,  # Avoid timeout errors
            )
        end_time = time.time()

        return api_response, end_time - start_time

    def _process_tool_response(self, tool_response_lst):
        # Tool responses are passed through unchanged
        return list(tool_response_lst)

    @override
    def _format_prompt(self, messages, function):
        # Collapse consecutive `tool` messages into a single tool turn, so the
        # chat template sees one tool message per round of tool calls
        new_messages = []
        tool_content = []
        for message in messages:
            role = message["role"]
            content = message["content"]
            if role != "tool":
                if len(tool_content) != 0:
                    new_messages.append({"role": "tool", "content": str(tool_content)})
                    tool_content = []
                new_messages.append(message)
            else:
                tool_content.append(content)
        if len(tool_content) != 0:
            new_messages.append({"role": "tool", "content": str(tool_content)})

        formatted_prompt = self.tokenizer.apply_chat_template(
            new_messages, tokenize=False, add_generation_prompt=True
        )
        # Force the model to open a reasoning block
        formatted_prompt += "<think>"
        return formatted_prompt

    @override
    def _parse_query_response_prompting(self, api_response: Any) -> dict:
        model_response = api_response.choices[0].text

        # Split the reasoning block (<think> ... </think>) from the final answer
        reasoning_content = ""
        cleaned_response = model_response
        if "</think>" in model_response:
            parts = model_response.split("</think>")
            reasoning_content = parts[0].rstrip(" ").split("<think>")[-1].lstrip(" ")
            cleaned_response = parts[-1].lstrip(" ")
        else:
            cleaned_response = "response outputs too long or no slash think in response."

        response_data = {
            "model_responses": cleaned_response,
            "model_responses_message_for_chat_history": {
                "role": "assistant",
                "content": cleaned_response,
            },
            "reasoning_content": reasoning_content,
            "input_token": api_response.usage.prompt_tokens,
            "output_token": api_response.usage.completion_tokens,
        }

        # Attach the reasoning content to the assistant message for the next
        # turn if present; otherwise drop the empty field
        if reasoning_content:
            response_data["model_responses_message_for_chat_history"][
                "reasoning_content"
            ] = reasoning_content
        else:
            del response_data["reasoning_content"]

        return response_data
```
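To make the prompt-formatting step concrete, here is a standalone sketch of the merging performed by `_format_prompt`: consecutive `tool` messages are collapsed into a single tool turn whose content is the stringified list of individual tool outputs. `merge_tool_messages` and the conversation below are illustrative, not part of the BFCL API.

```python
# Standalone sketch of the handler's tool-message merging (illustrative only).
def merge_tool_messages(messages: list[dict]) -> list[dict]:
    merged, tool_content = [], []
    for message in messages:
        if message["role"] == "tool":
            # Buffer consecutive tool outputs
            tool_content.append(message["content"])
            continue
        if tool_content:
            # Flush the buffered outputs as one tool turn
            merged.append({"role": "tool", "content": str(tool_content)})
            tool_content = []
        merged.append(message)
    if tool_content:
        merged.append({"role": "tool", "content": str(tool_content)})
    return merged


history = [
    {"role": "user", "content": "Book a flight to Berlin and reserve a hotel."},
    {"role": "assistant", "content": "[book_flight(city='Berlin'), book_hotel(city='Berlin')]"},
    {"role": "tool", "content": '{"flight": "confirmed"}'},
    {"role": "tool", "content": '{"hotel": "confirmed"}'},
]
for message in merge_tool_messages(history):
    print(message)
# The two tool results are emitted as a single tool turn:
# {'role': 'tool', 'content': '[\'{"flight": "confirmed"}\', \'{"hotel": "confirmed"}\']'}
```

This keeps one tool message per round of (possibly parallel) tool calls, so the chat template receives one tool turn per assistant action.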
-----

## 🔗 Related Projects and Citation

This work is part of **[AWorld](https://github.com/inclusionAI/AWorld/)**, an open-source project from InclusionAI.

If you use FunReason-MT in your research, please cite the technical report:

```
@article{xu2025funreason,
  title={FunReason-MT Technical Report: Advanced Data Synthesis Solution for Real-world Multi-Turn Tool-use},
  author={Zengzhuang Xu and Bingguang Hao and Zechuan Wang and Yuntao Wen and Xinyi Xu and Yang Liu and Long Chen and Dong Wang and Maolin Wang and Tong Zhao and Yicheng Chen and Cunyin Peng and Jinjie Gu and Leilei Gan and Xiangyu Zhao and Chenyi Zhuang and Shi Gu},
  journal={arXiv preprint arXiv:2510.24645},
  year={2025}
}
```

### Contact

For inquiries, please contact:

* `bingguanghao7@gmail.com`