xLAM: A Family of Large Action Models to Empower AI Agent Systems


Paper | Model Instruction | Framework | Installation | Train | Benchmarks | Acknowledgement


πŸŽ‰πŸŽ‰πŸŽ‰ News


Note: This repository is provided for research purposes only.
Due to internal regulations, data related to xLAM is only partially released; what is released is intended to support the advancement of the agent research community.


Autonomous agents powered by large language models (LLMs) have garnered significant research attention. However, fully harnessing the potential of LLMs for agent-based tasks presents inherent challenges due to the heterogeneous nature of diverse data sources featuring multi-turn trajectories.

This repo introduces xLAM, which aggregates agent trajectories from distinct environments spanning a wide array of scenarios. It standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training. Leveraging this unified data, our training pipeline maintains equilibrium across different data sources and preserves independent randomness across devices during dataset partitioning and model training.
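
To make this concrete, here is a minimal sketch of what a unified trajectory record and a source-balanced loader could look like. The field names and helpers are illustrative assumptions for exposition, not the repository's actual schema (the real converters live under actionstudio/src/data_conversion/):

import random

def unify_trajectory(env_name, raw_turns):
    # Normalize one environment-specific trajectory into a shared format.
    return {
        "source": env_name,
        "turns": [{"role": t["role"], "content": t["content"]} for t in raw_turns],
    }

def balanced_batches(datasets, weights, device_rank, batch_size=4):
    # Sample trajectories from multiple sources in proportion to `weights`,
    # seeding the RNG with the device rank so each device draws independently.
    rng = random.Random(1234 + device_rank)
    sources = list(datasets)
    while True:
        picks = rng.choices(sources, weights=weights, k=batch_size)
        yield [rng.choice(datasets[s]) for s in picks]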




Model Instruction

Model | # Total Params | Context Length | Release Date | Category | Download Model | Download GGUF files
Llama-xLAM-2-70b-fc-r | 70B | 128k | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | πŸ€— Link | NA
Llama-xLAM-2-8b-fc-r | 8B | 128k | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | πŸ€— Link | πŸ€— Link
xLAM-2-32b-fc-r | 32B | 32k (max 128k)* | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | πŸ€— Link | NA
xLAM-2-3b-fc-r | 3B | 32k (max 128k)* | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | πŸ€— Link | πŸ€— Link
xLAM-2-1b-fc-r | 1B | 32k (max 128k)* | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | πŸ€— Link | πŸ€— Link
xLAM-7b-r | 7.24B | 32k | Sep. 5, 2024 | General, Function-calling | πŸ€— Link | --
xLAM-8x7b-r | 46.7B | 32k | Sep. 5, 2024 | General, Function-calling | πŸ€— Link | --
xLAM-8x22b-r | 141B | 64k | Sep. 5, 2024 | General, Function-calling | πŸ€— Link | --
xLAM-1b-fc-r | 1.35B | 16k | July 17, 2024 | Function-calling | πŸ€— Link | πŸ€— Link
xLAM-7b-fc-r | 6.91B | 4k | July 17, 2024 | Function-calling | πŸ€— Link | πŸ€— Link
xLAM-v0.1-r | 46.7B | 32k | Mar. 18, 2024 | General, Function-calling | πŸ€— Link | --

The xLAM series performs significantly better on many tasks, including general agent tasks and function calling. For the same number of parameters, the models have been fine-tuned across a wide range of agent tasks and scenarios, all while preserving the capabilities of the original base model.

πŸ“¦ Model Naming Conventions

  • xLAM-7b-r: A general-purpose v1.0 or v2.0 release of the Large Action Model, fine-tuned for broad agentic capabilities. The -r suffix indicates it is a research release.
  • xLAM-7b-fc-r: A specialized variant where -fc denotes fine-tuning for function calling tasks, also marked for research use.
  • βœ… All models are fully compatible with vLLM, FastChat, and Transformers-based inference frameworks.

Deploying and Interacting with xLAM Models

πŸ€— Use Transformers for Inference

Below is an example of how to use the latest models:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/Llama-xLAM-2-3b-fc-r")
model = AutoModelForCausalLM.from_pretrained("Salesforce/Llama-xLAM-2-3b-fc-r", torch_dtype=torch.bfloat16, device_map="auto")

# Example conversation with a tool call
messages = [
    {"role": "user", "content": "Hi, how are you?"},
    {"role": "assistant", "content": "Thanks. I am doing well. How can I help you?"},
    {"role": "user", "content": "What's the weather like in London?"},
]

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The unit of temperature to return"}
            },
            "required": ["location"]
        }
    }
]

print("====== prompt after applying chat template ======")
print(tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, tokenize=False))

inputs = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
input_ids_len = inputs["input_ids"].shape[-1] # Get the length of the input tokens
inputs = {k: v.to(model.device) for k, v in inputs.items()}
print("====== model response ======")
outputs = model.generate(**inputs, max_new_tokens=256)
generated_tokens = outputs[:, input_ids_len:] # Slice the output to get only the newly generated tokens
print(tokenizer.decode(generated_tokens[0], skip_special_tokens=True))

Note: You may need to tune the temperature setting for different applications. Typically, a lower temperature helps with tasks that require deterministic outcomes. Additionally, for tasks that demand adherence to specific formats or function calls, explicitly including formatting instructions is advisable.
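
For instance, with the Transformers setup above, deterministic and low-temperature decoding could look like the following (the specific values are illustrative starting points, not tuned recommendations):

# Greedy decoding: deterministic output, useful for strict formats or function calls
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Low-temperature sampling: mostly deterministic, with a little diversity
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2, top_p=0.9)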

βš‘πŸ“ˆ Using vLLM for Inference

The xLAM models can also be efficiently served using vLLM for high-throughput inference. Please use vllm>=0.6.5 since earlier versions will cause degraded performance for Qwen-based models.

Setup and Serving

  1. Install vLLM with the required version:
pip install "vllm>=0.6.5"
  2. Download the tool parser plugin to your local path:
wget https://huggingface.co/Salesforce/xLAM-2-1b-fc-r/raw/main/xlam_tool_call_parser.py
  3. Start the OpenAI API-compatible endpoint:
MODEL_NAME_OR_PATH="Salesforce/xLAM-2-1b-fc-r"
ASSIGNED_MODEL_NAME="xlam-2-1b-fc-r" # vLLM uses the assigned model name for reference
NUM_ASSIGNED_GPUS=1 # a 70b model would need 4 GPUs, each with 80GB memory
PORT=8000

vllm serve $MODEL_NAME_OR_PATH \
  --tensor-parallel-size $NUM_ASSIGNED_GPUS \
  --served-model-name $ASSIGNED_MODEL_NAME \
  --port $PORT \
  --gpu-memory-utilization 0.9 \
  --enable-auto-tool-choice \
  --tool-parser-plugin ./xlam_tool_call_parser.py \
  --tool-call-parser xlam 

Note: Ensure that the tool parser plugin file is downloaded and that the path specified in --tool-parser-plugin correctly points to your local copy of the file. The xLAM series models all utilize the same tool call parser, so you only need to download it once for all models.

Testing with OpenAI API

Here's a minimal example to test tool usage with the served endpoint:

import openai
import json

# Configure the client to use your local vLLM endpoint
client = openai.OpenAI(
    base_url="http://localhost:8000/v1",  # Default vLLM server PORT
    api_key="empty"  # Can be any string
)

# Define a tool/function
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The unit of temperature to return"
                    }
                },
                "required": ["location"]
            }
        }
    }
]
messages = [
  {"role": "system", "content": "You are a helpful assistant that can use tools."},
  {"role": "user", "content": "What's the weather like in San Francisco?"}
]

# Create a chat completion
if not tools: # chitchat (no tools supplied)
  response = client.chat.completions.create(
      model="xlam-2-1b-fc-r",  # ASSIGNED_MODEL_NAME
      messages=messages
  )
else: # function calling
  response = client.chat.completions.create(
      model="xlam-2-1b-fc-r",  # ASSIGNED_MODEL_NAME
      messages=messages,
      tools=tools,
      tool_choice="auto"
  )

# Print the response
print("Assistant's response:")
print(json.dumps(response.model_dump(), indent=2))
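
When the model elects to call a tool, the call comes back as structured data on the message rather than as free text. A minimal sketch for unpacking it, continuing the example above with the standard OpenAI client objects:

# Inspect structured tool calls, if any were produced
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print("Function:", call.function.name)
        print("Arguments:", json.loads(call.function.arguments))
else:
    print("Plain text reply:", message.content)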

For more advanced configurations and deployment options, please refer to the vLLM documentation.


🧠 APIGen-MT: Agentic PIpeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay



🧠 ActionStudio: A Lightweight Framework for Agentic Data and Training of Large Action Models



❀️ Please refer to ActionStudio.md for more details.

πŸ“¦ Installation

πŸ”§ Dependencies

Install dependencies from the root xLAM directory (where setup.py is located) with:

conda create --name actionstudio python=3.10
conda activate actionstudio

bash requirements.sh

πŸš€ Installing ActionStudio

Development Version (Latest):

To use the latest code under active development, install ActionStudio in editable mode from the root xLAM directory (where setup.py is located):

pip install -e .

πŸ—‚οΈ Structure

actionstudio/
β”œβ”€β”€ datasets/                             # Open-source unified trajectory datasets
β”œβ”€β”€ examples/                             # Usage examples and configurations
β”‚   β”œβ”€β”€ data_configs/                     # YAML configs for data mixtures
β”‚   β”œβ”€β”€ deepspeed_configs/                # DeepSpeed training configuration files
β”‚   └── trainings/                        # Bash scripts for various training methods (**`README.md`**)
β”œβ”€β”€ src/                                  # Source code
β”‚   β”œβ”€β”€ data_conversion/                  # Converting trajectories into training data (**`README.md`**)
β”‚   └── criticLAM/                        # Critic Large Action Model implementation (**`README.md`**)
└── foundation_modeling/                  # Core modeling components
    β”œβ”€β”€ data_handlers/
    β”œβ”€β”€ train/
    β”œβ”€β”€ trainers/
    └── utils/

πŸ” Most top-level folders include a README.md with detailed instructions and explanations.

πŸ“œ Licenses

The code is licensed under Apache 2.0, and the datasets are under the CC-BY-NC-4.0 License. The data provided are intended for research purposes only.

πŸ› οΈ Code Updates History

April 14, 2025

  • Updated dependency versions to support the latest models and techniques
  • Added auto calculation and assignment of training steps
  • Enabled automatic checkpoint merging at the end of training.

    πŸ“„ See actionstudio/examples/trainings/README.md for training examples and usage

  • Improved documentation and inline code comments


πŸ† Benchmarks (xLAM-2-fc Series)

Berkeley Function-Calling Leaderboard (BFCL v3)


BFCL Results
Performance comparison of different models on the BFCL leaderboard. The rank is based on overall accuracy, which is a weighted average of the different evaluation categories. "FC" stands for function-calling mode, in contrast to using a customized "prompt" to extract the function calls.
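
Purely to illustrate the arithmetic of such a weighted average (the category names and weights below are made up for exposition, not BFCL's actual scheme):

# Hypothetical categories and weights, for illustration only
scores = {"single_turn": 0.90, "multi_turn": 0.70, "irrelevance": 0.80}
weights = {"single_turn": 0.5, "multi_turn": 0.3, "irrelevance": 0.2}
overall = sum(scores[c] * weights[c] for c in scores)
print(overall)  # 0.45 + 0.21 + 0.16 = 0.82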

Ο„-bench Benchmark

Tau-bench Results
Success rate (pass@1) on the Ο„-bench benchmark, averaged across at least 5 trials. Our xLAM-2-70b-fc-r model achieves an overall success rate of 56.2% on Ο„-bench, significantly outperforming the base Llama 3.1 70B Instruct model (38.2%) and other open-source models like DeepSeek v3 (40.6%). Notably, our best model even outperforms proprietary models such as GPT-4o (52.9%) and approaches the performance of more recent models like Claude 3.5 Sonnet (new) (60.1%).

Pass^k curves
Pass^k curves measuring the probability that all 5 independent trials succeed for a given task, averaged across all tasks for Ο„-retail (left) and Ο„-airline (right) domains. Higher values indicate better consistency of the models.
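
One standard way to estimate such a metric: if a task succeeds in c out of n independent trials, then C(c, k) / C(n, k) is an unbiased estimate of the probability that k fresh trials all succeed. A small self-contained sketch of that computation (our illustration, not code from this repo):

from math import comb

def pass_hat_k(n, c, k):
    # Unbiased estimator of p**k from c successes in n trials:
    # E[comb(c, k)] = comb(n, k) * p**k, so the ratio has expectation p**k.
    return comb(c, k) / comb(n, k)

# Example: a task succeeds in 4 of 5 trials; estimated pass^2 = C(4,2)/C(5,2) = 0.6
print(pass_hat_k(n=5, c=4, k=2))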


πŸ† Benchmarks (xLAM 1.0 Series)

Berkeley Function-Calling Leaderboard (BFCL)



Webshop

LLM Name | ZS | ZST | ReAct | PlanAct | PlanReAct | BOLAA
Llama-2-70B-chat | 0.0089 | 0.0102 | 0.4273 | 0.2809 | 0.3966 | 0.4986
Vicuna-33B | 0.1527 | 0.2122 | 0.1971 | 0.3766 | 0.4032 | 0.5618
Mixtral-8x7B-Instruct-v0.1 | 0.4634 | 0.4592 | 0.5638 | 0.4738 | 0.3339 | 0.5342
GPT-3.5-Turbo | 0.4851 | 0.5058 | 0.5047 | 0.4930 | 0.5436 | 0.6354
GPT-3.5-Turbo-Instruct | 0.3785 | 0.4195 | 0.4377 | 0.3604 | 0.4851 | 0.5811
GPT-4-0613 | 0.5002 | 0.4783 | 0.4616 | 0.7950 | 0.4635 | 0.6129
xLAM-v0.1-r | 0.5201 | 0.5268 | 0.6486 | 0.6573 | 0.6611 | 0.6556

HotpotQA

LLM Name | ZS | ZST | ReAct | PlanAct | PlanReAct
Mixtral-8x7B-Instruct-v0.1 | 0.3912 | 0.3971 | 0.3714 | 0.3195 | 0.3039
GPT-3.5-Turbo | 0.4196 | 0.3937 | 0.3868 | 0.4182 | 0.3960
GPT-4-0613 | 0.5801 | 0.5709 | 0.6129 | 0.5778 | 0.5716
xLAM-v0.1-r | 0.5492 | 0.4776 | 0.5020 | 0.5583 | 0.5030

Please note: All prompts provided by AgentLite are considered "unseen prompts" for xLAM-v0.1-r, meaning the model has not been trained with data related to these prompts.

Webshop

LLM Name | Act | ReAct | BOLAA
GPT-3.5-Turbo-16k | 0.6158 | 0.6005 | 0.6652
GPT-4-0613 | 0.6989 | 0.6732 | 0.7154
xLAM-v0.1-r | 0.6563 | 0.6640 | 0.6854

HotpotQA

LLM Name | Easy F1 | Easy Accuracy | Medium F1 | Medium Accuracy | Hard F1 | Hard Accuracy
GPT-3.5-Turbo-16k-0613 | 0.410 | 0.350 | 0.330 | 0.25 | 0.283 | 0.20
GPT-4-0613 | 0.611 | 0.47 | 0.610 | 0.480 | 0.527 | 0.38
xLAM-v0.1-r | 0.532 | 0.45 | 0.547 | 0.46 | 0.455 | 0.36

ToolBench

LLM Name | Unseen Insts & Same Set | Unseen Tools & Seen Cat | Unseen Tools & Unseen Cat
ToolLLaMA V2 | 0.4385 | 0.4300 | 0.4350
GPT-3.5-Turbo-0125 | 0.5000 | 0.5150 | 0.4900
GPT-4-0125-preview | 0.5462 | 0.5450 | 0.5050
xLAM-v0.1-r | 0.5077 | 0.5650 | 0.5200

MINT-Bench

LLM Name | 1-step | 2-step | 3-step | 4-step | 5-step
GPT-4-0613 | - | - | - | - | 69.45
Claude-Instant-1 | 12.12 | 32.25 | 39.25 | 44.37 | 45.90
xLAM-v0.1-r | 4.10 | 28.50 | 36.01 | 42.66 | 43.96
Claude-2 | 26.45 | 35.49 | 36.01 | 39.76 | 39.93
Lemur-70b-Chat-v1 | 3.75 | 26.96 | 35.67 | 37.54 | 37.03
GPT-3.5-Turbo-0613 | 2.73 | 16.89 | 24.06 | 31.74 | 36.18
AgentLM-70b | 6.48 | 17.75 | 24.91 | 28.16 | 28.67
CodeLlama-34b | 0.17 | 16.21 | 23.04 | 25.94 | 28.16
Llama-2-70b-chat | 4.27 | 14.33 | 15.70 | 16.55 | 17.92

Tool-Query (AgentBoard)

LLM Name | Success Rate | Progress Rate
xLAM-v0.1-r | 0.533 | 0.766
DeepSeek-67B | 0.400 | 0.714
GPT-3.5-Turbo-0613 | 0.367 | 0.627
GPT-3.5-Turbo-16k | 0.317 | 0.591
Lemur-70B | 0.283 | 0.720
CodeLlama-13B | 0.250 | 0.525
CodeLlama-34B | 0.133 | 0.600
Mistral-7B | 0.033 | 0.510
Vicuna-13B-16K | 0.033 | 0.343
Llama-2-70B | 0.000 | 0.483

Licenses

This code is licensed under Apache 2.0. Models based on the DeepSeek model additionally require you to follow the use-based restrictions in the linked DeepSeek license. This is a research-only project.


Acknowledgement

We want to acknowledge the works that have contributed to our paper and the agent research community! If you find our work useful, please consider citing:

@article{zhang2024xlamfamilylargeaction,
  title={xLAM: A Family of Large Action Models to Empower AI Agent Systems}, 
  author={Zhang, Jianguo  and Lan, Tian  and Zhu, Ming  and Liu, Zuxin and Hoang, Thai and Kokane, Shirley and Yao, Weiran and Tan, Juntao and Prabhakar, Akshara and Chen, Haolin and Liu, Zhiwei and Feng, Yihao and Awalgaonkar, Tulika and Murthy, Rithesh and Hu, Eric and Chen, Zeyuan and Xu, Ran and Niebles, Juan Carlos and Heinecke, Shelby and Wang, Huan and Savarese, Silvio and Xiong, Caiming},
  journal={arXiv preprint arXiv:2409.03215},
  year={2024}
}
@article{zhang2025actionstudio,
  title={ActionStudio: A Lightweight Framework for Data and Training of Action Models},
  author={Zhang, Jianguo and Hoang, Thai and Zhu, Ming and Liu, Zuxin and Wang, Shiyu and Awalgaonkar, Tulika and Prabhakar, Akshara and Chen, Haolin and Yao, Weiran and Liu, Zhiwei and others},
  journal={arXiv preprint arXiv:2503.22673},
  year={2025}
}
@article{prabhakar2025apigen,
  title={APIGen-MT: Agentic PIpeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay},
  author={Prabhakar, Akshara and Liu, Zuxin and Zhu, Ming and Zhang, Jianguo and Awalgaonkar, Tulika and Wang, Shiyu and Liu, Zhiwei and Chen, Haolin and Hoang, Thai and others},
  journal={arXiv preprint arXiv:2504.03601},
  year={2025}
}
@article{liu2024apigen,
  title={APIGen: Automated PIpeline for Generating Verifiable and Diverse Function-Calling Datasets},
  author={Liu, Zuxin and Hoang, Thai and Zhang, Jianguo and Zhu, Ming and Lan, Tian and Kokane, Shirley and Tan, Juntao and Yao, Weiran and Liu, Zhiwei and Feng, Yihao and others},
  journal={arXiv preprint arXiv:2406.18518},
  year={2024}
}
@article{zhang2024agentohana,
  title={AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning},
  author={Zhang, Jianguo and Lan, Tian and Murthy, Rithesh and Liu, Zhiwei and Yao, Weiran and Tan, Juntao and Hoang, Thai and Yang, Liangwei and Feng, Yihao and Liu, Zuxin and others},
  journal={arXiv preprint arXiv:2402.15506},
  year={2024}
}
