Website: FireAct Agent

FireAct Llama-2/CodeLlama

FireAct Llama-2/CodeLlama is a collection of generative text models fine-tuned to perform ReAct-style reasoning with external search tools. Links to the individual models can be found in the Index section.
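As a rough illustration of what these models are trained to do, the ReAct format alternates model-generated Thought/Action steps with tool Observations. The sketch below is a minimal, self-contained loop; the model and search functions are stubs (assumptions for illustration), not the released checkpoints or a real search API.

```python
# Minimal sketch of a ReAct loop. The model and search tool are stubs;
# a real deployment would call a FireAct checkpoint and a search API.
import re

def stub_model(prompt):
    # Stub model: first asks to search, then finishes once it has seen an Observation.
    if "Observation" not in prompt:
        return "Thought: I need to look this up.\nAction: search[capital of France]"
    return "Thought: I have the answer.\nAction: finish[Paris]"

def stub_search(query):
    # Stub for an external search tool.
    return "Paris is the capital of France."

def react_loop(question, model=stub_model, search=stub_search, max_steps=5):
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        output = model(prompt)
        prompt += output + "\n"
        action = re.search(r"Action: (\w+)\[(.*)\]", output)
        if action is None:
            break  # model produced no parseable action
        name, arg = action.groups()
        if name == "finish":
            return arg  # final answer
        if name == "search":
            prompt += f"Observation: {search(arg)}\n"
    return None

print(react_loop("What is the capital of France?"))  # -> Paris
```

The fine-tuning data (ReAct/CoT/Reflexion trajectories) teaches the model to emit this Thought/Action format directly, so no few-shot prompt is needed at inference time.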

Foundation Model Details

Note: The foundation models, Llama-2 and CodeLlama, are developed by Meta. Please also read the guidance and licenses on their respective websites before using FireAct models.

Model Developers System 2 Research, Cambridge LTL, Monash University, Princeton PLI.

Variations FireAct models include a full fine-tuned Llama-2-7B model, as well as LoRA fine-tuned Llama-2-[7B,13B] and CodeLlama-[7B,13B,34B] models. All released models are fine-tuned on multi-task (HotpotQA/StrategyQA/MMLU) and multi-type (ReAct/CoT/Reflexion) data.

Input Models input text only.

Output Models generate text only.

Index

Full Fine-tuned Model

FireAct Llama-2:

LoRA Fine-tuned Models

FireAct Llama-2:

FireAct CodeLlama:

LoRA Training procedure

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: True
  • load_in_4bit: False
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
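
For reference, the settings above correspond to a `BitsAndBytesConfig` passed to `transformers` at load time. The sketch below shows how the base model could be loaded with this config and the LoRA adapter attached via PEFT; the adapter path is a placeholder, not a real repository name.

```python
# Sketch: reproducing the training-time quantization config with transformers + peft.
# The adapter path below is a placeholder (assumption), not a released checkpoint id.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Load the 8-bit base model, then attach a FireAct LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # gated base model; requires access approval
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "path/to/fireact-lora-adapter")  # placeholder
```

Note that the 4-bit fields (`bnb_4bit_*`) are present in the config but inactive, since `load_in_4bit` is False and 8-bit loading is used.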

Framework versions

  • PEFT 0.4.0