# Open R1 *A fully open reproduction of DeepSeek-R1. This repo is a work in progress, let's build it together!* **Table of Contents** 1. [Overview](#overview) 2. [Plan of attack](#plan-of-attack) 3. [Installation](#installation) 4. [Training models](#training-models) - [SFT](#sft) - [GRPO](#grpo) 5. [Evaluating models](#evaluating-models) 6. [Reproducing Deepseek's evaluation results](#reproducing-deepseeks-evaluation-results) 7. [Data generation](#data-generation) - [Generate data from a smol distilled R1 model](#generate-data-from-a-smol-distilled-r1-model) - [Generate data from DeepSeek-R1](#generate-data-from-deepseek-r1) 8. [Contributing](#contributing) ## Overview The goal of this repo is to build the missing pieces of the R1 pipeline such that everybody can reproduce and build on top of it. The project is simple by design and mostly consists of: - `src/open_r1`: contains the scripts to train and evaluate models as well as generate synthetic data: - `grpo.py`: trains a model with GRPO on a given dataset. - `sft.py`: performs a simple SFT of a model on a dataset. - `evaluate.py`: evaluates a model on the R1 benchmarks. - `generate.py`: generates synthetic data from a model using [Distilabel](https://github.com/argilla-io/distilabel). - `Makefile`: contains easy-to-run commands for each step in the R1 pipeline leveraging the scripts above. ### Plan of attack We will use the DeepSeek-R1 [tech report](https://github.com/deepseek-ai/DeepSeek-R1) as a guide, which can roughly be broken down into three main steps: * Step 1: replicate the R1-Distill models by distilling a high-quality corpus from DeepSeek-R1. * Step 2: replicate the pure RL pipeline that DeepSeek used to create R1-Zero. This will likely involve curating new, large-scale datasets for math, reasoning, and code. * Step 3: show we can go from base model to RL-tuned via multi-stage training. <center> <img src="assets/plan-of-attack.png" width="500"> </center> ## Installation > [!CAUTION] > Libraries rely on CUDA 12.4. If you see errors related to segmentation faults, double check the version your system is running with `nvcc --version`. To run the code in this project, first, create a Python virtual environment using e.g. `uv`. To install `uv`, follow the [UV Installation Guide](https://docs.astral.sh/uv/getting-started/installation/). ```shell uv venv openr1 --python 3.11 && source openr1/bin/activate && uv pip install --upgrade pip --link-mode=copy ``` Next, install vLLM: ```shell uv pip install vllm==0.7.2 --link-mode=copy ``` This will also install PyTorch `v2.5.1` and it is **very important** to use this version since the vLLM binaries are compiled for it. You can then install the remaining dependencies for your specific use case via `pip install -e .[LIST OF MODES]`. For most contributors, we recommend: ```shell GIT_LFS_SKIP_SMUDGE=1 uv pip install -e ".[dev]" --link-mode=copy ``` Next, log into your Hugging Face and Weights and Biases accounts as follows: ```shell huggingface-cli login wandb login ``` Finally, check whether your system has Git LFS installed so that you can load and push models/datasets to the Hugging Face Hub: ```shell git-lfs --version ``` If it isn't installed, run: ```shell sudo apt-get install git-lfs ``` ## Training models We support training models with either DDP or DeepSpeed (ZeRO-2 and ZeRO-3). 
For example, to run SFT on a dataset distilled from DeepSeek-R1 with reasoning traces such as [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k), run: ```shell # Train via command line accelerate launch --config_file=recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \ --model_name_or_path Qwen/Qwen2.5-1.5B-Instruct \ --dataset_name HuggingFaceH4/Bespoke-Stratos-17k \ --learning_rate 2.0e-5 \ --num_train_epochs 1 \ --packing \ --max_seq_length 4096 \ --per_device_train_batch_size 2 \ --gradient_accumulation_steps 8 \ --gradient_checkpointing \ --bf16 \ --output_dir data/Qwen2.5-1.5B-Open-R1-Distill # Train via YAML config accelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \ --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml ``` Currently, the following tasks are supported: * Supervised Fine-Tuning `sft` * Group Relative Policy Optimization `grpo` > [!TIP] > If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant. By default, these scripts will push each model to your Hugging Face Hub username, i.e. `{username}/{model_name}-{task}`. You can override the parameters in each YAML config by appending them to the command as follows: ```shell # Change batch size, number of epochs etc accelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \ --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml --per_device_train_batch_size=1 --num_train_epochs=5 ``` If you also wish to override the Weights and Biases default settings, you can do so as follows: ```shell accelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \ --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml --wandb_entity huggingface --wandb_project open-r1 --run_name Qwen2.5-1.5B-GRPO ``` > [!NOTE] > The training commands below are configured for a node of 8 x H100s (80GB). For different hardware and topologies, you may need to tune the batch size and number of gradient accumulation steps. ### SFT To run SFT on a dataset distilled from DeepSeek-R1 with reasoning traces such as [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k), run: ```shell ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero3.yaml \ src/open_r1/sft.py \ --config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml ``` ### GRPO To train via the GRPO trainer, we use one GPU to run vLLM for faster generation and the remaining GPUs for training. For example, one a node with 8 GPUs, use the `recipes/accelerate_configs/zero2.yaml` config and then overwrite `num_processes` to run on 7 devices: ```shell ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \ --num_processes=7 src/open_r1/grpo.py \ --config recipes/Qwen2.5-1.5B-Instruct/grpo/config_demo.yaml ``` We provide a minimal reproducible experiment using GRPO for mathematical reasoning, referencing the approach from [SimpleRL-Reason](https://hkust-nlp.notion.site/simplerl-reason) which uses a 7B model trained on 8K examples. 
Running this on 8 H100 80G GPU takes about 3 hours: ```shell ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \ --num_processes=7 src/open_r1/grpo.py \ --config recipes/Qwen2.5-Math-7B/grpo/config_simple_rl.yaml ``` Our final [model](https://huggingface.co/Dongwei/Qwen-2.5-7B_Base_Math_smalllr), while using different learning rates, loss functions and reward structures, achieves 69.4% accuracy on MATH-500, demonstrating a 17%+ improvement over the base model. ### Launching jobs on a Slurm cluster If you have access to a Slurm cluster, we provide a `slurm/train.slurm` script that will automatically queue training jobs for you. Here's how you can use it: ```shell sbatch --job-name=open_r1 --nodes=1 slurm/train.slurm {model_name} {task} {config_suffix} {accelerator} ``` Here `{model_name}` and `{task}` are defined as above, while `{config_suffix}` refers to the specific config and `{accelerator}` refers to the choice of 🤗 Accelerate config in `recipes/accelerate_configs`. If you wish to override the default config parameters, you can provide them by appending a space-separated string like `'--arg1=value1 --arg2=value2'`. Here's a concrete example to run SFT on 1 node of 8 GPUs: ```shell # Launch on Slurm and override default hyperparameters sbatch --job-name=open_r1 --nodes=1 slurm/train.slurm Qwen2.5-1.5B-Instruct sft demo zero3 '--per_device_train_batch_size=1 --num_train_epochs=5' ``` You can scale the number of nodes by increasing the `--nodes` flag. > [!NOTE] > The configuration in `slurm/train.slurm` is optimised for the Hugging Face Compute Cluster and may require tweaking to be adapted to your own compute nodes. ## Evaluating models We use `lighteval` to evaluate models, with custom tasks defined in `src/open_r1/evaluate.py`. For models which fit on a single GPU, run: ```shell MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8" OUTPUT_DIR=data/evals/$MODEL # AIME 2024 TASK=aime24 lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR # MATH-500 TASK=math_500 lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR # GPQA Diamond TASK=gpqa:diamond lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR ``` > [!IMPORTANT] > You must set `max_model_length=32768` in the `vllm` command to align with the `generation_size` we define per eval. Without this, `lighteval` will throw an error. 
To increase throughput across multiple GPUs, use _data parallel_ as follows: ```shell NUM_GPUS=8 MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,data_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilisation=0.8" TASK=aime24 OUTPUT_DIR=data/evals/$MODEL lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR ``` For large models which require sharding across GPUs, use _tensor parallel_ and run: ```shell NUM_GPUS=8 MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,tensor_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilisation=0.8" TASK=aime24 OUTPUT_DIR=data/evals/$MODEL export VLLM_WORKER_MULTIPROC_METHOD=spawn lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR ``` You can also launch an evaluation with `make evaluate`, specifying the model, task, and optionally the parallelism technique and number of GPUs. To evaluate on a single GPU: ```shell make evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 ``` To use Data Parallelism: ```shell make evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 PARALLEL=data NUM_GPUS=8 ``` To use Tensor Parallelism: ```shell make evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 PARALLEL=tensor NUM_GPUS=8 ``` ## Reproducing Deepseek's evaluation results > [!NOTE] > The DeepSeek-R1 paper uses sampling with a temperature of 0.6, a top-p value of 0.95, and 64 responses per query to estimate `pass@1`. Below, we report the results from greedy decoding, which likely explains the small 1-3σ discrepancies between our results and theirs. 
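For context on how `pass@1` is estimated from multiple samples: it is the average, over queries, of the fraction of sampled responses that are correct, which is the k = 1 case of the standard unbiased pass@k estimator. A small illustrative sketch (our own, not part of this repo's evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n responses sampled per query, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 64 sampled responses per query, pass@1 reduces to the fraction of
# correct samples for that query, averaged over all queries.
print(pass_at_k(n=64, c=40, k=1))  # 0.625
```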
### MATH-500 We are able to reproduce Deepseek's reported results on the MATH-500 benchmark within ~1-3 standard deviations: | Model | MATH-500 (🤗 LightEval) | MATH-500 (DeepSeek Reported) | |:------------------------------|:-----------------------:|:----------------------------:| | DeepSeek-R1-Distill-Qwen-1.5B | 81.2 | 83.9 | | DeepSeek-R1-Distill-Qwen-7B | 91.8 | 92.8 | | DeepSeek-R1-Distill-Qwen-14B | 94.2 | 93.9 | | DeepSeek-R1-Distill-Qwen-32B | 95.0 | 94.3 | | DeepSeek-R1-Distill-Llama-8B | 85.4 | 89.1 | | DeepSeek-R1-Distill-Llama-70B | 93.4 | 94.5 | To reproduce these results use the following command: ```shell NUM_GPUS=1 # Set to 8 for 32B and 70B models MODEL=deepseek-ai/{model_name} MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS" OUTPUT_DIR=data/evals/$MODEL lighteval vllm $MODEL_ARGS "custom|math_500|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR ``` Alternatively, you can launch Slurm jobs as follows: ```shell python scripts/run_benchmarks.py --model-id={model_id} --benchmarks math_500 ``` ### GPQA Diamond We are able to reproduce Deepseek's reported results on the GPQA Diamond benchmark within ~1-3 standard deviations: | Model | GPQA Diamond (🤗 LightEval) | GPQA Diamond (DeepSeek Reported) | |:------------------------------|:---------------------------:|:--------------------------------:| | DeepSeek-R1-Distill-Qwen-1.5B | 33.3 | 33.8 | | DeepSeek-R1-Distill-Qwen-7B | 48.4 | 49.1 | | DeepSeek-R1-Distill-Qwen-14B | 55.6 | 59.1 | | DeepSeek-R1-Distill-Qwen-32B | 58.6 | 62.1 | | DeepSeek-R1-Distill-Llama-8B | 51.0 | 49.0 | | DeepSeek-R1-Distill-Llama-70B | 65.2 | 65.2 | To reproduce these results use the following command: ```shell NUM_GPUS=1 # Set to 8 for 32B and 70B models MODEL=deepseek-ai/{model_name} MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS" OUTPUT_DIR=data/evals/$MODEL lighteval vllm $MODEL_ARGS "custom|gpqa:diamond|0|0" \ --custom-tasks src/open_r1/evaluate.py \ --use-chat-template \ --output-dir $OUTPUT_DIR ``` ```shell python scripts/run_benchmarks.py --model-id={model_id} --benchmarks gpqa ``` ## Data generation ### Generate data from a smol distilled R1 model The following example can be run in 1xH100. First install the following dependencies: ```shell uv pip install "distilabel[vllm]>=1.5.2" ``` Now save the following snippet into a file named `pipeline.py` and run it with `python pipeline.py`. It will generate 4 outputs for each of the 10 examples (change the username for the repository to your org/user name): ```python from datasets import load_dataset from distilabel.models import vLLM from distilabel.pipeline import Pipeline from distilabel.steps.tasks import TextGeneration prompt_template = """\ You will be given a problem. 
Please reason step by step, and put your final answer within \boxed{}: {{ instruction }}""" dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train").select(range(10)) model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B" # Exchange with another smol distilled r1 with Pipeline( name="distill-qwen-7b-r1", description="A pipeline to generate data from a distilled r1 model", ) as pipeline: llm = vLLM( model=model_id, tokenizer=model_id, extra_kwargs={ "tensor_parallel_size": 1, "max_model_len": 8192, }, generation_kwargs={ "temperature": 0.6, "max_new_tokens": 8192, }, ) prompt_column = "problem" text_generation = TextGeneration( llm=llm, template=prompt_template, num_generations=4, input_mappings={"instruction": prompt_column} if prompt_column is not None else {} ) if __name__ == "__main__": distiset = pipeline.run(dataset=dataset) distiset.push_to_hub(repo_id="username/numina-deepseek-r1-qwen-7b") ``` Take a look at the sample dataset at [HuggingFaceH4/numina-deepseek-r1-qwen-7b](https://huggingface.co/datasets/HuggingFaceH4/numina-deepseek-r1-qwen-7b). ### Generate data from DeepSeek-R1 To run the bigger DeepSeek-R1, we used 2 nodes, each with 8×H100 GPUs using the slurm file present in this repo at `slurm/generate.slurm`. First, install the dependencies: (for now we need to install the vllm dev wheel that [fixes the R1 cuda graph capture](https://github.com/vllm-project/vllm/commits/221d388cc5a836fa189305785ed7e887cea8b510/csrc/moe/moe_align_sum_kernels.cu)) ```shell pip install https://wheels.vllm.ai/221d388cc5a836fa189305785ed7e887cea8b510/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu121 uv pip install "distilabel[vllm,ray,openai]>=1.5.2" ``` And then run the following command: ```shell sbatch slurm/generate.slurm \ --hf-dataset AI-MO/NuminaMath-TIR \ --temperature 0.6 \ --prompt-column problem \ --model deepseek-ai/DeepSeek-R1 \ --hf-output-dataset username/r1-dataset ``` > [!NOTE] > While the job is running, you can setup an SSH tunnel through the cluster login node to access the Ray dashboard from your computer running `ssh -L 8265:ray_ip_head_node:8265 <login_node>`, then browsing `http://localhost:8265` ## Contributing Contributions are welcome. Please refer to https://github.com/huggingface/open-r1/issues/23.
{ "source": "huggingface/open-r1", "title": "README.md", "url": "https://github.com/huggingface/open-r1/blob/main/README.md", "date": "2025-01-24T15:44:11", "stars": 19596, "description": "Fully open reproduction of DeepSeek-R1", "file_size": 17501 }
**TODO:** we will add more recipes in the future, in the same spirit as the alignment-handbook; that is the purpose of the recipes in this project.
{ "source": "huggingface/open-r1", "title": "recipes/README.md", "url": "https://github.com/huggingface/open-r1/blob/main/recipes/README.md", "date": "2025-01-24T15:44:11", "stars": 19596, "description": "Fully open reproduction of DeepSeek-R1", "file_size": 134 }
## Serving DeepSeek-R1 on 2x8 H100 SLURM nodes with SGLang

1. Set up the environment (adjust for your CUDA version):

```bash
conda create -n sglang124 python=3.11
conda activate sglang124

pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124

pip install sgl-kernel --force-reinstall --no-deps
pip install "sglang[all]>=0.4.2.post4" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer/
```

2. Run the server and wait for the model to load:

```bash
sbatch slurm/serve_r1.slurm -m "/fsx/deepseek-r1-checkpoint" -e "sglang124"
```

3. Run the data generation script:

```bash
python scripts/generate_reasoning.py \
    --dataset-name "AI-MO/NuminaMath-1.5" \
    --output-file "numinamath_r1_generations.jsonl" \
    --prompt-column "problem" \
    --uuid-column "problem" \
    --api-addr "<SGLANG_SERVER_ADDRESS>:39877" \
    --num-generations 2 \
    --max-tokens 16384 \
    --max-concurrent 200
```
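The generation script sends concurrent requests to the server address passed via `--api-addr`. As a rough, hand-written sketch of what a single request looks like (this is not the repo's script; it assumes SGLang's OpenAI-compatible endpoint, and the address and model name below are placeholders):

```python
from openai import OpenAI

# Point the OpenAI client at the SGLang server (placeholder address;
# substitute <SGLANG_SERVER_ADDRESS>:39877 from the command above).
client = OpenAI(base_url="http://localhost:39877/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="default",  # placeholder model name
    messages=[{"role": "user", "content": "Solve 2x + 3 = 11 step by step."}],
    temperature=0.6,
    max_tokens=16384,  # mirrors --max-tokens
    n=2,               # mirrors --num-generations
)
for choice in response.choices:
    print(choice.message.content)
```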
{ "source": "huggingface/open-r1", "title": "slurm/README.md", "url": "https://github.com/huggingface/open-r1/blob/main/slurm/README.md", "date": "2025-01-24T15:44:11", "stars": 19596, "description": "Fully open reproduction of DeepSeek-R1", "file_size": 937 }
# RagaAI Catalyst&nbsp; ![GitHub release (latest by date)](https://img.shields.io/github/v/release/raga-ai-hub/ragaai-catalyst) ![GitHub stars](https://img.shields.io/github/stars/raga-ai-hub/ragaai-catalyst?style=social) ![Issues](https://img.shields.io/github/issues/raga-ai-hub/ragaai-catalyst) RagaAI Catalyst is a comprehensive platform designed to enhance the management and optimization of LLM projects. It offers a wide range of features, including project management, dataset management, evaluation management, trace management, prompt management, synthetic data generation, and guardrail management. These functionalities enable you to efficiently evaluate, and safeguard your LLM applications. ## Table of Contents - [RagaAI Catalyst](#ragaai-catalyst) - [Table of Contents](#table-of-contents) - [Installation](#installation) - [Configuration](#configuration) - [Usage](#usage) - [Project Management](#project-management) - [Dataset Management](#dataset-management) - [Evaluation Management](#evaluation) - [Trace Management](#trace-management) - [Prompt Management](#prompt-management) - [Synthetic Data Generation](#synthetic-data-generation) - [Guardrail Management](#guardrail-management) - [Agentic Tracing](#agentic-tracing) ## Installation To install RagaAI Catalyst, you can use pip: ```bash pip install ragaai-catalyst ``` ## Configuration Before using RagaAI Catalyst, you need to set up your credentials. You can do this by setting environment variables or passing them directly to the `RagaAICatalyst` class: ```python from ragaai_catalyst import RagaAICatalyst catalyst = RagaAICatalyst( access_key="YOUR_ACCESS_KEY", secret_key="YOUR_SECRET_KEY", base_url="BASE_URL" ) ``` **Note**: Authetication to RagaAICatalyst is necessary to perform any operations below ## Usage ### Project Management Create and manage projects using RagaAI Catalyst: ```python # Create a project project = catalyst.create_project( project_name="Test-RAG-App-1", usecase="Chatbot" ) # Get project usecases catalyst.project_use_cases() # List projects projects = catalyst.list_projects() print(projects) ``` ### Dataset Management Manage datasets efficiently for your projects: ```py from ragaai_catalyst import Dataset # Initialize Dataset management for a specific project dataset_manager = Dataset(project_name="project_name") # List existing datasets datasets = dataset_manager.list_datasets() print("Existing Datasets:", datasets) # Create a dataset from CSV dataset_manager.create_from_csv( csv_path='path/to/your.csv', dataset_name='MyDataset', schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'} ) # Get project schema mapping dataset_manager.get_schema_mapping() ``` For more detailed information on Dataset Management, including CSV schema handling and advanced usage, please refer to the [Dataset Management documentation](docs/dataset_management.md). 
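To make the CSV flow above concrete, here is a small end-to-end sketch (our own illustration: the CSV columns and the `prompt`/`response` schema elements are placeholders; check `get_schema_mapping()` for the elements your project actually supports):

```python
import pandas as pd
from ragaai_catalyst import Dataset

dataset_manager = Dataset(project_name="Test-RAG-App-1")

# Write a tiny CSV locally (column names are placeholders).
pd.DataFrame(
    {
        "query": ["What is RagaAI Catalyst?"],
        "response": ["A platform for managing and evaluating LLM projects."],
    }
).to_csv("sample.csv", index=False)

# Map CSV columns to schema elements and register the dataset.
dataset_manager.create_from_csv(
    csv_path="sample.csv",
    dataset_name="SampleDataset",
    schema_mapping={"query": "prompt", "response": "response"},
)
```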
### Evaluation Create and manage metric evaluation of your RAG application: ```python from ragaai_catalyst import Evaluation # Create an experiment evaluation = Evaluation( project_name="Test-RAG-App-1", dataset_name="MyDataset", ) # Get list of available metrics evaluation.list_metrics() # Add metrics to the experiment schema_mapping={ 'Query': 'prompt', 'response': 'response', 'Context': 'context', 'expectedResponse': 'expected_response' } # Add single metric evaluation.add_metrics( metrics=[ {"name": "Faithfulness", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.232323}}, "column_name": "Faithfulness_v1", "schema_mapping": schema_mapping}, ] ) # Add multiple metrics evaluation.add_metrics( metrics=[ {"name": "Faithfulness", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.323}}, "column_name": "Faithfulness_gte", "schema_mapping": schema_mapping}, {"name": "Hallucination", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"lte": 0.323}}, "column_name": "Hallucination_lte", "schema_mapping": schema_mapping}, {"name": "Hallucination", "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"eq": 0.323}}, "column_name": "Hallucination_eq", "schema_mapping": schema_mapping}, ] ) # Get the status of the experiment status = evaluation.get_status() print("Experiment Status:", status) # Get the results of the experiment results = evaluation.get_results() print("Experiment Results:", results) ``` ### Trace Management Record and analyze traces of your RAG application: ```python from ragaai_catalyst import Tracer # Start a trace recording tracer = Tracer( project_name="Test-RAG-App-1", dataset_name="tracer_dataset_name", metadata={"key1": "value1", "key2": "value2"}, tracer_type="langchain", pipeline={ "llm_model": "gpt-4o-mini", "vector_store": "faiss", "embed_model": "text-embedding-ada-002", } ).start() # Your code here # Stop the trace recording tracer.stop() # Get upload status tracer.get_upload_status() ``` ### Prompt Management Manage and use prompts efficiently in your projects: ```py from ragaai_catalyst import PromptManager # Initialize PromptManager prompt_manager = PromptManager(project_name="Test-RAG-App-1") # List available prompts prompts = prompt_manager.list_prompts() print("Available prompts:", prompts) # Get default prompt by prompt_name prompt_name = "your_prompt_name" prompt = prompt_manager.get_prompt(prompt_name) # Get specific version of prompt by prompt_name and version prompt_name = "your_prompt_name" version = "v1" prompt = prompt_manager.get_prompt(prompt_name,version) # Get variables in a prompt variable = prompt.get_variables() print("variable:",variable) # Get prompt content prompt_content = prompt.get_prompt_content() print("prompt_content:", prompt_content) # Compile the prompt with variables compiled_prompt = prompt.compile(query="What's the weather?", context="sunny", llm_response="It's sunny today") print("Compiled prompt:", compiled_prompt) # implement compiled_prompt with openai import openai def get_openai_response(prompt): client = openai.OpenAI() response = client.chat.completions.create( model="gpt-4o-mini", messages=prompt ) return response.choices[0].message.content openai_response = get_openai_response(compiled_prompt) print("openai_response:", openai_response) # implement compiled_prompt with litellm import litellm def get_litellm_response(prompt): response = litellm.completion( model="gpt-4o-mini", messages=prompt ) return 
response.choices[0].message.content litellm_response = get_litellm_response(compiled_prompt) print("litellm_response:", litellm_response) ``` For more detailed information on Prompt Management, please refer to the [Prompt Management documentation](docs/prompt_management.md). ### Synthetic Data Generation ```py from ragaai_catalyst import SyntheticDataGeneration # Initialize Synthetic Data Generation sdg = SyntheticDataGeneration() # Process your file text = sdg.process_document(input_data="file_path") # Generate results result = sdg.generate_qna(text, question_type ='complex',model_config={"provider":"openai","model":"openai/gpt-3.5-turbo"},n=5) print(result.head()) # Get supported Q&A types sdg.get_supported_qna() # Get supported providers sdg.get_supported_providers() ``` ### Guardrail Management ```py from ragaai_catalyst import GuardrailsManager # Initialize Guardrails Manager gdm = GuardrailsManager(project_name=project_name) # Get list of Guardrails available guardrails_list = gdm.list_guardrails() print('guardrails_list:', guardrails_list) # Get list of fail condition for guardrails fail_conditions = gdm.list_fail_condition() print('fail_conditions;', fail_conditions) #Get list of deployment ids deployment_list = gdm.list_deployment_ids() print('deployment_list:', deployment_list) # Get specific deployment id with guardrails information deployment_id_detail = gdm.get_deployment(17) print('deployment_id_detail:', deployment_id_detail) # Add guardrails to a deployment id guardrails_config = {"guardrailFailConditions": ["FAIL"], "deploymentFailCondition": "ALL_FAIL", "alternateResponse": "Your alternate response"} guardrails = [ { "displayName": "Response_Evaluator", "name": "Response Evaluator", "config":{ "mappings": [{ "schemaName": "Text", "variableName": "Response" }], "params": { "isActive": {"value": False}, "isHighRisk": {"value": True}, "threshold": {"eq": 0}, "competitors": {"value": ["Google","Amazon"]} } } }, { "displayName": "Regex_Check", "name": "Regex Check", "config":{ "mappings": [{ "schemaName": "Text", "variableName": "Response" }], "params":{ "isActive": {"value": False}, "isHighRisk": {"value": True}, "threshold": {"lt1": 1} } } } ] gdm.add_guardrails(deployment_id, guardrails, guardrails_config) # Import GuardExecutor from ragaai_catalyst import GuardExecutor # Initialise GuardExecutor with required params and Evaluate executor = GuardExecutor(deployment_id,gdm,field_map={'context':'document'}) message={'role':'user', 'content':'What is the capital of France' } prompt_params={'document':' France'} model_params = {'temperature':.7,'model':'gpt-4o-mini'} llm_caller = 'litellm' executor([message],prompt_params,model_params,llm_caller) ``` ### Agentic Tracing The Agentic Tracing module provides comprehensive monitoring and analysis capabilities for AI agent systems. It helps track various aspects of agent behavior including: - LLM interactions and token usage - Tool utilization and execution patterns - Network activities and API calls - User interactions and feedback - Agent decision-making processes The module includes utilities for cost tracking, performance monitoring, and debugging agent behavior. This helps in understanding and optimizing AI agent performance while maintaining transparency in agent operations. 
```python
from ragaai_catalyst import AgenticTracer

# Initialize tracer
tracer = AgenticTracer(
    project_name="project_name",
    dataset_name="dataset_name",
    tracer_type="agentic",
)

# Define tracers by decorating your own functions
@tracer.trace_agents("agent_name")  # Agent definition
def run_agent(query):
    return call_llm(query)

@tracer.trace_llm("llm_name")  # LLM definition
def call_llm(prompt):
    ...

@tracer.trace_tool("tool_name")  # Tool definition
def search_tool(query):
    ...

# Perform tracing
with tracer:
    # Agent execution code
    run_agent("example query")
```
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": "README.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/README.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 10938 }
# Pull Request Template ## Description [Provide a brief description of the changes in this PR] ## Related Issue [If applicable, reference the GitHub issue this PR addresses] ## Type of Change Please delete options that are not relevant. - [ ] Bug fix (non-breaking change which fixes an issue) - [ ] New feature (non-breaking change which adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) - [ ] This change requires a documentation update ## How Has This Been Tested? [Describe the tests that you ran to verify your changes. Provide instructions so we can reproduce.] ## Checklist: - [ ] My code follows the style guidelines of this project - [ ] I have performed a self-review of my own code - [ ] I have commented my code, particularly in hard-to-understand areas - [ ] I have made corresponding changes to the documentation - [ ] My changes generate no new warnings - [ ] I have added tests that prove my fix is effective or that my feature works - [ ] New and existing unit tests pass locally with my changes - [ ] Any dependent changes have been merged and published in downstream modules ## Additional Context [Add any other context or screenshots about the pull request here.] ## Impact on Roadmap [If applicable, describe how this PR impacts or aligns with the project roadmap]
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": ".github/PULL_REQUEST_TEMPLATE.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/.github/PULL_REQUEST_TEMPLATE.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 1365 }
## Dataset Management Create and manage datasets easily for your projects using the `ragaai_catalyst` library. This guide provides steps to list, create, and manage datasets efficiently. #### - Initialize Dataset Management To start managing datasets for a specific project, initialize the `Dataset` class with your project name. ```python from ragaai_catalyst import Dataset # Initialize Dataset management for a specific project dataset_manager = Dataset(project_name="project_name") # List existing datasets datasets = dataset_manager.list_datasets() print("Existing Datasets:", datasets) ``` #### 1. Create a New Dataset from Trace Create a dataset by applying filters to trace data. Below is an example of creating a dataset with specific criteria. ```python dataset_manager.create_from_trace( dataset_name='Test-dataset-1', filter_list=[ { "name": "llm_model", "values": ["gpt-3.5-turbo", "gpt-4"] }, { "name": "prompt_length", "lte": 27, "gte": 23 } ] ) ``` #### 2. Create a New Dataset from CSV You can create a new dataset by uploading a CSV file and mapping its columns to the required schema elements. ##### a. Retrieve CSV Schema Elements with `get_csv_schema()` This function retrieves the valid schema elements that the CSV column names must map to. It helps ensure that your CSV column names align correctly with the expected schema. ###### Returns - A dictionary containing schema information: - `success`: A Boolean indicating whether the schema elements were fetched successfully. - `data['schemaElements']`: A list of valid schema column names. ```python schemaElements = dataset_manager.get_csv_schema()['data']['schemaElements'] print('Supported column names: ', schemaElements) ``` ##### b. Create a Dataset from CSV with `create_from_csv()` Uploads the CSV file to the server, performs schema mapping, and creates a new dataset. ###### Parameters - `csv_path` (str): Path to the CSV file. - `dataset_name` (str): The name you want to assign to the new dataset created from the CSV. - `schema_mapping` (dict): A dictionary that maps CSV columns to schema elements in the format `{csv_column: schema_element}`. Example usage: ```python dataset_manager.create_from_csv( csv_path='path/to/your.csv', dataset_name='MyDataset', schema_mapping={'column1': 'schema_element1', 'column2': 'schema_element2'} ) ``` #### Understanding `schema_mapping` The `schema_mapping` parameter is crucial when creating datasets from a CSV file. It ensures that the data in your CSV file correctly maps to the expected schema format required by the system. ##### Explanation of `schema_mapping` - **Keys**: The keys in the `schema_mapping` dictionary represent the column names in your CSV file. - **Values**: The values correspond to the expected schema elements that the columns should map to. These schema elements define how the data is stored and interpreted in the dataset. ##### Example of `schema_mapping` Suppose your CSV file has columns `user_id` and `response_time`. If the valid schema elements for these are `user_identifier` and `response_duration`, your `schema_mapping` would look like this: ```python schema_mapping = { 'user_id': 'user_identifier', 'response_time': 'response_duration' } ``` This mapping ensures that when the CSV is uploaded, the data in `user_id` is understood as `user_identifier`, and `response_time` is understood as `response_duration`, aligning the data with the system's expectations.
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": "docs/dataset_management.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/docs/dataset_management.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 3585 }
# Prompt Management The Prompt Management feature in RagaAI Catalyst allows you to efficiently manage, retrieve, and use prompts in your projects. ## Table of Contents 1. [Library Detail](#library-detail) 2. [Error Handling](#error-handling) 3. [FAQs](#faqs) ## Library Detail ### 1. Initialize RagaAICatalyst and PromptManager First, set up your RagaAICatalyst instance and create a PromptManager for your project: ```python from ragaai_catalyst import RagaAICatalyst from ragaai_catalyst.prompt_manager import PromptManager catalyst = RagaAICatalyst( access_key="your_access_key", secret_key="your_secret_key", base_url="https://your-api-base-url.com/api" ) ``` Create a PromptManager for your project: ```python project_name = "your-project-name" prompt_manager = PromptManager(project_name) ``` ### 2. List Available Prompts ```python prompts = prompt_manager.list_prompts() print("Available prompts:", prompts) ``` ### 3. List Prompt Versions ```python prompt_name = "your_prompt_name" versions = prompt_manager.list_prompt_versions(prompt_name) ``` ### 4. Get a Prompt Object Retrieve a prompt object by name: ```python prompt_name = "your_prompt_name" prompt = prompt_manager.get_prompt(prompt_name) ``` Retrieve a specific prompt object by name and version: ```python prompt_name = "your_prompt_name" version = "your_version" prompt = prompt_manager.get_prompt(prompt_name, version) ``` ### 5. Get Prompt Variables ```python prompt_variables = prompt.get_variables() print("prompt_variables: ",prompt_variables) ``` ### 6. Compile Prompt Once you have a prompt, you can compile it with variables: ```python compiled_prompt = prompt.compile(query="What's the weather?", context="sunny", llm_response="It's sunny today") print("Compiled prompt:", compiled_prompt) ``` ### 7. Get Parameters ```python parameters = prompt.get_parameters() print("parameters: ",parameters) ``` ## Error Handling ### 1. Project Not Found If the project you are trying to access does not exist, the `PromptManager` will raise a `ValueError`: ```python prompt_manager = PromptManager("non_existent_project") # Error: Project not found. Please enter a valid project name ``` ### 2. Prompt Not Found If the prompt you are trying to access does not exist, the `get_prompt` method will raise a `ValueError`: ```python prompt = prompt_manager.get_prompt("non_existent_prompt") # Error: Prompt not found. Please enter a valid Prompt name ``` ### 3. Prompt Version Not Found If the prompt version you are trying to access does not exist, the `get_prompt` method will raise a `ValueError`: ```python prompt = prompt_manager.get_prompt("your_prompt_name", "non_existent_version") # Error: Version not found. Please enter a valid version name ``` ### 4. Missing Variables in Compile If the variables you are trying to compile the prompt with are not found, the `compile` method will raise a `ValueError`: ```python prompt = prompt_manager.get_prompt("your_prompt_name", "your_version") prompt.get_variables() compiled_prompt = prompt.compile(query="What's the weather?") # Error: Missing variable(s): context, llm_response ``` ### 5. 
Extra Variables in Compile If the variables you are trying to compile the prompt with are not found, the `compile` method will raise a `ValueError`: ```python prompt = prompt_manager.get_prompt("your_prompt_name", "your_version") compiled_prompt = prompt.compile(query="What's the weather?", context="sunny", llm_response="It's sunny today", expected_response="The weather is sunny") # Error: Extra variable(s) provided: expected_response ``` ### 6. Types of variable not str If the variables you are trying to compile the prompt with are not 'str', the `compile` method will raise a `ValueError`: ```python prompt = prompt_manager.get_prompt("your_prompt_name", "your_version") compiled_prompt = prompt.compile(query=True, context="sunny", llm_response="It's sunny today") # Error: Value for variable 'query' must be a string, not bool ``` ## FAQs ### 1. How do I get the list of prompts in a project? You can get the list of prompts in a project by using the `list_prompts()` method in the `PromptManager`. This method allows you to retrieve the list of prompts in a project. ### 2. How do I get the versions of a prompt? You can get the versions of a prompt by using the `list_prompt_versions(prompt_name)` method in the `PromptManager`. This method allows you to retrieve the versions of a prompt. ### 3. How do I get the default version of a prompt? You can get the default version of a prompt by using the `get_prompt(prompt_name)` method in the `PromptManager`. This method allows you to retrieve the default version of a prompt. Then you can use `compile` method to get the prompt with default variables. ### 4. How do I get the specific versions of a prompt? You can get the versions of a prompt by using the `get_prompt(prompt_name, version)` method in the `PromptManager`. This method allows you to retrieve the versions of a prompt. Then you can use `compile` method to get the prompt with default variables. ### 5. How do I get the variables of a prompt? You can get the variables of a prompt by using the `get_variables()` method. This method allows you to retrieve the variables of a prompt. ### 6. How do I get my parameters? You can get the parameters of a prompt by using the `get_parameters()` method. This method allows you to retrieve the parameters of a prompt.
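Putting the FAQ answers together, a typical retrieve-and-compile flow looks roughly like this (the prompt name, version, and variable values are placeholders):

```python
from ragaai_catalyst import PromptManager

prompt_manager = PromptManager(project_name="your-project-name")

# Pick a prompt and a specific version
versions = prompt_manager.list_prompt_versions("your_prompt_name")
prompt = prompt_manager.get_prompt("your_prompt_name", "v1")

# Inspect what the prompt expects, then compile it
print("Variables:", prompt.get_variables())
print("Parameters:", prompt.get_parameters())

compiled_prompt = prompt.compile(
    query="What's the weather?",
    context="sunny",
    llm_response="It's sunny today",
)
print("Compiled prompt:", compiled_prompt)
```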
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": "docs/prompt_management.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/docs/prompt_management.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 5460 }
--- name: Bug report about: Create a report to help us improve title: "[BUG]: " labels: '' assignees: '' --- # Bug Report **Describe the Bug** A clear and concise description of the problem. **To Reproduce** Steps or code snippets to reproduce the behavior, like: ``` 1. Install AgentNeo using `pip install agentneo` 2. Run the following code: # Your code here 3. Launch the dashboard using `launch_dashboard(port=3000)` 4. Observe the error or unexpected behavior. ``` **Expected Behavior** A clear and concise description of what you expected to happen. **Actual Behavior** Describe what actually happened, including any error messages or unexpected results. **Logs and Screenshots** If applicable, add logs, stack traces, or screenshots to help explain the issue. **Environment Details** - **Operating System**: [e.g., Windows 10, Ubuntu 20.04, macOS Catalina] - **Python Version**: [e.g., 3.9.10] - **AgentNeo Version**: [e.g., 1.0.0] - **Relevant Packages**: [e.g., OpenAI SDK 0.9.0, LiteLLM 1.2.3] **AgentNeo Configuration** Provide any custom configuration settings or code modifications: ```python # Your custom configuration or code here ``` **Additional Context** Add any other information about the problem here, such as: - Network configuration - Firewall settings - Previous attempts to fix the issue
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": ".github/ISSUE_TEMPLATE/bug_report.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/.github/ISSUE_TEMPLATE/bug_report.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 1326 }
--- name: Feature request about: Suggest an idea for this project title: '' labels: '' assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": ".github/ISSUE_TEMPLATE/feature_request.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/.github/ISSUE_TEMPLATE/feature_request.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 594 }
# Agentic Tracing This module provides tracing functionality for agentic AI systems, helping track and analyze various aspects of AI agent behavior including LLM interactions, tool usage, and network activities. ## Directory Structure ``` agentic_tracing/ ├── tracers/ # Core tracing implementations │ ├── main_tracer.py # Main tracing functionality │ ├── agent_tracer.py # Agent behavior tracing │ ├── base.py # Base tracing classes │ ├── llm_tracer.py # Language model interaction tracing │ ├── network_tracer.py # Network activity tracing │ ├── tool_tracer.py # Tool usage tracing │ ├── user_interaction_tracer.py # User interaction tracing │ └── __init__.py # Tracer module initialization ├── data/ # Data structures and classes │ ├── data_classes.py # Data class definitions │ └── __init__.py # Data module initialization ├── utils/ # Utility functions and helpers │ ├── api_utils.py # API-related utilities │ ├── file_name_tracker.py # Tracks file names and paths │ ├── generic.py # Generic utility functions │ ├── llm_utils.py # LLM-specific utilities │ ├── model_costs.json # Model cost configurations │ ├── trace_utils.py # General tracing utilities │ ├── unique_decorator.py # Unique ID generation │ ├── zip_list_of_unique_files.py # File handling utilities │ └── __init__.py # Utils module initialization ├── tests/ # Test suites and examples │ ├── ai_travel_agent.py # Travel agent test implementation │ ├── unique_decorator_test.py # Tests for unique decorator │ ├── TravelPlanner.ipynb # Travel planner example notebook │ ├── FinancialAnalysisSystem.ipynb # Financial analysis example │ ├── GameActivityEventPlanner.ipynb # Game event planner example │ └── __init__.py # Tests module initialization ├── upload/ # Upload functionality │ ├── upload_code.py # Code upload utilities │ └── __init__.py # Upload module initialization └── __init__.py # Package initialization ``` ## Components ### Tracers Different types of tracers for various aspects of agent behavior: - Main Tracer: Core tracing functionality for managing and coordinating different trace types - Agent Tracer: Tracks agent behavior, decisions, and state changes - Base Tracer: Provides base classes and common functionality for all tracers - LLM Tracer: Monitors language model interactions, including: - Token usage tracking - Cost calculation - Input/output monitoring - Model parameter tracking - Network Tracer: Tracks network activities and API calls - Tool Tracer: Monitors tool usage and execution - User Interaction Tracer: Tracks user interactions and feedback ### Data Core data structures and classes: - Data Classes: Defines structured data types for: - LLM calls - Network requests - Tool executions - Trace components - Agent states - User interactions ### Utils Helper functions and utilities: - API Utils: Handles API-related operations and configurations - LLM Utils: Utilities for handling LLM-specific operations: - Model name extraction - Token usage calculation - Cost computation - Parameter sanitization - Generic Utils: Common utility functions used across modules - Trace Utils: General tracing utilities - File Name Tracker: Manages file paths and names - Unique Decorator: Generates unique identifiers for trace components - Model Costs: Configuration for different model pricing - Zip List of Unique Files: Handles file compression and unique file management ### Tests Test suites and example implementations: - AI Travel Agent: Test implementation of a travel planning agent - Unique Decorator Tests: Unit tests for unique ID generation - Example 
Notebooks: - Travel Planner: Example of travel planning implementation - Financial Analysis: Example of financial system analysis - Game Event Planner: Example of game activity planning ### Upload Components for uploading and managing trace data: - Code Upload: Handles uploading of traced code and execution data - Supports various data formats and trace types
{ "source": "raga-ai-hub/RagaAI-Catalyst", "title": "ragaai_catalyst/tracers/agentic_tracing/README.md", "url": "https://github.com/raga-ai-hub/RagaAI-Catalyst/blob/main/ragaai_catalyst/tracers/agentic_tracing/README.md", "date": "2024-08-26T12:13:15", "stars": 10374, "description": "Python SDK for Agent AI Observability, Monitoring and Evaluation Framework. Includes features like agent, llm and tools tracing, debugging multi-agentic system, self-hosted dashboard and advanced analytics with timeline and execution graph view ", "file_size": 4255 }
# Contributor Covenant Code of Conduct ## Our Pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at feedback@huggingface.co. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. 
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC]. For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations]. [homepage]: https://www.contributor-covenant.org [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html [Mozilla CoC]: https://github.com/mozilla/diversity [FAQ]: https://www.contributor-covenant.org/faq [translations]: https://www.contributor-covenant.org/translations
{ "source": "huggingface/smolagents", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5487 }
<!--- Copyright 2025 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Contribute to smolagents Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable. It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you. However you choose to contribute, please be mindful and respect our [code of conduct](https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md). **This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).** ## Ways to contribute There are several ways you can contribute to smolagents. * Fix outstanding issues with the existing code. * Submit issues related to bugs or desired new features. * Contribute to the examples or to the documentation. > All contributions are equally valuable to the community. 🥰 ## Fixing outstanding issues If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) and open a Pull Request! ## Submitting a bug-related issue or feature request Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback. ### Did you find a bug? The smolagents library is robust and reliable thanks to users who report the problems they encounter. Before you report an issue, we would really appreciate it if you could **make sure the bug was not already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it: * Your **OS type and version**, as well as your environment versions (versions of rust, python, and dependencies). * A short, self-contained, code snippet that allows us to reproduce the bug. * The *full* traceback if an exception is raised. * Attach any other additional information, like screenshots, you think may help. ### Do you want a new feature? If there is a new feature you'd like to see in smolagents, please open an issue and describe: 1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community? Whatever it is, we'd love to hear about it! 2. Describe your requested feature in as much detail as possible. 
The more you can tell us about it, the better we'll be able to help you.

3. Provide a *code snippet* that demonstrates the feature's usage.

4. If the feature is related to a paper, please include a link.

If your issue is well written, we're already 80% of the way there by the time you create it.

## Do you want to add documentation?

We're always looking for improvements to the documentation that make it clearer and more accurate. Please let us know how the documentation can be improved, such as by fixing typos and any content that is missing, unclear, or inaccurate. We'll be happy to make the changes or help you make a contribution if you're interested!

## I want to become a maintainer of the project. How do I get there?

smolagents is a project led and managed by Hugging Face. We are more than happy to have motivated individuals from other organizations join us as maintainers with the goal of helping smolagents make a dent in the world of Agents.

If you are such an individual (or organization), please reach out to us and let's collaborate.
{ "source": "huggingface/smolagents", "title": "CONTRIBUTING.md", "url": "https://github.com/huggingface/smolagents/blob/main/CONTRIBUTING.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4640 }
<!--- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <!-- Uncomment when CircleCI is set up <a href="https://circleci.com/gh/huggingface/accelerate"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"></a> --> <a href="https://github.com/huggingface/smolagents/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/smolagents.svg?color=blue"></a> <a href="https://huggingface.co/docs/smolagents"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/smolagents/index.html.svg?down_color=red&down_message=offline&up_message=online"></a> <a href="https://github.com/huggingface/smolagents/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/smolagents.svg"></a> <a href="https://github.com/huggingface/smolagents/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a> </p> <h3 align="center"> <div style="display:flex;flex-direction:row;"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/mascot.png" alt="Hugging Face mascot as James Bond" width=100px> <p>smolagents - a smol library to build great agents!</p> </div> </h3> `smolagents` is a library that enables you to run powerful agents in a few lines of code. It offers: ✨ **Simplicity**: the logic for agents fits in 1,000 lines of code (see [agents.py](https://github.com/huggingface/smolagents/blob/main/src/smolagents/agents.py)). We kept abstractions to their minimal shape above raw code! 🧑‍💻 **First-class support for Code Agents**. Our [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) writes its actions in code (as opposed to "agents being used to write code"). To make it secure, we support executing in sandboxed environments via [E2B](https://e2b.dev/). 🤗 **Hub integrations**: you can [share/pull tools to/from the Hub](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_hub), and more is to come! 🌐 **Model-agnostic**: smolagents supports any LLM. It can be a local `transformers` or `ollama` model, one of [many providers on the Hub](https://huggingface.co/blog/inference-providers), or any model from OpenAI, Anthropic and many others via our [LiteLLM](https://www.litellm.ai/) integration. 👁️ **Modality-agnostic**: Agents support text, vision, video, even audio inputs! Cf [this tutorial](https://huggingface.co/docs/smolagents/examples/web_browser) for vision. 🛠️ **Tool-agnostic**: you can use tools from [LangChain](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_langchain), [Anthropic's MCP](https://huggingface.co/docs/smolagents/reference/tools#smolagents.ToolCollection.from_mcp), you can even use a [Hub Space](https://huggingface.co/docs/smolagents/reference/tools#smolagents.Tool.from_space) as a tool. 
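As a quick, hedged illustration of the Hub integration above, here is a minimal sketch that pulls a community tool from the Hub and hands it to an agent (the `m-ric/text-to-image` tool id and the `trust_remote_code=True` flag are borrowed from the Gradio example later in these docs, so treat them as assumptions rather than a fixed recipe):

```py
from smolagents import CodeAgent, HfApiModel, load_tool

# Load a community-shared tool from the Hub; trust_remote_code=True is needed
# because the tool ships its own code.
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

agent = CodeAgent(tools=[image_generation_tool], model=HfApiModel())
agent.run("Generate an image of a smol robot reading documentation.")
```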
Full documentation can be found [here](https://huggingface.co/docs/smolagents/index).

> [!NOTE]
> Check out our [launch blog post](https://huggingface.co/blog/smolagents) to learn more about `smolagents`!

## Quick demo

First install the package.
```bash
pip install smolagents
```
Then define your agent, give it the tools it needs and run it!
```py
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

model = HfApiModel()
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```

https://github.com/user-attachments/assets/cd0226e2-7479-4102-aea0-57c22ca47884

Our library is LLM-agnostic: you could switch the example above to any inference provider.

<details>
<summary> <b>HfApiModel, gateway for 4 inference providers</b></summary>

```py
from smolagents import HfApiModel

model = HfApiModel(
    model_id="deepseek-ai/DeepSeek-R1",
    provider="together",
)
```
</details>
<details>
<summary> <b>LiteLLM to access 100+ LLMs</b></summary>

```py
import os

from smolagents import LiteLLMModel

model = LiteLLMModel(
    "anthropic/claude-3-5-sonnet-latest",
    temperature=0.2,
    api_key=os.environ["ANTHROPIC_API_KEY"]
)
```
</details>
<details>
<summary> <b>OpenAI-compatible servers</b></summary>

```py
import os

from smolagents import OpenAIServerModel

model = OpenAIServerModel(
    model_id="deepseek-ai/DeepSeek-R1",
    api_base="https://api.together.xyz/v1/", # Leave this blank to query OpenAI servers.
    api_key=os.environ["TOGETHER_API_KEY"], # Switch to the API key for the server you're targeting.
)
```
</details>
<details>
<summary> <b>Local `transformers` model</b></summary>

```py
from smolagents import TransformersModel

model = TransformersModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    max_new_tokens=4096,
    device_map="auto"
)
```
</details>
<details>
<summary> <b>Azure models</b></summary>

```py
import os

from smolagents import AzureOpenAIServerModel

model = AzureOpenAIServerModel(
    model_id = os.environ.get("AZURE_OPENAI_MODEL"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    api_version=os.environ.get("OPENAI_API_VERSION")
)
```
</details>

## Command Line Interface

You can run agents from the CLI using two commands: `smolagent` and `webagent`. `smolagent` is a generalist command to run a multi-step `CodeAgent` that can be equipped with various tools, while `webagent` is a specific web-browsing agent using [helium](https://github.com/mherrmann/helium).

**Web Browser Agent in CLI**

`webagent` allows users to automate web browsing tasks. It uses the [helium](https://github.com/mherrmann/helium) library to interact with web pages and uses defined tools to browse the web. Read more about this agent [here](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py).

Run the following command to get started:
```bash
webagent {YOUR_PROMPT_HERE} --model-type "LiteLLMModel" --model-id "gpt-4o"
```

For instance:
```bash
webagent "go to xyz.com/women, get to sale section, click the first clothing item you see. Get the product details, and the price, return them. note that I'm shopping from France"
```
We redacted the website here; replace it with the website of your choice.

**CodeAgent in CLI**

Use `smolagent` to run a multi-step agent with [tools](https://huggingface.co/docs/smolagents/en/reference/tools). It uses a web search tool by default.
You can easily get started with `$ smolagent {YOUR_PROMPT_HERE}`.
You can customize this as follows (more details [here](https://github.com/huggingface/smolagents/blob/main/src/smolagents/cli.py)).
```bash
smolagent {YOUR_PROMPT_HERE} --model-type "HfApiModel" --model-id "Qwen/Qwen2.5-Coder-32B-Instruct" --imports "pandas numpy" --tools "web_search translation"
```

For instance:
```bash
smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7. Allocate time according to number of public attraction in each, and optimize for distance and travel time. Bring all the public transportation options."
```

## Code agents?

In our [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent), the LLM engine writes its actions in code. This approach is demonstrated to work better than the current industry practice of letting the LLM output a dictionary of the tools it wants to call: it [uses 30% fewer steps](https://huggingface.co/papers/2402.01030) (thus 30% fewer LLM calls) and [reaches higher performance on difficult benchmarks](https://huggingface.co/papers/2411.01747). Head to [our high-level intro to agents](https://huggingface.co/docs/smolagents/conceptual_guides/intro_agents) to learn more about that.

In particular, since code execution can be a security concern (arbitrary code execution!), we provide options at runtime:
  - a secure python interpreter to run code more safely in your environment (more secure than raw code execution but still risky)
  - a sandboxed environment using [E2B](https://e2b.dev/) (removes the risk to your own system).

On top of this [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) class, we still support the standard [`ToolCallingAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.ToolCallingAgent) that writes actions as JSON/text blobs. But we recommend always using `CodeAgent`.

## How smol is this library?

We strived to keep abstractions to a strict minimum: the main code in `agents.py` has <1,000 lines of code. Still, we implement several types of agents: `CodeAgent` writes its actions as Python code snippets, and the more classic `ToolCallingAgent` leverages built-in tool calling methods. We also have multi-agent hierarchies, import from tool collections, remote code execution, vision models...

By the way, why use a framework at all? Well, because a big part of this stuff is non-trivial. For instance, the code agent has to keep a consistent format for code throughout its system prompt, its parser, and its execution. So our framework handles this complexity for you. But of course we still encourage you to hack into the source code and use only the bits that you need, to the exclusion of everything else!

## How strong are open models for agentic workflows?

We've created [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) instances with some leading models, and compared them on [this benchmark](https://huggingface.co/datasets/m-ric/agents_medium_benchmark_2) that gathers questions from a few different benchmarks to propose a varied blend of challenges.

[Find the benchmarking code here](https://github.com/huggingface/smolagents/blob/main/examples/benchmark.ipynb) for more detail on the agentic setup used, and see a comparison of LLMs used as code agents versus vanilla prompting (spoiler: code agents work better).
<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/benchmark_code_agents.jpeg" alt="benchmark of different models on agentic workflows. Open model DeepSeek-R1 beats closed-source models." width=60% max-width=500px>
</p>

This comparison shows that open-source models can now take on the best closed models!

## Contribute

To contribute, follow our [contribution guide](https://github.com/huggingface/smolagents/blob/main/CONTRIBUTING.md).

At any moment, feel welcome to open an issue, citing your exact error traces and package versions if it's a bug.
It's often even better to open a PR with your proposed fixes/changes!

To install dev dependencies, run:
```
pip install -e ".[dev]"
```

When making changes to the codebase, please check that they follow the repo's code quality requirements by running the checks below.

To check the code quality of the source code:
```
make quality
```

If the checks fail, you can run the formatter with:
```
make style
```

And commit the changes.

To run tests locally, run this command:
```bash
make test
```

## Cite smolagents

If you use `smolagents` in your publication, please cite it by using the following BibTeX entry.

```bibtex
@Misc{smolagents,
  title =        {`smolagents`: a smol library to build great agentic systems.},
  author =       {Aymeric Roucher and Albert Villanova del Moral and Thomas Wolf and Leandro von Werra and Erik Kaunismäki},
  howpublished = {\url{https://github.com/huggingface/smolagents}},
  year =         {2025}
}
```
{ "source": "huggingface/smolagents", "title": "README.md", "url": "https://github.com/huggingface/smolagents/blob/main/README.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 12154 }
<!---
Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Generating the documentation

To generate the documentation, you have to build it. Several packages are necessary to build the docs.

First, you need to install the project itself by running the following command at the root of the code repository:

```bash
pip install -e .
```

You also need to install 2 extra packages:

```bash
# `hf-doc-builder` to build the docs
pip install git+https://github.com/huggingface/doc-builder@main
# `watchdog` for live reloads
pip install watchdog
```

---
**NOTE**

You only need to generate the documentation to inspect it locally (if you're planning changes and want to check how they look before committing for instance). You don't have to commit the built documentation.

---

## Building the documentation

Once you have set up the `doc-builder` and additional packages with the pip install command above, you can generate the documentation by typing the following command:

```bash
doc-builder build smolagents docs/source/en/ --build_dir ~/tmp/test-build
```

You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite Markdown editor.

## Previewing the documentation

To preview the docs, run the following command:

```bash
doc-builder preview smolagents docs/source/en/
```

The docs will be viewable at [http://localhost:5173](http://localhost:5173). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to where the documentation with your changes lives.

---
**NOTE**

The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it and call `doc-builder preview ...` again).

---

## Adding a new element to the navigation bar

Accepted files are Markdown (.md).

Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/smolagents/blob/main/docs/source/_toctree.yml) file.

## Renaming section headers and moving sections

It helps to keep the old links working when renaming the section header and/or moving sections from one document to another. This is because the old links are likely to be used in Issues, Forums, and Social media, and it would make for a much better user experience if users reading those months later could still easily navigate to the originally intended information.

Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file: ``` Sections that were moved: [ <a href="#section-b">Section A</a><a id="section-a"></a> ] ``` and of course, if you moved it to another file, then: ``` Sections that were moved: [ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ] ``` Use the relative style to link to the new file so that the versioned docs continue to work. For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md). ## Writing Documentation - Specification The `huggingface/smolagents` documentation follows the [Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings, although we can write them directly in Markdown. ### Adding a new tutorial Adding a new tutorial or section is done in two steps: - Add a new Markdown (.md) file under `./source`. - Link that file in `./source/_toctree.yml` on the correct toc-tree. Make sure to put your new file under the proper section. If you have a doubt, feel free to ask in a Github Issue or PR. ### Translating When translating, refer to the guide at [./TRANSLATING.md](https://github.com/huggingface/smolagents/blob/main/docs/TRANSLATING.md). ### Writing source documentation Values that should be put in `code` should either be surrounded by backticks: \`like so\`. Note that argument names and objects like True, None, or any strings should usually be put in `code`. When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or function to be in the main package. If you want to create a link to some internal class or function, you need to provide its path. For instance: \[\`utils.ModelOutput\`\]. This will be converted into a link with `utils.ModelOutput` in the description. To get rid of the path and only keep the name of the object you are linking to in the description, add a ~: \[\`~utils.ModelOutput\`\] will generate a link with `ModelOutput` in the description. The same works for methods so you can either use \[\`XXXClass.method\`\] or \[~\`XXXClass.method\`\]. #### Defining arguments in a method Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its description: ``` Args: n_layers (`int`): The number of layers of the model. ``` If the description is too long to fit in one line, another indentation is necessary before writing the description after the argument. Here's an example showcasing everything so far: ``` Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and [`~PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) ``` For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the following signature: ``` def my_function(x: str = None, a: float = 1): ``` then its documentation should look like this: ``` Args: x (`str`, *optional*): This argument controls ... 
a (`float`, *optional*, defaults to 1): This argument is used to ... ``` Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even if the first line describing your argument type and its default gets long, you can't break it on several lines. You can however write as many lines as you want in the indented description (see the example above with `input_ids`). #### Writing a multi-line code block Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown: ```` ``` # first line of code # second line # etc ``` ```` #### Writing a return block The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation. The first line should be the type of the return, followed by a line return. No need to indent further for the elements building the return. Here's an example of a single value return: ``` Returns: `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token. ``` Here's an example of a tuple return, comprising several objects: ``` Returns: `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs: - ** loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` -- Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss. - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) -- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). ``` #### Adding an image Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset. #### Writing documentation examples The syntax for Example docstrings can look as follows: ``` Example: ```python >>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") >>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") >>> # audio file is decoded on the fly >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_ids = torch.argmax(logits, dim=-1) >>> # transcribe speech >>> transcription = processor.batch_decode(predicted_ids) >>> transcription[0] 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL' ``` ``` The docstring should give a minimal, clear example of how the respective model is to be used in inference and also include the expected (ideally sensible) output. Often, readers will try out the example before even going through the function or class definitions. Therefore, it is of utmost importance that the example works as expected.
{ "source": "huggingface/smolagents", "title": "docs/README.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/README.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11044 }
---
name: Bug report
about: The clearer your bug report, the faster it will be fixed!
title: "[BUG]"
labels: bug
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**Code to reproduce the error**
The simplest code snippet that produces your bug.

**Error logs (if any)**
Provide error logs if there are any.

**Expected behavior**
A clear and concise description of what you expected to happen.

**Package versions**
Run `pip freeze | grep smolagents` and paste the output here.

**Additional context**
Add any other context about the problem here.
{ "source": "huggingface/smolagents", "title": ".github/ISSUE_TEMPLATE/bug_report.md", "url": "https://github.com/huggingface/smolagents/blob/main/.github/ISSUE_TEMPLATE/bug_report.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 584 }
--- name: Custom issue template about: Describe this issue template's purpose here. title: '' labels: '' assignees: '' ---
{ "source": "huggingface/smolagents", "title": ".github/ISSUE_TEMPLATE/custom.md", "url": "https://github.com/huggingface/smolagents/blob/main/.github/ISSUE_TEMPLATE/custom.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 123 }
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Is this not possible with the current options?**
Make sure to consider if what you're requesting can be done with current abstractions.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
{ "source": "huggingface/smolagents", "title": ".github/ISSUE_TEMPLATE/feature_request.md", "url": "https://github.com/huggingface/smolagents/blob/main/.github/ISSUE_TEMPLATE/feature_request.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 742 }
# Open Deep Research

Welcome to this open replication of [OpenAI's Deep Research](https://openai.com/index/introducing-deep-research/)! Read more about this implementation's goal and methods [in our blog post](https://huggingface.co/blog/open-deep-research).

This agent achieves 55% pass@1 on the GAIA validation set, vs. 67% for Deep Research.

To install it, first run:
```bash
pip install -r requirements.txt
```

Then install the smolagents dev version:
```bash
pip install smolagents[dev]
```

Then you're good to go! Run the `run.py` script, as in:
```bash
python run.py --model-id "o1" "Your question here!"
```
{ "source": "huggingface/smolagents", "title": "examples/open_deep_research/README.md", "url": "https://github.com/huggingface/smolagents/blob/main/examples/open_deep_research/README.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 607 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Agents - Guided tour

[[open-in-colab]]

In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.

### Building your agent

To initialize a minimal agent, you need at least these two arguments:

- `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses an LLM as its engine. You can use any of these options:
    - [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
    - [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood and supports all Inference Providers on the Hub.
    - [`LiteLLMModel`] similarly lets you call 100+ different models and providers through [LiteLLM](https://docs.litellm.ai/)!
    - [`AzureOpenAIServerModel`] allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).
    - [`MLXModel`] creates a [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.

- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.

Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Inference Providers](https://huggingface.co/blog/inference-providers), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service), or [mlx-lm](https://pypi.org/project/mlx-lm/).

<hfoptions id="Pick a LLM">
<hfoption id="HF Inference API">

The HF Inference API is free to use without a token, but then it will have a rate limit. To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` variable upon initialization of `HfApiModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).

```python
from smolagents import CodeAgent, HfApiModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") # You can choose to not pass any model_id to HfApiModel to use a default free model
# you can also specify a particular provider e.g.
provider="together" or provider="sambanova" agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="Local Transformers Model"> ```python # !pip install smolagents[transformers] from smolagents import CodeAgent, TransformersModel model_id = "meta-llama/Llama-3.2-3B-Instruct" model = TransformersModel(model_id=model_id) agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="OpenAI or Anthropic API"> To use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass `api_key` variable upon initialization. ```python # !pip install smolagents[litellm] from smolagents import CodeAgent, LiteLLMModel model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o' agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="Ollama"> ```python # !pip install smolagents[litellm] from smolagents import CodeAgent, LiteLLMModel model = LiteLLMModel( model_id="ollama_chat/llama3.2", # This model is a bit weak for agentic behaviours though api_base="http://localhost:11434", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary api_key="YOUR_API_KEY", # replace with API key if necessary num_ctx=8192, # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model. ) agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="Azure OpenAI"> To connect to Azure OpenAI, you can either use `AzureOpenAIServerModel` directly, or use `LiteLLMModel` and configure it accordingly. To initialize an instance of `AzureOpenAIServerModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`. 
```python # !pip install smolagents[openai] from smolagents import CodeAgent, AzureOpenAIServerModel model = AzureOpenAIServerModel(model_id="gpt-4o-mini") agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` Similarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows: - pass your model deployment name as `model_id`, and make sure to prefix it with `azure/` - make sure to set the environment variable `AZURE_API_VERSION` - either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY`, and `AZURE_API_BASE` ```python import os from smolagents import CodeAgent, LiteLLMModel AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-35-turbo-16k-deployment" # example of deployment name os.environ["AZURE_API_KEY"] = "" # api_key os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com" os.environ["AZURE_API_VERSION"] = "" # "2024-10-01-preview" model = LiteLLMModel(model_id="azure/" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME) agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="mlx-lm"> ```python # !pip install smolagents[mlx-lm] from smolagents import CodeAgent, MLXModel mlx_model = MLXModel("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit") agent = CodeAgent(model=mlx_model, tools=[], add_base_tools=True) agent.run("Could you give me the 118th number in the Fibonacci sequence?") ``` </hfoption> </hfoptions> #### CodeAgent and ToolCallingAgent The [`CodeAgent`] is our default agent. It will write and execute python code snippets at each step. By default, the execution is done in your local environment. This should be safe because the only functions that can be called are the tools you provided (especially if it's only tools by Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed. The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue. You can authorize additional imports by passing the authorized modules as a list of strings in argument `additional_authorized_imports` upon initialization of your [`CodeAgent`]: ```py model = HfApiModel() agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4']) agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?") ``` > [!WARNING] > The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports! The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error with the code generated by the agent. You can also use [E2B code executor](https://e2b.dev/docs#what-is-e2-b) instead of a local Python interpreter by first [setting the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then passing `use_e2b_executor=True` upon agent initialization. > [!TIP] > Learn more about code execution [in this tutorial](tutorials/secure_code_execution). 
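For reference, a minimal sketch of the sandboxed option described above, assuming you have an E2B account and the `E2B_API_KEY` environment variable already set:

```py
from smolagents import CodeAgent, HfApiModel

model = HfApiModel()
# use_e2b_executor=True runs the generated code snippets in an E2B sandbox
# instead of the local Python interpreter.
agent = CodeAgent(tools=[], model=model, use_e2b_executor=True)
agent.run("Could you give me the 118th number in the Fibonacci sequence?")
```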
We also support the widely-used way of writing actions as JSON-like blobs: this is [`ToolCallingAgent`], which works in much the same way as [`CodeAgent`], of course without `additional_authorized_imports` since it doesn't execute code:

```py
from smolagents import ToolCallingAgent

agent = ToolCallingAgent(tools=[], model=model)
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```

### Inspecting an agent run

Here are a few useful attributes to inspect what happened after a run:
- `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that is then appended to `agent.logs`.
- Running `agent.write_memory_to_messages()` writes the agent's memory as a list of chat messages for the Model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened - but not every log will be transcribed by this method.

## Tools

A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:
- A name
- A description
- Input types and descriptions
- An output type

You can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.

When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.

### Default toolbox

Transformers comes with a default toolbox for empowering agents, that you can add to your agent upon initialization with the argument `add_base_tools = True`:

- **DuckDuckGo web search**: performs a web search using the DuckDuckGo browser.
- **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool will only be added to [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.
- **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.

You can manually use a tool by calling it with its arguments.

```python
from smolagents import DuckDuckGoSearchTool

search_tool = DuckDuckGoSearchTool()
print(search_tool("Who's the current president of Russia?"))
```

### Create a new tool

You can create your own tool for use cases not covered by the default tools from Hugging Face.
For example, let's create a tool that returns the most downloaded model for a given task from the Hub.

You'll start with the code below.

```python
from huggingface_hub import list_models

task = "text-classification"

most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(most_downloaded_model.id)
```

This code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator. This is not the only way to build the tool: you can directly define it as a subclass of [`Tool`], which gives you more flexibility, for instance the possibility to initialize heavy class attributes.
Let's see how it works for both options:

<hfoptions id="build-a-tool">
<hfoption id="Decorate a function with @tool">

```py
from huggingface_hub import list_models
from smolagents import tool


@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task for which to get the download count.
    """
    most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return most_downloaded_model.id
```

The function needs:
- A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
- Type hints on both inputs and output
- A description that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint). Same as for the tool name, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
All these elements will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!

> [!TIP]
> This definition format is the same as tool schemas used in `apply_chat_template`, the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).
</hfoption>
<hfoption id="Subclass Tool">

```py
from huggingface_hub import list_models
from smolagents import Tool


class ModelDownloadTool(Tool):
    name = "model_download_tool"
    description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint."
    inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}}
    output_type = "string"

    def forward(self, task: str) -> str:
        most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return most_downloaded_model.id
```

The subclass needs the following attributes:
- A clear `name`. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.
- A `description`. Same as for the `name`, this description is an instruction manual for the LLM powering your agent, so do not neglect it.
- Input types and descriptions
- Output type
All these attributes will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
</hfoption>
</hfoptions>

Then you can directly initialize your agent:
```py
from smolagents import CodeAgent, HfApiModel

agent = CodeAgent(tools=[model_download_tool], model=HfApiModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

You get the following logs:
```text
╭──────────────────────────────────────── New run ─────────────────────────────────────────╮
│                                                                                           │
│ Can you give me the name of the model that has the most downloads in the 'text-to-video'  │
│ task on the Hugging Face Hub?                                                             │
│                                                                                           │
╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
│   1 model_name = model_download_tool(task="text-to-video")                                │
│   2 print(model_name)                                                                     │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
Execution logs:
ByteDance/AnimateDiff-Lightning

Out: None
[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮
│   1 final_answer("ByteDance/AnimateDiff-Lightning")                                       │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
Out - Final answer: ByteDance/AnimateDiff-Lightning
[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
Out[20]: 'ByteDance/AnimateDiff-Lightning'
```

> [!TIP]
> Read more on tools in the [dedicated tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).

## Multi-agents

Multi-agent systems have been introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).

In this type of framework, you have several agents working together to solve your task instead of only one. It empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows you to achieve efficient specialization. For instance, why fill the memory of the code-generating agent with all the content of webpages visited by the web search agent? It's better to keep them separate.

You can easily build hierarchical multi-agent systems with `smolagents`.

To do so, just ensure your agent has `name` and `description` attributes, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools. Then you can pass this managed agent in the parameter `managed_agents` upon initialization of the manager agent.

Here's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]:

```py
from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool

model = HfApiModel()

web_agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=model,
    name="web_search",
    description="Runs web searches for you. Give it your query as an argument."
)

manager_agent = CodeAgent(
    tools=[], model=model, managed_agents=[web_agent]
)

manager_agent.run("Who is the CEO of Hugging Face?")
```

> [!TIP]
> For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).
## Talk with your agent and visualize its thoughts in a cool Gradio interface

You can use `GradioUI` to interactively submit tasks to your agent and observe its thoughts and execution process; here is an example:

```py
from smolagents import (
    load_tool,
    CodeAgent,
    HfApiModel,
    GradioUI
)

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

model = HfApiModel()  # uses a default model; pass a model_id to pick a specific one

# Initialize the agent with the image generation tool
agent = CodeAgent(tools=[image_generation_tool], model=model)

GradioUI(agent).launch()
```

Under the hood, when the user types a new message, the agent is launched with `agent.run(user_request, reset=False)`. The `reset=False` flag means the agent's memory is not flushed before launching this new task, which lets the conversation go on.

You can also use this `reset=False` argument to keep the conversation going in any other agentic application.

## Next steps

For more in-depth usage, you will then want to check out our tutorials:
- [the explanation of how our code agents work](./tutorials/secure_code_execution)
- [this guide on how to build good agents](./tutorials/building_good_agents)
- [the in-depth guide for tool usage](./tutorials/tools)
{ "source": "huggingface/smolagents", "title": "docs/source/en/guided_tour.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/guided_tour.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 20616 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # `smolagents` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/license_to_call.png" width=100%/> </div> This library is the simplest framework out there to build powerful agents! By the way, wtf are "agents"? We provide our definition [in this page](conceptual_guides/intro_agents), where you'll also find tips for when to use them or not (spoilers: you'll often be better off without agents). This library offers: ✨ **Simplicity**: the logic for agents fits in ~thousand lines of code. We kept abstractions to their minimal shape above raw code! 🌐 **Support for any LLM**: it supports models hosted on the Hub loaded in their `transformers` version or through our inference API and Inference providers, but also models from OpenAI, Anthropic... it's really easy to power an agent with any LLM. 🧑‍💻 **First-class support for Code Agents**, i.e. agents that write their actions in code (as opposed to "agents being used to write code"), [read more here](tutorials/secure_code_execution). 🤗 **Hub integrations**: you can share and load Gradio Spaces as tools to/from the Hub, and more is to come! <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Guided tour</div> <p class="text-gray-700">Learn the basics and become familiar with using Agents. 
Start here if you are using Agents for the first time!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql" ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div> <p class="text-gray-700">Practical guides to help you achieve a specific goal: create an agent to generate and test SQL queries!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents" ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div> <p class="text-gray-700">High-level explanations for building a better understanding of important topics.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents" ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div> <p class="text-gray-700">Horizontal tutorials that cover important aspects of building agents.</p> </a> </div> </div>
{ "source": "huggingface/smolagents", "title": "docs/source/en/index.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/index.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 3841 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Agents - Guided tour

[[open-in-colab]]

In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.

### Building your Agent

To initialize a minimal agent, you need at least these two arguments:

- `model`, a text-generation model to power your agent - because the agent is different from a simple LLM, it is a system that uses an LLM as its engine. You can use any of these options:
    - [`TransformersModel`] pre-initializes a `transformers` pipeline to run inference on your local machine using `transformers`.
    - [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood.
    - [`LiteLLMModel`] lets you call 100+ different models through [LiteLLM](https://docs.litellm.ai/)!

- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.

Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM, either through the [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), or [LiteLLM](https://www.litellm.ai/).

<hfoptions id="Pick a LLM">
<hfoption id="Hugging Face API">

The Hugging Face API is free to use without a token, but then it will have a rate limit. To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` variable upon initialization of `HfApiModel`.

```python
from smolagents import CodeAgent, HfApiModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="Local Transformers Model">

```python
from smolagents import CodeAgent, TransformersModel

model_id = "meta-llama/Llama-3.2-3B-Instruct"

model = TransformersModel(model_id=model_id)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="OpenAI or Anthropic API">

To use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass the `api_key` variable upon initialization.

```python
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o'
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="Ollama">

```python
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2", # This model is a bit weak for agentic behaviours though
    api_base="http://localhost:11434", # replace with 127.0.0.1:11434 or remote open-ai compatible server if necessary
    api_key="YOUR_API_KEY", # replace with API key if necessary
    num_ctx=8192, # ollama default is 2048 which will fail horribly. 8192 works for easy tasks, more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.
)

agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
</hfoptions>

#### CodeAgent and ToolCallingAgent

[`CodeAgent`] is our default agent. It will write and execute Python code snippets at each step.

By default, the execution is done in your local environment.
This should be safe because the only functions that can be called are the tools you provided (especially if they are only Hugging Face tools) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.

The Python interpreter also doesn't allow imports outside of a safe list by default, so the most obvious attacks shouldn't be an issue.
You can authorize additional modules by passing them as a list of strings in the argument `additional_authorized_imports` upon initialization of your [`CodeAgent`]:

```py
model = HfApiModel()
agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```

> [!WARNING]
> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!

The execution will stop at any code trying to perform an illegal operation or if there is a regular Python error in the code generated by the agent.

You can also use the [E2B code executor](https://e2b.dev/docs#what-is-e2-b) instead of the local Python interpreter by first [setting the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then passing `use_e2b_executor=True` upon agent initialization.

> [!TIP]
> Learn more about code execution [in this tutorial](tutorials/secure_code_execution).

We also support the widely-used way of writing actions as JSON-like blobs: this is [`ToolCallingAgent`], which works much in the same way as [`CodeAgent`], of course without `additional_authorized_imports` since it doesn't execute code:

```py
from smolagents import ToolCallingAgent

agent = ToolCallingAgent(tools=[], model=model)
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```

### Inspecting an agent run

Here are a few useful attributes to inspect what happened after a run:
- `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that is then appended to `agent.logs`.
- Running `agent.write_memory_to_messages()` creates an inner memory of the agent's logs for the LLM to view, as a list of chat messages. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task as separate messages, then for each step it will store the LLM output as a message and the tool call output as another message.

## Tools

A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:
- A name
- A description
- Input types and descriptions
- An output type

You can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.

When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.

### Default toolbox

Transformers comes with a default toolbox for empowering agents, that you can add to your agent upon initialization with the argument `add_base_tools = True`:

- **DuckDuckGo web search**: performs a web search using the DuckDuckGo browser.
- **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool will only be added to [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.
- **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.

You can manually use a tool by calling it with its arguments.

```python
from smolagents import DuckDuckGoSearchTool

search_tool = DuckDuckGoSearchTool()
print(search_tool("Who's the current president of Russia?"))
```

### Create your own custom tools

You can create your own tools for use cases not covered by the default tools from Hugging Face. For example, let's create a tool that returns the most downloaded model for a given task from the Hub.
आप नीचे दिए गए कोड से शुरुआत करेंगे। ```python from huggingface_hub import list_models task = "text-classification" most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) print(most_downloaded_model.id) ``` यह कोड आसानी से टूल में बदला जा सकता है, बस इसे एक फ़ंक्शन में रैप करें और `tool` डेकोरेटर जोड़ें: यह टूल बनाने का एकमात्र तरीका नहीं है: आप इसे सीधे [`Tool`] का सबक्लास बनाकर भी परिभाषित कर सकते हैं, जो आपको अधिक लचीलापन प्रदान करता है, जैसे भारी क्लास एट्रिब्यूट्स को इनिशियलाइज़ करने की संभावना। चलो देखते हैं कि यह दोनों विकल्पों के लिए कैसे काम करता है: <hfoptions id="build-a-tool"> <hfoption id="@tool के साथ एक फ़ंक्शन को डेकोरेट करें"> ```py from smolagents import tool @tool def model_download_tool(task: str) -> str: """ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint. Args: task: The task for which to get the download count. """ most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return most_downloaded_model.id ``` फ़ंक्शन को चाहिए: - एक स्पष्ट नाम: नाम टूल के कार्य को स्पष्ट रूप से बताने वाला होना चाहिए ताकि इसे चलाने वाले LLM को आसानी हो। चूंकि यह टूल कार्य के लिए सबसे अधिक डाउनलोड किए गए मॉडल को लौटाता है, इसका नाम `model_download_tool` रखा गया है। - इनपुट और आउटपुट पर टाइप हिंट्स। - एक विवरण: इसमें 'Args:' भाग शामिल होना चाहिए, जिसमें प्रत्येक आर्ग्युमेंट का वर्णन (बिना टाइप संकेत के) किया गया हो। यह विवरण एक निर्देश मैनुअल की तरह होता है जो LLM को टूल चलाने में मदद करता है। इसे अनदेखा न करें। इन सभी तत्वों को एजेंट की सिस्टम प्रॉम्प्ट में स्वचालित रूप से शामिल किया जाएगा: इसलिए इन्हें यथासंभव स्पष्ट बनाने का प्रयास करें! > [!TIP] > यह परिभाषा प्रारूप `apply_chat_template` में उपयोग की गई टूल स्कीमा जैसा ही है, केवल अतिरिक्त `tool` डेकोरेटर जोड़ा गया है: हमारे टूल उपयोग API के बारे में अधिक पढ़ें [यहाँ](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template)। </hfoption> <hfoption id="सबक्लास टूल"> ```py from smolagents import Tool class ModelDownloadTool(Tool): name = "model_download_tool" description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint." inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}} output_type = "string" def forward(self, task: str) -> str: most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return most_downloaded_model.id ``` सबक्लास को निम्नलिखित एट्रिब्यूट्स की आवश्यकता होती है: - एक स्पष्ट `name`: नाम टूल के कार्य को स्पष्ट रूप से बताने वाला होना चाहिए। - एक `description`: यह भी LLM के लिए निर्देश मैनुअल की तरह काम करता है। - इनपुट प्रकार और उनके विवरण। - आउटपुट प्रकार। इन सभी एट्रिब्यूट्स को एजेंट की सिस्टम प्रॉम्प्ट में स्वचालित रूप से शामिल किया जाएगा, इन्हें स्पष्ट और विस्तृत बनाएं। </hfoption> </hfoptions> आप सीधे अपने एजेंट को इनिशियलाइज़ कर सकते हैं: ```py from smolagents import CodeAgent, HfApiModel agent = CodeAgent(tools=[model_download_tool], model=HfApiModel()) agent.run( "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?" ) ``` लॉग्स इस प्रकार होंगे: ```text ╭──────────────────────────────────────── New run ─────────────────────────────────────────╮ │ │ │ Can you give me the name of the model that has the most downloads in the 'text-to-video' │ │ task on the Hugging Face Hub? 
│ │ │ ╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮ │ 1 model_name = model_download_tool(task="text-to-video") │ │ 2 print(model_name) │ ╰──────────────────────────────────────────────────────────────────────────────────────────╯ Execution logs: ByteDance/AnimateDiff-Lightning Out: None [Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮ │ 1 final_answer("ByteDance/AnimateDiff-Lightning") │ ╰──────────────────────────────────────────────────────────────────────────────────────────╯ Out - Final answer: ByteDance/AnimateDiff-Lightning [Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148] Out[20]: 'ByteDance/AnimateDiff-Lightning' ``` [!TIP] > टूल्स के बारे में अधिक पढ़ें [dedicated tutorial](./tutorials/tools#टूल-क्या-है-और-इसे-कैसे-बनाएं) में। ## मल्टी-एजेंट्स Microsoft के फ्रेमवर्क [Autogen](https://huggingface.co/papers/2308.08155) के साथ मल्टी-एजेंट सिस्टम्स की शुरुआत हुई। इस प्रकार के फ्रेमवर्क में, आपके कार्य को हल करने के लिए कई एजेंट्स एक साथ काम करते हैं, न कि केवल एक। यह अधिकांश बेंचमार्क्स पर बेहतर प्रदर्शन देता है। इसका कारण यह है कि कई कार्यों के लिए, एक सर्व-समावेशी प्रणाली के बजाय, आप उप-कार्यों पर विशेषज्ञता रखने वाली इकाइयों को पसंद करेंगे। इस तरह, अलग-अलग टूल सेट्स और मेमोरी वाले एजेंट्स के पास विशेषकरण की अधिक कुशलता होती है। उदाहरण के लिए, कोड उत्पन्न करने वाले एजेंट की मेमोरी को वेब सर्च एजेंट द्वारा देखे गए वेबपेजों की सभी सामग्री से क्यों भरें? इन्हें अलग रखना बेहतर है। आप `smolagents` का उपयोग करके आसानी से श्रेणीबद्ध मल्टी-एजेंट सिस्टम्स बना सकते हैं। ऐसा करने के लिए, एजेंट को [`ManagedAgent`] ऑब्जेक्ट में समाहित करें। यह ऑब्जेक्ट `agent`, `name`, और एक `description` जैसे तर्कों की आवश्यकता होती है, जो फिर मैनेजर एजेंट की सिस्टम प्रॉम्प्ट में एम्बेड किया जाता है यहां एक एजेंट बनाने का उदाहरण दिया गया है जो हमारे [`DuckDuckGoSearchTool`] का उपयोग करके एक विशिष्ट वेब खोज एजेंट को प्रबंधित करता है। ```py from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, ManagedAgent model = HfApiModel() web_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model) managed_web_agent = ManagedAgent( agent=web_agent, name="web_search", description="Runs web searches for you. Give it your query as an argument." 
) manager_agent = CodeAgent( tools=[], model=model, managed_agents=[managed_web_agent] ) manager_agent.run("Who is the CEO of Hugging Face?") ``` > [!TIP] > कुशल मल्टी-एजेंट इंप्लीमेंटेशन का एक विस्तृत उदाहरण देखने के लिए, [कैसे हमने अपने मल्टी-एजेंट सिस्टम को GAIA लीडरबोर्ड के शीर्ष पर पहुंचाया](https://huggingface.co/blog/beating-gaia) पर जाएं। ## अपने एजेंट से बात करें और उसके विचारों को एक शानदार Gradio इंटरफेस में विज़ुअलाइज़ करें आप `GradioUI` का उपयोग करके अपने एजेंट को इंटरैक्टिव तरीके से कार्य सौंप सकते हैं और उसके सोचने और निष्पादन की प्रक्रिया को देख सकते हैं। नीचे एक उदाहरण दिया गया है: ```py from smolagents import ( load_tool, CodeAgent, HfApiModel, GradioUI ) # Import tool from Hub image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True) model = HfApiModel(model_id) # Initialize the agent with the image generation tool agent = CodeAgent(tools=[image_generation_tool], model=model) GradioUI(agent).launch() ``` अंदरूनी तौर पर, जब यूजर एक नया उत्तर टाइप करता है, तो एजेंट को `agent.run(user_request, reset=False)` के साथ लॉन्च किया जाता है। यहाँ `reset=False` फ्लैग का मतलब है कि एजेंट की मेमोरी इस नए कार्य को लॉन्च करने से पहले क्लियर नहीं होती, जिससे बातचीत जारी रहती है। आप इस `reset=False` आर्ग्युमेंट का उपयोग किसी भी अन्य एजेंटिक एप्लिकेशन में बातचीत जारी रखने के लिए कर सकते हैं। ## अगले कदम अधिक गहन उपयोग के लिए, आप हमारे ट्यूटोरियल्स देख सकते हैं: - [हमारे कोड एजेंट्स कैसे काम करते हैं इसका विवरण](./tutorials/secure_code_execution) - [अच्छे एजेंट्स बनाने के लिए यह गाइड](./tutorials/building_good_agents) - [टूल उपयोग के लिए इन-डेप्थ गाइड ](./tutorials/building_good_agents)।
{ "source": "huggingface/smolagents", "title": "docs/source/hi/guided_tour.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/guided_tour.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 17734 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # `smolagents` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/license_to_call.png" width=100%/> </div> यह लाइब्रेरी पावरफुल एजेंट्स बनाने के लिए सबसे सरल फ्रेमवर्क है! वैसे, "एजेंट्स" हैं क्या? हम अपनी परिभाषा [इस पेज पर](conceptual_guides/intro_agents) प्रदान करते हैं, जहाँ आपको यह भी पता चलेगा कि इन्हें कब उपयोग करें या न करें (स्पॉइलर: आप अक्सर एजेंट्स के बिना बेहतर काम कर सकते हैं)। यह लाइब्रेरी प्रदान करती है: ✨ **सरलता**: Agents का लॉजिक लगभग एक हजार लाइन्स ऑफ़ कोड में समाहित है। हमने रॉ कोड के ऊपर एब्स्ट्रैक्शन को न्यूनतम आकार में रखा है! 🌐 **सभी LLM के लिए सपोर्ट**: यह हब पर होस्ट किए गए मॉडल्स को उनके `transformers` वर्जन में या हमारे इन्फरेंस API के माध्यम से सपोर्ट करता है, साथ ही OpenAI, Anthropic से भी... किसी भी LLM से एजेंट को पावर करना वास्तव में आसान है। 🧑‍💻 **कोड Agents के लिए फर्स्ट-क्लास सपोर्ट**, यानी ऐसे एजेंट्स जो अपनी एक्शन्स को कोड में लिखते हैं (कोड लिखने के लिए उपयोग किए जाने वाले एजेंट्स के विपरीत), [यहाँ और पढ़ें](tutorials/secure_code_execution)। 🤗 **हब इंटीग्रेशन**: आप टूल्स को हब पर शेयर और लोड कर सकते हैं, और आगे और भी बहुत कुछ आने वाला है! ! 
<div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">गाइडेड टूर</div> <p class="text-gray-700">बेसिक्स सीखें और एजेंट्स का उपयोग करने में परिचित हों। यदि आप पहली बार एजेंट्स का उपयोग कर रहे हैं तो यहाँ से शुरू करें!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql" ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">हाउ-टू गाइड्स</div> <p class="text-gray-700">एक विशिष्ट लक्ष्य प्राप्त करने में मदद के लिए गाइड: SQL क्वेरी जनरेट और टेस्ट करने के लिए एजेंट बनाएं!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents" ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">कॉन्सेप्चुअल गाइड्स</div> <p class="text-gray-700">महत्वपूर्ण विषयों की बेहतर समझ बनाने के लिए उच्च-स्तरीय व्याख्याएं।</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents" ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">ट्यूटोरियल्स</div> <p class="text-gray-700">एजेंट्स बनाने के महत्वपूर्ण पहलुओं को कवर करने वाले क्ट्यूटोरियल्स।</p> </a> </div> </div>
{ "source": "huggingface/smolagents", "title": "docs/source/hi/index.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/index.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 3837 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Agents - 导览 [[open-in-colab]] 在本导览中,您将学习如何构建一个 agent(智能体),如何运行它,以及如何自定义它以使其更好地适应您的使用场景。 > [!TIP] > 译者注:Agent 的业内术语是“智能体”。本译文将保留 agent,不作翻译,以带来更高效的阅读体验。(在中文为主的文章中,It's easier to 注意到英文。Attention Is All You Need!) > [!TIP] > 中文社区发布了关于 smolagents 的介绍和实践讲解视频(来源:[Issue#80](https://github.com/huggingface/smolagents/issues/80)),你可以访问[这里](https://www.youtube.com/watch?v=wwN3oAugc4c)进行观看! ### 构建您的 agent 要初始化一个最小化的 agent,您至少需要以下两个参数: - `model`,一个为您的 agent 提供动力的文本生成模型 - 因为 agent 与简单的 LLM 不同,它是一个使用 LLM 作为引擎的系统。您可以使用以下任一选项: - [`TransformersModel`] 使用预初始化的 `transformers` 管道在本地机器上运行推理 - [`HfApiModel`] 在底层使用 `huggingface_hub.InferenceClient` - [`LiteLLMModel`] 让您通过 [LiteLLM](https://docs.litellm.ai/) 调用 100+ 不同的模型! - `tools`,agent 可以用来解决任务的 `Tools` 列表。它可以是一个空列表。您还可以通过定义可选参数 `add_base_tools=True` 在您的 `tools` 列表之上添加默认工具箱。 一旦有了这两个参数 `tools` 和 `model`,您就可以创建一个 agent 并运行它。您可以使用任何您喜欢的 LLM,无论是通过 [Hugging Face API](https://huggingface.co/docs/api-inference/en/index)、[transformers](https://github.com/huggingface/transformers/)、[ollama](https://ollama.com/),还是 [LiteLLM](https://www.litellm.ai/)。 <hfoptions id="选择一个LLM"> <hfoption id="Hugging Face API"> Hugging Face API 可以免费使用而无需 token,但会有速率限制。 要访问受限模型或使用 PRO 账户提高速率限制,您需要设置环境变量 `HF_TOKEN` 或在初始化 `HfApiModel` 时传递 `token` 变量。 ```python from smolagents import CodeAgent, HfApiModel model_id = "meta-llama/Llama-3.3-70B-Instruct" model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="本地Transformers模型"> ```python # !pip install smolagents[transformers] from smolagents import CodeAgent, TransformersModel model_id = "meta-llama/Llama-3.2-3B-Instruct" model = TransformersModel(model_id=model_id) agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="OpenAI或Anthropic API"> 要使用 `LiteLLMModel`,您需要设置环境变量 `ANTHROPIC_API_KEY` 或 `OPENAI_API_KEY`,或者在初始化时传递 `api_key` 变量。 ```python # !pip install smolagents[litellm] from smolagents import CodeAgent, LiteLLMModel model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # 也可以使用 'gpt-4o' agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> <hfoption id="Ollama"> ```python # !pip install smolagents[litellm] from smolagents import CodeAgent, LiteLLMModel model = LiteLLMModel( model_id="ollama_chat/llama3.2", # 这个模型对于 agent 行为来说有点弱 api_base="http://localhost:11434", # 如果需要可以替换为远程 open-ai 兼容服务器 api_key="YOUR_API_KEY" # 如果需要可以替换为 API key num_ctx=8192 # 
https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator ) agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.run( "Could you give me the 118th number in the Fibonacci sequence?", ) ``` </hfoption> </hfoptions> #### CodeAgent 和 ToolCallingAgent [`CodeAgent`] 是我们的默认 agent。它将在每一步编写并执行 Python 代码片段。 默认情况下,执行是在您的本地环境中完成的。 这应该是安全的,因为唯一可以调用的函数是您提供的工具(特别是如果只有 Hugging Face 的工具)和一组预定义的安全函数,如 `print` 或 `math` 模块中的函数,所以您已经限制了可以执行的内容。 Python 解释器默认也不允许在安全列表之外导入,所以所有最明显的攻击都不应该成为问题。 您可以通过在初始化 [`CodeAgent`] 时将授权模块作为字符串列表传递给参数 `additional_authorized_imports` 来授权额外的导入: ```py from smolagents import CodeAgent agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4']) agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?") ``` > [!WARNING] > LLM 可以生成任意代码然后执行:不要添加任何不安全的导入! 如果生成的代码尝试执行非法操作或出现常规 Python 错误,执行将停止。 您也可以使用 [E2B 代码执行器](https://e2b.dev/docs#what-is-e2-b) 而不是本地 Python 解释器,首先 [设置 `E2B_API_KEY` 环境变量](https://e2b.dev/dashboard?tab=keys),然后在初始化 agent 时传递 `use_e2b_executor=True`。 > [!TIP] > 在 [该教程中](tutorials/secure_code_execution) 了解更多关于代码执行的内容。 我们还支持广泛使用的将动作编写为 JSON-like 块的方式:[`ToolCallingAgent`],它的工作方式与 [`CodeAgent`] 非常相似,当然没有 `additional_authorized_imports`,因为它不执行代码: ```py from smolagents import ToolCallingAgent agent = ToolCallingAgent(tools=[], model=model) agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?") ``` ### 检查 agent 运行 以下是一些有用的属性,用于检查运行后发生了什么: - `agent.logs` 存储 agent 的细粒度日志。在 agent 运行的每一步,所有内容都会存储在一个字典中,然后附加到 `agent.logs` 中。 - 运行 `agent.write_memory_to_messages()` 会为 LLM 创建一个 agent 日志的内部内存,作为聊天消息列表。此方法会遍历日志的每一步,并仅存储它感兴趣的内容作为消息:例如,它会将系统提示和任务存储为单独的消息,然后对于每一步,它会将 LLM 输出存储为一条消息,工具调用输出存储为另一条消息。如果您想要更高级别的视图 - 但不是每个日志都会被此方法转录。 ## 工具 工具是 agent 使用的原子函数。为了被 LLM 使用,它还需要一些构成其 API 的属性,这些属性将用于向 LLM 描述如何调用此工具: - 名称 - 描述 - 输入类型和描述 - 输出类型 例如,您可以查看 [`PythonInterpreterTool`]:它有一个名称、描述、输入描述、输出类型和一个执行操作的 `forward` 方法。 当 agent 初始化时,工具属性用于生成工具描述,该描述被嵌入到 agent 的系统提示中。这让 agent 知道它可以使用哪些工具以及为什么。 ### 默认工具箱 Transformers 附带了一个用于增强 agent 的默认工具箱,您可以在初始化时通过参数 `add_base_tools = True` 将其添加到您的 agent 中: - **DuckDuckGo 网页搜索**:使用 DuckDuckGo 浏览器执行网页搜索。 - **Python 代码解释器**:在安全环境中运行 LLM 生成的 Python 代码。只有在使用 `add_base_tools=True` 初始化 [`ToolCallingAgent`] 时才会添加此工具,因为基于代码的 agent 已经可以原生执行 Python 代码 - **转录器**:基于 Whisper-Turbo 构建的语音转文本管道,将音频转录为文本。 您可以通过调用 [`load_tool`] 函数和要执行的任务手动使用工具。 ```python from smolagents import DuckDuckGoSearchTool search_tool = DuckDuckGoSearchTool() print(search_tool("Who's the current president of Russia?")) ``` ### 创建一个新工具 您可以创建自己的工具,用于 Hugging Face 默认工具未涵盖的用例。 例如,让我们创建一个工具,返回 Hub 上给定任务下载量最多的模型。 您将从以下代码开始。 ```python from huggingface_hub import list_models task = "text-classification" most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) print(most_downloaded_model.id) ``` 这段代码可以通过将其包装在一个函数中并添加 `tool` 装饰器快速转换为工具: 这不是构建工具的唯一方法:您可以直接将其定义为 [`Tool`] 的子类,这为您提供了更多的灵活性,例如初始化重型类属性的可能性。 让我们看看这两种选项的工作原理: <hfoptions id="构建工具"> <hfoption id="使用@tool装饰一个函数"> ```py from smolagents import tool @tool def model_download_tool(task: str) -> str: """ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint. Args: task: The task for which to get the download count. 
""" most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return most_downloaded_model.id ``` 该函数需要: - 一个清晰的名称。名称应该足够描述此工具的功能,以帮助为 agent 提供动力的 LLM。由于此工具返回任务下载量最多的模型,我们将其命名为 `model_download_tool`。 - 输入和输出的类型提示 - 一个描述,其中包括一个 'Args:' 部分,其中每个参数都被描述(这次没有类型指示,它将从类型提示中提取)。与工具名称一样,此描述是为您的 agent 提供动力的 LLM 的说明书,所以不要忽视它。 所有这些元素将在初始化时自动嵌入到 agent 的系统提示中:因此要努力使它们尽可能清晰! > [!TIP] > 此定义格式与 `apply_chat_template` 中使用的工具模式相同,唯一的区别是添加了 `tool` 装饰器:[这里](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template) 了解更多关于我们的工具使用 API。 </hfoption> <hfoption id="子类化Tool"> ```py from smolagents import Tool class ModelDownloadTool(Tool): name = "model_download_tool" description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint." inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}} output_type = "string" def forward(self, task: str) -> str: most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return most_downloaded_model.id ``` 子类需要以下属性: - 一个清晰的 `name`。名称应该足够描述此工具的功能,以帮助为 agent 提供动力的 LLM。由于此工具返回任务下载量最多的模型,我们将其命名为 `model_download_tool`。 - 一个 `description`。与 `name` 一样,此描述是为您的 agent 提供动力的 LLM 的说明书,所以不要忽视它。 - 输入类型和描述 - 输出类型 所有这些属性将在初始化时自动嵌入到 agent 的系统提示中:因此要努力使它们尽可能清晰! </hfoption> </hfoptions> 然后您可以直接初始化您的 agent: ```py from smolagents import CodeAgent, HfApiModel agent = CodeAgent(tools=[model_download_tool], model=HfApiModel()) agent.run( "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?" ) ``` 您将获得以下日志: ```text ╭──────────────────────────────────────── New run ─────────────────────────────────────────╮ │ │ │ Can you give me the name of the model that has the most downloads in the 'text-to-video' │ │ task on the Hugging Face Hub? 
│ │ │ ╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────╯ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮ │ 1 model_name = model_download_tool(task="text-to-video") │ │ 2 print(model_name) │ ╰──────────────────────────────────────────────────────────────────────────────────────────╯ Execution logs: ByteDance/AnimateDiff-Lightning Out: None [Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ╭─ Executing this code: ───────────────────────────────────────────────────────────────────╮ │ 1 final_answer("ByteDance/AnimateDiff-Lightning") │ ╰──────────────────────────────────────────────────────────────────────────────────────────╯ Out - Final answer: ByteDance/AnimateDiff-Lightning [Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148] Out[20]: 'ByteDance/AnimateDiff-Lightning' ``` > [!TIP] > 在 [专用教程](./tutorials/tools#what-is-a-tool-and-how-to-build-one) 中了解更多关于工具的内容。 ## 多 agent 多 agent 系统是随着微软的框架 [Autogen](https://huggingface.co/papers/2308.08155) 引入的。 在这种类型的框架中,您有多个 agent 一起工作来解决您的任务,而不是只有一个。 经验表明,这在大多数基准测试中表现更好。这种更好表现的原因在概念上很简单:对于许多任务,与其使用一个全能系统,您更愿意将单元专门用于子任务。在这里,拥有具有单独工具集和内存的 agent 可以实现高效的专业化。例如,为什么要用网页搜索 agent 访问的所有网页内容填充代码生成 agent 的内存?最好将它们分开。 您可以使用 `smolagents` 轻松构建分层多 agent 系统。 为此,将 agent 封装在 [`ManagedAgent`] 对象中。此对象需要参数 `agent`、`name` 和 `description`,这些参数将嵌入到管理 agent 的系统提示中,以让它知道如何调用此托管 agent,就像我们对工具所做的那样。 以下是一个使用我们的 [`DuckDuckGoSearchTool`] 制作一个管理特定网页搜索 agent 的 agent 的示例: ```py from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool, ManagedAgent model = HfApiModel() web_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model) managed_web_agent = ManagedAgent( agent=web_agent, name="web_search", description="Runs web searches for you. Give it your query as an argument." ) manager_agent = CodeAgent( tools=[], model=model, managed_agents=[managed_web_agent] ) manager_agent.run("Who is the CEO of Hugging Face?") ``` > [!TIP] > 有关高效多 agent 实现的深入示例,请参阅 [我们如何将多 agent 系统推向 GAIA 排行榜的顶部](https://huggingface.co/blog/beating-gaia)。 ## 与您的 agent 交谈并在酷炫的 Gradio 界面中可视化其思考过程 您可以使用 `GradioUI` 交互式地向您的 agent 提交任务并观察其思考和执行过程,以下是一个示例: ```py from smolagents import ( load_tool, CodeAgent, HfApiModel, GradioUI ) # 从 Hub 导入工具 image_generation_tool = load_tool("m-ric/text-to-image") model = HfApiModel(model_id) # 使用图像生成工具初始化 agent agent = CodeAgent(tools=[image_generation_tool], model=model) GradioUI(agent).launch() ``` 在底层,当用户输入新答案时,agent 会以 `agent.run(user_request, reset=False)` 启动。 `reset=False` 标志意味着在启动此新任务之前不会刷新 agent 的内存,这使得对话可以继续。 您也可以在其他 agent 化应用程序中使用此 `reset=False` 参数来保持对话继续。 ## 下一步 要更深入地使用,您将需要查看我们的教程: - [我们的代码 agent 如何工作的解释](./tutorials/secure_code_execution) - [本指南关于如何构建好的 agent](./tutorials/building_good_agents)。 - [工具使用的深入指南](./tutorials/tools)。
{ "source": "huggingface/smolagents", "title": "docs/source/zh/guided_tour.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/guided_tour.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 12438 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # `smolagents` 这是构建强大 agent 的最简单框架!顺便问一下,什么是 "agent"?我们在[此页面](conceptual_guides/intro_agents)提供了我们的定义,您还可以找到关于何时使用或不使用它们的建议(剧透:通常不使用 agent 会更好)。 > [!TIP] > 译者注:Agent 的业内术语是“智能体”。本译文将保留 agent,不作翻译,以带来更高效的阅读体验。(在中文为主的文章中,It's easier to 注意到英文。Attention Is All You Need!) 本库提供: ✨ **简洁性**:Agent 逻辑仅需约千行代码。我们将抽象保持在原始代码之上的最小形态! 🌐 **支持任何 LLM**:支持通过 Hub 托管的模型,使用其 `transformers` 版本或通过我们的推理 API 加载,也支持 OpenAI、Anthropic 等模型。使用任何 LLM 为 agent 提供动力都非常容易。 🧑‍💻 **一流的代码 agent 支持**,即编写代码作为其操作的 agent(与"用于编写代码的 agent"相对),[在此了解更多](tutorials/secure_code_execution)。 🤗 **Hub 集成**:您可以在 Hub 上共享和加载工具,更多功能即将推出! <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">导览</div> <p class="text-gray-700">学习基础知识并熟悉使用 agent。如果您是第一次使用 agent,请从这里开始!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql" ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">操作指南</div> <p class="text-gray-700">实用指南,帮助您实现特定目标:创建一个生成和测试 SQL 查询的 agent!</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents" ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">概念指南</div> <p class="text-gray-700">高级解释,帮助您更好地理解重要主题。</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents" ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">教程</div> <p class="text-gray-700">涵盖构建 agent 重要方面的横向教程。</p> </a> </div> </div>
{ "source": "huggingface/smolagents", "title": "docs/source/zh/index.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/index.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2962 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Introduction to Agents ## 🤔 What are agents? Any efficient system using AI will need to provide LLMs some kind of access to the real world: for instance the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have ***agency***. Agentic programs are the gateway to the outside world for LLMs. > [!TIP] > AI Agents are **programs where LLM outputs control the workflow**. Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM's input on the code workflow is the level of agency of LLMs in the system. Note that with this definition, "agent" is not a discrete, 0 or 1 definition: instead, "agency" evolves on a continuous spectrum, as you give more or less power to the LLM on your workflow. See in the table below how agency can vary across systems: | Agency Level | Description | How that's called | Example Pattern | | ------------ | ------------------------------------------------------- | ----------------- | -------------------------------------------------- | | ☆☆☆ | LLM output has no impact on program flow | Simple Processor | `process_llm_output(llm_response)` | | ★☆☆ | LLM output determines an if/else switch | Router | `if llm_decision(): path_a() else: path_b()` | | ★★☆ | LLM output determines function execution | Tool Caller | `run_function(llm_chosen_tool, llm_chosen_args)` | | ★★★ | LLM output controls iteration and program continuation | Multi-step Agent | `while llm_should_continue(): execute_next_step()` | | ★★★ | One agentic workflow can start another agentic workflow | Multi-Agent | `if llm_trigger(): execute_agent()` | The multi-step agent has this code structure: ```python memory = [user_defined_task] while llm_should_continue(memory): # this loop is the multi-step part action = llm_get_next_action(memory) # this is the tool-calling part observations = execute_action(action) memory += [action, observations] ``` This agentic system runs in a loop, executing a new action at each step (the action can involve calling some pre-determined *tools* that are just functions), until its observations make it apparent that a satisfactory state has been reached to solve the given task. Here’s an example of how a multi-step agent can solve a simple math question: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"/> </div> ## ✅ When to use agents / ⛔ when to avoid them Agents are useful when you need an LLM to determine the workflow of an app. But they’re often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand? 
If the pre-determined workflow falls short too often, that means you need more flexibility. Let's take an example: say you're making an app that handles customer requests on a surfing trip website. You could know in advance that the requests will belong to either of 2 buckets (based on user choice), and you have a predefined workflow for each of these 2 cases. 1. Want some knowledge on the trips? ⇒ give them access to a search bar to search your knowledge base 2. Wants to talk to sales? ⇒ let them type in a contact form. If that deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it's advised to regularize towards not using any agentic behaviour. But what if the workflow can't be determined that well in advance? For instance, a user wants to ask: `"I can come on Monday, but I forgot my passport so risk being delayed to Wednesday, is it possible to take me and my stuff to surf on Tuesday morning, with a cancellation insurance?"` This question hinges on many factors, and probably none of the predetermined criteria above will suffice for this request. If the pre-determined workflow falls short too often, that means you need more flexibility. That is where an agentic setup helps. In the above example, you could just make a multi-step agent that has access to a weather API for weather forecasts, Google Maps API to compute travel distance, an employee availability dashboard and a RAG system on your knowledge base. Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like "compute the sum of these numbers" or "find the shortest path in this graph". But actually, most real-life tasks, like our trip example above, do not fit in pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs! ## Why `smolagents`? For some low-level agentic use cases, like chains or routers, you can write all the code yourself. You'll be much better that way, since it will let you control and understand your system better. But once you start going for more complicated behaviours like letting an LLM call a function (that's "tool calling") or letting an LLM run a while loop ("multi-step agent"), some abstractions become necessary: - For tool calling, you need to parse the agent's output, so this output needs a predefined format like "Thought: I should call tool 'get_weather'. Action: get_weather(Paris).", that you parse with a predefined function, and system prompt given to the LLM should notify it about this format. - For a multi-step agent where the LLM output determines the loop, you need to give a different prompt to the LLM based on what happened in the last loop iteration: so you need some kind of memory. See? With these two examples, we already found the need for a few items to help us: - Of course, an LLM that acts as the engine powering the system - A list of tools that the agent can access - A parser that extracts tool calls from the LLM output - A system prompt synced with the parser - A memory But wait, since we give room to LLMs in decisions, surely they will make mistakes: so we need error logging and retry mechanisms. All these elements need tight coupling to make a well-functioning system. 
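To make the need for these building blocks concrete, here is a minimal hand-rolled sketch of the tool-calling case: a toy `get_weather` tool, a naive `Action:` output format, a regex parser, and a memory list. Every name in it is made up for illustration; it is not how `smolagents` is implemented.

```python
import re

# Toy tool and registry: both names are made up for this sketch.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# The format that the system prompt would have to describe to the LLM.
ACTION_PATTERN = re.compile(r"Action:\s*(\w+)\((.*?)\)")

def parse_action(llm_output: str):
    """Extract a tool name and its single argument from the LLM output."""
    match = ACTION_PATTERN.search(llm_output)
    if match is None:
        raise ValueError(f"Could not parse an action from: {llm_output!r}")
    tool_name, raw_arg = match.groups()
    return tool_name, raw_arg.strip("'\" ")

def fake_llm(memory):
    # Stand-in for a real model call; it would normally see the whole memory.
    return "Thought: I should check the weather. Action: get_weather('Paris')"

memory = ["Task: what is the weather in Paris?"]
for _ in range(3):  # cap the number of steps instead of trusting the loop blindly
    output = fake_llm(memory)
    try:
        tool_name, arg = parse_action(output)
        observation = TOOLS[tool_name](arg)
    except (ValueError, KeyError) as error:
        observation = f"Error: {error}"  # log the error and let the LLM retry next step
    memory += [output, f"Observation: {observation}"]
    break  # a real agent would let the LLM decide when to stop

print(memory)
```

Even in this toy version, the parser, the prompt format it implies, the memory, and the error handling all have to agree with each other.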
That's why we decided we needed to make basic building blocks to make all of this work together.

## Code agents

In a multi-step agent, at each step, the LLM can write an action, in the form of some calls to external tools. A common format (used by Anthropic, OpenAI, and many others) for writing these actions is some variation of "write the action as a JSON of tool names and arguments to use, which you then parse to know which tool to execute and with which arguments".

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its tool calls in code works much better. The reason is simply that *we crafted our code languages specifically to be the best possible way to express actions performed by a computer*. If JSON snippets were a better way to express actions, JSON would be the top programming language and programming would be hell on earth.

The figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030), illustrates some advantages of writing actions in code:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

Writing actions in code rather than JSON-like snippets provides better:

- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you can define a Python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to simply express anything you can have a computer do.
- **Representation in LLM training data:** plenty of high-quality code actions are already included in LLMs' training data, which means they are already trained for this!
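To see why this matters in practice, here is a small illustrative comparison (using toy tools, not the smolagents API) of the same two-step action written once as a code action and once as JSON tool calls:

```python
# Illustrative sketch: the same two-step action expressed as a code action
# vs. as JSON tool calls. The tools below are stand-ins invented for this example.

def web_search(query: str) -> str:
    # A real implementation would hit a search engine.
    return f"results for {query!r}"

def summarize(text: str) -> str:
    # A real implementation would call an LLM.
    return text[:40] + "..."

# Code action: the LLM writes executable code, so it can compose tools,
# keep intermediate objects in variables, and use loops natively.
code_action = """
pages = [web_search(q) for q in ["LLM agents", "code actions"]]
summary = summarize(" ".join(pages))
print(summary)
"""
exec(code_action, {"web_search": web_search, "summarize": summarize})

# JSON actions: each tool call is a separate blob; composing them forces the
# orchestrator to run one step per call and shuttle outputs between steps.
json_actions = [
    {"tool": "web_search", "arguments": {"query": "LLM agents"}},
    {"tool": "web_search", "arguments": {"query": "code actions"}},
    {"tool": "summarize", "arguments": {"text": "<output of the two calls above?>"}},
]
```

The code version keeps intermediate results in ordinary variables and composes the two tools in a single step, which is exactly the composability and object-management advantage listed above.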
{ "source": "huggingface/smolagents", "title": "docs/source/en/conceptual_guides/intro_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/conceptual_guides/intro_agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 9161 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->

# How do multi-step agents work?

The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) is currently the main approach to building agents. The name is based on the concatenation of two words, "Reason" and "Act": agents following this architecture solve their task in as many steps as needed, each step consisting of a Reasoning step followed by an Action step in which the agent formulates tool calls that bring it closer to solving the task at hand.

All agents in `smolagents` are based on a single `MultiStepAgent` class, which is an abstraction of the ReAct framework. At a basic level, this class performs actions in a cycle of the following steps, where existing variables and knowledge are incorporated into the agent logs:

Initialization: the system prompt is stored in a `SystemPromptStep`, and the user query is logged in a `TaskStep`.

While loop (ReAct loop):

- Use `agent.write_memory_to_messages()` to write the agent logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/en/chat_templating).
- Send these messages to a `Model` object to get its completion, then parse the completion to get the action (a JSON blob for `ToolCallingAgent`, a code snippet for `CodeAgent`).
- Execute the action and log the result into memory (an `ActionStep`).
- At the end of each step, run all callback functions defined in `agent.step_callbacks`.

Optionally, when planning is activated, a plan can be periodically revised and stored in a `PlanningStep`. This includes feeding facts about the task at hand to the memory.

For a `CodeAgent`, it looks like the figure below.

<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png" />
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png" />
</div>

Here is a video overview of how that works:

<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" />
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" />
</div>

![Framework of a React Agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)

We implement two versions of agents:

- [`CodeAgent`] is the preferred type of agent: it generates its tool calls as blobs of code.
- [`ToolCallingAgent`] writes its tool calls as JSON in its output, as is commonly done in agentic frameworks.
We include this option because it can be useful in the narrower cases where a single tool call per step is enough: for instance, in web browsing, you need to wait after each action on the page to monitor how the page changes.

> [!TIP]
> We also provide an option to run agents in one shot: just pass `single_step=True` when launching the agent, like `agent.run(your_task, single_step=True)`.

> [!TIP]
> Read the [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.
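To make the cycle above concrete, here is a self-contained toy sketch of the ReAct loop for a code-writing agent. The helper names mirror the steps described above but are illustrative stand-ins, not the actual `smolagents` classes.

```python
# A toy version of the ReAct cycle described above. The function names here
# are illustrative, not the real smolagents implementation.

def write_memory_to_messages(memory):
    """Turn the agent's log entries into LLM-readable chat messages."""
    return [{"role": "user" if i == 0 else "assistant", "content": entry}
            for i, entry in enumerate(memory)]

def model(messages):
    # Stand-in for a real LLM call; here it always proposes the same code action.
    return "Code:\nresult = 118 * 2\nfinal_answer(result)"

def parse_code_action(completion):
    """For a code agent, the 'action' is just the code snippet after 'Code:'."""
    return completion.split("Code:\n", 1)[1]

def execute(code, state):
    """Run the code action and capture whatever final_answer() was called with."""
    answers = []
    state["final_answer"] = answers.append
    exec(code, state)
    return answers[0] if answers else None

memory = ["Task: compute 118 * 2"]
step_callbacks = [lambda step_log: print("callback saw:", step_log)]

for step in range(3):
    messages = write_memory_to_messages(memory)   # memory -> chat messages
    completion = model(messages)                  # model completion
    action = parse_code_action(completion)        # parse the action
    observation = execute(action, state={})       # execute and observe
    memory.append(f"Step {step}: ran action, observed {observation}")
    for callback in step_callbacks:               # end-of-step callbacks
        callback(memory[-1])
    if observation is not None:  # a final answer ends the loop
        break

print("Final answer:", observation)
```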
{ "source": "huggingface/smolagents", "title": "docs/source/en/conceptual_guides/react.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/conceptual_guides/react.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4180 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Orchestrate a multi-agent system 🤖🤝🤖 [[open-in-colab]] In this notebook we will make a **multi-agent web browser: an agentic system with several agents collaborating to solve problems using the web!** It will be a simple hierarchy: ``` +----------------+ | Manager agent | +----------------+ | _______________|______________ | | Code Interpreter +------------------+ tool | Web Search agent | +------------------+ | | Web Search tool | Visit webpage tool ``` Let's set up this system. Run the line below to install the required dependencies: ``` !pip install markdownify duckduckgo-search smolagents --upgrade -q ``` Let's login in order to call the HF Inference API: ``` from huggingface_hub import login login() ``` ⚡️ Our agent will be powered by [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using `HfApiModel` class that uses HF's Inference API: the Inference API allows to quickly and easily run any OS model. _Note:_ The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models). ```py model_id = "Qwen/Qwen2.5-Coder-32B-Instruct" ``` ## 🔍 Create a web search tool For web browsing, we can already use our pre-existing [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) tool to provide a Google search equivalent. But then we will also need to be able to peak into the page found by the `DuckDuckGoSearchTool`. To do so, we could import the library's built-in `VisitWebpageTool`, but we will build it again to see how it's done. So let's create our `VisitWebpageTool` tool from scratch using `markdownify`. ```py import re import requests from markdownify import markdownify from requests.exceptions import RequestException from smolagents import tool @tool def visit_webpage(url: str) -> str: """Visits a webpage at the given URL and returns its content as a markdown string. Args: url: The URL of the webpage to visit. Returns: The content of the webpage converted to Markdown, or an error message if the request fails. """ try: # Send a GET request to the URL response = requests.get(url) response.raise_for_status() # Raise an exception for bad status codes # Convert the HTML content to Markdown markdown_content = markdownify(response.text).strip() # Remove multiple line breaks markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content) return markdown_content except RequestException as e: return f"Error fetching the webpage: {str(e)}" except Exception as e: return f"An unexpected error occurred: {str(e)}" ``` Ok, now let's initialize and test our tool! 
```py print(visit_webpage("https://en.wikipedia.org/wiki/Hugging_Face")[:500]) ``` ## Build our multi-agent system 🤖🤝🤖 Now that we have all the tools `search` and `visit_webpage`, we can use them to create the web agent. Which configuration to choose for this agent? - Web browsing is a single-timeline task that does not require parallel tool calls, so JSON tool calling works well for that. We thus choose a `ToolCallingAgent`. - Also, since sometimes web search requires exploring many pages before finding the correct answer, we prefer to increase the number of `max_steps` to 10. ```py from smolagents import ( CodeAgent, ToolCallingAgent, HfApiModel, DuckDuckGoSearchTool, LiteLLMModel, ) model = HfApiModel(model_id) web_agent = ToolCallingAgent( tools=[DuckDuckGoSearchTool(), visit_webpage], model=model, max_steps=10, name="search", description="Runs web searches for you. Give it your query as an argument.", ) ``` Note that we gave this agent attributes `name` and `description`, mandatory attributes to make this agent callable by its manager agent. Then we create a manager agent, and upon initialization we pass our managed agent to it in its `managed_agents` argument. Since this agent is the one tasked with the planning and thinking, advanced reasoning will be beneficial, so a `CodeAgent` will be the best choice. Also, we want to ask a question that involves the current year and does additional data calculations: so let us add `additional_authorized_imports=["time", "numpy", "pandas"]`, just in case the agent needs these packages. ```py manager_agent = CodeAgent( tools=[], model=model, managed_agents=[web_agent], additional_authorized_imports=["time", "numpy", "pandas"], ) ``` That's all! Now let's run our system! We select a question that requires both some calculation and research: ```py answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.") ``` We get this report as the answer: ``` Based on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the current rhythm until 2030: 1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which translates to about 2,660,762 GWh/year. 2. Comparing this to countries' electricity consumption: - It would be equivalent to about 34% of China's total electricity consumption. - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%). - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico. 3. Source of numbers: - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman. - The growth projection used a CAGR of 79.80% from market research by Springs. - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year 2021. ``` Seems like we'll need some sizeable powerplants if the [scaling hypothesis](https://gwern.net/scaling-hypothesis) continues to hold true. Our agents managed to efficiently collaborate towards solving the task! ✅ 💡 You can easily extend this orchestration to more agents: one does the code execution, one the web search, one handles file loadings...
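As a pointer for that extension, here is a hedged sketch that adds a second managed agent devoted to numerical analysis next to the web search agent. The extra agent's name, description, and division of labor are assumptions made for illustration, and it assumes `CodeAgent` accepts the same `name`/`description` arguments as the `ToolCallingAgent` above; adapt it to your own use case.

```py
# Hypothetical extension of the setup above: a second managed agent for
# number crunching, so the manager can delegate both search and analysis.
analysis_agent = CodeAgent(
    tools=[],
    model=model,
    additional_authorized_imports=["numpy", "pandas"],
    name="analysis",
    description="Runs numerical analysis for you. Give it the data and the question as an argument.",
)

manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[web_agent, analysis_agent],
    additional_authorized_imports=["time", "numpy", "pandas"],
)

answer = manager_agent.run(
    "Find the latest reported populations of France and Germany, then compute their ratio."
)
```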
{ "source": "huggingface/smolagents", "title": "docs/source/en/examples/multiagents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/multiagents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7468 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Agentic RAG [[open-in-colab]] Retrieval-Augmented-Generation (RAG) is “using an LLM to answer a user query, but basing the answer on information retrieved from a knowledge base”. It has many advantages over using a vanilla or fine-tuned LLM: to name a few, it allows to ground the answer on true facts and reduce confabulations, it allows to provide the LLM with domain-specific knowledge, and it allows fine-grained control of access to information from the knowledge base. But vanilla RAG has limitations, most importantly these two: - It performs only one retrieval step: if the results are bad, the generation in turn will be bad. - Semantic similarity is computed with the user query as a reference, which might be suboptimal: for instance, the user query will often be a question and the document containing the true answer will be in affirmative voice, so its similarity score will be downgraded compared to other source documents in the interrogative form, leading to a risk of missing the relevant information. We can alleviate these problems by making a RAG agent: very simply, an agent armed with a retriever tool! This agent will: ✅ Formulate the query itself and ✅ Critique to re-retrieve if needed. So it should naively recover some advanced RAG techniques! - Instead of directly using the user query as the reference in semantic search, the agent formulates itself a reference sentence that can be closer to the targeted documents, as in [HyDE](https://huggingface.co/papers/2212.10496). The agent can use the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/). Let's build this system. 🛠️ Run the line below to install required dependencies: ```bash !pip install smolagents pandas langchain langchain-community sentence-transformers datasets python-dotenv rank_bm25 --upgrade -q ``` To call the HF Inference API, you will need a valid token as your environment variable `HF_TOKEN`. We use python-dotenv to load it. ```py from dotenv import load_dotenv load_dotenv() ``` We first load a knowledge base on which we want to perform RAG: this dataset is a compilation of the documentation pages for many Hugging Face libraries, stored as markdown. We will keep only the documentation for the `transformers` library. Then prepare the knowledge base by processing the dataset and storing it into a vector database to be used by the retriever. We use [LangChain](https://python.langchain.com/docs/introduction/) for its excellent vector database utilities. 
```py import datasets from langchain.docstore.document import Document from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_community.retrievers import BM25Retriever knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train") knowledge_base = knowledge_base.filter(lambda row: row["source"].startswith("huggingface/transformers")) source_docs = [ Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]}) for doc in knowledge_base ] text_splitter = RecursiveCharacterTextSplitter( chunk_size=500, chunk_overlap=50, add_start_index=True, strip_whitespace=True, separators=["\n\n", "\n", ".", " ", ""], ) docs_processed = text_splitter.split_documents(source_docs) ``` Now the documents are ready. So let’s build our agentic RAG system! 👉 We only need a RetrieverTool that our agent can leverage to retrieve information from the knowledge base. Since we need to add a vectordb as an attribute of the tool, we cannot simply use the simple tool constructor with a `@tool` decorator: so we will follow the advanced setup highlighted in the [tools tutorial](../tutorials/tools). ```py from smolagents import Tool class RetrieverTool(Tool): name = "retriever" description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query." inputs = { "query": { "type": "string", "description": "The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.", } } output_type = "string" def __init__(self, docs, **kwargs): super().__init__(**kwargs) self.retriever = BM25Retriever.from_documents( docs, k=10 ) def forward(self, query: str) -> str: assert isinstance(query, str), "Your search query must be a string" docs = self.retriever.invoke( query, ) return "\nRetrieved documents:\n" + "".join( [ f"\n\n===== Document {str(i)} =====\n" + doc.page_content for i, doc in enumerate(docs) ] ) retriever_tool = RetrieverTool(docs_processed) ``` We have used BM25, a classic retrieval method, because it's lightning fast to setup. To improve retrieval accuracy, you could use replace BM25 with semantic search using vector representations for documents: thus you can head to the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) to select a good embedding model. Now it’s straightforward to create an agent that leverages this `retriever_tool`! The agent will need these arguments upon initialization: - `tools`: a list of tools that the agent will be able to call. - `model`: the LLM that powers the agent. Our `model` must be a callable that takes as input a list of messages and returns text. It also needs to accept a stop_sequences argument that indicates when to stop its generation. For convenience, we directly use the HfEngine class provided in the package to get a LLM engine that calls Hugging Face's Inference API. >[!NOTE] To use a specific model, pass it like this: `HfApiModel("meta-llama/Llama-3.3-70B-Instruct")`. The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models). 
```py
from smolagents import HfApiModel, CodeAgent

agent = CodeAgent(
    tools=[retriever_tool], model=HfApiModel(), max_steps=4, verbosity_level=2
)
```

Upon initialization, the `CodeAgent` automatically receives a default system prompt that tells the LLM engine to reason step by step and to emit tool calls as code snippets; you can replace this prompt template with your own if needed.

Then, when its `.run()` method is launched, the agent takes care of calling the LLM engine and executing the tool calls, in a loop that ends only when the `final_answer` tool is called with the final answer as its argument.

```py
agent_output = agent.run("For a transformers model training, which is slower, the forward or the backward pass?")

print("Final output:")
print(agent_output)
```
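As suggested earlier, you could swap BM25 for embedding-based retrieval. Below is a minimal sketch of such a variant, assuming you have `faiss-cpu` and `sentence-transformers` installed; the embedding model name is only an example choice, so pick one from the MTEB leaderboard that suits your needs.

```py
from smolagents import Tool
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings

# Build a vector index over the same processed documents (sketch; requires faiss-cpu
# and sentence-transformers; the embedding model below is just an example).
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")
vector_store = FAISS.from_documents(docs_processed, embeddings)

class SemanticRetrieverTool(Tool):
    name = "retriever"
    description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query."
    inputs = {
        "query": {
            "type": "string",
            "description": "The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.",
        }
    }
    output_type = "string"

    def __init__(self, vector_store, **kwargs):
        super().__init__(**kwargs)
        self.vector_store = vector_store

    def forward(self, query: str) -> str:
        assert isinstance(query, str), "Your search query must be a string"
        docs = self.vector_store.similarity_search(query, k=10)
        return "\nRetrieved documents:\n" + "".join(
            f"\n\n===== Document {i} =====\n" + doc.page_content for i, doc in enumerate(docs)
        )

retriever_tool = SemanticRetrieverTool(vector_store)
```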
{ "source": "huggingface/smolagents", "title": "docs/source/en/examples/rag.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/rag.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7628 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Text-to-SQL [[open-in-colab]] In this tutorial, we’ll see how to implement an agent that leverages SQL using `smolagents`. > Let's start with the golden question: why not keep it simple and use a standard text-to-SQL pipeline? A standard text-to-sql pipeline is brittle, since the generated SQL query can be incorrect. Even worse, the query could be incorrect, but not raise an error, instead giving some incorrect/useless outputs without raising an alarm. 👉 Instead, an agent system is able to critically inspect outputs and decide if the query needs to be changed or not, thus giving it a huge performance boost. Let’s build this agent! 💪 Run the line below to install required dependencies: ```bash !pip install smolagents python-dotenv sqlalchemy --upgrade -q ``` To call the HF Inference API, you will need a valid token as your environment variable `HF_TOKEN`. We use python-dotenv to load it. ```py from dotenv import load_dotenv load_dotenv() ``` Then, we setup the SQL environment: ```py from sqlalchemy import ( create_engine, MetaData, Table, Column, String, Integer, Float, insert, inspect, text, ) engine = create_engine("sqlite:///:memory:") metadata_obj = MetaData() def insert_rows_into_table(rows, table, engine=engine): for row in rows: stmt = insert(table).values(**row) with engine.begin() as connection: connection.execute(stmt) table_name = "receipts" receipts = Table( table_name, metadata_obj, Column("receipt_id", Integer, primary_key=True), Column("customer_name", String(16), primary_key=True), Column("price", Float), Column("tip", Float), ) metadata_obj.create_all(engine) rows = [ {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20}, {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24}, {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43}, {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00}, ] insert_rows_into_table(rows, receipts) ``` ### Build our agent Now let’s make our SQL table retrievable by a tool. The tool’s description attribute will be embedded in the LLM’s prompt by the agent system: it gives the LLM information about how to use the tool. This is where we want to describe the SQL table. ```py inspector = inspect(engine) columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")] table_description = "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info]) print(table_description) ``` ```text Columns: - receipt_id: INTEGER - customer_name: VARCHAR(16) - price: FLOAT - tip: FLOAT ``` Now let’s build our tool. It needs the following: (read [the tool doc](../tutorials/tools) for more detail) - A docstring with an `Args:` part listing arguments. 
- Type hints on both inputs and output. ```py from smolagents import tool @tool def sql_engine(query: str) -> str: """ Allows you to perform SQL queries on the table. Returns a string representation of the result. The table is named 'receipts'. Its description is as follows: Columns: - receipt_id: INTEGER - customer_name: VARCHAR(16) - price: FLOAT - tip: FLOAT Args: query: The query to perform. This should be correct SQL. """ output = "" with engine.connect() as con: rows = con.execute(text(query)) for row in rows: output += "\n" + str(row) return output ``` Now let us create an agent that leverages this tool. We use the `CodeAgent`, which is smolagents’ main agent class: an agent that writes actions in code and can iterate on previous output according to the ReAct framework. The model is the LLM that powers the agent system. `HfApiModel` allows you to call LLMs using HF’s Inference API, either via Serverless or Dedicated endpoint, but you could also use any proprietary API. ```py from smolagents import CodeAgent, HfApiModel agent = CodeAgent( tools=[sql_engine], model=HfApiModel("meta-llama/Meta-Llama-3.1-8B-Instruct"), ) agent.run("Can you give me the name of the client who got the most expensive receipt?") ``` ### Level 2: Table joins Now let’s make it more challenging! We want our agent to handle joins across multiple tables. So let’s make a second table recording the names of waiters for each receipt_id! ```py table_name = "waiters" waiters = Table( table_name, metadata_obj, Column("receipt_id", Integer, primary_key=True), Column("waiter_name", String(16), primary_key=True), ) metadata_obj.create_all(engine) rows = [ {"receipt_id": 1, "waiter_name": "Corey Johnson"}, {"receipt_id": 2, "waiter_name": "Michael Watts"}, {"receipt_id": 3, "waiter_name": "Michael Watts"}, {"receipt_id": 4, "waiter_name": "Margaret James"}, ] insert_rows_into_table(rows, waiters) ``` Since we changed the table, we update the `SQLExecutorTool` with this table’s description to let the LLM properly leverage information from this table. ```py updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output. It can use the following tables:""" inspector = inspect(engine) for table in ["receipts", "waiters"]: columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)] table_description = f"Table '{table}':\n" table_description += "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info]) updated_description += "\n\n" + table_description print(updated_description) ``` Since this request is a bit harder than the previous one, we’ll switch the LLM engine to use the more powerful [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)! ```py sql_engine.description = updated_description agent = CodeAgent( tools=[sql_engine], model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"), ) agent.run("Which waiter got more total money from tips?") ``` It directly works! The setup was surprisingly simple, wasn’t it? This example is done! We've touched upon these concepts: - Building new tools. - Updating a tool's description. - Switching to a stronger LLM helps agent reasoning. ✅ Now you can go build this text-to-SQL system you’ve always dreamt of! ✨
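As a final tip: if you want to double-check the agent's answer, you can always run the corresponding SQL yourself on the same engine. Here is a quick sketch for the tips question above, reusing the `engine` and `text` objects defined earlier:

```py
# Sanity check: compute total tips per waiter directly with SQL
with engine.connect() as con:
    rows = con.execute(text(
        "SELECT w.waiter_name, SUM(r.tip) AS total_tips "
        "FROM receipts AS r JOIN waiters AS w ON r.receipt_id = w.receipt_id "
        "GROUP BY w.waiter_name ORDER BY total_tips DESC"
    ))
    for row in rows:
        print(row)
```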
{ "source": "huggingface/smolagents", "title": "docs/source/en/examples/text_to_sql.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/text_to_sql.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7203 }
# Web Browser Automation with Agents 🤖🌐 [[open-in-colab]] In this notebook, we'll create an **agent-powered web browser automation system**! This system can navigate websites, interact with elements, and extract information automatically. The agent will be able to: - [x] Navigate to web pages - [x] Click on elements - [x] Search within pages - [x] Handle popups and modals - [x] Extract information Let's set up this system step by step! First, run these lines to install the required dependencies: ```bash pip install smolagents selenium helium pillow -q ``` Let's import our required libraries and set up environment variables: ```python from io import BytesIO from time import sleep import helium from dotenv import load_dotenv from PIL import Image from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from smolagents import CodeAgent, tool from smolagents.agents import ActionStep # Load environment variables load_dotenv() ``` Now let's create our core browser interaction tools that will allow our agent to navigate and interact with web pages: ```python @tool def search_item_ctrl_f(text: str, nth_result: int = 1) -> str: """ Searches for text on the current page via Ctrl + F and jumps to the nth occurrence. Args: text: The text to search for nth_result: Which occurrence to jump to (default: 1) """ elements = driver.find_elements(By.XPATH, f"//*[contains(text(), '{text}')]") if nth_result > len(elements): raise Exception(f"Match n°{nth_result} not found (only {len(elements)} matches found)") result = f"Found {len(elements)} matches for '{text}'." elem = elements[nth_result - 1] driver.execute_script("arguments[0].scrollIntoView(true);", elem) result += f"Focused on element {nth_result} of {len(elements)}" return result @tool def go_back() -> None: """Goes back to previous page.""" driver.back() @tool def close_popups() -> str: """ Closes any visible modal or pop-up on the page. Use this to dismiss pop-up windows! This does not work on cookie consent banners. 
""" webdriver.ActionChains(driver).send_keys(Keys.ESCAPE).perform() ``` Let's set up our browser with Chrome and configure screenshot capabilities: ```python # Configure Chrome options chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--force-device-scale-factor=1") chrome_options.add_argument("--window-size=1000,1350") chrome_options.add_argument("--disable-pdf-viewer") chrome_options.add_argument("--window-position=0,0") # Initialize the browser driver = helium.start_chrome(headless=False, options=chrome_options) # Set up screenshot callback def save_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None: sleep(1.0) # Let JavaScript animations happen before taking the screenshot driver = helium.get_driver() current_step = memory_step.step_number if driver is not None: for previous_memory_step in agent.memory.steps: # Remove previous screenshots for lean processing if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= current_step - 2: previous_memory_step.observations_images = None png_bytes = driver.get_screenshot_as_png() image = Image.open(BytesIO(png_bytes)) print(f"Captured a browser screenshot: {image.size} pixels") memory_step.observations_images = [image.copy()] # Create a copy to ensure it persists # Update observations with current URL url_info = f"Current url: {driver.current_url}" memory_step.observations = ( url_info if memory_step.observations is None else memory_step.observations + "\n" + url_info ) ``` Now let's create our web automation agent: ```python from smolagents import HfApiModel # Initialize the model model_id = "meta-llama/Llama-3.3-70B-Instruct" # You can change this to your preferred model model = HfApiModel(model_id) # Create the agent agent = CodeAgent( tools=[go_back, close_popups, search_item_ctrl_f], model=model, additional_authorized_imports=["helium"], step_callbacks=[save_screenshot], max_steps=20, verbosity_level=2, ) # Import helium for the agent agent.python_executor("from helium import *", agent.state) ``` The agent needs instructions on how to use Helium for web automation. Here are the instructions we'll provide: ```python helium_instructions = """ You can use helium to access websites. Don't bother about the helium driver, it's already managed. We've already ran "from helium import *" Then you can go to pages! Code: ```py go_to('github.com/trending') ```<end_code> You can directly click clickable elements by inputting the text that appears on them. Code: ```py click("Top products") ```<end_code> If it's a link: Code: ```py click(Link("Top products")) ```<end_code> If you try to interact with an element and it's not found, you'll get a LookupError. In general stop your action after each button click to see what happens on your screenshot. Never try to login in a page. To scroll up or down, use scroll_down or scroll_up with as an argument the number of pixels to scroll from. Code: ```py scroll_down(num_pixels=1200) # This will scroll one viewport down ```<end_code> When you have pop-ups with a cross icon to close, don't try to click the close icon by finding its element or targeting an 'X' element (this most often fails). Just use your built-in tool `close_popups` to close them: Code: ```py close_popups() ```<end_code> You can use .exists() to check for the existence of an element. For example: Code: ```py if Text('Accept cookies?').exists(): click('I accept') ```<end_code> """ ``` Now we can run our agent with a task! 
Let's try finding information on Wikipedia:

```python
search_request = """
Please navigate to https://en.wikipedia.org/wiki/Chicago and give me a sentence containing the word "1992" that mentions a construction accident.
"""

agent_output = agent.run(search_request + helium_instructions)
print("Final output:")
print(agent_output)
```

You can run different tasks by modifying the request. For example, here's a request that tells me whether I should work harder:

```python
github_request = """
I'm trying to find how hard I have to work to get a repo in github.com/trending.
Can you navigate to the profile for the top author of the top trending repo, and give me their total number of commits over the last year?
"""

agent_output = agent.run(github_request + helium_instructions)
print("Final output:")
print(agent_output)
```

The system is particularly effective for tasks like:
- Data extraction from websites
- Web research automation
- UI testing and verification
- Content monitoring
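Once you're done experimenting, it's good practice to close the browser session that Helium started (and if you're running on a machine without a display, note that `helium.start_chrome` also accepts `headless=True`):

```python
# Shut down the Chrome session started by helium.start_chrome
helium.kill_browser()
```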
{ "source": "huggingface/smolagents", "title": "docs/source/en/examples/web_browser.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/examples/web_browser.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 6795 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Agents <Tip warning={true}> Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. </Tip> To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes. ## Agents Our agents inherit from [`MultiStepAgent`], which means they can act in multiple steps, each step consisting of one thought, then one tool call and execution. Read more in [this conceptual guide](../conceptual_guides/react). We provide two types of agents, based on the main [`Agent`] class. - [`CodeAgent`] is the default agent, it writes its tool calls in Python code. - [`ToolCallingAgent`] writes its tool calls in JSON. Both require arguments `model` and list of tools `tools` at initialization. ### Classes of agents [[autodoc]] MultiStepAgent [[autodoc]] CodeAgent [[autodoc]] ToolCallingAgent ### ManagedAgent _This class is deprecated since 1.8.0: now you simply need to pass attributes `name` and `description` to a normal agent to make it callable by a manager agent._ ### stream_to_gradio [[autodoc]] stream_to_gradio ### GradioUI > [!TIP] > You must have `gradio` installed to use the UI. Please run `pip install smolagents[gradio]` if it's not the case. [[autodoc]] GradioUI ## Prompts [[autodoc]] smolagents.agents.PromptTemplates [[autodoc]] smolagents.agents.PlanningPromptTemplate [[autodoc]] smolagents.agents.ManagedAgentPromptTemplate [[autodoc]] smolagents.agents.FinalAnswerPromptTemplate
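As a quick illustration of the shared constructor described above, both agent classes are instantiated the same way. This is only a sketch; `DuckDuckGoSearchTool` additionally requires the `duckduckgo-search` package:

```python
from smolagents import CodeAgent, ToolCallingAgent, DuckDuckGoSearchTool, HfApiModel

model = HfApiModel()
tools = [DuckDuckGoSearchTool()]

code_agent = CodeAgent(tools=tools, model=model)                 # writes its tool calls as Python code
tool_calling_agent = ToolCallingAgent(tools=tools, model=model)  # writes its tool calls as JSON
```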
{ "source": "huggingface/smolagents", "title": "docs/source/en/reference/agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2356 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Models <Tip warning={true}> Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. </Tip> To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes. ## Models You're free to create and use your own models to power your agent. You could use any `model` callable for your agent, as long as: 1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`. 2. It stops generating outputs *before* the sequences passed in the argument `stop_sequences` For defining your LLM, you can make a `custom_model` method which accepts a list of [messages](./chat_templating) and returns an object with a .content attribute containing the text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating. ```python from huggingface_hub import login, InferenceClient login("<YOUR_HUGGINGFACEHUB_API_TOKEN>") model_id = "meta-llama/Llama-3.3-70B-Instruct" client = InferenceClient(model=model_id) def custom_model(messages, stop_sequences=["Task"]): response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000) answer = response.choices[0].message return answer ``` Additionally, `custom_model` can also take a `grammar` argument. In the case where you specify a `grammar` upon agent initialization, this argument will be passed to the calls to model, with the `grammar` that you defined upon initialization, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs. ### TransformersModel For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the model_id given at initialization. ```python from smolagents import TransformersModel model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct") print(model([{"role": "user", "content": [{"type": "text", "text": "Ok!"}]}], stop_sequences=["great"])) ``` ```text >>> What a ``` > [!TIP] > You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case. [[autodoc]] TransformersModel ### HfApiModel The `HfApiModel` wraps huggingface_hub's [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference) for the execution of the LLM. It supports both HF's own [Inference API](https://huggingface.co/docs/api-inference/index) as well as all [Inference Providers](https://huggingface.co/blog/inference-providers) available on the Hub. 
```python from smolagents import HfApiModel messages = [ {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]} ] model = HfApiModel() print(model(messages)) ``` ```text >>> Of course! If you change your mind, feel free to reach out. Take care! ``` [[autodoc]] HfApiModel ### LiteLLMModel The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers. You can pass kwargs upon model initialization that will then be used whenever using the model, for instance below we pass `temperature`. ```python from smolagents import LiteLLMModel messages = [ {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]} ] model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10) print(model(messages)) ``` [[autodoc]] LiteLLMModel ### OpenAIServerModel This class lets you call any OpenAIServer compatible model. Here's how you can set it (you can customise the `api_base` url to point to another server): ```py import os from smolagents import OpenAIServerModel model = OpenAIServerModel( model_id="gpt-4o", api_base="https://api.openai.com/v1", api_key=os.environ["OPENAI_API_KEY"], ) ``` [[autodoc]] OpenAIServerModel ### AzureOpenAIServerModel `AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment. Below you can find an example of how to set it up, note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you've set the corresponding environment variables -- `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`. Pay attention to the lack of an `AZURE_` prefix for `OPENAI_API_VERSION`, this is due to the way the underlying [openai](https://github.com/openai/openai-python) package is designed. ```py import os from smolagents import AzureOpenAIServerModel model = AzureOpenAIServerModel( model_id = os.environ.get("AZURE_OPENAI_MODEL"), azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"), api_key=os.environ.get("AZURE_OPENAI_API_KEY"), api_version=os.environ.get("OPENAI_API_VERSION") ) ``` [[autodoc]] AzureOpenAIServerModel ### MLXModel ```python from smolagents import MLXModel model = MLXModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct") print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])) ``` ```text >>> What a ``` > [!TIP] > You must have `mlx-lm` installed on your machine. Please run `pip install smolagents[mlx-lm]` if it's not the case. [[autodoc]] MLXModel
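Whichever model class above (or custom callable) you choose, it plugs into an agent the same way. A minimal sketch:

```python
from smolagents import CodeAgent, HfApiModel

model = HfApiModel()  # or LiteLLMModel(...), OpenAIServerModel(...), a custom callable, ...
agent = CodeAgent(tools=[], model=model, add_base_tools=True)
agent.run("What is the result of 7 * 6?")
```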
{ "source": "huggingface/smolagents", "title": "docs/source/en/reference/models.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/models.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 6148 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Tools <Tip warning={true}> Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. </Tip> To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes. ## Tools ### load_tool [[autodoc]] load_tool ### tool [[autodoc]] tool ### Tool [[autodoc]] Tool ### launch_gradio_demo [[autodoc]] launch_gradio_demo ## Default tools ### PythonInterpreterTool [[autodoc]] PythonInterpreterTool ### FinalAnswerTool [[autodoc]] FinalAnswerTool ### UserInputTool [[autodoc]] UserInputTool ### DuckDuckGoSearchTool [[autodoc]] DuckDuckGoSearchTool ### GoogleSearchTool [[autodoc]] GoogleSearchTool ### VisitWebpageTool [[autodoc]] VisitWebpageTool ### SpeechToTextTool [[autodoc]] SpeechToTextTool ## ToolCollection [[autodoc]] ToolCollection ## Agent Types Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`. These types have three specific purposes: - Calling `to_raw` on the type should return the underlying object - Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText` but will be the path of the serialized version of the object in other instances - Displaying it in an ipython kernel should display the object correctly ### AgentText [[autodoc]] smolagents.agent_types.AgentText ### AgentImage [[autodoc]] smolagents.agent_types.AgentImage ### AgentAudio [[autodoc]] smolagents.agent_types.AgentAudio
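As a small sketch of the behaviour described above, here is `AgentText` in action (the image and audio types work analogously, with `to_string` returning the path of the serialized file):

```python
from smolagents.agent_types import AgentText

answer = AgentText("The Eiffel Tower is 330 m tall.")
print(answer.to_raw())          # the underlying string
print(answer.to_string())       # the string form of the object
print(isinstance(answer, str))  # True: it still behaves like a regular string
```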
{ "source": "huggingface/smolagents", "title": "docs/source/en/reference/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/reference/tools.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2817 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Building good agents [[open-in-colab]] There's a world of difference between building an agent that works and one that doesn't. How can we build agents that fall into the former category? In this guide, we're going to talk about best practices for building agents. > [!TIP] > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour). ### The best agentic systems are the simplest: simplify the workflow as much as you can Giving an LLM some agency in your workflow introduces some risk of errors. Well-programmed agentic systems have good error logging and retry mechanisms anyway, so the LLM engine has a chance to self-correct their mistake. But to reduce the risk of LLM error to the maximum, you should simplify your workflow! Let's revisit the example from the [intro to agents](../conceptual_guides/intro_agents): a bot that answers user queries for a surf trip company. Instead of letting the agent do 2 different calls for "travel distance API" and "weather API" each time they are asked about a new surf spot, you could just make one unified tool "return_spot_information", a function that calls both APIs at once and returns their concatenated outputs to the user. This will reduce costs, latency, and error risk! The main guideline is: Reduce the number of LLM calls as much as you can. This leads to a few takeaways: - Whenever possible, group 2 tools in one, like in our example of the two APIs. - Whenever possible, logic should be based on deterministic functions rather than agentic decisions. ### Improve the information flow to the LLM engine Remember that your LLM engine is like an *intelligent* robot, tapped into a room with the only communication with the outside world being notes passed under a door. It won't know of anything that happened if you don't explicitly put that into its prompt. So first start with making your task very clear! Since an agent is powered by an LLM, minor variations in your task formulation might yield completely different results. Then, improve the information flow towards your agent in tool use. Particular guidelines to follow: - Each tool should log (by simply using `print` statements inside the tool's `forward` method) everything that could be useful for the LLM engine. - In particular, logging detail on tool execution errors would help a lot! 
For instance, here's a tool that retrieves weather data based on location and date-time: First, here's a poor version: ```python import datetime from smolagents import tool def get_weather_report_at_coordinates(coordinates, date_time): # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m] return [28.0, 0.35, 0.85] def convert_location_to_coordinates(location): # Returns dummy coordinates return [3.3, -42.0] @tool def get_weather_api(location: str, date_time: str) -> str: """ Returns the weather report. Args: location: the name of the place that you want the weather for. date_time: the date and time for which you want the report. """ lon, lat = convert_location_to_coordinates(location) date_time = datetime.strptime(date_time) return str(get_weather_report_at_coordinates((lon, lat), date_time)) ``` Why is it bad? - there's no precision of the format that should be used for `date_time` - there's no detail on how location should be specified. - there's no logging mechanism trying to make explicit failure cases like location not being in a proper format, or date_time not being properly formatted. - the output format is hard to understand If the tool call fails, the error trace logged in memory can help the LLM reverse engineer the tool to fix the errors. But why leave it with so much heavy lifting to do? A better way to build this tool would have been the following: ```python @tool def get_weather_api(location: str, date_time: str) -> str: """ Returns the weather report. Args: location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco". date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'. """ lon, lat = convert_location_to_coordinates(location) try: date_time = datetime.strptime(date_time) except Exception as e: raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:" + str(e)) temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time) return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m." ``` In general, to ease the load on your LLM, the good question to ask yourself is: "How easy would it be for me, if I was dumb and using this tool for the first time ever, to program with this tool and correct my own errors?". ### Give more arguments to the agent To pass some additional objects to your agent beyond the simple string describing the task, you can use the `additional_args` argument to pass any type of object: ```py from smolagents import CodeAgent, HfApiModel model_id = "meta-llama/Llama-3.3-70B-Instruct" agent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True) agent.run( "Why does Mike not know many people in New York?", additional_args={"mp3_sound_file_url":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'} ) ``` For instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage. ## How to debug your agent ### 1. Use a stronger LLM In an agentic workflows, some of the errors are actual errors, some other are the fault of your LLM engine not reasoning properly. 
For instance, consider this trace for an `CodeAgent` that I asked to create a car picture: ``` ==================================================================================================== New task ==================================================================================================== Make me a cool car picture ──────────────────────────────────────────────────────────────────────────────────────────────────── New step ──────────────────────────────────────────────────────────────────────────────────────────────────── Agent is executing the code below: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic") ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Last output from code snippet: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png Step 1: - Time taken: 16.35 seconds - Input tokens: 1,383 - Output tokens: 77 ──────────────────────────────────────────────────────────────────────────────────────────────────── New step ──────────────────────────────────────────────────────────────────────────────────────────────────── Agent is executing the code below: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png") ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Print outputs: Last output from code snippet: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png Final answer: /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png ``` The user sees, instead of an image being returned, a path being returned to them. It could look like a bug from the system, but actually the agentic system didn't cause the error: it's just that the LLM brain did the mistake of not saving the image output into a variable. Thus it cannot access the image again except by leveraging the path that was logged while saving the image, so it returns the path instead of an image. The first step to debugging your agent is thus "Use a more powerful LLM". Alternatives like `Qwen2/5-72B-Instruct` wouldn't have made that mistake. ### 2. Provide more guidance / more information You can also use less powerful models, provided you guide them more effectively. 
Put yourself in the shoes of your model: if you were the model solving the task, would you struggle with the information available to you (from the system prompt + task formulation + tool description) ? Would you need some added clarifications? To provide extra information, we do not recommend to change the system prompt right away: the default system prompt has many adjustments that you do not want to mess up except if you understand the prompt very well. Better ways to guide your LLM engine are: - If it's about the task to solve: add all these details to the task. The task could be 100s of pages long. - If it's about how to use tools: the description attribute of your tools. ### 3. Change the system prompt (generally not advised) If above clarifications are not sufficient, you can change the system prompt. Let's see how it works. For example, let us check the default system prompt for the [`CodeAgent`] (below version is shortened by skipping zero-shot examples). ```python print(agent.prompt_templates["system_prompt"]) ``` Here is what you get: ```text You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can. To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code. To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences. At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use. Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence. During each intermediate step, you can use 'print()' to save whatever important information you will then need. These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step. In the end you have to return a final answer using the `final_answer` tool. Here are a few examples using notional tools: --- {examples} Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools: {{tool_descriptions}} {{managed_agents_descriptions}} Here are the rules you should always follow to solve your task: 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail. 2. Use only variables that you have defined! 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'. 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block. 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters. 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'. 7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables. 8. 
You can use imports in your code, but only from the following list of modules: {{authorized_imports}} 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist. 10. Don't give up! You're in charge of solving the task, not providing directions to solve it. Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000. ``` As you can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents. So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders: - `"{{tool_descriptions}}"` to insert tool descriptions. - `"{{managed_agents_description}}"` to insert the description for managed agents if there are any. - For `CodeAgent` only: `"{{authorized_imports}}"` to insert the list of authorized imports. Then you can change the system prompt as follows: ```py from smolagents.prompts import CODE_SYSTEM_PROMPT modified_system_prompt = CODE_SYSTEM_PROMPT + "\nHere you go!" # Change the system prompt here agent = CodeAgent( tools=[], model=HfApiModel(), system_prompt=modified_system_prompt ) ``` This also works with the [`ToolCallingAgent`]. ### 4. Extra planning We provide a model for a supplementary planning step, that an agent can run regularly in-between normal action steps. In this step, there is no tool call, the LLM is simply asked to update a list of facts it knows and to reflect on what steps it should take next based on those facts. ```py from smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool from dotenv import load_dotenv load_dotenv() # Import tool from Hub image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True) search_tool = DuckDuckGoSearchTool() agent = CodeAgent( tools=[search_tool, image_generation_tool], model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"), planning_interval=3 # This is where you activate planning! ) # Run it! result = agent.run( "How long would a cheetah at full speed take to run the length of Pont Alexandre III?", ) ```
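After a run, you can also inspect the agent's memory to see where planning steps were interleaved with regular action steps. This sketch relies on the `agent.memory.steps` list and the step attributes used elsewhere in these docs:

```py
# Walk through the recorded steps of the run above
for step in agent.memory.steps:
    step_number = getattr(step, "step_number", "")
    observations = str(getattr(step, "observations", ""))[:200]
    print(type(step).__name__, step_number, observations)
```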
{ "source": "huggingface/smolagents", "title": "docs/source/en/tutorials/building_good_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/building_good_agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 16145 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Inspecting runs with OpenTelemetry [[open-in-colab]] > [!TIP] > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour). ## Why log your agent runs? Agent runs are complicated to debug. Validating that a run went properly is hard, since agent workflows are [unpredictable by design](../conceptual_guides/intro_agents) (if they were predictable, you'd just be using good old code). And inspecting a run is hard as well: multi-step agents tend to quickly fill a console with logs, and most of the errors are just "LLM dumb" kind of errors, from which the LLM auto-corrects in the next step by writing better code or tool calls. So using instrumentation to record agent runs is necessary in production for later inspection and monitoring! We've adopted the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting agent runs. This means that you can just run some instrumentation code, then run your agents normally, and everything gets logged into your platform. Below are some examples of how to do this with different OpenTelemetry backends. Here's how it then looks like on the platform: <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.gif"/> </div> ## Setting up telemetry with Arize AI Phoenix First install the required packages. Here we install [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) because that's a good solution to collect and inspect the logs, but there are other OpenTelemetry-compatible platforms that you could use for this collection & inspection part. ```shell pip install smolagents pip install arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents ``` Then run the collector in the background. ```shell python -m phoenix.server.main serve ``` Finally, set up `SmolagentsInstrumentor` to trace your agents and send the traces to Phoenix at the endpoint defined below. ```python from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import BatchSpanProcessor from openinference.instrumentation.smolagents import SmolagentsInstrumentor from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor endpoint = "http://0.0.0.0:6006/v1/traces" trace_provider = TracerProvider() trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint))) SmolagentsInstrumentor().instrument(tracer_provider=trace_provider) ``` Then you can run your agents! 
```py from smolagents import ( CodeAgent, ToolCallingAgent, DuckDuckGoSearchTool, VisitWebpageTool, HfApiModel, ) model = HfApiModel() search_agent = ToolCallingAgent( tools=[DuckDuckGoSearchTool(), VisitWebpageTool()], model=model, name="search_agent", description="This is an agent that can do web search.", ) manager_agent = CodeAgent( tools=[], model=model, managed_agents=[search_agent], ) manager_agent.run( "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?" ) ``` Voilà! You can then navigate to `http://0.0.0.0:6006/projects/` to inspect your run! <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.png"> You can see that the CodeAgent called its managed ToolCallingAgent (by the way, the managed agent could be have been a CodeAgent as well) to ask it to run the web search for the U.S. 2024 growth rate. Then the managed agent returned its report and the manager agent acted upon it to calculate the economy doubling time! Sweet, isn't it? ## Setting up telemetry with Langfuse This part shows how to monitor and debug your Hugging Face **smolagents** with **Langfuse** using the `SmolagentsInstrumentor`. > **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source platform for LLM engineering. It provides tracing and monitoring capabilities for AI agents, helping developers debug, analyze, and optimize their products. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and SDKs. ### Step 1: Install Dependencies ```python %pip install smolagents %pip install opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents ``` ### Step 2: Set Up Environment Variables Set your Langfuse API keys and configure the OpenTelemetry endpoint to send traces to Langfuse. Get your Langfuse API keys by signing up for [Langfuse Cloud](https://cloud.langfuse.com) or [self-hosting Langfuse](https://langfuse.com/self-hosting). Also, add your [Hugging Face token](https://huggingface.co/settings/tokens) (`HF_TOKEN`) as an environment variable. ```python import os import base64 LANGFUSE_PUBLIC_KEY="pk-lf-..." LANGFUSE_SECRET_KEY="sk-lf-..." LANGFUSE_AUTH=base64.b64encode(f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()).decode() os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel" # EU data region # os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://us.cloud.langfuse.com/api/public/otel" # US data region os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}" # your Hugging Face token os.environ["HF_TOKEN"] = "hf_..." ``` ### Step 3: Initialize the `SmolagentsInstrumentor` Initialize the `SmolagentsInstrumentor` before your application code. Configure `tracer_provider` and add a span processor to export traces to Langfuse. `OTLPSpanExporter()` uses the endpoint and headers from the environment variables. 
```python from opentelemetry.sdk.trace import TracerProvider from openinference.instrumentation.smolagents import SmolagentsInstrumentor from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter from opentelemetry.sdk.trace.export import SimpleSpanProcessor trace_provider = TracerProvider() trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter())) SmolagentsInstrumentor().instrument(tracer_provider=trace_provider) ``` ### Step 4: Run your smolagent ```python from smolagents import ( CodeAgent, ToolCallingAgent, DuckDuckGoSearchTool, VisitWebpageTool, HfApiModel, ) model = HfApiModel( model_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B" ) search_agent = ToolCallingAgent( tools=[DuckDuckGoSearchTool(), VisitWebpageTool()], model=model, name="search_agent", description="This is an agent that can do web search.", ) manager_agent = CodeAgent( tools=[], model=model, managed_agents=[search_agent], ) manager_agent.run( "How can Langfuse be used to monitor and improve the reasoning and decision-making of smolagents when they execute multi-step tasks, like dynamically adjusting a recipe based on user feedback or available ingredients?" ) ``` ### Step 5: View Traces in Langfuse After running the agent, you can view the traces generated by your smolagents application in [Langfuse](https://cloud.langfuse.com). You should see detailed steps of the LLM interactions, which can help you debug and optimize your AI agent. ![smolagents example trace](https://langfuse.com/images/cookbook/integration-smolagents/smolagent_example_trace.png) _[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/ce5160f9bfd5a6cd63b07d2bfcec6f54?timestamp=2025-02-11T09%3A25%3A45.163Z&display=details)_
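The snippets above use `SimpleSpanProcessor`, which exports every span synchronously; in longer-running applications you may prefer batched export. Here is a sketch using the standard OpenTelemetry SDK, with the same environment variables as before:

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

trace_provider = TracerProvider()
# Buffer spans and export them in batches from a background thread
trace_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)

# ... run your agents as usual ...

# Flush any pending spans before the process exits
trace_provider.force_flush()
```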
{ "source": "huggingface/smolagents", "title": "docs/source/en/tutorials/inspect_runs.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/inspect_runs.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 8422 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Secure code execution [[open-in-colab]] > [!TIP] > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour). ### Code agents [Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code is much better than the current standard format for tool calling, which is across the industry different shades of "writing actions as a JSON of tools names and arguments to use". Why is code better? Well, because we crafted our code languages specifically to be great at expressing actions performed by a computer. If JSON snippets was a better way, this package would have been written in JSON snippets and the devil would be laughing at us. Code is just a better way to express actions on a computer. It has better: - **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a python function? - **Object management:** how do you store the output of an action like `generate_image` in JSON? - **Generality:** code is built to express simply anything you can do have a computer do. - **Representation in LLM training corpus:** why not leverage this benediction of the sky that plenty of quality actions have already been included in LLM training corpus? This is illustrated on the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png"> This is why we put emphasis on proposing code agents, in this case python agents, which meant putting higher effort on building secure python interpreters. ### Local python interpreter By default, the `CodeAgent` runs LLM-generated code in your environment. This execution is not done by the vanilla Python interpreter: we've re-built a more secure `LocalPythonInterpreter` from the ground up. This interpreter is designed for security by: - Restricting the imports to a list explicitly passed by the user - Capping the number of operations to prevent infinite loops and resource bloating. - Will not perform any operation that's not pre-defined. We've used this on many use cases, without ever observing any damage to the environment. However this solution is not watertight: one could imagine occasions where LLMs fine-tuned for malignant actions could still hurt your environment. 
For instance, if you've allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves and bloat your hard drive. It's certainly not likely if you've chosen the LLM engine yourself, but it could happen.

So if you want to be extra cautious, you can use the remote code execution option described below.

### E2B code executor

For maximum security, you can use our integration with E2B to run code in a sandboxed environment. This is a remote execution service that runs your code in an isolated container, making it impossible for the code to affect your local environment.

For this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.

Then you can install it with `pip install "smolagents[e2b]"`.

Now you're set! To set the code executor to E2B, simply pass the flag `use_e2b_executor=True` when initializing your `CodeAgent`. Note that you should add all the tool's dependencies in `additional_authorized_imports`, so that the executor installs them.

```py
from smolagents import CodeAgent, VisitWebpageTool, HfApiModel

agent = CodeAgent(
    tools=[VisitWebpageTool()],
    model=HfApiModel(),
    additional_authorized_imports=["requests", "markdownify"],
    use_e2b_executor=True
)

agent.run("What was Abraham Lincoln's preferred pet?")
```

E2B code execution is not compatible with multi-agents at the moment, because having an agent call nested inside a code blob that must be executed remotely is a mess. But we're working on adding it!
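Even if you stay with local execution, you can keep the interpreter on a tight leash by only authorizing the imports you actually need. A minimal sketch:

```py
from smolagents import CodeAgent, HfApiModel

agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    # Modules the LLM-generated code is allowed to import, on top of the interpreter's safe defaults
    additional_authorized_imports=["datetime", "math"],
)
agent.run("How many days are there between 2024-02-01 and 2024-07-14?")
```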
{ "source": "huggingface/smolagents", "title": "docs/source/en/tutorials/secure_code_execution.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/secure_code_execution.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5081 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->

# Tools

[[open-in-colab]]

Here, we're going to see advanced tool usage.

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

- [Tools](#tools)
    - [What is a tool, and how to build one?](#what-is-a-tool-and-how-to-build-one)
    - [Share your tool to the Hub](#share-your-tool-to-the-hub)
    - [Import a Space as a tool](#import-a-space-as-a-tool)
    - [Use LangChain tools](#use-langchain-tools)
    - [Manage your agent's toolbox](#manage-your-agents-toolbox)
    - [Use a collection of tools](#use-a-collection-of-tools)

### What is a tool, and how to build one?

A tool is mostly a function that an LLM can use in an agentic system. But to use it, the LLM needs to be given an API: a name, a tool description, input types and descriptions, and an output type. So it cannot be only a function: it should be a class.

At its core, a tool is a class that wraps a function with metadata that helps the LLM understand how to use it. Here's how it looks:

```python
from smolagents import Tool

class HFModelDownloadsTool(Tool):
    name = "model_download_counter"
    description = """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint."""
    inputs = {
        "task": {
            "type": "string",
            "description": "the task category (such as text-classification, depth-estimation, etc)",
        }
    }
    output_type = "string"

    def forward(self, task: str):
        from huggingface_hub import list_models

        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id

model_downloads_tool = HFModelDownloadsTool()
```

The custom tool subclasses [`Tool`] to inherit useful methods. The child class also defines:
- A `name` attribute, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`.
- A `description` attribute, which is used to populate the agent's system prompt.
- An `inputs` attribute, which is a dictionary mapping each input name to a sub-dictionary with `"type"` and `"description"` keys. It contains information that helps the Python interpreter make educated choices about the input.
- An `output_type` attribute, which specifies the output type. The types for both `inputs` and `output_type` should follow [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema); they can be any of [`~AUTHORIZED_TYPES`].
- A `forward` method which contains the inference code to be executed.

And that's all it needs to be used in an agent!

There's another way to build a tool. In the [guided_tour](../guided_tour), we implemented a tool using the `@tool` decorator.
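As a refresher, a sketch of that decorator route could look like the following (the function name and docstring here are illustrative; they simply mirror the download-counter logic above):

```python
from smolagents import tool

@tool
def model_download_counter(task: str) -> str:
    """Returns the most downloaded model for a given task on the Hugging Face Hub.

    Args:
        task: the task category (such as text-classification, depth-estimation, etc).
    """
    from huggingface_hub import list_models

    # The decorator builds the tool's API from the type hints and the Args section above
    most_downloaded = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return most_downloaded.id
```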
The [`tool`] decorator is the recommended way to define simple tools, but sometimes you need more than this: for example, using several methods in one class for more clarity, or using additional class attributes. In that case, you can build your tool by subclassing [`Tool`] as described above.

### Share your tool to the Hub

You can share your custom tool to the Hub by calling [`~Tool.push_to_hub`] on the tool. Make sure you've created a repository for it on the Hub and are using a token with write access.

```python
model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
```

For the push to the Hub to work, your tool needs to respect a few rules:
- All methods are self-contained, i.e. they only use variables that come from their arguments.
- As per the above point, **all imports should be defined directly within the tool's functions**, otherwise you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.
- If you override the `__init__` method, you can give it no argument other than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents sharing them properly to the Hub. And anyway, the point of making a dedicated class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). Of course, you can still create an instance attribute anywhere in your code by assigning it to `self.your_variable`.

Once your tool is pushed to the Hub, you can visualize it. [Here](https://huggingface.co/spaces/m-ric/hf-model-downloads) is the `model_downloads_tool` that I've pushed. It has a nice gradio interface.

When diving into the tool files, you can find that all the tool's logic is under [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py). That is where you can inspect a tool shared by someone else.

Then you can load the tool with [`load_tool`] or create it with [`~Tool.from_hub`] and pass it to the `tools` parameter in your agent. Since running tools means running custom code, you need to make sure you trust the repository; this is why we require passing `trust_remote_code=True` to load a tool from the Hub.

```python
from smolagents import load_tool, CodeAgent

model_download_tool = load_tool(
    "{your_username}/hf-model-downloads",
    trust_remote_code=True
)
```

### Import a Space as a tool

You can directly import a Space from the Hub as a tool using the [`Tool.from_space`] method!

You only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this will use the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space.

For instance, let's import the [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) Space from the Hub and use it to generate an image.

```python
image_generation_tool = Tool.from_space(
    "black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generate an image from a prompt"
)

image_generation_tool("A sunny beach")
```

And voilà, here's your image! 🏖️

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp">

Then you can use this tool just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit` and generate an image of it. This example also shows how you can pass additional arguments to the agent.
```python
from smolagents import CodeAgent, HfApiModel

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(tools=[image_generation_tool], model=model)

agent.run(
    "Improve this prompt, then generate an image of it.",
    additional_args={'user_prompt': 'A rabbit wearing a space suit'}
)
```

```text
=== Agent thoughts:
improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background"

Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.
>>> Agent is executing the code below:
image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background")
final_answer(image)
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp">

How cool is this? 🤩

### Use LangChain tools

We love LangChain and think it has a very compelling suite of tools. To import a tool from LangChain, use the `from_langchain()` method.

Here is how you can use it to recreate the intro's search result with a LangChain web search tool. This tool needs `pip install langchain google-search-results -q` to work properly.

```python
from langchain.agents import load_tools

search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])

agent = CodeAgent(tools=[search_tool], model=model)

agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?")
```

### Manage your agent's toolbox

You can manage an agent's toolbox by adding or replacing tools in the `agent.tools` attribute, since it is a standard dictionary.

Let's add the `model_download_tool` to an existing agent initialized with only the default toolbox.

```python
from smolagents import HfApiModel

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")

agent = CodeAgent(tools=[], model=model, add_base_tools=True)
agent.tools[model_download_tool.name] = model_download_tool
```

Now we can leverage the new tool:

```python
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?"
)
```

> [!TIP]
> Beware of not adding too many tools to an agent: this can overwhelm weaker LLM engines.

### Use a collection of tools

You can leverage tool collections by using the `ToolCollection` object. It supports loading either a collection from the Hub or tools from an MCP server.

#### Tool Collection from a collection in the Hub

You can load a collection by passing the slug of the collection you want to use. Then pass the tools as a list to initialize your agent, and start using them!

```py
from smolagents import ToolCollection, CodeAgent

image_tool_collection = ToolCollection.from_hub(
    collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f",
    token="<YOUR_HUGGINGFACEHUB_API_TOKEN>"
)
agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)

agent.run("Please draw me a picture of rivers and lakes.")
```

To speed up startup, tools are loaded lazily: each one is loaded only when the agent calls it.

#### Tool Collection from any MCP server

Leverage tools from the hundreds of MCP servers available on [glama.ai](https://glama.ai/mcp/servers) or [smithery.ai](https://smithery.ai/).
Tools from MCP servers can be loaded into a `ToolCollection` object as follows:

```py
import os

from smolagents import ToolCollection, CodeAgent
from mcp import StdioServerParameters

server_parameters = StdioServerParameters(
    command="uv",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with ToolCollection.from_mcp(server_parameters) as tool_collection:
    agent = CodeAgent(
        tools=[*tool_collection.tools],
        model=model,  # reuse the HfApiModel defined earlier in this guide
        add_base_tools=True,
    )
    agent.run("Please find a remedy for hangover.")
```
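If you want to check what a server exposes before wiring its tools into an agent, a quick sanity check could look like the sketch below. It assumes the same `server_parameters` as above and relies only on the `name` and `description` attributes that every smolagents tool carries:

```py
with ToolCollection.from_mcp(server_parameters) as tool_collection:
    # Print the name and description of each tool exposed by the MCP server
    for mcp_tool in tool_collection.tools:
        print(f"{mcp_tool.name}: {mcp_tool.description}")
```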
{ "source": "huggingface/smolagents", "title": "docs/source/en/tutorials/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/en/tutorials/tools.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11306 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Agents का परिचय ## 🤔 Agents क्या हैं? AI का उपयोग करने वाली किसी भी कुशल प्रणाली को LLM को वास्तविक दुनिया तक किसी प्रकार की पहुंच प्रदान करने की आवश्यकता होगी: उदाहरण के लिए बाहरी जानकारी प्राप्त करने के लिए एक खोज टूल को कॉल करने की संभावना, या किसी कार्य को हल करने के लिए कुछ प्रोग्राम पर कार्य करने की। दूसरे शब्दों में, LLM में ***agency*** होनी चाहिए। एजेंटिक प्रोग्राम LLM के लिए बाहरी दुनिया का प्रवेश द्वार हैं। > [!TIP] > AI Agents वे **प्रोग्राम हैं जहां LLM आउटपुट वर्कफ़्लो को नियंत्रित करते हैं**। LLM का उपयोग करने वाली कोई भी प्रणाली LLM आउटपुट को कोड में एकीकृत करेगी। कोड वर्कफ़्लो पर LLM के इनपुट का प्रभाव सिस्टम में LLM की एजेंसी का स्तर है। ध्यान दें कि इस परिभाषा के साथ, "agent" एक अलग, 0 या 1 परिभाषा नहीं है: इसके बजाय, "agency" एक निरंतर स्पेक्ट्रम पर विकसित होती है, जैसे-जैसे आप अपने वर्कफ़्लो पर LLM को अधिक या कम शक्ति देते हैं। नीचे दी गई तालिका में देखें कि कैसे एजेंसी विभिन्न प्रणालियों में भिन्न हो सकती है: | एजेंसी स्तर | विवरण | इसे क्या कहा जाता है | उदाहरण पैटर्न | |------------|---------|-------------------|----------------| | ☆☆☆ | LLM आउटपुट का प्रोग्राम प्रवाह पर कोई प्रभाव नहीं | सरल प्रोसेसर | `process_llm_output(llm_response)` | | ★☆☆ | LLM आउटपुट if/else स्विच निर्धारित करता है | राउटर | `if llm_decision(): path_a() else: path_b()` | | ★★☆ | LLM आउटपुट फंक्शन एक्जीक्यूशन निर्धारित करता है | टूल कॉलर | `run_function(llm_chosen_tool, llm_chosen_args)` | | ★★★ | LLM आउटपुट पुनरावृत्ति और प्रोग्राम की निरंतरता को नियंत्रित करता है | मल्टी-स्टेप एजेंट | `while llm_should_continue(): execute_next_step()` | | ★★★ | एक एजेंटिक वर्कफ़्लो दूसरे एजेंटिक वर्कफ़्लो को शुरू कर सकता है | मल्टी-एजेंट | `if llm_trigger(): execute_agent()` | मल्टी-स्टेप agent की यह कोड संरचना है: ```python memory = [user_defined_task] while llm_should_continue(memory): # यह लूप मल्टी-स्टेप भाग है action = llm_get_next_action(memory) # यह टूल-कॉलिंग भाग है observations = execute_action(action) memory += [action, observations] ``` यह एजेंटिक सिस्टम एक लूप में चलता है, प्रत्येक चरण में एक नई क्रिया को शुरू करता है (क्रिया में कुछ पूर्व-निर्धारित *tools* को कॉल करना शामिल हो सकता है जो केवल फंक्शंस हैं), जब तक कि उसके अवलोकन से यह स्पष्ट न हो जाए कि दिए गए कार्य को हल करने के लिए एक संतोषजनक स्थिति प्राप्त कर ली गई है। ## ✅ Agents का उपयोग कब करें / ⛔ कब उनसे बचें Agents तब उपयोगी होते हैं जब आपको किसी ऐप के वर्कफ़्लो को निर्धारित करने के लिए LLM की आवश्यकता होती है। लेकिन वे अक्सर जरूरत से ज्यादा होते हैं। सवाल यह है कि, क्या मुझे वास्तव में दिए गए कार्य को कुशलतापूर्वक हल करने के लिए वर्कफ़्लो में लचीलेपन की आवश्यकता है? 
यदि पूर्व-निर्धारित वर्कफ़्लो बहुत बार विफल होता है, तो इसका मतलब है कि आपको अधिक लचीलेपन की आवश्यकता है। आइए एक उदाहरण लेते हैं: मान लीजिए आप एक ऐप बना रहे हैं जो एक सर्फिंग ट्रिप वेबसाइट पर ग्राहक अनुरोधों को संभालता है। आप पहले से जान सकते हैं कि अनुरोध 2 में से किसी एक श्रेणी में आएंगे (उपयोगकर्ता की पसंद के आधार पर), और आपके पास इन 2 मामलों में से प्रत्येक के लिए एक पूर्व-निर्धारित वर्कफ़्लो है। 1. ट्रिप के बारे में कुछ जानकारी चाहिए? ⇒ उन्हें अपने नॉलेज बेस में खोज करने के लिए एक सर्च बार तक पहुंच दें 2. सेल्स टीम से बात करना चाहते हैं? ⇒ उन्हें एक संपर्क फॉर्म में टाइप करने दें। यदि वह निर्धारणात्मक वर्कफ़्लो सभी प्रश्नों के लिए फिट बैठता है, तो बेशक बस सब कुछ कोड करें! यह आपको एक 100% विश्वसनीय सिस्टम देगा और एलएलएम द्वारा अनपेक्षित कार्यप्रवाह में हस्तक्षेप करने से त्रुटियों का कोई जोखिम नहीं होगा। साधारणता और मजबूती के लिए, सलाह दी जाती है कि एजेंटिक व्यवहार का उपयोग न किया जाए। लेकिन क्या होगा अगर वर्कफ़्लो को पहले से इतनी अच्छी तरह से निर्धारित नहीं किया जा सकता? उदाहरण के लिए, एक उपयोगकर्ता पूछना चाहता है: `"मैं सोमवार को आ सकता हूं, लेकिन मैं अपना पासपोर्ट भूल गया जिससे मुझे बुधवार तक देर हो सकती है, क्या आप मुझे और मेरी चीजों को मंगलवार सुबह सर्फ करने ले जा सकते हैं, क्या मुझे कैंसलेशन इंश्योरेंस मिल सकता है?"` यह प्रश्न कई कारकों पर निर्भर करता है, और शायद ऊपर दिए गए पूर्व-निर्धारित मानदंडों में से कोई भी इस अनुरोध के लिए पर्याप्त नहीं होगा। यदि पूर्व-निर्धारित वर्कफ़्लो बहुत बार विफल होता है, तो इसका मतलब है कि आपको अधिक लचीलेपन की आवश्यकता है। यहीं पर एक एजेंटिक सेटअप मदद करता है। ऊपर दिए गए उदाहरण में, आप बस एक मल्टी-स्टेप agent बना सकते हैं जिसके पास मौसम पूर्वानुमान के लिए एक मौसम API, यात्रा की दूरी जानने के लिए के लिए Google Maps API, एक कर्मचारी उपलब्धता डैशबोर्ड और आपके नॉलेज बेस पर एक RAG सिस्टम तक पहुंच है। हाल ही तक, कंप्यूटर प्रोग्राम पूर्व-निर्धारित वर्कफ़्लो तक सीमित थे, if/else स्विच का ढेर लगाकार जटिलता को संभालने का प्रयास कर रहे थे। वे बेहद संकीर्ण कार्यों पर केंद्रित थे, जैसे "इन संख्याओं का योग निकालें" या "इस ग्राफ़ में सबसे छोटा रास्ता खोजें"। लेकिन वास्तव में, अधिकांश वास्तविक जीवन के कार्य, जैसे ऊपर दिया गया हमारा यात्रा उदाहरण, पूर्व-निर्धारित वर्कफ़्लो में फिट नहीं होते हैं। एजेंटिक सिस्टम प्रोग्राम के लिए वास्तविक दुनिया के कार्यों की विशाल दुनिया खोलते हैं! ## क्यों `smolagents`? 
कुछ लो-लेवल एजेंटिक उपयोग के मामलों के लिए, जैसे चेन या राउटर, आप सभी कोड खुद लिख सकते हैं। आप इस तरह से बहुत बेहतर होंगे, क्योंकि यह आपको अपने सिस्टम को बेहतर ढंग से नियंत्रित और समझने की अनुमति देगा। लेकिन जैसे ही आप अधिक जटिल व्यवहारों की ओर बढ़ते हैं जैसे कि LLM को एक फ़ंक्शन कॉल करने देना (यह "tool calling" है) या LLM को एक while लूप चलाने देना ("multi-step agent"), कुछ एब्सट्रैक्शन्स की आवश्यकता होती है: - टूल कॉलिंग के लिए, आपको एजेंट के आउटपुट को पार्स करने की आवश्यकता होती है, इसलिए इस आउटपुट को एक पूर्व-निर्धारित प्रारूप की आवश्यकता होती है जैसे "विचार: मुझे 'get_weather' टूल कॉल करना चाहिए। क्रिया: get_weather(Paris)।", जिसे आप एक पूर्व-निर्धारित फ़ंक्शन के साथ पार्स करते हैं, और LLM को दिए गए सिस्टम प्रॉम्प्ट को इस प्रारूप के बारे में सूचित करना चाहिए। - एक मल्टी-स्टेप एजेंट के लिए जहां LLM आउटपुट लूप को निर्धारित करता है, आपको पिछले लूप इटरेशन में क्या हुआ इसके आधार पर LLM को एक अलग प्रॉम्प्ट देने की आवश्यकता होती है: इसलिए आपको किसी प्रकार की मेमोरी की आवश्यकता होती है। इन दो उदाहरणों के साथ, हमने पहले ही कुछ चीजों की आवश्यकता का पता लगा लिया: - बेशक, एक LLM जो सिस्टम को पावर देने वाले इंजन के रूप में कार्य करता है - एजेंट द्वारा एक्सेस किए जा सकने वाले टूल्स की एक सूची - एक पार्सर जो LLM आउटपुट से टूल कॉल को निकालता है - एक सिस्टम प्रोम्प्ट जो पार्सर के साथ सिंक्रनाइज़ होता है - एक मेमोरी लेकिन रुकिए, चूंकि हम निर्णयों में LLM को जगह देते हैं, निश्चित रूप से वे गलतियां करेंगे: इसलिए हमें एरर लॉगिंग और पुनः प्रयास तंत्र की आवश्यकता है। ये सभी तत्व एक अच्छे कामकाजी सिस्टम बनाने के लिए एक-दूसरे से घनिष्ठ रूप से जुड़े हुए हैं। यही कारण है कि हमने तय किया कि इन सभी चीजों को एक साथ काम करने के लिए बुनियादी निर्माण ब्लॉक्स की आवश्यकता है। ## कोड Agents एक मल्टी-स्टेप एजेंट में, प्रत्येक चरण पर, LLM बाहरी टूल्स को कुछ कॉल के रूप में एक क्रिया लिख सकता है। इन क्रियाओं को लिखने के लिए एक सामान्य स्वरूप (Anthropic, OpenAI और कई अन्य द्वारा उपयोग किया जाता है) आमतौर पर "टूल्स के नाम और उपयोग करने के लिए तर्कों के JSON के रूप में क्रियाएं लिखने" के विभिन्न रूप होते हैं, जिन्हें आप फिर पार्स करते हैं यह जानने के लिए कि कौन सा टूल किन तर्कों के साथ निष्पादित करना है"। [कई](https://huggingface.co/papers/2402.01030) [शोध](https://huggingface.co/papers/2411.01747) [पत्रों](https://huggingface.co/papers/2401.00812) ने दिखाया है कि कोड में टूल कॉलिंग LLM का होना बहुत बेहतर है। इसका कारण बस यह है कि *हमने अपनी कोड भाषाओं को विशेष रूप से कंप्यूटर द्वारा किए गए कार्यों को व्यक्त करने का सर्वोत्तम संभव तरीका बनाने के लिए तैयार किया*। यदि JSON स्निपेट्स बेहतर अभिव्यक्ति होते, तो JSON शीर्ष प्रोग्रामिंग भाषा होती और प्रोग्रामिंग नरक में होती। नीचे दी गई छवि, [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030) से ली गई है, जो कोड में क्रियाएं लिखने के कुछ फायदे दर्शाती है: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png"> JSON जैसे स्निपेट्स की बजाय कोड में क्रियाएं लिखने से बेहतर प्राप्त होता है: - **कम्पोजेबिलिटी:** क्या आप JSON क्रियाओं को एक-दूसरे के भीतर नेस्ट कर सकते हैं, या बाद में पुन: उपयोग करने के लिए JSON क्रियाओं का एक सेट परिभाषित कर सकते हैं, उसी तरह जैसे आप बस एक पायथन फंक्शन परिभाषित कर सकते हैं? - **ऑब्जेक्ट प्रबंधन:** आप `generate_image` जैसी क्रिया के आउटपुट को JSON में कैसे स्टोर करते हैं? 
- **सामान्यता:** कोड को सरल रूप से कुछ भी व्यक्त करने के लिए बनाया गया है जो आप कंप्यूटर से करवा सकते हैं। - **LLM प्रशिक्षण डेटा में प्रतिनिधित्व:** बहुत सारी गुणवत्तापूर्ण कोड क्रियाएं पहले से ही LLM के ट्रेनिंग डेटा में शामिल हैं जिसका मतलब है कि वे इसके लिए पहले से ही प्रशिक्षित हैं!
{ "source": "huggingface/smolagents", "title": "docs/source/hi/conceptual_guides/intro_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/conceptual_guides/intro_agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 9194 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # मल्टी-स्टेप एजेंट्स कैसे काम करते हैं? ReAct फ्रेमवर्क ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) वर्तमान में एजेंट्स बनाने का मुख्य दृष्टिकोण है। नाम दो शब्दों, "Reason" (तर्क) और "Act" (क्रिया) के संयोजन पर आधारित है। वास्तव में, इस आर्किटेक्चर का पालन करने वाले एजेंट अपने कार्य को उतने चरणों में हल करेंगे जितने आवश्यक हों, प्रत्येक चरण में एक Reasoning कदम होगा, फिर एक Action कदम होगा, जहाँ यह टूल कॉल्स तैयार करेगा जो उसे कार्य को हल करने के करीब ले जाएंगे। ReAct प्रक्रिया में पिछले चरणों की मेमोरी रखना शामिल है। > [!TIP] > मल्टी-स्टेप एजेंट्स के बारे में अधिक जानने के लिए [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) ब्लॉग पोस्ट पढ़ें। यहाँ एक वीडियो ओवरव्यू है कि यह कैसे काम करता है: <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" /> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" /> </div> ![ReAct एजेंट का फ्रेमवर्क](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png) हम दो प्रकार के ToolCallingAgent को लागू करते हैं: - [`ToolCallingAgent`] अपने आउटपुट में टूल कॉल को JSON के रूप में जनरेट करता है। - [`CodeAgent`] ToolCallingAgent का एक नया प्रकार है जो अपने टूल कॉल को कोड के ब्लॉब्स के रूप में जनरेट करता है, जो उन LLM के लिए वास्तव में अच्छी तरह काम करता है जिनका कोडिंग प्रदर्शन मजबूत है। > [!TIP] > हम एजेंट्स को वन-शॉट में चलाने का विकल्प भी प्रदान करते हैं: बस एजेंट को लॉन्च करते समय `single_step=True` पास करें, जैसे `agent.run(your_task, single_step=True)`
{ "source": "huggingface/smolagents", "title": "docs/source/hi/conceptual_guides/react.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/conceptual_guides/react.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2565 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # मल्टी-एजेंट सिस्टम का आयोजन करें 🤖🤝🤖 [[open-in-colab]] इस नोटबुक में हम एक **मल्टी-एजेंट वेब ब्राउज़र बनाएंगे: एक एजेंटिक सिस्टम जिसमें कई एजेंट वेब का उपयोग करके समस्याओं को हल करने के लिए सहयोग करते हैं!** यह एक सरल संरचना होगी, जो प्रबंधित वेब खोज एजेंट को रैप करने के लिए `ManagedAgent` ऑब्जेक्ट का उपयोग करता है: ``` +----------------+ | Manager agent | +----------------+ | _______________|______________ | | Code interpreter +--------------------------------+ tool | Managed agent | | +------------------+ | | | Web Search agent | | | +------------------+ | | | | | | Web Search tool | | | Visit webpage tool | +--------------------------------+ ``` आइए इस सिस्टम को सेट करें। आवश्यक डिपेंडेंसी इंस्टॉल करने के लिए नीचे दी गई लाइन चलाएं: ``` !pip install markdownify duckduckgo-search smolagents --upgrade -q ``` HF Inference API को कॉल करने के लिए लॉगिन करें: ``` from huggingface_hub import login login() ``` ⚡️ हमारा एजेंट [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) द्वारा संचालित होगा जो `HfApiModel` क्लास का उपयोग करता है जो HF के Inference API का उपयोग करता है: Inference API किसी भी OS मॉडल को जल्दी और आसानी से चलाने की अनुमति देता है। _नोट:_ The Inference API विभिन्न मानदंडों के आधार पर मॉडल होस्ट करता है, और डिप्लॉय किए गए मॉडल बिना पूर्व सूचना के अपडेट या बदले जा सकते हैं। इसके बारे में अधिक जानें [यहां](https://huggingface.co/docs/api-inference/supported-models)। ```py model_id = "Qwen/Qwen2.5-Coder-32B-Instruct" ``` ## 🔍 एक वेब सर्च टूल बनाएं वेब ब्राउज़िंग के लिए, हम पहले से मौजूद [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) टूल का उपयोग कर सकते हैं जो Google search के समान सुविधा प्रदान करता है। लेकिन फिर हमें `DuckDuckGoSearchTool` द्वारा खोजे गए पेज को देखने में भी सक्षम होने की आवश्यकता होगी। ऐसा करने के लिए, हम लाइब्रेरी के बिल्ट-इन `VisitWebpageTool` को इम्पोर्ट कर सकते हैं, लेकिन हम इसे फिर से बनाएंगे यह देखने के लिए कि यह कैसे किया जाता है। तो आइए `markdownify` का उपयोग करके शुरू से अपना `VisitWebpageTool` टूल बनाएं। ```py import re import requests from markdownify import markdownify from requests.exceptions import RequestException from smolagents import tool @tool def visit_webpage(url: str) -> str: """Visits a webpage at the given URL and returns its content as a markdown string. Args: url: The URL of the webpage to visit. Returns: The content of the webpage converted to Markdown, or an error message if the request fails. 
""" try: # Send a GET request to the URL response = requests.get(url) response.raise_for_status() # Raise an exception for bad status codes # Convert the HTML content to Markdown markdown_content = markdownify(response.text).strip() # Remove multiple line breaks markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content) return markdown_content except RequestException as e: return f"Error fetching the webpage: {str(e)}" except Exception as e: return f"An unexpected error occurred: {str(e)}" ``` ठीक है, अब चलिए हमारे टूल को टेस्ट करें! ```py print(visit_webpage("https://en.wikipedia.org/wiki/Hugging_Face")[:500]) ``` ## हमारी मल्टी-एजेंट सिस्टम का निर्माण करें 🤖🤝🤖 अब जब हमारे पास सभी टूल्स `search` और `visit_webpage` हैं, हम उनका उपयोग वेब एजेंट बनाने के लिए कर सकते हैं। इस एजेंट के लिए कौन सा कॉन्फ़िगरेशन चुनें? - वेब ब्राउज़िंग एक सिंगल-टाइमलाइन टास्क है जिसे समानांतर टूल कॉल की आवश्यकता नहीं है, इसलिए JSON टूल कॉलिंग इसके लिए अच्छी तरह काम करती है। इसलिए हम `ToolCallingAgent` चुनते हैं। - साथ ही, चूंकि कभी-कभी वेब सर्च में सही उत्तर खोजने से पहले कई पेजों की सर्च करने की आवश्यकता होती है, हम `max_steps` को बढ़ाकर 10 करना पसंद करते हैं। ```py from smolagents import ( CodeAgent, ToolCallingAgent, HfApiModel, ManagedAgent, DuckDuckGoSearchTool, LiteLLMModel, ) model = HfApiModel(model_id) web_agent = ToolCallingAgent( tools=[DuckDuckGoSearchTool(), visit_webpage], model=model, max_steps=10, ) ``` फिर हम इस एजेंट को एक `ManagedAgent` में रैप करते हैं जो इसे इसके मैनेजर एजेंट द्वारा कॉल करने योग्य बनाएगा। ```py managed_web_agent = ManagedAgent( agent=web_agent, name="search", description="Runs web searches for you. Give it your query as an argument.", ) ``` अंत में हम एक मैनेजर एजेंट बनाते हैं, और इनिशियलाइजेशन पर हम अपने मैनेज्ड एजेंट को इसके `managed_agents` आर्गुमेंट में पास करते हैं। चूंकि यह एजेंट योजना बनाने और सोचने का काम करता है, उन्नत तर्क लाभदायक होगा, इसलिए `CodeAgent` सबसे अच्छा विकल्प होगा। साथ ही, हम एक ऐसा प्रश्न पूछना चाहते हैं जिसमें वर्तमान वर्ष और अतिरिक्त डेटा गणना शामिल है: इसलिए आइए `additional_authorized_imports=["time", "numpy", "pandas"]` जोड़ें, यदि एजेंट को इन पैकेजों की आवश्यकता हो। ```py manager_agent = CodeAgent( tools=[], model=model, managed_agents=[managed_web_agent], additional_authorized_imports=["time", "numpy", "pandas"], ) ``` बस इतना ही! अब चलिए हमारे सिस्टम को चलाते हैं! हम एक ऐसा प्रश्न चुनते हैं जिसमें गणना और शोध दोनों की आवश्यकता है। ```py answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.") ``` We get this report as the answer: ``` Based on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the current rhythm until 2030: 1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which translates to about 2,660,762 GWh/year. 2. Comparing this to countries' electricity consumption: - It would be equivalent to about 34% of China's total electricity consumption. - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%). - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico. 3. Source of numbers: - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman. 
- The growth projection used a CAGR of 79.80% from market research by Springs. - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year 2021. ``` लगता है कि यदि [स्केलिंग हाइपोथिसिस](https://gwern.net/scaling-hypothesis) सत्य बनी रहती है तो हमें कुछ बड़े पावरप्लांट्स की आवश्यकता होगी। हमारे एजेंट्स ने कार्य को हल करने के लिए कुशलतापूर्वक सहयोग किया! ✅ 💡 आप इस ऑर्केस्ट्रेशन को आसानी से अधिक एजेंट्स में विस्तारित कर सकते हैं: एक कोड एक्जीक्यूशन करता है, एक वेब सर्च करता है, एक फाइल लोडिंग को संभालता है।
{ "source": "huggingface/smolagents", "title": "docs/source/hi/examples/multiagents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/examples/multiagents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7954 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # एजेंटिक RAG [[open-in-colab]] रिट्रीवल-ऑगमेंटेड-जनरेशन (RAG) है "एक यूजर के प्रश्न का उत्तर देने के लिए LLM का उपयोग करना, लेकिन उत्तर को एक नॉलेज बेस से प्राप्त जानकारी पर आधारित करना"। इसमें वैनिला या फाइन-ट्यून्ड LLM का उपयोग करने की तुलना में कई फायदे हैं: कुछ नाम लेने के लिए, यह उत्तर को सत्य तथ्यों पर आधारित करने और काल्पनिक बातों को कम करने की अनुमति देता है, यह LLM को डोमेन-विशिष्ट ज्ञान प्रदान करने की अनुमति देता है, और यह नॉलेज बेस से जानकारी तक पहुंच का सूक्ष्म नियंत्रण प्रदान करता है। लेकिन वैनिला RAG की सीमाएं हैं, सबसे महत्वपूर्ण ये दो: - यह केवल एक रिट्रीवल स्टेप करता है: यदि परिणाम खराब हैं, तो जनरेशन भी बदले में खराब होगा। - सिमेंटिक समानता की गणना यूजर के प्रश्न को संदर्भ के रूप में करके की जाती है, जो अनुकूल नहीं हो सकती: उदाहरण के लिए, यूजर का प्रश्न अक्सर एक सवाल होगा, जबकि सही उत्तर देने वाला डॉक्यूमेंट सकारात्मक स्वर में हो सकता है, और इसका समानता स्कोर अन्य स्रोत दस्तावेज़ों की तुलना में कम हो सकता है, जो प्रश्नवाचक स्वर में हो सकते हैं। इससे संबंधित जानकारी को चूकने का जोखिम होता है। हम एक RAG एजेंट बनाकर इन समस्याओं को कम कर सकते हैं: बहुत सरल तरीके से, एक रिट्रीवर टूल से लैस एजेंट! यह एजेंट करेगा: ✅ स्वयं क्वेरी तैयार करेगा और ✅ आवश्यकता पड़ने पर पुनः-प्राप्ति के लिए समीक्षा करेगा। इसलिए यह सहज रूप से कुछ उन्नत RAG तकनीकों को प्राप्त कर लेना चाहिए! 
- सिमेंटिक खोज में सीधे यूजर क्वेरी का संदर्भ के रूप में उपयोग करने के बजाय, एजेंट स्वयं एक संदर्भ वाक्य तैयार करता है जो लक्षित डॉक्यूमेंट्स के करीब हो सकता है, जैसा कि [HyDE](https://huggingface.co/papers/2212.10496) में किया गया है। एजेंट जनरेट किए गए स्निपेट्स का उपयोग कर सकता है और आवश्यकता पड़ने पर पुनः-प्राप्ति कर सकता है, जैसा कि [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/) में किया गया है। चलिए इस सिस्टम को बनाते हैं। 🛠️ आवश्यक डिपेंडेंसी इंस्टॉल करने के लिए नीचे दी गई लाइन चलाएं। ```bash !pip install smolagents pandas langchain langchain-community sentence-transformers rank_bm25 --upgrade -q ``` HF Inference API को कॉल करने के लिए, आपको अपने एनवायरनमेंट वेरिएबल `HF_TOKEN` के रूप में एक वैध टोकन की आवश्यकता होगी। हम इसे लोड करने के लिए python-dotenv का उपयोग करते हैं। ```py from dotenv import load_dotenv load_dotenv() ``` हम पहले एक नॉलेज बेस लोड करते हैं जिस पर हम RAG को लागू करना चाहते हैं: यह डेटा सेट Hugging Face के कई लाइब्रेरी के डॉक्यूमेंट पृष्ठों का संकलन है, जिन्हें Markdown में स्टोर किया गया है। हम केवल `transformers` लाइब्रेरी के दस्तावेज़ों को रखेंगे। फिर डेटासेट को प्रोसेस करके और इसे एक वेक्टर डेटाबेस में स्टोर करके नॉलेज बेस तैयार करें जिसे रिट्रीवर द्वारा उपयोग किया जाएगा। हम [LangChain](https://python.langchain.com/docs/introduction/) का उपयोग करते हैं क्योंकि इसमें उत्कृष्ट वेक्टर डेटाबेस उपयोगिताएं हैं। ```py import datasets from langchain.docstore.document import Document from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_community.retrievers import BM25Retriever knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train") knowledge_base = knowledge_base.filter(lambda row: row["source"].startswith("huggingface/transformers")) source_docs = [ Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]}) for doc in knowledge_base ] text_splitter = RecursiveCharacterTextSplitter( chunk_size=500, chunk_overlap=50, add_start_index=True, strip_whitespace=True, separators=["\n\n", "\n", ".", " ", ""], ) docs_processed = text_splitter.split_documents(source_docs) ``` अब डॉक्यूमेंट्स तैयार हैं। तो चलिए अपना एजेंटिक RAG सिस्टम बनाएं! 👉 हमें केवल एक RetrieverTool की आवश्यकता है जिसका उपयोग हमारा एजेंट नॉलेज बेस से जानकारी प्राप्त करने के लिए कर सकता है। चूंकि हमें टूल के एट्रीब्यूट के रूप में एक vectordb जोड़ने की आवश्यकता है, हम सरल टूल कंस्ट्रक्टर को `@tool` डेकोरेटर के साथ सीधे उपयोग नहीं कर सकते: इसलिए हम [tools tutorial](../tutorials/tools) में हाइलाइट किए गए सेटअप का पालन करेंगे। ```py from smolagents import Tool class RetrieverTool(Tool): name = "retriever" description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query." inputs = { "query": { "type": "string", "description": "The query to perform. This should be semantically close to your target documents. 
Use the affirmative form rather than a question.", } } output_type = "string" def __init__(self, docs, **kwargs): super().__init__(**kwargs) self.retriever = BM25Retriever.from_documents( docs, k=10 ) def forward(self, query: str) -> str: assert isinstance(query, str), "Your search query must be a string" docs = self.retriever.invoke( query, ) return "\nRetrieved documents:\n" + "".join( [ f"\n\n===== Document {str(i)} =====\n" + doc.page_content for i, doc in enumerate(docs) ] ) retriever_tool = RetrieverTool(docs_processed) ``` हमने BM25 का उपयोग किया है, जो एक क्लासिक रिट्रीवल विधि है, क्योंकि इसे सेटअप करना बहुत आसान है। रिट्रीवल सटीकता में सुधार करने के लिए, आप BM25 को डॉक्यूमेंट्स के लिए वेक्टर प्रतिनिधित्व का उपयोग करके सिमेंटिक खोज से बदल सकते हैं: इस प्रकार आप एक अच्छा एम्बेडिंग मॉडल चुनने के लिए [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) पर जा सकते हैं। अब यह सीधा है कि एक एजेंट बनाया जाए जो इस `retriever_tool` का उपयोग करेगा! एजेंट को इनिशियलाइजेशन पर इन आर्गुमेंट्स की आवश्यकता होगी: - `tools`: टूल्स की एक सूची जिन्हें एजेंट कॉल कर सकेगा। - `model`: LLM जो एजेंट को पावर देता है। हमारा `model` एक कॉलेबल होना चाहिए जो इनपुट के रूप में संदेशों की एक सूची लेता है और टेक्स्ट लौटाता है। इसे एक stop_sequences आर्गुमेंट भी स्वीकार करने की आवश्यकता है जो बताता है कि जनरेशन कब रोकनी है। सुविधा के लिए, हम सीधे पैकेज में प्रदान की गई HfEngine क्लास का उपयोग करते हैं ताकि एक LLM इंजन मिल सके जो Hugging Face के Inference API को कॉल करता है। और हम [meta-llama/Llama-3.3-70B-Instruct](meta-llama/Llama-3.3-70B-Instruct) का उपयोग llm इंजन के रूप में करते हैं क्योंकि: - इसमें लंबा 128k कॉन्टेक्स्ट है, जो लंबे स्रोत दस्तावेजों को प्रोसेस करने में मददगार है - यह हर समय HF के Inference API पर मुफ्त में उपलब्ध है! _नोट:_ Inference API विभिन्न मानदंडों के आधार पर मॉडल होस्ट करता है, और डिप्लॉय किए गए मॉडल बिना पूर्व सूचना के अपडेट या बदले जा सकते हैं। इसके बारे में अधिक जानें [यहां](https://huggingface.co/docs/api-inference/supported-models) पढ़ें। ```py from smolagents import HfApiModel, CodeAgent agent = CodeAgent( tools=[retriever_tool], model=HfApiModel("meta-llama/Llama-3.3-70B-Instruct"), max_steps=4, verbosity_level=2 ) ``` CodeAgent को इनिशियलाइज करने पर, इसे स्वचालित रूप से एक डिफ़ॉल्ट सिस्टम प्रॉम्प्ट दिया गया है जो LLM इंजन को चरण-दर-चरण प्रोसेस करने और कोड स्निपेट्स के रूप में टूल कॉल जनरेट करने के लिए कहता है, लेकिन आप आवश्यकतानुसार इस प्रॉम्प्ट टेम्पलेट को अपने से बदल सकते हैं। जब CodeAgent का `.run()` मेथड लॉन्च किया जाता है, तो एजेंट LLM इंजन को कॉल करने का कार्य करता है, और टूल कॉल्स को निष्पादित करता है, यह सब एक लूप में होता है, जो तब तक चलता है जब तक टूल final_answer के साथ अंतिम उत्तर के रूप में नहीं बुलाया जाता। ```py agent_output = agent.run("For a transformers model training, which is slower, the forward or the backward pass?") print("Final output:") print(agent_output) ```
{ "source": "huggingface/smolagents", "title": "docs/source/hi/examples/rag.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/examples/rag.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 8103 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Text-to-SQL [[open-in-colab]] इस ट्यूटोरियल में, हम देखेंगे कि कैसे `smolagents` का उपयोग करके एक एजेंट को SQL का उपयोग करने के लिए लागू किया जा सकता है। > आइए सबसे महत्वपूर्ण प्रश्न से शुरू करें: इसे साधारण क्यों नहीं रखें और एक सामान्य text-to-SQL पाइपलाइन का उपयोग करें? एक सामान्य text-to-SQL पाइपलाइन कमजोर होती है, क्योंकि उत्पन्न SQL क्वेरी गलत हो सकती है। इससे भी बुरी बात यह है कि क्वेरी गलत हो सकती है, लेकिन कोई एरर नहीं दिखाएगी, बल्कि बिना किसी अलार्म के गलत/बेकार आउटपुट दे सकती है। 👉 इसके बजाय, एक एजेंट सिस्टम आउटपुट का गंभीरता से निरीक्षण कर सकता है और तय कर सकता है कि क्वेरी को बदलने की जरूरत है या नहीं, इस प्रकार इसे बेहतर प्रदर्शन में मदद मिलती है। आइए इस एजेंट को बनाएं! 💪 पहले, हम SQL एनवायरनमेंट सेटअप करते हैं: ```py from sqlalchemy import ( create_engine, MetaData, Table, Column, String, Integer, Float, insert, inspect, text, ) engine = create_engine("sqlite:///:memory:") metadata_obj = MetaData() # create city SQL table table_name = "receipts" receipts = Table( table_name, metadata_obj, Column("receipt_id", Integer, primary_key=True), Column("customer_name", String(16), primary_key=True), Column("price", Float), Column("tip", Float), ) metadata_obj.create_all(engine) rows = [ {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20}, {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24}, {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43}, {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00}, ] for row in rows: stmt = insert(receipts).values(**row) with engine.begin() as connection: cursor = connection.execute(stmt) ``` ### Agent बनाएं अब आइए हमारी SQL टेबल को एक टूल द्वारा पुनर्प्राप्त करने योग्य बनाएं। टूल का विवरण विशेषता एजेंट सिस्टम द्वारा LLM के prompt में एम्बेड किया जाएगा: यह LLM को टूल का उपयोग करने के बारे में जानकारी देता है। यहीं पर हम SQL टेबल का वर्णन करना चाहते हैं। ```py inspector = inspect(engine) columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")] table_description = "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info]) print(table_description) ``` ```text Columns: - receipt_id: INTEGER - customer_name: VARCHAR(16) - price: FLOAT - tip: FLOAT ``` अब आइए हमारा टूल बनाएं। इसे निम्नलिखित की आवश्यकता है: (अधिक जानकारी के लिए [टूल doc](../tutorials/tools) पढ़ें) - एक डॉकस्ट्रिंग जिसमें आर्ग्युमेंट्स की सूची वाला `Args:` भाग हो। - इनपुट और आउटपुट दोनों पर टाइप हिंट्स। ```py from smolagents import tool @tool def sql_engine(query: str) -> str: """ Allows you to perform SQL queries on the table. Returns a string representation of the result. The table is named 'receipts'. 
Its description is as follows: Columns: - receipt_id: INTEGER - customer_name: VARCHAR(16) - price: FLOAT - tip: FLOAT Args: query: The query to perform. This should be correct SQL. """ output = "" with engine.connect() as con: rows = con.execute(text(query)) for row in rows: output += "\n" + str(row) return output ``` अब आइए एक एजेंट बनाएं जो इस टूल का लाभ उठाता है। हम `CodeAgent` का उपयोग करते हैं, जो smolagents का मुख्य एजेंट क्लास है: एक एजेंट जो कोड में एक्शन लिखता है और ReAct फ्रेमवर्क के अनुसार पिछले आउटपुट पर पुनरावृत्ति कर सकता है। मॉडल वह LLM है जो एजेंट सिस्टम को संचालित करता है। `HfApiModel` आपको HF के Inference API का उपयोग करके LLM को कॉल करने की अनुमति देता है, या तो सर्वरलेस या डेडिकेटेड एंडपॉइंट के माध्यम से, लेकिन आप किसी भी प्रोप्राइटरी API का भी उपयोग कर सकते हैं। ```py from smolagents import CodeAgent, HfApiModel agent = CodeAgent( tools=[sql_engine], model=HfApiModel("meta-llama/Meta-Llama-3.1-8B-Instruct"), ) agent.run("Can you give me the name of the client who got the most expensive receipt?") ``` ### लेवल 2: टेबल जॉइन्स अब आइए इसे और चुनौतीपूर्ण बनाएं! हम चाहते हैं कि हमारा एजेंट कई टेबल्स के बीच जॉइन को संभाल सके। तो आइए हम प्रत्येक receipt_id के लिए वेटर्स के नाम रिकॉर्ड करने वाली एक दूसरी टेबल बनाते हैं! ```py table_name = "waiters" receipts = Table( table_name, metadata_obj, Column("receipt_id", Integer, primary_key=True), Column("waiter_name", String(16), primary_key=True), ) metadata_obj.create_all(engine) rows = [ {"receipt_id": 1, "waiter_name": "Corey Johnson"}, {"receipt_id": 2, "waiter_name": "Michael Watts"}, {"receipt_id": 3, "waiter_name": "Michael Watts"}, {"receipt_id": 4, "waiter_name": "Margaret James"}, ] for row in rows: stmt = insert(receipts).values(**row) with engine.begin() as connection: cursor = connection.execute(stmt) ``` चूंकि हमने टेबल को बदल दिया है, हम LLM को इस टेबल की जानकारी का उचित उपयोग करने देने के लिए इस टेबल के विवरण के साथ `SQLExecutorTool` को अपडेट करते हैं। ```py updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output. It can use the following tables:""" inspector = inspect(engine) for table in ["receipts", "waiters"]: columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)] table_description = f"Table '{table}':\n" table_description += "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info]) updated_description += "\n\n" + table_description print(updated_description) ``` चूंकि यह रिक्वेस्ट पिछले वाले से थोड़ी कठिन है, हम LLM इंजन को अधिक शक्तिशाली [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) का उपयोग करने के लिए स्विच करेंगे! ```py sql_engine.description = updated_description agent = CodeAgent( tools=[sql_engine], model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"), ) agent.run("Which waiter got more total money from tips?") ``` यह सीधे काम करता है! सेटअप आश्चर्यजनक रूप से सरल था, है ना? यह उदाहरण पूरा हो गया! हमने इन अवधारणाओं को छुआ है: - नए टूल्स का निर्माण। - टूल के विवरण को अपडेट करना। - एक मजबूत LLM में स्विच करने से एजेंट की तर्कशक्ति में मदद मिलती है। ✅ अब आप वह text-to-SQL सिस्टम बना सकते हैं जिसका आपने हमेशा सपना देखा है! ✨
{ "source": "huggingface/smolagents", "title": "docs/source/hi/examples/text_to_sql.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/examples/text_to_sql.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7067 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Agents <Tip warning={true}> Smolagents एक experimental API है जो किसी भी समय बदल सकता है। एजेंट्स द्वारा लौटाए गए परिणाम भिन्न हो सकते हैं क्योंकि APIs या underlying मॉडल बदलने की संभावना रखते हैं। </Tip> Agents और tools के बारे में अधिक जानने के लिए [introductory guide](../index) पढ़ना सुनिश्चित करें। यह पेज underlying क्लासेज के लिए API docs को शामिल करता है। ## Agents हमारे एजेंट्स [`MultiStepAgent`] से इनहेरिट करते हैं, जिसका अर्थ है कि वे कई चरणों में कार्य कर सकते हैं, प्रत्येक चरण में एक विचार, फिर एक टूल कॉल और एक्जीक्यूशन शामिल होता है। [इस कॉन्सेप्चुअल गाइड](../conceptual_guides/react) में अधिक पढ़ें। हम मुख्य [`Agent`] क्लास पर आधारित दो प्रकार के एजेंट्स प्रदान करते हैं। - [`CodeAgent`] डिफ़ॉल्ट एजेंट है, यह अपने टूल कॉल्स को Python कोड में लिखता है। - [`ToolCallingAgent`] अपने टूल कॉल्स को JSON में लिखता है। दोनों को इनिशियलाइजेशन पर `model` और टूल्स की सूची `tools` आर्गुमेंट्स की आवश्यकता होती है। ### Agents की क्लासेज [[autodoc]] MultiStepAgent [[autodoc]] CodeAgent [[autodoc]] ToolCallingAgent ### ManagedAgent _This class is deprecated since 1.8.0: now you just need to pass name and description attributes to an agent to directly use it as previously done with a ManagedAgent._ ### stream_to_gradio [[autodoc]] stream_to_gradio ### GradioUI [[autodoc]] GradioUI ## मॉडल्स आप स्वतंत्र रूप से अपने स्वयं के मॉडल बना सकते हैं और उनका उपयोग कर सकते हैं। आप अपने एजेंट के लिए कोई भी `model` कॉल करने योग्य उपयोग कर सकते हैं, जब तक कि: 1. यह अपने इनपुट `messages` के लिए [messages format](./chat_templating) (`List[Dict[str, str]]`) का पालन करता है, और यह एक `str` लौटाता है। 2. 
यह आर्गुमेंट `stop_sequences` में पास किए गए सीक्वेंस से *पहले* आउटपुट जनरेट करना बंद कर देता है। अपने LLM को परिभाषित करने के लिए, आप एक `custom_model` मेथड बना सकते हैं जो [messages](./chat_templating) की एक सूची स्वीकार करता है और टेक्स्ट युक्त .content विशेषता वाला एक ऑब्जेक्ट लौटाता है। इस कॉलेबल को एक `stop_sequences` आर्गुमेंट भी स्वीकार करने की आवश्यकता होती है जो बताता है कि कब जनरेट करना और बंद करना है। ```python from huggingface_hub import login, InferenceClient login("<YOUR_HUGGINGFACEHUB_API_TOKEN>") model_id = "meta-llama/Llama-3.3-70B-Instruct" client = InferenceClient(model=model_id) def custom_model(messages, stop_sequences=["Task"]): response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000) answer = response.choices[0].message return answer ``` इसके अतिरिक्त, `custom_model` एक `grammar` आर्गुमेंट भी ले सकता है। जिस स्थिति में आप एजेंट इनिशियलाइजेशन पर एक `grammar` निर्दिष्ट करते हैं, यह आर्गुमेंट मॉडल के कॉल्स को आपके द्वारा इनिशियलाइजेशन पर परिभाषित `grammar` के साथ पास किया जाएगा, ताकि [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) की अनुमति मिल सके जिससे उचित-फॉर्मेटेड एजेंट आउटपुट को फोर्स किया जा सके। ### TransformersModel सुविधा के लिए, हमने एक `TransformersModel` जोड़ा है जो इनिशियलाइजेशन पर दिए गए model_id के लिए एक लोकल `transformers` पाइपलाइन बनाकर ऊपर के बिंदुओं को लागू करता है। ```python from smolagents import TransformersModel model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct") print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])) ``` ```text >>> What a ``` [[autodoc]] TransformersModel ### HfApiModel `HfApiModel` LLM के एक्जीक्यूशन के लिए [HF Inference API](https://huggingface.co/docs/api-inference/index) क्लाइंट को रैप करता है। ```python from smolagents import HfApiModel messages = [ {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "No need to help, take it easy."}, ] model = HfApiModel() print(model(messages)) ``` ```text >>> Of course! If you change your mind, feel free to reach out. Take care! ``` [[autodoc]] HfApiModel ### LiteLLMModel `LiteLLMModel` विभिन्न प्रदाताओं से 100+ LLMs को सपोर्ट करने के लिए [LiteLLM](https://www.litellm.ai/) का लाभ उठाता है। आप मॉडल इनिशियलाइजेशन पर kwargs पास कर सकते हैं जो तब मॉडल का उपयोग करते समय प्रयोग किए जाएंगे, उदाहरण के लिए नीचे हम `temperature` पास करते हैं। ```python from smolagents import LiteLLMModel messages = [ {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "No need to help, take it easy."}, ] model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10) print(model(messages)) ``` [[autodoc]] LiteLLMModel ### OpenAiServerModel यह क्लास आपको किसी भी OpenAIServer कम्पैटिबल मॉडल को कॉल करने देती है। यहाँ बताया गया है कि आप इसे कैसे सेट कर सकते हैं (आप दूसरे सर्वर को पॉइंट करने के लिए `api_base` url को कस्टमाइज़ कर सकते हैं): ```py import os from smolagents import OpenAIServerModel model = OpenAIServerModel( model_id="gpt-4o", api_base="https://api.openai.com/v1", api_key=os.environ["OPENAI_API_KEY"], ) ``` ## Prompts [[autodoc]] smolagents.agents.PromptTemplates [[autodoc]] smolagents.agents.PlanningPromptTemplate [[autodoc]] smolagents.agents.ManagedAgentPromptTemplate [[autodoc]] smolagents.agents.FinalAnswerPromptTemplate
{ "source": "huggingface/smolagents", "title": "docs/source/hi/reference/agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/reference/agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5986 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Tools <Tip warning={true}> Smolagents एक experimental API है जो किसी भी समय बदल सकता है। एजेंट्स द्वारा लौटाए गए परिणाम भिन्न हो सकते हैं क्योंकि APIs या underlying मॉडल बदलने की संभावना रखते हैं। </Tip> एजेंट्स और टूल्स के बारे में अधिक जानने के लिए [introductory guide](../index) पढ़ना सुनिश्चित करें। यह पेज underlying क्लासेज के लिए API docs को शामिल करता है। ## Tools ### load_tool [[autodoc]] load_tool ### tool [[autodoc]] tool ### Tool [[autodoc]] Tool ### launch_gradio_demo [[autodoc]] launch_gradio_demo ## Default Tools ### PythonInterpreterTool [[autodoc]] PythonInterpreterTool ### DuckDuckGoSearchTool [[autodoc]] DuckDuckGoSearchTool ### VisitWebpageTool [[autodoc]] VisitWebpageTool ### UserInputTool [[autodoc]] UserInputTool ## ToolCollection [[autodoc]] ToolCollection ## Agent टाइप्स एजेंट्स टूल्स के बीच किसी भी प्रकार की ऑब्जेक्ट को संभाल सकते हैं; टूल्स, पूरी तरह से मल्टीमोडल होने के कारण, टेक्स्ट, इमेज, ऑडियो, वीडियो सहित अन्य प्रकारों को स्वीकार और रिटर्न कर सकते हैं। टूल्स के बीच अनुकूलता बढ़ाने के साथ-साथ इन रिटर्न्स को ipython (jupyter, colab, ipython notebooks, ...) में सही ढंग से रेंडर करने के लिए, हम इन टाइप्स के आसपास रैपर क्लासेज को लागू करते हैं। रैप किए गए ऑब्जेक्ट्स को प्रारंभ में जैसा व्यवहार करना चाहिए वैसा ही करना जारी रखना चाहिए; एक टेक्स्ट ऑब्जेक्ट को अभी भी स्ट्रिंग की तरह व्यवहार करना चाहिए| एक इमेज ऑब्जेक्ट को अभी भी `PIL.Image` की तरह व्यवहार करना चाहिए। इन टाइप्स के तीन विशिष्ट उद्देश्य हैं: - टाइप पर `to_raw` को कॉल करने से अंतर्निहित ऑब्जेक्ट रिटर्न होना चाहिए - टाइप पर `to_string` को कॉल करने से ऑब्जेक्ट को स्ट्रिंग के रूप में रिटर्न होना चाहिए: वह `AgentText` के मामले में स्ट्रिंग हो सकती है लेकिन अन्य उदाहरणों में ऑब्जेक्ट के सीरियलाइज्ड वर्जन का पाथ होगा - इसे एक ipython kernel में प्रदर्शित करने पर ऑब्जेक्ट को सही ढंग से प्रदर्शित करना चाहिए ### AgentText [[autodoc]] smolagents.agent_types.AgentText ### AgentImage [[autodoc]] smolagents.agent_types.AgentImage ### AgentAudio [[autodoc]] smolagents.agent_types.AgentAudio
{ "source": "huggingface/smolagents", "title": "docs/source/hi/reference/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/reference/tools.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2784 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # अच्छे Agents का निर्माण [[open-in-colab]] एक ऐसा एजेंट बनाने में जो काम करता है और जो काम नहीं करता है, इसमें ज़मीन-आसमान का अंतर है। हम कैसे ऐसे एजेंट्स बना सकते हैं जो बाद वाली श्रेणी में आते हैं? इस गाइड में, हम एजेंट्स बनाने के लिए सर्वोत्तम प्रक्रियाएँ के बारे में बात करेंगे। > [!TIP] > यदि आप एजेंट्स बनाने में नए हैं, तो पहले [एजेंट्स का परिचय](../conceptual_guides/intro_agents) और [smolagents की गाइडेड टूर](../guided_tour) पढ़ना सुनिश्चित करें। ### सर्वश्रेष्ठ एजेंटिक सिस्टम सबसे सरल होते हैं: वर्कफ़्लो को जितना हो सके उतना सरल बनाएं अपने वर्कफ़्लो में एक LLM को कुछ एजेंसी देने से त्रुटियों का जोखिम होता है। अच्छी तरह से प्रोग्राम किए गए एजेंटिक सिस्टम में वैसे भी अच्छी एरर लॉगिंग और रीट्राई मैकेनिज्म होते हैं, जिससे LLM इंजन अपनी गलतियों को सुधारने का मौका मिलता है। लेकिन LLM त्रुटि के जोखिम को अधिकतम कम करने के लिए, आपको अपना वर्कफ़्लो सरल बनाना चाहिए! आइए [एजेंट्स का परिचय](../conceptual_guides/intro_agents) से उदाहरण पर फिर से विचार करें: एक सर्फ ट्रिप कंपनी के लिए उपयोगकर्ता प्रश्नों का उत्तर देने वाला बॉट। एजेंट को हर बार जब एक नए सर्फ स्पॉट के बारे में पूछा जाता है तो "travel distance API" और "weather API" के लिए 2 अलग-अलग कॉल करने देने के बजाय, आप केवल एक एकीकृत टूल "return_spot_information" बना सकते हैं, एक फंक्शन जो दोनों APIs को एक साथ कॉल करता है और उनके संयोजित आउटपुट को उपयोगकर्ता को वापस करता है। यह लागत, देरी और त्रुटि जोखिम को कम करेगा! मुख्य दिशानिर्देश है: LLM कॉल्स की संख्या को जितना हो सके उतना कम करें। इससे कुछ निष्कर्ष निकलते हैं: - जब भी संभव हो, दो APIs के हमारे उदाहरण की तरह 2 टूल्स को एक में समूहित करें। - जब भी संभव हो, लॉजिक एजेंटिक निर्णयों के बजाय डिटरमिनिस्टिक फंक्शंस पर आधारित होनी चाहिए। ### LLM इंजन को जानकारी के प्रवाह में सुधार करें याद रखें कि आपका LLM इंजन एक *बुद्धिमान* रोबोट की तरह है, जो एक कमरे में बंद है, और बाहरी दुनिया के साथ इसका एकमात्र संचार दरवाजे के नीचे से नोट्स पास करना है। यह किसी भी ऐसी चीज के बारे में नहीं जानेगा जिसे आप स्पष्ट रूप से अपने प्रॉम्प्ट में नहीं डालते हैं। इसलिए पहले अपने कार्य को बहुत स्पष्ट बनाने से शुरू करें! चूंकि एक एजेंट LLM द्वारा संचालित होता है, आपके कार्य के निर्माण में छोटे बदलाव भी पूरी तरह से अलग परिणाम दे सकते हैं। फिर, टूल के उपयोग में अपने एजेंट की ओर जानकारी के प्रवाह में सुधार करें। पालन करने के लिए विशेष दिशानिर्देश: - प्रत्येक टूल को वह सब कुछ लॉग करना चाहिए (टूल की `forward` मेथड के अंदर केवल `print` स्टेटमेंट्स का उपयोग करके) जो LLM इंजन के लिए उपयोगी हो सकता है। - विशेष रूप से, टूल एक्जीक्यूशन गलतियों पर विस्तृत लॉगिंग बहुत मदद करेगी! 
To illustrate the information-flow guidelines, here is a tool that retrieves weather data based on a location and a date-time.

First, here is a poor version:
```python
import datetime
from smolagents import tool

def get_weather_report_at_coordinates(coordinates, date_time):
    # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]
    return [28.0, 0.35, 0.85]

def convert_location_to_coordinates(location):
    # Returns dummy coordinates
    return [3.3, -42.0]

@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for.
        date_time: the date and time for which you want the report.
    """
    lon, lat = convert_location_to_coordinates(location)
    date_time = datetime.strptime(date_time)
    return str(get_weather_report_at_coordinates((lon, lat), date_time))
```

Why is it bad?
- there is no indication of the precise format to use for `date_time`
- there is no detail on how the location should be specified
- there is no logging mechanism to explicitly surface failures, such as the location being in a wrong format or `date_time` not being properly formatted
- the output format is hard to understand

If the tool call fails, the error trace logged in memory can help the LLM reverse-engineer the tool and fix its mistakes. But why leave so much heavy lifting to the LLM? A better way to build this tool would be the following:

```python
@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco".
        date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.
    """
    lon, lat = convert_location_to_coordinates(location)
    try:
        date_time = datetime.datetime.strptime(date_time, '%m/%d/%y %H:%M:%S')
    except Exception as e:
        raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:" + str(e))
    temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)
    return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m."
```

In general, to ease the load on your LLM, a good question to ask yourself is: "How easy would it be for me, if I were new and inexperienced with this tool and using it for the first time, to program with it and to correct my own mistakes?"

### Give more arguments to the agent

To pass additional objects to your agent beyond the simple string describing the task, you can use `additional_args` to pass any type of object:

```py
from smolagents import CodeAgent, HfApiModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

agent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True)

agent.run(
    "Why does Mike not know many people in New York?",
    additional_args={"mp3_sound_file_url":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}
)
```

For instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.

## How to debug your agent
### 1. Use a stronger LLM

In an agentic workflow, some errors are actual errors, while others are the fault of your LLM engine not reasoning properly. For instance, consider this trace from a `CodeAgent` that I asked to create a car picture:

```
==================================================================================================== New task ====================================================================================================
Make me a cool car picture
──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
Agent is executing the code below: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic")
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Last output from code snippet: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Step 1:

- Time taken: 16.35 seconds
- Input tokens: 1,383
- Output tokens: 77
──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
Agent is executing the code below: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png")
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Print outputs:

Last output from code snippet: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Final answer:
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
```

Instead of an image, the user is returned a path. It could look like a bug in the system, but actually the agentic system didn't cause the error: it's simply that the LLM brain made the mistake of not saving the image output into a variable. Thus it cannot access the image again except by using the path that was logged while saving the image, so it returns the path instead of an image.

The first step to debugging your agent is therefore "use a more powerful LLM". Alternatives like `Qwen2.5-72B-Instruct` wouldn't have made that mistake.
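In practice, swapping the model is usually a one-line change; here is a minimal sketch, assuming the image-generation tool from the trace above is available as a variable named `image_generation_tool` (a hypothetical name):

```python
from smolagents import CodeAgent, HfApiModel

# Same agent setup as before; only the model id changes.
agent = CodeAgent(
    tools=[image_generation_tool],  # hypothetical: your existing image-generation tool
    model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"),
)
agent.run("Make me a cool car picture")
```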
### 2. Provide more guidance / more information

You can also use less powerful models, provided you guide them more effectively.

Put yourself in your model's shoes: if you were the model solving the task, would you struggle with only the information available to you (from the system prompt + task formulation + tool descriptions)? Would you need some added clarification?

To provide extra information, we do not recommend changing the system prompt right away: the default system prompt has many adjustments that you do not want to mess up unless you understand the prompt very well. Better ways to guide your LLM engine are:
- If it's about the task to solve: add all these details to the task. The task could be 100 pages long.
- If it's about how to use tools: the `description` attribute of your tools.

### 3. Change the system prompt (generally not advised)

If the above clarifications are not sufficient, you can change the system prompt.

Let's see how it works. For example, let us check the default system prompt for the [`CodeAgent`] (the version below is shortened by skipping the zero-shot examples).

```python
print(agent.prompt_templates["system_prompt"])
```
Here is what you get:
```text
You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.

At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
During each intermediate step, you can use 'print()' to save whatever important information you will then need.
These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
In the end you have to return a final answer using the `final_answer` tool.

Here are a few examples using notional tools:
---
{examples}

Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:

{{tool_descriptions}}

{{managed_agents_descriptions}}

Here are the rules you should always follow to solve your task:
1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
2. Use only variables that you have defined!
3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables.
8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
10. Don't give up! You're in charge of solving the task, not providing directions to solve it.

Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
```

As you can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert automatically generated descriptions of the tools or managed agents.

So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders:
- `"{{tool_descriptions}}"` to insert tool descriptions.
- `"{{managed_agents_descriptions}}"` to insert the descriptions of managed agents, if there are any.
- For `CodeAgent` only: `"{{authorized_imports}}"` to insert the list of authorized imports.

Then you can change the system prompt as follows:

```py
from smolagents.prompts import CODE_SYSTEM_PROMPT

modified_system_prompt = CODE_SYSTEM_PROMPT + "\nHere you go!" # Change the system prompt here

agent = CodeAgent(
    tools=[],
    model=HfApiModel(),
    system_prompt=modified_system_prompt
)
```

This also works with the [`ToolCallingAgent`].

### 4. Extra planning

We provide a model for a supplementary planning step, which an agent can run regularly in between normal action steps. In this step, there is no tool call; the LLM is simply asked to update a list of facts it knows and to reflect on which steps it should take next based on those facts.

```py
from smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool
from dotenv import load_dotenv

load_dotenv()

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

search_tool = DuckDuckGoSearchTool()

agent = CodeAgent(
    tools=[search_tool],
    model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"),
    planning_interval=3 # This is where you activate planning!
)

# Run it!
result = agent.run(
    "How long would a cheetah at full speed take to run the length of Pont Alexandre III?",
)
```
{ "source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/building_good_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/building_good_agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 16459 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Inspecting runs with OpenTelemetry

[[open-in-colab]]

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

### Why log your agent runs?

Agent runs are complicated to debug. Validating that a run went properly is hard, since agent workflows are [unpredictable by design](../conceptual_guides/intro_agents) (if they were predictable, you might as well just use good old code). And inspecting a run is hard as well: multi-step agents quickly fill a console with logs, and most of the errors are just "LLM dumb"-kind errors, from which the LLM corrects itself in the next step by writing better code or tool calls.

So using instrumentation to record agent runs is necessary in production, for later inspection and monitoring!

We have adopted the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting agent runs. That means you can just run some instrumentation code, then run your agents normally, and everything gets logged to your platform.

Here is how it goes. First, install the required packages. Here we install [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) because it's a good solution to collect and inspect the logs, but there are other OpenTelemetry-compatible platforms you could use for this collection & inspection part.

```shell
pip install smolagents
pip install arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents
```

Then run the collector in the background.

```shell
python -m phoenix.server.main serve
```

Finally, set up `SmolagentsInstrumentor` to trace your agents and send the traces to Phoenix at the endpoint defined below.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

endpoint = "http://0.0.0.0:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
```

Then you can run your agents!
```py
from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    DuckDuckGoSearchTool,
    VisitWebpageTool,
    HfApiModel,
)

model = HfApiModel()

managed_agent = ToolCallingAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=model,
    name="managed_agent",
    description="This is an agent that can do web search.",
)

manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[managed_agent],
)
manager_agent.run(
    "If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
)
```

You can then navigate to `http://0.0.0.0:6006/projects/` to inspect your run!

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.png">

You can see that the CodeAgent called its managed ToolCallingAgent (by the way, the managed agent could just as well have been a CodeAgent) to run the web search for the U.S. 2024 growth rate. The managed agent then returned its report, and the manager agent acted upon it to calculate the economy doubling time! Sweet, isn't it?
{ "source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/inspect_runs.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/inspect_runs.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4375 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Secure code execution

[[open-in-colab]]

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

### Code agents

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code is much better than the current industry-standard format for tool calling, which is various flavors of "writing tool names and arguments as JSON".

Why is code better? Well, because we crafted our programming languages specifically to express actions performed by a computer. If JSON snippets were a better way, this package would have been written in JSON snippets and the devil would be laughing at us.

Code is simply a better way to express actions on a computer. It has better:
- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to express, simply, anything you can have a computer do.
- **Representation in LLM training corpora:** why not leverage the blessing that plenty of high-quality code examples are already included in LLM training data?
This is illustrated in the image below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

This is why we put the emphasis on code agents, in this case Python agents, which in turn meant putting more effort into building a secure Python interpreter.

### Local Python interpreter

By default, the `CodeAgent` runs LLM-generated code in your environment. This execution is not done by the vanilla Python interpreter: we re-built a more secure `LocalPythonInterpreter` from the ground up. This interpreter is designed for security by:
- restricting imports to a list explicitly passed by the user
- capping the number of operations to prevent infinite loops and resource bloat
- refusing to perform any operation that isn't pre-defined

We have used this in many use cases and never observed any damage to the environment.

However, this solution is not watertight: one could imagine scenarios where LLMs fine-tuned for malicious actions could still hurt your environment. For instance, if you have allowed an innocuous package like `Pillow` to process images, the LLM could save thousands of images to bloat your hard drive. It's certainly not likely if you chose the LLM engine yourself, but it could happen.

So if you want to be extra cautious, you can use the remote code execution option described below.

### E2B code executor

For maximum security, you can use our integration with E2B to run code in a sandboxed environment. This is a remote execution service that runs your code in an isolated container, making it impossible for the code to affect your local environment.

For this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.

Then you can install the required packages with `pip install e2b-code-interpreter python-dotenv`.

Now you're set! To set the code executor to E2B, simply pass the flag `use_e2b_executor=True` when initializing your `CodeAgent`. Note that you should add all of your tools' dependencies to `additional_authorized_imports`, so that the executor installs them.

```py
from smolagents import CodeAgent, VisitWebpageTool, HfApiModel
agent = CodeAgent(
    tools = [VisitWebpageTool()],
    model=HfApiModel(),
    additional_authorized_imports=["requests", "markdownify"],
    use_e2b_executor=True
)

agent.run("What was Abraham Lincoln's preferred pet?")
```

E2B code execution does not currently work with multi-agents: calling an agent inside a code blob that should be executed remotely is a mess. But we're working on adding it!
{ "source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/secure_code_execution.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/secure_code_execution.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5209 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Tools

[[open-in-colab]]

Here, we're going to look at advanced tool usage.

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

- [Tools](#tools)
    - [What is a tool, and how do I build one?](#what-is-a-tool-and-how-do-i-build-one)
    - [Share your tool to the Hub](#share-your-tool-to-the-hub)
    - [Import a Space as a tool](#import-a-space-as-a-tool)
    - [Use LangChain tools](#use-langchain-tools)
    - [Manage your agent's toolbox](#manage-your-agents-toolbox)
    - [Use a collection of tools](#use-a-collection-of-tools)

### What is a tool, and how do I build one?

A tool is mostly a function that an LLM can use in an agentic system. But to use it, the LLM needs to be given an API: name, tool description, input types and descriptions, output type. So it cannot be only a function. It should be a class.

At its core, a tool is a class that wraps a function with metadata that helps the LLM understand how to use it.

Here's how it looks:

```python
from smolagents import Tool

class HFModelDownloadsTool(Tool):
    name = "model_download_counter"
    description = """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint."""
    inputs = {
        "task": {
            "type": "string",
            "description": "the task category (such as text-classification, depth-estimation, etc)",
        }
    }
    output_type = "string"

    def forward(self, task: str):
        from huggingface_hub import list_models

        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id

model_downloads_tool = HFModelDownloadsTool()
```

The custom tool subclasses `Tool` to inherit useful methods. The child class also defines:
- A `name` attribute, which corresponds to the name of the tool itself. The name usually describes what the tool does. Since the code returns the model with the most downloads for a task, let's name it `model_download_counter`.
- A `description` attribute, which is used to populate the agent's system prompt.
- An `inputs` attribute, a dictionary with keys `"type"` and `"description"`. It contains information that helps the Python interpreter make educated choices about the input.
- An `output_type` attribute, which specifies the output type. The types for both `inputs` and `output_type` should be [Pydantic formats](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema); they can be any of these: [`~AUTHORIZED_TYPES`].
- A `forward` method which contains the inference code to be executed.

And that's all it needs to be used in an agent!
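For instance, a minimal sketch of plugging the tool defined above into an agent (the model id here is an assumption; any model usable with `HfApiModel` would do):

```python
from smolagents import CodeAgent, HfApiModel

# Pass the tool instance defined above to an agent and let the agent call it as needed.
agent = CodeAgent(
    tools=[model_downloads_tool],
    model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"),  # assumed model id, used elsewhere in these docs
)
agent.run("What is the most downloaded text-classification model on the Hugging Face Hub?")
```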
There's another way to build a tool. In the [guided tour](../guided_tour), we implemented a tool using the `@tool` decorator. The [`tool`] decorator is the recommended way to define simple tools, but sometimes you need more than that: for example, using several methods in a class for more clarity, or using additional class attributes. In that case, you can build your tool by subclassing [`Tool`] as described above.

### Share your tool to the Hub

You can share your custom tool to the Hub by calling [`~Tool.push_to_hub`] on the tool. Make sure you've created a repository for it on the Hub and are using a token with write access.

```python
model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
```

For the push to the Hub to work, your tool needs to follow some rules:
- All methods are self-contained, i.e. they only use variables that come from their arguments.
- As per the above point, **all imports should be defined directly within the tool's functions**, otherwise you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.
- If you subclass the `__init__` method, you cannot give it any argument other than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents sharing them properly to the Hub. And anyway, the whole point of making a dedicated class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can still create a class attribute anywhere in your code by assigning it to `self.your_variable`.

Once your tool is pushed to the Hub, you can visualize it. [Here](https://huggingface.co/spaces/m-ric/hf-model-downloads) is the `model_downloads_tool` that I pushed. It has a nice gradio interface.

Diving into the tool files, you can find that all of the tool's logic lives in [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py). That is where you can inspect a tool shared by someone else.

Then you can load the tool with [`load_tool`] or create it with [`~Tool.from_hub`] and pass it to the `tools` parameter in your agent. Since running tools means running custom code, you need to make sure you trust the repository, so we require passing `trust_remote_code=True` to load a tool from the Hub.

```python
from smolagents import load_tool, CodeAgent

model_download_tool = load_tool(
    "{your_username}/hf-model-downloads",
    trust_remote_code=True
)
```

### Import a Space as a tool

You can directly import a Space from the Hub as a tool using the [`Tool.from_space`] method!

You only need to provide the id of the Space on the Hub, its name, and a description that will help your agent understand what the tool does. Under the hood, this uses the [`gradio-client`](https://pypi.org/project/gradio-client/) library to call the Space.

For instance, let's import the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) Space from the Hub and use it to generate an image.

```python
image_generation_tool = Tool.from_space(
    "black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generate an image from a prompt"
)

image_generation_tool("A sunny beach")
```

And voilà, here's your image!
🏖️

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp">

Then you can use this tool just like any other tool. For example, let's improve the prompt `a rabbit wearing a space suit` and generate an image of it. This example also shows how you can pass additional arguments to the agent.

```python
from smolagents import CodeAgent, HfApiModel

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(tools=[image_generation_tool], model=model)

agent.run(
    "Improve this prompt, then generate an image of it.", additional_args={'user_prompt': 'A rabbit wearing a space suit'}
)
```

```text
=== Agent thoughts:
improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background"

Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt.
>>> Agent is executing the code below:
image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background")
final_answer(image)
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp">

How cool is this? 🤩

### Use LangChain tools

We love LangChain and think it has a very compelling suite of tools. To import a tool from LangChain, use the `from_langchain()` method.

Here is how you can use it to recreate the intro's search result with a LangChain web search tool. This tool will need `pip install langchain google-search-results -q` to work properly.

```python
from langchain.agents import load_tools

search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])

agent = CodeAgent(tools=[search_tool], model=model)

agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?")
```

### Manage your agent's toolbox

You can manage an agent's toolbox by adding or replacing tools in the `agent.tools` attribute, since it is a standard dictionary.

Let's add the `model_download_tool` to an existing agent initialized with only the default toolbox.

```python
from smolagents import HfApiModel

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")

agent = CodeAgent(tools=[], model=model, add_base_tools=True)
agent.tools[model_download_tool.name] = model_download_tool
```

Now we can leverage the new tool:

```python
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?"
)
```

> [!TIP]
> Beware of not adding too many tools to an agent: this can overwhelm weaker LLM engines.

### Use a collection of tools

You can leverage tool collections by using the `ToolCollection` object. It supports loading either a collection from the Hub or the tools of an MCP server.

#### Tool collection from a collection in the Hub

You can leverage it with the slug of the collection you want to use. Then pass the tools as a list to initialize your agent, and start using them!
```py
import os

from smolagents import ToolCollection, CodeAgent

image_tool_collection = ToolCollection.from_hub(
    collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f",
    token="<YOUR_HUGGINGFACEHUB_API_TOKEN>"
)
agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True)

agent.run("Please draw me a picture of rivers and lakes.")
```

To speed up the start, tools are loaded only if they are called by the agent.

#### Tool collection from any MCP server

Leverage tools from the hundreds of MCP servers available on [glama.ai](https://glama.ai/mcp/servers) or [smithery.ai](https://smithery.ai/).

The MCP servers' tools can be loaded in a `ToolCollection` object as follows:

```py
from smolagents import ToolCollection, CodeAgent
from mcp import StdioServerParameters

server_parameters = StdioServerParameters(
    command="uv",
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with ToolCollection.from_mcp(server_parameters) as tool_collection:
    agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True)
    agent.run("Please find a remedy for hangover.")
```
{ "source": "huggingface/smolagents", "title": "docs/source/hi/tutorials/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/hi/tutorials/tools.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11755 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Introduction to agents

> [!TIP]
> Translator's note: the Chinese industry term for agent is "智能体". This translation keeps the English word "agent" untranslated for a smoother reading experience. (In a mostly-Chinese article, it's easier to notice the English words. Attention Is All You Need!)

## 🤔 What are agents?

Any efficient system using AI needs to provide LLMs with some kind of access to the real world: for instance, the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have **_agency_**. Agentic programs are the gateway to the outside world for LLMs.

> [!TIP]
> AI agents are **programs where LLM outputs control the workflow**.

Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM's input on the code workflow is the level of agency of LLMs in the system.

Note that with this definition, "agent" is not a discrete, 0-or-1 definition: instead, "agency" evolves on a continuous spectrum, as you give more or less power to the LLM in your workflow.

See in the table below how agency varies across systems:

| Agency level | Description                                              | Name             | Example pattern                                     |
| ------------ | -------------------------------------------------------- | ---------------- | --------------------------------------------------- |
| ☆☆☆          | LLM output has no impact on program flow                  | Simple processor | `process_llm_output(llm_response)`                   |
| ★☆☆          | LLM output determines an if/else switch                   | Router           | `if llm_decision(): path_a() else: path_b()`          |
| ★★☆          | LLM output determines function execution                  | Tool caller      | `run_function(llm_chosen_tool, llm_chosen_args)`      |
| ★★★          | LLM output controls iteration and program continuation    | Multi-step agent | `while llm_should_continue(): execute_next_step()`    |
| ★★★          | One agentic workflow can start another agentic workflow   | Multi-agent      | `if llm_trigger(): execute_agent()`                   |

A multi-step agent has this code structure:

```python
memory = [user_defined_task]
while llm_should_continue(memory):  # this loop is the multi-step part
    action = llm_get_next_action(memory)  # this is the tool-calling part
    observations = execute_action(action)
    memory += [action, observations]
```

This agentic system runs in a loop, executing a new action at each step (the action may involve calling some pre-determined *tools* that are just functions), until its observations show that a satisfactory state for solving the given task has been reached. Here is an example of how a multi-step agent can solve a simple math problem:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif"/>
</div>

## ✅ When to use agents / ⛔ when to avoid them

Agents are useful when you need an LLM to determine the workflow of an app. But they are often overkill. The question is: do I really need flexibility in the workflow to efficiently solve the task at hand? If the pre-determined workflow falls short too often, you need more flexibility.

Let's take an example: say you're making an app that handles customer requests on a surf trip website.

You could know in advance that requests will fall into one of 2 buckets (based on user choice), and you have a pre-defined workflow for each of these 2 cases:

1. Want some information on the trips? ⇒ give them access to a search bar to search your knowledge base
2. Want to talk to sales? ⇒ let them fill in a contact form.

If that deterministic workflow fits all queries, by all means just code everything! This gives you a 100% reliable system, with no risk of error introduced by letting an unpredictable LLM meddle with your workflow. For the sake of simplicity and robustness, it is advisable to not use any agentic behaviour.

But what if the workflow can't be determined that well in advance?

For instance, a user might want to ask: `"I can come on Monday, but I forgot my passport so risk being delayed to Wednesday, is it possible to take me and my stuff to surf on Tuesday morning, with a cancellation insurance?"` This question hinges on many factors, and probably none of the pre-determined criteria above will suffice for this request.

If the pre-determined workflow falls short too often, that means you need more flexibility. That is where an agentic setup comes in.

In the example above, you could just make a multi-step agent that has access to a weather API for forecasts, a Google Maps API to compute travel distances, an employee availability dashboard, and a RAG system over your knowledge base.

Until recently, computer programs were restricted to pre-determined workflows, trying to handle complexity by piling up if/else switches. They focused on extremely narrow tasks, like "compute the sum of these numbers" or "find the shortest path in this graph". But actually, most real-life tasks, like our trip example above, do not fit pre-determined workflows. Agentic systems open up the vast world of real-world tasks to programs!

## Why `smolagents`?

For some low-level agentic use cases, like chains or routers, you can write all the code yourself. You'll be much better off doing so, since it lets you control and understand your system better.

But once you start going for more complicated behaviours, like letting an LLM call a function ("tool calling") or letting an LLM run a while loop ("multi-step agent"), some abstractions become necessary:
- For tool calling, you need to parse the agent's output, so this output needs a pre-defined format such as
"Thought: I should call tool 'get_weather'. Action: get_weather(Paris).", which you then parse with a pre-defined function, and the system prompt given to the LLM should tell it to use this format.
- For a multi-step agent where the LLM output determines the loop, you need to give the LLM a different prompt depending on what happened in the previous loop iteration: so you need some kind of memory.

See? With these two examples, we already found the need for a few building blocks:
- Of course, an LLM that acts as the engine of the system
- A list of tools the agent can access
- A parser that extracts tool calls from the LLM output
- A system prompt synced with the parser
- A memory

But wait, since we give LLMs room in decisions, they will surely make mistakes: so we need error logging and retry mechanisms.

All these elements need tight coupling to make a well-functioning system. That's why we decided we needed to build basic building blocks to make all of this work together.

## Code agents

In a multi-step agent, at each step the LLM can write an action, in the form of calls to external tools. A common format for writing these actions (used by Anthropic, OpenAI, and others) is generally some variant of "writing the action as a JSON of the tool name and the arguments to use, which you then parse to know which tool to execute and with which arguments".

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that LLMs that do their tool calling in code work much better.

The reason is simple: _we crafted our programming languages specifically to be the best possible way to express actions performed by a computer_. If JSON snippets were a better way to express things, JSON would have become the top programming language, and programming would be very hard.

The figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030), illustrates some advantages of writing actions in code:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

Compared to JSON snippets, writing actions in code gives you better:
- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to simply express anything you can have a computer do.
- **Representation in LLM training data:** plenty of quality code actions are already included in LLMs' training data, which means they're already trained for this!
{ "source": "huggingface/smolagents", "title": "docs/source/zh/conceptual_guides/intro_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/conceptual_guides/intro_agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5058 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# How do multi-step agents work?

The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) is currently the main approach to building agents.

The name is based on the combination of two words, "Reason" and "Act". Indeed, agents following this architecture will solve their task in as many steps as needed, each step consisting of a reasoning step followed by an action step, in which the agent formulates tool calls that bring it closer to solving the task at hand.

The ReAct process involves keeping a memory of past steps.

> [!TIP]
> Read the [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.

Here is a video overview of how that works:

<div class="flex justify-center">
    <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" />
    <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Agent_ManimCE.gif" />
</div>

![Framework of a ReAct agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)

We implement two versions of the ToolCallingAgent:
- [`ToolCallingAgent`] generates its tool calls as JSON in its output.
- [`CodeAgent`] is a new type of ToolCallingAgent that generates its tool calls as blocks of code, which works really well for LLMs with strong coding performance.

> [!TIP]
> We also provide an option to run agents in single-step mode: just pass `single_step=True` when launching the agent, e.g. `agent.run(your_task, single_step=True)`
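As a rough sketch of the two variants described above (the tool list and the default model here are placeholder choices, not prescriptions):

```python
from smolagents import CodeAgent, ToolCallingAgent, DuckDuckGoSearchTool, HfApiModel

model = HfApiModel()  # pass a model id to pick a specific model; default is a hosted instruct model

# Variant 1: the agent writes its tool calls as JSON.
json_agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=model)

# Variant 2: the agent writes its tool calls as Python code blocks.
code_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

json_agent.run("What is the capital of Morocco?")
code_agent.run("What is the capital of Morocco?")
```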
{ "source": "huggingface/smolagents", "title": "docs/source/zh/conceptual_guides/react.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/conceptual_guides/react.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 1956 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Orchestrate a multi-agent system 🤖🤝🤖

[[open-in-colab]]

In this notebook we will build a **multi-agent web browser: an agentic system with several agents collaborating to solve problems by searching the web!**

The `ManagedAgent` object will wrap the agent that manages the web search, forming a simple hierarchy:

```
                +----------------+
                | Manager agent  |
                +----------------+
                         |
          _______________|______________
         |                              |
  Code interpreter   +--------------------------------+
       tool          |         Managed agent          |
                     |      +------------------+      |
                     |      | Web Search agent |      |
                     |      +------------------+      |
                     |         |            |         |
                     |  Web Search tool     |         |
                     |             Visit webpage tool |
                     +--------------------------------+
```

Let's build this system together. Run the line below to install the required dependencies:

```
!pip install markdownify duckduckgo-search smolagents --upgrade -q
```

We need to log in to the Hugging Face Hub to call the HF Inference API:

```
from huggingface_hub import login

login()
```

⚡️ The HF Inference API can quickly and easily run any open-source model, so our agent will use the `HfApiModel` class to call [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) through it.

_Note:_ The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).

```py
model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
```

## 🔍 Create a web search tool

For web browsing, we could already use the existing [`DuckDuckGoSearchTool`](https://github.com/huggingface/smolagents/blob/main/src/smolagents/default_tools.py#L151-L176) tool as a Google-search equivalent, but we will also need to be able to look at the pages that `DuckDuckGoSearchTool` finds. To do so, we could import the library's built-in `VisitWebpageTool`, but we will rebuild it to see how it's done.

We will build our own `VisitWebpageTool` from scratch using `markdownify`.

```py
import re
import requests
from markdownify import markdownify
from requests.exceptions import RequestException
from smolagents import tool


@tool
def visit_webpage(url: str) -> str:
    """Visits a webpage at the given URL and returns its content as a markdown string.

    Args:
        url: The URL of the webpage to visit.

    Returns:
        The content of the webpage converted to Markdown, or an error message if the request fails.
    """
    try:
        # Send a GET request to the URL
        response = requests.get(url)
        response.raise_for_status()  # Raise an exception for bad status codes

        # Convert the HTML content to Markdown
        markdown_content = markdownify(response.text).strip()

        # Remove multiple line breaks
        markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content)

        return markdown_content

    except RequestException as e:
        return f"Error fetching the webpage: {str(e)}"
    except Exception as e:
        return f"An unexpected error occurred: {str(e)}"
```

Now let's initialize and test our tool!

```py
print(visit_webpage("https://en.wikipedia.org/wiki/Hugging_Face")[:500])
```

## Build our multi-agent system 🤖🤝🤖

Now that we have the tools `search` and `visit_webpage`, we can use them to create the web agent.

Which configuration should we choose for this agent?
- Web browsing is a single-timeline task that does not require parallel tool calls, so JSON tool calling works well for it. We therefore choose the `ToolCallingAgent`.
- Also, since web search sometimes requires exploring many pages before finding the correct answer, we prefer to increase `max_steps` to 10.

```py
from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    HfApiModel,
    ManagedAgent,
    DuckDuckGoSearchTool,
    LiteLLMModel,
)

model = HfApiModel(model_id)

web_agent = ToolCallingAgent(
    tools=[DuckDuckGoSearchTool(), visit_webpage],
    model=model,
    max_steps=10,
)
```

We then wrap this agent into a `ManagedAgent` so that it can be called by its manager agent.

```py
managed_web_agent = ManagedAgent(
    agent=web_agent,
    name="search",
    description="Runs web searches for you. Give it your query as an argument.",
)
```

Finally we create a manager agent, and upon initialization we pass our managed agent to it in its `managed_agents` argument. Since this agent is in charge of planning and thinking, advanced reasoning will be beneficial, so a `CodeAgent` is the best choice. Also, we want to ask a question that involves the current year and requires additional data calculations: so let's add `additional_authorized_imports=["time", "numpy", "pandas"]`, in case the agent needs these packages.

```py
manager_agent = CodeAgent(
    tools=[],
    model=model,
    managed_agents=[managed_web_agent],
    additional_authorized_imports=["time", "numpy", "pandas"],
)
```

That's all! Now let's run our system! We pick a question that requires both some calculation and some research:

```py
answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.")
```

We get this report as the answer:

```
Based on current growth projections and energy consumption estimates, if LLM trainings continue to scale up at the
current rhythm until 2030:

1. The electric power required to power the biggest training runs by 2030 would be approximately 303.74 GW, which
translates to about 2,660,762 GWh/year.

1. Comparing this to countries' electricity consumption:
   - It would be equivalent to about 34% of China's total electricity consumption.
   - It would exceed the total electricity consumption of India (184%), Russia (267%), and Japan (291%).
   - It would be nearly 9 times the electricity consumption of countries like Italy or Mexico.

2. Source of numbers:
   - The initial estimate of 5 GW for future LLM training comes from AWS CEO Matt Garman.
   - The growth projection used a CAGR of 79.80% from market research by Springs.
   - Country electricity consumption data is from the U.S. Energy Information Administration, primarily for the year
2021.
```

Seems like we'll need some sizeable power plants if the [scaling hypothesis](https://gwern.net/scaling-hypothesis) continues to hold true. Our agents managed to collaborate efficiently towards solving the task! ✅

💡 You can easily extend this orchestration to more agents: one executes code, one performs web search, one handles file loading…
{ "source": "huggingface/smolagents", "title": "docs/source/zh/examples/multiagents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/examples/multiagents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 6313 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Agentic RAG

[[open-in-colab]]

Retrieval-Augmented Generation (RAG) is "using an LLM to answer a user query, but basing the answer on information retrieved from a knowledge base". It has many advantages over using a vanilla or fine-tuned LLM: to name a few, it allows grounding the answer on true facts and reducing confabulations; it allows providing the LLM with domain-specific knowledge; and it allows fine-grained control over access to information from the knowledge base.

However, vanilla RAG has limitations, and these two are particularly salient:
- It performs only one retrieval step: if the results are bad, the generation will in turn be bad.
- Semantic similarity is computed with the user query as a reference, which may be suboptimal: for instance, the user query is usually a question, while the document containing the true answer is usually phrased in the affirmative, so its similarity score will be lower than that of other source documents phrased as questions, leading to a risk of missing relevant information.

We can alleviate these problems by making a RAG agent: very simply, an agent equipped with a retrieval tool! This agent will: ✅ formulate the query itself and ✅ re-retrieve if needed.

So it will be smarter than vanilla RAG, because it can formulate the query itself rather than directly using the user query as the reference. This way it can get closer to the target documents and improve retrieval accuracy, as in [HyDE](https://huggingface.co/papers/2212.10496). This agent can also use the generated snippets and re-retrieve when needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).

Let's build this system. 🛠️

Run the line below to install the required dependencies:

```bash
!pip install smolagents pandas langchain langchain-community sentence-transformers rank_bm25 --upgrade -q
```

You need a valid token as the environment variable `HF_TOKEN` to call the HF Inference API. We use python-dotenv to load it.

```py
from dotenv import load_dotenv

load_dotenv()
```

We first load a knowledge base on which we want to perform RAG: this dataset is a compilation of the documentation pages of many Hugging Face libraries, stored as markdown. We will keep only the documentation of the `transformers` library. Then we prepare the knowledge base for the retriever by processing the dataset and storing it into a vector database. We use [LangChain](https://python.langchain.com/docs/introduction/) for its excellent vector database utilities.

```py
import datasets
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.retrievers import BM25Retriever

knowledge_base = datasets.load_dataset("m-ric/huggingface_doc", split="train")
knowledge_base = knowledge_base.filter(lambda row: row["source"].startswith("huggingface/transformers"))

source_docs = [
    Document(page_content=doc["text"], metadata={"source": doc["source"].split("/")[1]})
    for doc in knowledge_base
]

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    add_start_index=True,
    strip_whitespace=True,
    separators=["\n\n", "\n", ".", " ", ""],
)
docs_processed = text_splitter.split_documents(source_docs)
```

Now the documents are ready. Let's build our agentic RAG system!

👉 We only need a RetrieverTool that our agent can use to retrieve information from the knowledge base.

Since we need to add a vectordb as an attribute of the tool, we cannot simply use the simple tool constructor with a `@tool` decorator: so we follow the advanced setup highlighted in the [tools tutorial](../tutorials/tools).

```py
from smolagents import Tool

class RetrieverTool(Tool):
    name = "retriever"
    description = "Uses semantic search to retrieve the parts of transformers documentation that could be most relevant to answer your query."
    inputs = {
        "query": {
            "type": "string",
            "description": "The query to perform. This should be semantically close to your target documents. Use the affirmative form rather than a question.",
        }
    }
    output_type = "string"

    def __init__(self, docs, **kwargs):
        super().__init__(**kwargs)
        self.retriever = BM25Retriever.from_documents(
            docs, k=10
        )

    def forward(self, query: str) -> str:
        assert isinstance(query, str), "Your search query must be a string"

        docs = self.retriever.invoke(
            query,
        )
        return "\nRetrieved documents:\n" + "".join(
            [
                f"\n\n===== Document {str(i)} =====\n" + doc.page_content
                for i, doc in enumerate(docs)
            ]
        )

retriever_tool = RetrieverTool(docs_processed)
```

BM25, the retrieval method we use here, is a classic choice because it is very fast to set up. To improve retrieval accuracy, you could replace BM25 with semantic search based on vector representations of the documents: head to the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) to select a good embedding model (a short sketch of this swap is given at the end of this page).

Now that we have created a tool that can retrieve information from the knowledge base, it is easy to create an agent that leverages this `retriever_tool`! This agent is initialized with the following arguments:
- `tools`: a list of tools the agent will be able to call.
- `model`: the LLM that powers the agent.

Our `model` must be a callable that takes a list of messages as input and returns text. It also needs to accept a `stop_sequences` argument indicating when to stop generating. For convenience, we directly use the `HfApiModel` class provided in the package to get an LLM engine that calls Hugging Face's Inference API.

We use [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as the LLM engine because:
- It has a long 128k context, which is helpful for processing long source documents.
- It is served for free at all times on HF's Inference API!

_Note:_ The Inference API hosts models based on various criteria, and deployed models may be updated or replaced without prior notice. Learn more about it [here](https://huggingface.co/docs/api-inference/supported-models).

```py
from smolagents import HfApiModel, CodeAgent

agent = CodeAgent(
    tools=[retriever_tool], model=HfApiModel("meta-llama/Llama-3.3-70B-Instruct"), max_steps=4, verbose=True
)
```

Upon initializing the CodeAgent, it has automatically been given a default system prompt that tells the LLM engine to process step by step and to generate tool calls as code snippets, but you can replace this prompt template with your own as needed. Then, when its `.run()` method is launched, the agent takes care of calling the LLM engine and executing the tool calls in a loop, until the `final_answer` tool is called with the final answer as its argument.

```py
agent_output = agent.run("For a transformers model training, which is slower, the forward or the backward pass?")

print("Final output:")
print(agent_output)
```
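As mentioned above, BM25 can be swapped for semantic search. Here is a minimal sketch of that swap, assuming a FAISS vector store (`faiss-cpu` would need to be installed) and an assumed sentence-transformers embedding model; it is not part of this tutorial's required setup:

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Embed the processed chunks with a sentence-transformers model (model id is an assumption).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.from_documents(docs_processed, embeddings)

# Inside RetrieverTool.__init__, the BM25 retriever could then be replaced by:
# self.retriever = vector_store.as_retriever(search_kwargs={"k": 10})
```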
{ "source": "huggingface/smolagents", "title": "docs/source/zh/examples/rag.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/examples/rag.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5294 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Text-to-SQL

[[open-in-colab]]

In this tutorial, we'll see how to implement an agent that leverages SQL using `smolagents`.

> Let's start with the classic question: why not simply use a standard text-to-SQL pipeline?

A standard text-to-SQL pipeline is brittle, since the generated SQL query can be incorrect. Even worse, the query could be incorrect without raising an error, and instead return incorrect or useless results.

👉 An agent system, by contrast, can critically inspect the output and decide whether the query needs to be changed, which gives it a huge performance boost.

Let's build this agent! 💪

First, we set up a SQL environment:

```py
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
    Float,
    insert,
    inspect,
    text,
)

engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()

# create city SQL table
table_name = "receipts"
receipts = Table(
    table_name,
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("customer_name", String(16), primary_key=True),
    Column("price", Float),
    Column("tip", Float),
)
metadata_obj.create_all(engine)

rows = [
    {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
    {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
    {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
    {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
]
for row in rows:
    stmt = insert(receipts).values(**row)
    with engine.begin() as connection:
        cursor = connection.execute(stmt)
```

### Build our agent

Now let's build an agent that answers questions using SQL queries. The tool's `description` attribute will be embedded in the LLM's prompt by the agent system: it gives the LLM information about how to use the tool. This is where we want to describe the SQL table.

```py
inspector = inspect(engine)
columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]

table_description = "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
print(table_description)
```

```text
Columns:
  - receipt_id: INTEGER
  - customer_name: VARCHAR(16)
  - price: FLOAT
  - tip: FLOAT
```

Now let's build our tool. It needs the following (see the [tools documentation](../tutorials/tools) for more detail):
- A docstring with an `Args:` section listing the arguments.
- Type hints on both inputs and output.

```py
from smolagents import tool

@tool
def sql_engine(query: str) -> str:
    """
    Allows you to perform SQL queries on the table. Returns a string representation of the result.
    The table is named 'receipts'. Its description is as follows:
        Columns:
        - receipt_id: INTEGER
        - customer_name: VARCHAR(16)
        - price: FLOAT
        - tip: FLOAT

    Args:
        query: The query to perform. This should be correct SQL.
""" output = "" with engine.connect() as con: rows = con.execute(text(query)) for row in rows: output += "\n" + str(row) return output ``` 我们现在使用这个工具来创建一个 agent。我们使用 `CodeAgent`,这是 smolagent 的主要 agent 类:一个在代码中编写操作并根据 ReAct 框架迭代先前输出的 agent。 这个模型是驱动 agent 系统的 LLM。`HfApiModel` 允许你使用 HF Inference API 调用 LLM,无论是通过 Serverless 还是 Dedicated endpoint,但你也可以使用任何专有 API。 ```py from smolagents import CodeAgent, HfApiModel agent = CodeAgent( tools=[sql_engine], model=HfApiModel("meta-llama/Meta-Llama-3.1-8B-Instruct"), ) agent.run("Can you give me the name of the client who got the most expensive receipt?") ``` ### Level 2: 表连接 现在让我们增加一些挑战!我们希望我们的 agent 能够处理跨多个表的连接。因此,我们创建一个新表,记录每个 receipt_id 的服务员名字! ```py table_name = "waiters" receipts = Table( table_name, metadata_obj, Column("receipt_id", Integer, primary_key=True), Column("waiter_name", String(16), primary_key=True), ) metadata_obj.create_all(engine) rows = [ {"receipt_id": 1, "waiter_name": "Corey Johnson"}, {"receipt_id": 2, "waiter_name": "Michael Watts"}, {"receipt_id": 3, "waiter_name": "Michael Watts"}, {"receipt_id": 4, "waiter_name": "Margaret James"}, ] for row in rows: stmt = insert(receipts).values(**row) with engine.begin() as connection: cursor = connection.execute(stmt) ``` 因为我们改变了表,我们需要更新 `SQLExecutorTool`,让 LLM 能够正确利用这个表的信息。 ```py updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output. It can use the following tables:""" inspector = inspect(engine) for table in ["receipts", "waiters"]: columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)] table_description = f"Table '{table}':\n" table_description += "Columns:\n" + "\n".join([f" - {name}: {col_type}" for name, col_type in columns_info]) updated_description += "\n\n" + table_description print(updated_description) ``` 因为这个request 比之前的要难一些,我们将 LLM 引擎切换到更强大的 [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)! ```py sql_engine.description = updated_description agent = CodeAgent( tools=[sql_engine], model=HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct"), ) agent.run("Which waiter got more total money from tips?") ``` 它直接就能工作!设置过程非常简单,难道不是吗? 这个例子到此结束!我们涵盖了这些概念: - 构建新工具。 - 更新工具的描述。 - 切换到更强大的 LLM 有助于 agent 推理。 ✅ 现在你可以构建你一直梦寐以求的 text-to-SQL 系统了!✨
{ "source": "huggingface/smolagents", "title": "docs/source/zh/examples/text_to_sql.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/examples/text_to_sql.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 5653 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Agents(智能体) <Tip warning={true}> Smolagents 是一个实验性的 API,可能会随时发生变化。由于 API 或底层模型可能发生变化,代理返回的结果也可能有所不同。 </Tip> 要了解有关智能体和工具的更多信息,请务必阅读[入门指南](../index)。本页面包含基础类的 API 文档。 ## 智能体(Agents) 我们的智能体继承自 [`MultiStepAgent`],这意味着它们可以执行多步操作,每一步包含一个思考(thought),然后是一个工具调用和执行。请阅读[概念指南](../conceptual_guides/react)以了解更多信息。 我们提供两种类型的代理,它们基于主要的 [`Agent`] 类: - [`CodeAgent`] 是默认代理,它以 Python 代码编写工具调用。 - [`ToolCallingAgent`] 以 JSON 编写工具调用。 两者在初始化时都需要提供参数 `model` 和工具列表 `tools`。 ### 智能体类 [[autodoc]] MultiStepAgent [[autodoc]] CodeAgent [[autodoc]] ToolCallingAgent ### ManagedAgent _此类自 1.8.0 起已被弃用:现在您只需向普通代理传递 `name` 和 `description` 属性即可使其可被管理代理调用。_ ### stream_to_gradio [[autodoc]] stream_to_gradio ### GradioUI > [!TIP] > 您必须安装 `gradio` 才能使用 UI。如果尚未安装,请运行 `pip install smolagents[gradio]`。 [[autodoc]] GradioUI ## 提示(Prompts) [[autodoc]] smolagents.agents.PromptTemplates [[autodoc]] smolagents.agents.PlanningPromptTemplate [[autodoc]] smolagents.agents.ManagedAgentPromptTemplate [[autodoc]] smolagents.agents.FinalAnswerPromptTemplate
{ "source": "huggingface/smolagents", "title": "docs/source/zh/reference/agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/reference/agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 1797 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 模型 <Tip warning={true}> Smolagents 是一个实验性 API,其可能会随时发生更改。由于 API 或底层模型可能会变化,智能体返回的结果可能会有所不同。 </Tip> 要了解有关智能体和工具的更多信息,请务必阅读[入门指南](../index)。此页面包含底层类的 API 文档。 ## 模型 您可以自由创建和使用自己的模型为智能体提供支持。 您可以使用任何 `model` 可调用对象作为智能体的模型,只要满足以下条件: 1. 它遵循[消息格式](./chat_templating)(`List[Dict[str, str]]`),将其作为输入 `messages`,并返回一个 `str`。 2. 它在生成的序列到达 `stop_sequences` 参数中指定的内容之前停止生成输出。 要定义您的 LLM,可以创建一个 `custom_model` 方法,该方法接受一个 [messages](./chat_templating) 列表,并返回一个包含 `.content` 属性的对象,其中包含生成的文本。此可调用对象还需要接受一个 `stop_sequences` 参数,用于指示何时停止生成。 ```python from huggingface_hub import login, InferenceClient login("<YOUR_HUGGINGFACEHUB_API_TOKEN>") model_id = "meta-llama/Llama-3.3-70B-Instruct" client = InferenceClient(model=model_id) def custom_model(messages, stop_sequences=["Task"]): response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000) answer = response.choices[0].message return answer ``` 此外,`custom_model` 还可以接受一个 `grammar` 参数。如果在智能体初始化时指定了 `grammar`,则此参数将在调用模型时传递,以便进行[约束生成](https://huggingface.co/docs/text-generation-inference/conceptual/guidance),从而强制生成格式正确的智能体输出。 ### TransformersModel 为了方便起见,我们添加了一个 `TransformersModel`,该模型通过为初始化时指定的 `model_id` 构建一个本地 `transformers` pipeline 来实现上述功能。 ```python from smolagents import TransformersModel model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct") print(model([{"role": "user", "content": [{"type": "text", "text": "Ok!"}]}], stop_sequences=["great"])) ``` ```text >>> What a ``` > [!TIP] > 您必须在机器上安装 `transformers` 和 `torch`。如果尚未安装,请运行 `pip install smolagents[transformers]`。 [[autodoc]] TransformersModel ### HfApiModel `HfApiModel` 封装了 huggingface_hub 的 [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference),用于执行 LLM。它支持 HF 的 [Inference API](https://huggingface.co/docs/api-inference/index) 以及 Hub 上所有可用的[Inference Providers](https://huggingface.co/blog/inference-providers)。 ```python from smolagents import HfApiModel messages = [ {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]} ] model = HfApiModel() print(model(messages)) ``` ```text >>> Of course! If you change your mind, feel free to reach out. Take care! 
``` [[autodoc]] HfApiModel ### LiteLLMModel `LiteLLMModel` 利用 [LiteLLM](https://www.litellm.ai/) 支持来自不同提供商的 100+ 个 LLM。您可以在模型初始化时传递 `kwargs`,这些参数将在每次使用模型时被使用,例如下面的示例中传递了 `temperature`。 ```python from smolagents import LiteLLMModel messages = [ {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]} ] model = LiteLLMModel("anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10) print(model(messages)) ``` [[autodoc]] LiteLLMModel ### OpenAIServerModel 此类允许您调用任何 OpenAIServer 兼容模型。 以下是设置方法(您可以自定义 `api_base` URL 指向其他服务器): ```py import os from smolagents import OpenAIServerModel model = OpenAIServerModel( model_id="gpt-4o", api_base="https://api.openai.com/v1", api_key=os.environ["OPENAI_API_KEY"], ) ``` [[autodoc]] OpenAIServerModel ### AzureOpenAIServerModel `AzureOpenAIServerModel` 允许您连接到任何 Azure OpenAI 部署。 下面是设置示例,请注意,如果已经设置了相应的环境变量,您可以省略 `azure_endpoint`、`api_key` 和 `api_version` 参数——环境变量包括 `AZURE_OPENAI_ENDPOINT`、`AZURE_OPENAI_API_KEY` 和 `OPENAI_API_VERSION`。 请注意,`OPENAI_API_VERSION` 没有 `AZURE_` 前缀,这是由于底层 [openai](https://github.com/openai/openai-python) 包的设计所致。 ```py import os from smolagents import AzureOpenAIServerModel model = AzureOpenAIServerModel( model_id = os.environ.get("AZURE_OPENAI_MODEL"), azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"), api_key=os.environ.get("AZURE_OPENAI_API_KEY"), api_version=os.environ.get("OPENAI_API_VERSION") ) ``` [[autodoc]] AzureOpenAIServerModel ### MLXModel ```python from smolagents import MLXModel model = MLXModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct") print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"])) ``` ```text >>> What a ``` > [!TIP] > 您必须在机器上安装 `mlx-lm`。如果尚未安装,请运行 `pip install smolagents[mlx-lm]`。 [[autodoc]] MLXModel
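Because `api_base` is configurable, the `OpenAIServerModel` described above can also target a local OpenAI-compatible server. The sketch below assumes a vLLM server started with `vllm serve Qwen/Qwen2.5-7B-Instruct` on the default port; the endpoint URL, model name, and placeholder API key are assumptions for illustration only.

```python
from smolagents import OpenAIServerModel

local_model = OpenAIServerModel(
    model_id="Qwen/Qwen2.5-7B-Instruct",   # must match the model the server is serving
    api_base="http://localhost:8000/v1",   # vLLM's default OpenAI-compatible endpoint
    api_key="not-needed-for-local-servers",
)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]
print(local_model(messages))
```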
{ "source": "huggingface/smolagents", "title": "docs/source/zh/reference/models.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/reference/models.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 4781 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 工具 <Tip warning={true}> Smolagents 是一个实验性 API,可能会随时更改。由于 API 或底层模型可能发生变化,代理返回的结果可能会有所不同。 </Tip> 要了解更多关于智能体和工具的信息,请务必阅读[入门指南](../index)。本页面包含底层类的 API 文档。 ## 工具 ### load_tool [[autodoc]] load_tool ### tool [[autodoc]] tool ### Tool [[autodoc]] Tool ### launch_gradio_demo [[autodoc]] launch_gradio_demo ## 默认工具 ### PythonInterpreterTool [[autodoc]] PythonInterpreterTool ### FinalAnswerTool [[autodoc]] FinalAnswerTool ### UserInputTool [[autodoc]] UserInputTool ### DuckDuckGoSearchTool [[autodoc]] DuckDuckGoSearchTool ### GoogleSearchTool [[autodoc]] GoogleSearchTool ### VisitWebpageTool [[autodoc]] VisitWebpageTool ### SpeechToTextTool [[autodoc]] SpeechToTextTool ## 工具集合 [[autodoc]] ToolCollection ## 智能体类型 智能体可以处理工具之间的任何类型的对象;工具是完全多模态的,可以接受和返回文本、图像、音频、视频以及其他类型的对象。为了增加工具之间的兼容性,以及正确呈现在 ipython(jupyter、colab、ipython notebooks 等)中的返回结果,我们为这些类型实现了包装类。 被包装的对象应该继续保持其初始行为;例如,一个文本对象应继续表现为字符串,一个图像对象应继续表现为 `PIL.Image`。 这些类型有三个特定的用途: - 调用 `to_raw` 方法时,应返回底层对象 - 调用 `to_string` 方法时,应将对象转换为字符串:对于 `AgentText` 类型,可以直接返回字符串;对于其他实例,则返回对象序列化版本的路径 - 在 ipython 内核中显示时,应正确显示对象 ### AgentText [[autodoc]] smolagents.agent_types.AgentText ### AgentImage [[autodoc]] smolagents.agent_types.AgentImage ### AgentAudio [[autodoc]] smolagents.agent_types.AgentAudio
{ "source": "huggingface/smolagents", "title": "docs/source/zh/reference/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/reference/tools.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2041 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 构建好用的 agent [[open-in-colab]] 能良好工作的 agent 和不能工作的 agent 之间,有天壤之别。 我们怎么样才能构建出属于前者的 agent 呢? 在本指南中,我们将看到构建 agent 的最佳实践。 > [!TIP] > 如果你是 agent 构建的新手,请确保首先阅读 [agent 介绍](../conceptual_guides/intro_agents) 和 [smolagents 导览](../guided_tour)。 ### 最好的 agent 系统是最简单的:尽可能简化工作流 在你的工作流中赋予 LLM 一些自主权,会引入一些错误风险。 经过良好编程的 agent 系统,通常具有良好的错误日志记录和重试机制,因此 LLM 引擎有机会自我纠错。但为了最大限度地降低 LLM 错误的风险,你应该简化你的工作流! 让我们回顾一下 [agent 介绍](../conceptual_guides/intro_agents) 中的例子:一个为冲浪旅行公司回答用户咨询的机器人。 与其让 agent 每次被问及新的冲浪地点时,都分别调用 "旅行距离 API" 和 "天气 API",你可以只创建一个统一的工具 "return_spot_information",一个同时调用这两个 API,并返回它们连接输出的函数。 这可以降低成本、延迟和错误风险! 主要的指导原则是:尽可能减少 LLM 调用的次数。 这可以带来一些启发: - 尽可能把两个工具合并为一个,就像我们两个 API 的例子。 - 尽可能基于确定性函数,而不是 agent 决策,来实现逻辑。 ### 改善流向 LLM 引擎的信息流 记住,你的 LLM 引擎就像一个 ~智能~ 机器人,被关在一个房间里,与外界唯一的交流方式是通过门缝传递的纸条。 如果你没有明确地将信息放入其提示中,它将不知道发生的任何事情。 所以首先要让你的任务非常清晰! 由于 agent 由 LLM 驱动,任务表述的微小变化可能会产生完全不同的结果。 然后,改善工具使用中流向 agent 的信息流。 需要遵循的具体指南: - 每个工具都应该记录(只需在工具的 `forward` 方法中使用 `print` 语句)对 LLM 引擎可能有用的所有信息。 - 特别是,记录工具执行错误的详细信息会很有帮助! 例如,这里有一个根据位置和日期时间检索天气数据的工具: 首先,这是一个糟糕的版本: ```python import datetime from smolagents import tool def get_weather_report_at_coordinates(coordinates, date_time): # 虚拟函数,返回 [温度(°C),降雨风险(0-1),浪高(m)] return [28.0, 0.35, 0.85] def get_coordinates_from_location(location): # 返回虚拟坐标 return [3.3, -42.0] @tool def get_weather_api(location: str, date_time: str) -> str: """ Returns the weather report. Args: location: the name of the place that you want the weather for. date_time: the date and time for which you want the report. """ lon, lat = convert_location_to_coordinates(location) date_time = datetime.strptime(date_time) return str(get_weather_report_at_coordinates((lon, lat), date_time)) ``` 为什么它不好? - 没有说明 `date_time` 应该使用的格式 - 没有说明位置应该如何指定 - 没有记录机制来处理明确的报错情况,如位置格式不正确或 date_time 格式不正确 - 输出格式难以理解 如果工具调用失败,内存中记录的错误跟踪,可以帮助 LLM 逆向工程工具来修复错误。但为什么要让它做这么多繁重的工作呢? 构建这个工具的更好方式如下: ```python @tool def get_weather_api(location: str, date_time: str) -> str: """ Returns the weather report. Args: location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco". date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'. """ lon, lat = convert_location_to_coordinates(location) try: date_time = datetime.strptime(date_time) except Exception as e: raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. 
Full trace:" + str(e)) temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time) return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m." ``` 一般来说,为了减轻 LLM 的负担,要问自己的好问题是:"如果我是一个第一次使用这个工具的傻瓜,使用这个工具编程并纠正自己的错误有多容易?"。 ### 给 agent 更多参数 除了简单的任务描述字符串外,你还可以使用 `additional_args` 参数传递任何类型的对象: ```py from smolagents import CodeAgent, HfApiModel model_id = "meta-llama/Llama-3.3-70B-Instruct" agent = CodeAgent(tools=[], model=HfApiModel(model_id=model_id), add_base_tools=True) agent.run( "Why does Mike not know many people in New York?", additional_args={"mp3_sound_file_url":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'} ) ``` 例如,你可以使用这个 `additional_args` 参数传递你希望 agent 利用的图像或字符串。 ## 如何调试你的 agent ### 1. 使用更强大的 LLM 在 agent 工作流中,有些错误是实际错误,有些则是你的 LLM 引擎没有正确推理的结果。 例如,参考这个我要求创建一个汽车图片的 `CodeAgent` 的运行记录: ```text ==================================================================================================== New task ==================================================================================================== Make me a cool car picture ──────────────────────────────────────────────────────────────────────────────────────────────────── New step ───────────────────────────────────────────────────────────────────────────────────────────────────── Agent is executing the code below: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic") ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Last output from code snippet: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png Step 1: - Time taken: 16.35 seconds - Input tokens: 1,383 - Output tokens: 77 ──────────────────────────────────────────────────────────────────────────────────────────────────── New step ───────────────────────────────────────────────────────────────────────────────────────────────────── Agent is executing the code below: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png") ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Print outputs: Last output from code snippet: ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── /var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png Final answer: 
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png ``` 用户看到的是返回了一个路径,而不是图像。 这看起来像是系统的错误,但实际上 agent 系统并没有导致错误:只是 LLM 大脑犯了一个错误,没有把图像输出,保存到变量中。 因此,它无法再次访问图像,只能利用保存图像时记录的路径,所以它返回的是路径,而不是图像。 调试 agent 的第一步是"使用更强大的 LLM"。像 `Qwen2.5-72B-Instruct` 这样的替代方案不会犯这种错误。 ### 2. 提供更多指导/更多信息 你也可以使用不太强大的模型,只要你更有效地指导它们。 站在模型的角度思考:如果你是模型在解决任务,你会因为系统提示+任务表述+工具描述中提供的信息而挣扎吗? 你需要一些额外的说明吗? 为了提供额外信息,我们不建议立即更改系统提示:默认系统提示有许多调整,除非你非常了解提示,否则你很容易翻车。 更好的指导 LLM 引擎的方法是: - 如果是关于要解决的任务:把所有细节添加到任务中。任务可以有几百页长。 - 如果是关于如何使用工具:你的工具的 description 属性。 ### 3. 更改系统提示(通常不建议) 如果上述说明不够,你可以更改系统提示。 让我们看看它是如何工作的。例如,让我们检查 [`CodeAgent`] 的默认系统提示(下面的版本通过跳过零样本示例进行了缩短)。 ```python print(agent.prompt_templates["system_prompt"]) ``` 你会得到: ```text You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can. To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code. To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences. At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use. Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence. During each intermediate step, you can use 'print()' to save whatever important information you will then need. These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step. In the end you have to return a final answer using the `final_answer` tool. Here are a few examples using notional tools: --- {examples} Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools: {{tool_descriptions}} {{managed_agents_descriptions}} Here are the rules you should always follow to solve your task: 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail. 2. Use only variables that you have defined! 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'. 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block. 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters. 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'. 7. Never create any notional variables in our code, as having these in your logs might derail you from the true variables. 8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}} 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist. 10. Don't give up! You're in charge of solving the task, not providing directions to solve it. 
Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000. ``` 如你所见,有一些占位符,如 `"{{tool_descriptions}}"`:这些将在 agent 初始化时用于插入某些自动生成的工具或管理 agent 的描述。 因此,虽然你可以通过将自定义提示作为参数传递给 `system_prompt` 参数来覆盖此系统提示模板,但你的新系统提示必须包含以下占位符: - `"{{tool_descriptions}}"` 用于插入工具描述。 - `"{{managed_agents_description}}"` 用于插入 managed agent 的描述(如果有)。 - 仅限 `CodeAgent`:`"{{authorized_imports}}"` 用于插入授权导入列表。 然后你可以根据如下,更改系统提示: ```py from smolagents.prompts import CODE_SYSTEM_PROMPT modified_system_prompt = CODE_SYSTEM_PROMPT + "\nHere you go!" # 在此更改系统提示 agent = CodeAgent( tools=[], model=HfApiModel(), system_prompt=modified_system_prompt ) ``` 这也适用于 [`ToolCallingAgent`]。 ### 4. 额外规划 我们提供了一个用于补充规划步骤的模型,agent 可以在正常操作步骤之间定期运行。在此步骤中,没有工具调用,LLM 只是被要求更新它知道的事实列表,并根据这些事实反推它应该采取的下一步。 ```py from smolagents import load_tool, CodeAgent, HfApiModel, DuckDuckGoSearchTool from dotenv import load_dotenv load_dotenv() # 从 Hub 导入工具 image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True) search_tool = DuckDuckGoSearchTool() agent = CodeAgent( tools=[search_tool], model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"), planning_interval=3 # 这是你激活规划的地方! ) # 运行它! result = agent.run( "How long would a cheetah at full speed take to run the length of Pont Alexandre III?", ) ```
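Circling back to the very first guideline on this page (reduce LLM calls; merge tools where possible), here is a rough sketch of the unified surf-spot tool described there. The two helper functions are hypothetical placeholders standing in for the "travel distance API" and the "weather API".

```python
from smolagents import tool

def get_travel_distance(spot: str) -> float:
    # hypothetical placeholder for the "travel distance API"
    return 42.0

def get_weather(spot: str) -> str:
    # hypothetical placeholder for the "weather API"
    return "sunny, 1.5m waves"

@tool
def return_spot_information(spot: str) -> str:
    """
    Returns travel distance and weather for a surf spot, in a single call.

    Args:
        spot: the name of the surf spot, e.g. "Anchor Point, Taghazout, Morocco".
    """
    distance = get_travel_distance(spot)
    weather = get_weather(spot)
    return f"Spot: {spot}. Travel distance: {distance} km. Weather: {weather}."
```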
{ "source": "huggingface/smolagents", "title": "docs/source/zh/tutorials/building_good_agents.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/tutorials/building_good_agents.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 11859 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 安全代码执行 [[open-in-colab]] > [!TIP] > 如果你是第一次构建 agent,请先阅读 [agent 介绍](../conceptual_guides/intro_agents) 和 [smolagents 导览](../guided_tour)。 ### 代码智能体 [多项](https://huggingface.co/papers/2402.01030) [研究](https://huggingface.co/papers/2411.01747) [表明](https://huggingface.co/papers/2401.00812),让大语言模型用代码编写其动作(工具调用)比当前标准的工具调用格式要好得多,目前行业标准是 "将动作写成包含工具名称和参数的 JSON" 的各种变体。 为什么代码更好?因为我们专门为计算机执行的动作而设计编程语言。如果 JSON 片段是更好的方式,那么这个工具包就应该是用 JSON 片段编写的,魔鬼就会嘲笑我们。 代码就是表达计算机动作的更好方式。它具有更好的: - **组合性**:你能像定义 Python 函数那样,在 JSON 动作中嵌套其他 JSON 动作,或者定义一组 JSON 动作以便以后重用吗? - **对象管理**:你如何在 JSON 中存储像 `generate_image` 这样的动作的输出? - **通用性**:代码是为了简单地表达任何可以让计算机做的事情而构建的。 - **在 LLM 训练语料库中的表示**:天赐良机,为什么不利用已经包含在 LLM 训练语料库中的大量高质量动作呢? 下图展示了这一点,取自 [可执行代码动作引出更好的 LLM 智能体](https://huggingface.co/papers/2402.01030)。 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png"> 这就是为什么我们强调提出代码智能体,在本例中是 Python 智能体,这意味着我们要在构建安全的 Python 解释器上投入更多精力。 ### 本地 Python 解释器 默认情况下,`CodeAgent` 会在你的环境中运行 LLM 生成的代码。 这个执行不是由普通的 Python 解释器完成的:我们从零开始重新构建了一个更安全的 `LocalPythonInterpreter`。 这个解释器通过以下方式设计以确保安全: - 将导入限制为用户显式传递的列表 - 限制操作次数以防止无限循环和资源膨胀 - 不会执行任何未预定义的操作 我们已经在许多用例中使用了这个解释器,从未观察到对环境造成任何损害。 然而,这个解决方案并不是万无一失的:可以想象,如果 LLM 被微调用于恶意操作,仍然可能损害你的环境。例如,如果你允许像 `Pillow` 这样无害的包处理图像,LLM 可能会生成数千张图像保存以膨胀你的硬盘。 如果你自己选择了 LLM 引擎,这当然不太可能,但它可能会发生。 所以如果你想格外谨慎,可以使用下面描述的远程代码执行选项。 ### E2B 代码执行器 为了最大程度的安全性,你可以使用我们与 E2B 的集成在沙盒环境中运行代码。这是一个远程执行服务,可以在隔离的容器中运行你的代码,使代码无法影响你的本地环境。 为此,你需要设置你的 E2B 账户并在环境变量中设置 `E2B_API_KEY`。请前往 [E2B 快速入门文档](https://e2b.dev/docs/quickstart) 了解更多信息。 然后你可以通过 `pip install e2b-code-interpreter python-dotenv` 安装它。 现在你已经准备好了! 要将代码执行器设置为 E2B,只需在初始化 `CodeAgent` 时传递标志 `use_e2b_executor=True`。 请注意,你应该将所有工具的依赖项添加到 `additional_authorized_imports` 中,以便执行器安装它们。 ```py from smolagents import CodeAgent, VisitWebpageTool, HfApiModel agent = CodeAgent( tools = [VisitWebpageTool()], model=HfApiModel(), additional_authorized_imports=["requests", "markdownify"], use_e2b_executor=True ) agent.run("What was Abraham Lincoln's preferred pet?") ``` 目前 E2B 代码执行暂不兼容多 agent——因为把 agent 调用放在应该在远程执行的代码块里,是非常混乱的。但我们正在努力做到这件事!
{ "source": "huggingface/smolagents", "title": "docs/source/zh/tutorials/secure_code_execution.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/tutorials/secure_code_execution.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 2920 }
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 工具 [[open-in-colab]] 在这里,我们将学习高级工具的使用。 > [!TIP] > 如果你是构建 agent 的新手,请确保先阅读 [agent 介绍](../conceptual_guides/intro_agents) 和 [smolagents 导览](../guided_tour)。 - [工具](#工具) - [什么是工具,如何构建一个工具?](#什么是工具如何构建一个工具) - [将你的工具分享到 Hub](#将你的工具分享到-hub) - [将 Space 导入为工具](#将-space-导入为工具) - [使用 LangChain 工具](#使用-langchain-工具) - [管理你的 agent 工具箱](#管理你的-agent-工具箱) - [使用工具集合](#使用工具集合) ### 什么是工具,如何构建一个工具? 工具主要是 LLM 可以在 agent 系统中使用的函数。 但要使用它,LLM 需要被提供一个 API:名称、工具描述、输入类型和描述、输出类型。 所以它不能仅仅是一个函数。它应该是一个类。 因此,核心上,工具是一个类,它包装了一个函数,并带有帮助 LLM 理解如何使用它的元数据。 以下是它的结构: ```python from smolagents import Tool class HFModelDownloadsTool(Tool): name = "model_download_counter" description = """ This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint.""" inputs = { "task": { "type": "string", "description": "the task category (such as text-classification, depth-estimation, etc)", } } output_type = "string" def forward(self, task: str): from huggingface_hub import list_models model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return model.id model_downloads_tool = HFModelDownloadsTool() ``` 自定义工具继承 [`Tool`] 以继承有用的方法。子类还定义了: - 一个属性 `name`,对应于工具本身的名称。名称通常描述工具的功能。由于代码返回任务中下载量最多的模型,我们将其命名为 `model_download_counter`。 - 一个属性 `description`,用于填充 agent 的系统提示。 - 一个 `inputs` 属性,它是一个带有键 `"type"` 和 `"description"` 的字典。它包含帮助 Python 解释器对输入做出明智选择的信息。 - 一个 `output_type` 属性,指定输出类型。`inputs` 和 `output_type` 的类型应为 [Pydantic 格式](https://docs.pydantic.dev/latest/concepts/json_schema/#generating-json-schema),它们可以是以下之一:[`~AUTHORIZED_TYPES`]。 - 一个 `forward` 方法,包含要执行的推理代码。 这就是它在 agent 中使用所需的全部内容! 
还有另一种构建工具的方法。在 [guided_tour](../guided_tour) 中,我们使用 `@tool` 装饰器实现了一个工具。[`tool`] 装饰器是定义简单工具的推荐方式,但有时你需要更多:在类中使用多个方法以获得更清晰的代码,或使用额外的类属性。 在这种情况下,你可以通过如上所述继承 [`Tool`] 来构建你的工具。 ### 将你的工具分享到 Hub 你可以通过调用 [`~Tool.push_to_hub`] 将你的自定义工具分享到 Hub。确保你已经在 Hub 上为其创建了一个仓库,并且使用的是具有读取权限的 token。 ```python model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") ``` 为了使推送到 Hub 正常工作,你的工具需要遵守一些规则: - 所有方法都是自包含的,例如使用来自其参数中的变量。 - 根据上述要点,**所有导入应直接在工具的函数中定义**,否则在尝试使用 [`~Tool.save`] 或 [`~Tool.push_to_hub`] 调用你的自定义工具时会出现错误。 - 如果你继承了 `__init__` 方法,除了 `self` 之外,你不能给它任何其他参数。这是因为在特定工具实例初始化期间设置的参数很难跟踪,这阻碍了将它们正确分享到 Hub。无论如何,创建特定类的想法是你已经可以为任何需要硬编码的内容设置类属性(只需在 `class YourTool(Tool):` 行下直接设置 `your_variable=(...)`)。当然,你仍然可以通过将内容分配给 `self.your_variable` 在代码中的任何地方创建类属性。 一旦你的工具被推送到 Hub,你就可以查看它。[这里](https://huggingface.co/spaces/m-ric/hf-model-downloads) 是我推送的 `model_downloads_tool`。它有一个漂亮的 gradio 界面。 在深入工具文件时,你可以发现所有工具的逻辑都在 [tool.py](https://huggingface.co/spaces/m-ric/hf-model-downloads/blob/main/tool.py) 下。这是你可以检查其他人分享的工具的地方。 然后你可以使用 [`load_tool`] 加载工具或使用 [`~Tool.from_hub`] 创建它,并将其传递给 agent 中的 `tools` 参数。 由于运行工具意味着运行自定义代码,你需要确保你信任该仓库,因此我们需要传递 `trust_remote_code=True` 来从 Hub 加载工具。 ```python from smolagents import load_tool, CodeAgent model_download_tool = load_tool( "{your_username}/hf-model-downloads", trust_remote_code=True ) ``` ### 将 Space 导入为工具 你可以使用 [`Tool.from_space`] 方法直接从 Hub 导入一个 Space 作为工具! 你只需要提供 Hub 上 Space 的 id、它的名称和一个帮助你的 agent 理解工具功能的描述。在底层,这将使用 [`gradio-client`](https://pypi.org/project/gradio-client/) 库来调用 Space。 例如,让我们从 Hub 导入 [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) Space 并使用它生成一张图片。 ```python image_generation_tool = Tool.from_space( "black-forest-labs/FLUX.1-schnell", name="image_generator", description="Generate an image from a prompt" ) image_generation_tool("A sunny beach") ``` 瞧,这是你的图片!🏖️ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sunny_beach.webp"> 然后你可以像使用任何其他工具一样使用这个工具。例如,让我们改进提示 `A rabbit wearing a space suit` 并生成它的图片。 ```python from smolagents import CodeAgent, HfApiModel model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct") agent = CodeAgent(tools=[image_generation_tool], model=model) agent.run( "Improve this prompt, then generate an image of it.", additional_args={'user_prompt': 'A rabbit wearing a space suit'} ) ``` ```text === Agent thoughts: improved_prompt could be "A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background" Now that I have improved the prompt, I can use the image generator tool to generate an image based on this prompt. 
>>> Agent is executing the code below: image = image_generator(prompt="A bright blue space suit wearing rabbit, on the surface of the moon, under a bright orange sunset, with the Earth visible in the background") final_answer(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit_spacesuit_flux.webp"> 这得有多酷?🤩 ### 使用 LangChain 工具 我们喜欢 Langchain,并认为它有一套非常吸引人的工具。 要从 LangChain 导入工具,请使用 `from_langchain()` 方法。 以下是如何使用它来重现介绍中的搜索结果,使用 LangChain 的 web 搜索工具。 这个工具需要 `pip install langchain google-search-results -q` 才能正常工作。 ```python from langchain.agents import load_tools search_tool = Tool.from_langchain(load_tools(["serpapi"])[0]) agent = CodeAgent(tools=[search_tool], model=model) agent.run("How many more blocks (also denoted as layers) are in BERT base encoder compared to the encoder from the architecture proposed in Attention is All You Need?") ``` ### 管理你的 agent 工具箱 你可以通过添加或替换工具来管理 agent 的工具箱。 让我们将 `model_download_tool` 添加到一个仅使用默认工具箱初始化的现有 agent 中。 ```python from smolagents import HfApiModel model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct") agent = CodeAgent(tools=[], model=model, add_base_tools=True) agent.tools[model_download_tool.name] = model_download_tool ``` 现在我们可以利用新工具: ```python agent.run( "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub but reverse the letters?" ) ``` > [!TIP] > 注意不要向 agent 添加太多工具:这可能会让较弱的 LLM 引擎不堪重负。 ### 使用工具集合 你可以通过使用 ToolCollection 对象来利用工具集合,使用你想要使用的集合的 slug。 然后将它们作为列表传递给 agent 初始化,并开始使用它们! ```py from smolagents import ToolCollection, CodeAgent image_tool_collection = ToolCollection.from_hub( collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f", token="<YOUR_HUGGINGFACEHUB_API_TOKEN>" ) agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_tools=True) agent.run("Please draw me a picture of rivers and lakes.") ``` 为了加快启动速度,工具仅在 agent 调用时加载。
{ "source": "huggingface/smolagents", "title": "docs/source/zh/tutorials/tools.md", "url": "https://github.com/huggingface/smolagents/blob/main/docs/source/zh/tutorials/tools.md", "date": "2024-12-05T11:28:04", "stars": 10361, "description": "🤗 smolagents: a barebones library for agents. Agents write python code to call tools and orchestrate other agents.", "file_size": 7297 }
<h1 style="text-align: center;">veRL: Volcano Engine Reinforcement Learning for LLM</h1> veRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs). veRL is the open-source version of **[HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)** paper. veRL is flexible and easy to use with: - **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex Post-Training dataflows. Allowing users to build RL dataflows in a few lines of code. - **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks. - **Flexible device mapping**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes. - Readily integration with popular HuggingFace models veRL is fast with: - **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput. - **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases. <p align="center"> | <a href="https://verl.readthedocs.io/en/latest/index.html"><b>Documentation</b></a> | <a href="https://arxiv.org/abs/2409.19256v2"><b>Paper</b></a> | <a href="https://join.slack.com/t/verlgroup/shared_invite/zt-2w5p9o4c3-yy0x2Q56s_VlGLsJ93A6vA"><b>Slack</b></a> | <a href="https://raw.githubusercontent.com/eric-haibin-lin/verl-community/refs/heads/main/WeChat.JPG"><b>Wechat</b></a> | <!-- <a href=""><b>Slides</b></a> | --> </p> ## News - [2024/12] The team presented <a href="https://neurips.cc/Expo/Conferences/2024/workshop/100677">Post-training LLMs: From Algorithms to Infrastructure</a> at NeurIPS 2024. [Slides](https://github.com/eric-haibin-lin/verl-data/tree/neurips) and [video](https://neurips.cc/Expo/Conferences/2024/workshop/100677) available. - [2024/10] veRL is presented at Ray Summit. [Youtube video](https://www.youtube.com/watch?v=MrhMcXkXvJU&list=PLzTswPQNepXntmT8jr9WaNfqQ60QwW7-U&index=37) available. - [2024/08] HybridFlow (verl) is accepted to EuroSys 2025. ## Key Features - **FSDP** and **Megatron-LM** for training. - **vLLM** and **TGI** for rollout generation, **SGLang** support coming soon. - huggingface models support - Supervised fine-tuning - Reward model training - Reinforcement learning from human feedback with PPO - flash-attention integration, sequence packing - scales up to 70B models and hundreds of GPUs - experiment tracking with wandb and mlflow ## Getting Started Checkout this [Jupyter Notebook](https://github.com/volcengine/verl/tree/main/examples/ppo_trainer/verl_getting_started.ipynb) to get started with PPO training with a single 24GB L4 GPU (**FREE** GPU quota provided by [Lighting Studio](https://lightning.ai/hlin-verl/studios/verl-getting-started))! 
**Quickstart:**
- [Installation](https://verl.readthedocs.io/en/latest/start/install.html)
- [Quickstart](https://verl.readthedocs.io/en/latest/start/quickstart.html)

**Running a PPO example step-by-step:**
- Data and Reward Preparation
  - [Prepare Data (Parquet) for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)
  - [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html)
- Understanding the PPO Example
  - [PPO Example Architecture](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html)
  - [Config Explanation](https://verl.readthedocs.io/en/latest/examples/config.html)
  - [Run GSM8K Example](https://verl.readthedocs.io/en/latest/examples/gsm8k_example.html)

**Reproducible algorithm baselines:**
- [PPO](https://verl.readthedocs.io/en/latest/experiment/ppo.html)

**For code explanation and advanced usage (extension):**
- PPO Trainer and Workers
  - [PPO Ray Trainer](https://verl.readthedocs.io/en/latest/workers/ray_trainer.html)
  - [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html)
  - [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/index.html)
- Advanced Usage and Extension
  - [Ray API Design Tutorial](https://verl.readthedocs.io/en/latest/advance/placement.html)
  - [Extend to other RL(HF) algorithms](https://verl.readthedocs.io/en/latest/advance/dpo_extension.html)
  - [Add models with the FSDP backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html)
  - [Add models with the Megatron-LM backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html)

## Citation and acknowledgement

If you find the project helpful, please cite:
- [HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)
- [A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization](https://i.cs.hku.hk/~cwu/papers/gmsheng-NL2Code24.pdf)

```tex
@article{sheng2024hybridflow,
  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},
  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
  year    = {2024},
  journal = {arXiv preprint arXiv: 2409.19256}
}
```

verl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. The project is adopted and supported by Anyscale, Bytedance, LMSys.org, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, and the University of Hong Kong.

## Publications Using veRL

- [Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization](https://arxiv.org/abs/2410.09302)
- [Flaming-hot Initiation with Regular Execution Sampling for Large Language Models](https://arxiv.org/abs/2410.21236)
- [Process Reinforcement Through Implicit Rewards](https://github.com/PRIME-RL/PRIME/)

We are HIRING! Send us an [email](mailto:haibin.lin@bytedance.com) if you are interested in internship/FTE opportunities in MLSys, LLM reasoning, or multimodal alignment.
{ "source": "Jiayi-Pan/TinyZero", "title": "OLD_README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/OLD_README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 6480 }
# TinyZero
![image](cover.png)

TinyZero is a reproduction of [DeepSeek R1 Zero](https://github.com/deepseek-ai/DeepSeek-R1) on countdown and multiplication tasks. We built upon [veRL](https://github.com/volcengine/verl).

Through RL, the 3B base LM develops self-verification and search abilities all on its own.

You can experience the Aha moment yourself for < $30.

Twitter thread: https://x.com/jiayi_pirate/status/1882839370505621655

Full experiment log: https://wandb.ai/jiayipan/TinyZero

Paper's on its way!

## Installation

```
conda create -n zero python=3.9
# install torch [or you can skip this step and let vllm install the correct version for you]
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu121
# install vllm
pip3 install vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1
pip3 install ray

# verl
pip install -e .

# flash attention 2
pip3 install flash-attn --no-build-isolation
# quality of life
pip install wandb IPython matplotlib
```

## Countdown task

**Data Preparation**
```
conda activate zero
python ./examples/data_preprocess/countdown.py --local_dir {path_to_your_dataset}
```

### Run Training
```
conda activate zero
```

For the following commands, if you run out of VRAM, try adding `critic.model.enable_gradient_checkpointing=True` to the script, and check out the discussion [here](https://github.com/Jiayi-Pan/TinyZero/issues/5#issuecomment-2624161643).

**Single GPU**

Works for models <= 1.5B. For the Qwen2.5-0.5B base model, we know it fails to learn reasoning.

```
export N_GPUS=1
export BASE_MODEL={path_to_your_model}
export DATA_DIR={path_to_your_dataset}
export ROLLOUT_TP_SIZE=1
export EXPERIMENT_NAME=countdown-qwen2.5-0.5b
export VLLM_ATTENTION_BACKEND=XFORMERS

bash ./scripts/train_tiny_zero.sh
```

**3B+ model**

In this case, the base model is able to develop sophisticated reasoning skills.
```
export N_GPUS=2
export BASE_MODEL={path_to_your_model}
export DATA_DIR={path_to_your_dataset}
export ROLLOUT_TP_SIZE=2
export EXPERIMENT_NAME=countdown-qwen2.5-3b
export VLLM_ATTENTION_BACKEND=XFORMERS

bash ./scripts/train_tiny_zero.sh
```

### Instruct Ablation
We also experiment with Qwen2.5-3B-Instruct.

**Data Preparation**

To follow the chat template, we need to reprocess the data:
```
conda activate zero
python examples/data_preprocess/countdown.py --template_type=qwen-instruct --local_dir={path_to_your_dataset}
```

**Training**
```
export N_GPUS=2
export BASE_MODEL={path_to_your_model}
export DATA_DIR={path_to_your_dataset}
export ROLLOUT_TP_SIZE=2
export EXPERIMENT_NAME=countdown-qwen2.5-3b-instruct
export VLLM_ATTENTION_BACKEND=XFORMERS

bash ./scripts/train_tiny_zero.sh
```

## Acknowledgements
* We run our experiments based on [veRL](https://github.com/volcengine/verl).
* We use the Qwen2.5 series of base models [Qwen2.5](https://github.com/QwenLM/Qwen2.5).

## Citation
```
@misc{tinyzero,
author       = {Jiayi Pan and Junjie Zhang and Xingyao Wang and Lifan Yuan and Hao Peng and Alane Suhr},
title        = {TinyZero},
howpublished = {https://github.com/Jiayi-Pan/TinyZero},
note         = {Accessed: 2025-01-24},
year         = {2025}
}
```
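As a footnote to the Countdown task above: the reward there is rule-based, i.e. an answer can be checked mechanically. The snippet below is an illustrative checker only, not the repo's actual reward implementation (see `examples/data_preprocess/countdown.py` and the reward code in this repo for the real logic).

```python
import re
from collections import Counter

def check_countdown_answer(expression: str, numbers: list[int], target: int) -> bool:
    """Return True if `expression` uses exactly the given numbers and evaluates to `target`."""
    # Only allow digits, whitespace, parentheses, and basic arithmetic operators.
    if not re.fullmatch(r"[\d\s\+\-\*/\(\)]+", expression):
        return False
    used = [int(tok) for tok in re.findall(r"\d+", expression)]
    if Counter(used) != Counter(numbers):
        return False
    try:
        value = eval(expression, {"__builtins__": {}}, {})  # charset already restricted above
    except Exception:
        return False
    return abs(value - target) < 1e-6

print(check_countdown_answer("(100 - 4) / 2 + 2", [100, 4, 2, 2], 50))  # True
```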
{ "source": "Jiayi-Pan/TinyZero", "title": "README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3127 }
# veRL documents ## Build the docs ```bash # Install dependencies. pip install -r requirements-docs.txt # Build the docs. make clean make html ``` ## Open the docs with your browser ```bash python -m http.server -d _build/html/ ``` Launch your browser and open localhost:8000.
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 281 }
Welcome to veRL's documentation! ================================================ .. _hf_arxiv: https://arxiv.org/pdf/2409.19256 veRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs) post-training. It is an open source implementation of the `HybridFlow <hf_arxiv>`_ paper. veRL is flexible and easy to use with: - **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex Post-Training dataflows. Allowing users to build RL dataflows in a few lines of code. - **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks. - **Flexible device mapping and parallelism**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes. - Readily integration with popular HuggingFace models veRL is fast with: - **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput. - **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases. -------------------------------------------- .. _Contents: .. toctree:: :maxdepth: 5 :caption: Quickstart :titlesonly: :numbered: start/install start/quickstart .. toctree:: :maxdepth: 5 :caption: Data Preparation :titlesonly: :numbered: preparation/prepare_data preparation/reward_function .. toctree:: :maxdepth: 2 :caption: PPO Example :titlesonly: :numbered: examples/ppo_code_architecture examples/config examples/gsm8k_example .. toctree:: :maxdepth: 1 :caption: PPO Trainer and Workers workers/ray_trainer workers/fsdp_workers workers/megatron_workers .. toctree:: :maxdepth: 1 :caption: Experimental Results experiment/ppo .. toctree:: :maxdepth: 1 :caption: Advance Usage and Extension advance/placement advance/dpo_extension advance/fsdp_extension advance/megatron_extension .. toctree:: :maxdepth: 1 :caption: FAQ faq/faq Contribution ------------- veRL is free software; you can redistribute it and/or modify it under the terms of the Apache License 2.0. We welcome contributions. Join us on `GitHub <https://github.com/volcengine/verl>`_, `Slack <https://join.slack.com/t/verlgroup/shared_invite/zt-2w5p9o4c3-yy0x2Q56s_VlGLsJ93A6vA>`_ and `Wechat <https://raw.githubusercontent.com/eric-haibin-lin/verl-community/refs/heads/main/WeChat.JPG>`_ for discussions. Code formatting ^^^^^^^^^^^^^^^^^^^^^^^^ We use yapf (Google style) to enforce strict code formatting when reviewing MRs. Run yapf at the top level of verl repo: .. code-block:: bash pip3 install yapf yapf -ir -vv --style ./.style.yapf verl examples tests
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/index.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/index.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3288 }
Extend to other RL(HF) algorithms ================================= We already implemented the complete training pipeline of the PPO algorithms. To extend to other algorithms, we analyze the high-level principle to use veRL and provide a tutorial to implement the DPO algorithm. Users can follow the similar paradigm to extend to other RL algorithms. .. note:: **Key ideas**: Single process drives multi-process computation and data communication. Overall Approach ---------------- Step 1: Consider what multi-machine multi-GPU computations are needed for each model, such as ``generate_sequence`` , ``compute_log_prob`` and ``update_policy`` in the actor_rollout model. Implement distributed single-process-multiple-data (SPMD) computation and encapsulate them into APIs Step 2: Based on different distributed scenarios, including FSDP and 3D parallelism in Megatron-LM, implement single-process control of data interaction among multi-process computations. Step 3: Utilize the encapsulated APIs to implement the control flow Example: Online DPO ------------------- We use veRL to implement a simple online DPO algorithm. The algorithm flow of Online DPO is as follows: 1. There is a prompt (rollout) generator which has the same weight as the actor model. After a batch of prompts are fed into the generator, it generates N responses for each prompt. 2. Send all the prompts + responses to a verifier for scoring, which can be reward model or a rule-based function. Then sort them in pairs to form a training batch. 3. Use this training batch to train the actor model using DPO. During the process, a reference policy is needed. Step 1: What are the multi-machine multi-GPU computations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **Sample Generator** Implementation details: .. code:: python from verl.single_controller.base import Worker from verl.single_controller.ray import RayWorkerGroup, RayClassWithInitArgs, RayResourcePool import ray @ray.remote class SampleGenerator(Worker): def __init__(self, config): super().__init__() self.config = config def generate_sequences(self, data): pass Here, ``SampleGenerator`` can be viewed as a multi-process pulled up by ``torchrun``, with each process running the same code (SPMD). ``SampleGenerator`` needs to implement a ``generate_sequences`` API for the control flow to call. The implementation details inside can use any inference engine including vllm, sglang and huggingface. Users can largely reuse the code in verl/verl/trainer/ppo/rollout/vllm_rollout/vllm_rollout.py and we won't go into details here. **ReferencePolicy inference** API: compute reference log probability .. code:: python from verl.single_controller.base import Worker import ray @ray.remote class ReferencePolicy(Worker): def __init__(self): super().__init__() self.model = Model() def infer(self, data): return self.model(data) **Actor update** API: Update actor model parameters .. 
code:: python from verl.single_controller.base import Worker import ray @ray.remote class DPOActor(Worker): def __init__(self): super().__init__() self.model = Model() self.model = FSDP(self.model) # or other distributed strategy self.optimizer = optim.Adam(self.model.parameters(), lr=1e-3) self.loss_fn = xxx def update(self, data): self.optimizer.zero_grad() logits = self.model(data) loss = self.loss_fn(logits) loss.backward() self.optimizer.step() **Notes: How to distinguish between control processes and distributed computation processes** ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Control processes are generally functions directly decorated with ``@ray.remote`` - Computation processes are all wrapped into a ``RayWorkerGroup``. Users can reuse most of the distribtued computation logics implemented in PPO algorithm, including FSDP and Megatron-LM backend in verl/verl/trainer/ppo. Step 2: Based on different distributed scenarios, implement single-process control of multi-process data interaction ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **The core problem to solve here is how a single process sends data to multiple processes, drives multi-process computation, and how the control process obtains the results of multi-process computation.** First, we initialize the multi-process ``WorkerGroup`` in the control process. .. code:: python @ray.remote(num_cpus=1) def main_task(config): # construct SampleGenerator resource_pool = RayResourcePool(process_on_nodes=[8] * 2) # 16 GPUs ray_cls = RayClassWithInitArgs(SampleGenerator, config=config) # put SampleGenerator onto resource pool worker_group = RayWorkerGroup(resource_pool, ray_cls) # construct reference policy As we can see, in the control process, multiple processes are wrapped into a ``RayWorkerGroup``. Inside this ``WorkerGroup``, there is a ``self._workers`` member, where each worker is a RayActor (https://docs.ray.io/en/latest/ray-core/actors.html) of SampleGenerator. ray_trainer.md also provide an implementation of ``MegatronRayWorkerGroup``. Assuming the model is distributed using FSDP, and there is a batch of data on the control process, for data parallelism, the underlying calling process is: .. code:: python data = xxx data_list = data.chunk(dp_size) output = [] for d in data_list: # worker_group._workers[i] is a SampleGenerator output.append(worker_group._workers[i].generate_sequences.remote(d)) output = ray.get(output) output = torch.cat(output) Single process calling multiple processes involves the following 3 steps: 1. Split the data into DP parts on the control process. 2. Send the data to remote, call the remote computation through RPC, and utilize multi-process computation. 3. Obtain the computation results of each worker on the control process and merge them. Frequently calling these 3 steps on the controller process greatly hurts code readability. **In veRL, we have abstracted and encapsulated these 3 steps, so that the worker's method + dispatch + collect can be registered into the worker_group** .. 
code:: python from verl.single_controller.base.decorator import register def dispatch_data(worker_group, data): return data.chunk(worker_group.world_size) def collect_data(worker_group, data): return torch.cat(data) dispatch_mode = { 'dispatch_fn': dispatch_data, 'collect_fn': collect_data } @register(dispatch_mode=dispatch_mode) def generate_sequences(self, data): pass In this way, we can directly call the method inside the worker through the ``worker_group`` on the control (driver) process (which is a single process): .. code:: python output = worker_group.generate_sequences(data) This single line includes data splitting, data distribution and computation, and data collection. Furthermore, the model parallelism size of each model is usually fixed, including dp, tp, pp. So for these common distributed scenarios, we have pre-implemented specific dispatch and collect methods,in `decorator.py <https://github.com/volcengine/verl/blob/main/verl/single_controller/base/decorator.py>`_, which can be directly used to wrap the computations. .. code:: python from verl.single_controller.base.decorator import register, Dispatch @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def generate_sequences(self, data: DataProto) -> DataProto: pass Here it requires the data interface to be ``DataProto``. Definition of ``DataProto`` is in `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_. Step 3: Main training loop ~~~~~~~~~~~~~~~~~~~~~~~~~~ With the above training flows, we can implement the algorithm's control flow. It is recommended that ``main_task`` is also a ray remote process. .. code:: python @ray.remote(num_cpus=1) def main_task(config): # construct SampleGenerator resource_pool = RayResourcePool(process_on_nodes=[8] * 2) # 16 GPUs ray_cls = RayClassWithInitArgs(SampleGenerator, config=config) # put SampleGenerator onto resource pool sample_gen = RayWorkerGroup(resource_pool, ray_cls) # construct reference policy ray_cls = RayClassWithInitArgs(ReferencePolicy) ref_policy = RayWorkerGroup(resource_pool, ray_cls) # construct actor ray_cls = RayClassWithInitArgs(DPOActor) dpo_policy = RayWorkerGroup(resource_pool, ray_cls) dataloader = DataLoader() for data in dataloader: # generate data data = sample_gen.generate_sequences(data) # generate scores for each data data = generate_scores(data) # generate pairwise data using scores data = generate_pairwise_data(data) # generate ref_log_prob data.batch['ref_log_prob'] = ref_policy.infer(data) # update using dpo dpo_policy.update(data) # logging Here, different ``WorkerGroups`` can be placed in the same resource pool or in different resource pools using ``create_colocated_worker_cls`` similar as in `ray_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py>`_.
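The ``generate_scores`` and ``generate_pairwise_data`` helpers are left undefined in the loop above. As a purely schematic sketch (operating on plain Python lists rather than veRL's ``DataProto``, and with a made-up verifier), pair construction for online DPO could look like this:

.. code:: python

   def generate_scores(prompts, response_groups, verifier):
       # score every (prompt, response) pair with a reward model or rule-based verifier
       return [[verifier(p, r) for r in group] for p, group in zip(prompts, response_groups)]

   def generate_pairwise_data(prompts, response_groups, score_groups):
       # for each prompt, take the best- and worst-scored of its N responses
       # as the (chosen, rejected) pair for the DPO loss
       pairs = []
       for prompt, group, scores in zip(prompts, response_groups, score_groups):
           ranked = sorted(zip(group, scores), key=lambda x: x[1])
           rejected, chosen = ranked[0][0], ranked[-1][0]
           pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
       return pairs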
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/dpo_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/dpo_extension.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 9680 }
Add models with the FSDP backend ================================== Model -------------------------- In principle, our FSDP backend can support any HF model and we can sychronoize the actor model weight with vLLM using `hf_weight_loader.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_5_4/hf_weight_loader.py>`_. However, ``hf_weight_loader`` is will gather the full state_dict of a model during synchronization, which may cause OOM. We suggest using ``dtensor_weight_loader`` which gather the full model parameter layer by layer to reduce the peak memory usage. We already support dtensor weight loader for the models below in `dtensor_weight_loader.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_5_4/dtensor_weight_loaders.py>`_.: - ``GPT2LMHeadModel`` - ``LlamaForCausalLM`` - ``LLaMAForCausalLM`` - ``MistralForCausalLM`` - ``InternLMForCausalLM`` - ``AquilaModel`` - ``AquilaForCausalLM`` - ``Phi3ForCausalLM`` - ``GemmaForCausalLM`` - ``Gemma2ForCausalLM`` - ``GPTBigCodeForCausalLM`` - ``Starcoder2ForCausalLM`` - ``Qwen2ForCausalLM`` - ``DeepseekV2ForCausalLM`` To implement ``dtensor_weight_loader`` of a model that's supported in vLLM, follow the guide of gemma model below: 1. Copy the ``load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]])`` from the vllm model class to ``dtensor_weight_loaders.py`` 2. Modify the arguments to ``(actor_weights: Dict, vllm_model: nn.Module)`` 3. Replace the ``self`` to ``vllm_model`` 4. Add the ``local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)`` before each ``param = params_dict[name]`` and modify the following weight loading using ``local_loaded_weight``. 5. Register the implemented dtensor weight loader to ``__MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__``. .. code-block:: diff - def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]): + def gemma_dtensor_weight_loader(actor_weights: Dict, vllm_model: nn.Module) -> nn.Module: stacked_params_mapping = [ # (param_name, shard_name, shard_id) ("qkv_proj", "q_proj", "q"), ("qkv_proj", "k_proj", "k"), ("qkv_proj", "v_proj", "v"), ("gate_up_proj", "gate_proj", 0), ("gate_up_proj", "up_proj", 1), ] - params_dict = dict(self.named_parameters()) + params_dict = dict(vllm_model.named_parameters()) loaded_params = set() - for name, loaded_weight in weights: + for name, loaded_weight in actor_weights.items(): for (param_name, shard_name, shard_id) in stacked_params_mapping: if shard_name not in name: continue name = name.replace(shard_name, param_name) # Skip loading extra bias for GPTQ models. if name.endswith(".bias") and name not in params_dict: continue + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight) param = params_dict[name] weight_loader = param.weight_loader - weight_loader(param, loaded_weight, shard_id) + weight_loader(param, local_loaded_weight.to(dtype=param.dtype), shard_id) break else: # lm_head is not used in vllm as it is tied with embed_token. # To prevent errors, skip loading lm_head.weight. if "lm_head.weight" in name: continue # Skip loading extra bias for GPTQ models. 
if name.endswith(".bias") and name not in params_dict: continue + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight) param = params_dict[name] weight_loader = getattr(param, "weight_loader", default_weight_loader) - weight_loader(param, loaded_weight) + weight_loader(param, local_loaded_weight.to(dtype=param.dtype)) loaded_params.add(name) unloaded_params = params_dict.keys() - loaded_params if unloaded_params: raise RuntimeError( "Some weights are not initialized from checkpoints: " f"{unloaded_params}")
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/fsdp_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/fsdp_extension.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4399 }
Add models with the Megatron-LM backend
=========================================

Model
-----------

The most challenging aspect of using the Megatron-LM backend is implementing the models for training. Currently, we implement a Llama model that supports data parallelism, tensor parallelism, pipeline parallelism (also vPP) and sequence parallelism. We also implement remove-padding (sequence packing) for the Llama model, which can be found in `modeling_llama_megatron.py <https://github.com/volcengine/verl/blob/main/verl/models/llama/megatron/modeling_llama_megatron.py>`_.

To support other models, users are required to implement:

1. A model similar to ``modeling_llama_megatron.py`` that satisfies the parallelism requirements of Megatron-LM. Then register your model in `registry.py <https://github.com/volcengine/verl/blob/main/verl/models/registry.py>`_.
2. Checkpoint utils that can load a full checkpoint (e.g. a Hugging Face checkpoint) into the partitioned models at runtime. Then register your loader to ``weight_loader_registry`` in `weight_loader_registry.py <https://github.com/volcengine/verl/blob/main/verl/models/weight_loader_registry.py>`_.
3. A weight loader that synchronizes the weights from the Megatron model to the rollout (vLLM) model. Note that both the actor model and the rollout model are partitioned at runtime, so it's advisable to keep the parameter names in the actor model implementation aligned with the rollout model. Otherwise, you may need an additional name mapping and even weight transformation. The weight loader implementation is in `megatron_weight_loaders.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_6_3/megatron_weight_loaders.py>`_.
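As a rough sketch of what the registrations in steps 1 and 2 amount to, the registries are typically plain dictionaries keyed by the Hugging Face architecture name. Everything below is a placeholder; consult ``registry.py`` and ``weight_loader_registry.py`` for the real interfaces.

.. code:: python

   # Placeholder sketch only; names and structures differ in the real files.
   def load_my_full_checkpoint(hf_state_dict, parallel_model):
       """Placeholder: copy a full HF checkpoint into the runtime-partitioned model."""
       raise NotImplementedError

   MODEL_REGISTRY = {"MyModelForCausalLM": "ParallelMyModelForCausalLM"}     # arch name -> Megatron implementation
   WEIGHT_LOADER_REGISTRY = {"MyModelForCausalLM": load_my_full_checkpoint}  # arch name -> checkpoint loader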
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/megatron_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/megatron_extension.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1688 }
Ray API Design Tutorial ======================================= We provide a tutorial for our Ray API design, including: - Ray basic concepts - Resource Pool and RayWorkerGroup - Data Dispatch, Execution and Collection - Initialize the RayWorkerGroup and execute the distributed computation in the given Resource Pool See details in `tutorial.ipynb <https://github.com/volcengine/verl/blob/main/examples/ray/tutorial.ipynb>`_.
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/placement.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/placement.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 429 }
.. _config-explain-page: Config Explaination =================== ppo_trainer.yaml for FSDP Backend --------------------------------- Data ~~~~ .. code:: yaml data: tokenizer: null train_files: ~/data/rlhf/gsm8k/train.parquet val_files: ~/data/rlhf/gsm8k/test.parquet prompt_key: prompt max_prompt_length: 512 max_response_length: 512 train_batch_size: 1024 val_batch_size: 1312 return_raw_input_ids: False # This should be set to true when the tokenizer between policy and rm differs return_raw_chat: False - ``data.train_files``: Training set parquet. Can be a list or a single file. The program will read all files into memory, so it can't be too large (< 100GB). The path can be either local path or HDFS path. For HDFS path, we provide utils to download it to DRAM and convert the HDFS path to local path. - ``data.val_files``: Validation parquet. Can be a list or a single file. - ``data.prompt_key``: The field in the dataset where the prompt is located. Default is 'prompt'. - ``data.max_prompt_length``: Maximum prompt length. All prompts will be left-padded to this length. An error will be reported if the length is too long - ``data.max_response_length``: Maximum response length. Rollout in RL algorithms (e.g. PPO) generates up to this length - ``data.train_batch_size``: Batch size sampled for one training iteration of different RL algorithms. - ``data.val_batch_size``: Batch size sampled for one validation iteration. - ``data.return_raw_input_ids``: Whether to return the original input_ids without adding chat template. This is mainly used to accommodate situations where the reward model's chat template differs from the policy. It needs to be decoded first, then apply the RM's chat template. If using a model-based RM, and the policy and RM chat_templates are different, this flag needs to be set - ``data.return_raw_chat``: - ``data.truncation``: Truncate the input_ids or prompt length if they exceed max_prompt_length. Default is 'error', not allow exceed the max_prompt_length. The users should increase the max_prompt_length if throwing the error. Actor/Rollout/Reference Policy ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code:: yaml actor_rollout_ref: hybrid_engine: True model: path: ~/models/deepseek-llm-7b-chat external_lib: null override_config: {} enable_gradient_checkpointing: False actor: strategy: fsdp # This is for backward-compatibility ppo_mini_batch_size: 256 ppo_micro_batch_size: 64 grad_clip: 1.0 clip_ratio: 0.2 entropy_coeff: 0.001 ppo_epochs: 1 shuffle: True optim: lr: 1e-6 lr_warmup_steps_ratio: 0. 
# the total steps will be injected during runtime min_lr_ratio: null # only useful for warmup with cosine warmup_style: constant # select from constant/cosine total_training_steps: -1 # must be override by program fsdp_config: wrap_policy: # transformer_layer_cls_to_wrap: None min_num_params: 0 param_offload: False grad_offload: False optimizer_offload: False ref: fsdp_config: param_offload: False wrap_policy: # transformer_layer_cls_to_wrap: None min_num_params: 0 log_prob_micro_batch_size: 128 rollout: name: vllm temperature: 1.0 top_k: -1 # 0 for hf rollout, -1 for vllm rollout top_p: 1 response_length: ${data.max_response_length} # for vllm rollout dtype: bfloat16 # should align with FSDP gpu_memory_utilization: 0.5 ignore_eos: False enforce_eager: True free_cache_engine: True load_format: dummy_dtensor # or dummy_hf or dummy_megatron tensor_model_parallel_size: 2 max_num_batched_tokens: 8192 max_num_seqs: 1024 log_prob_micro_batch_size: 128 # for vllm and hf rollout do_sample: True **Common config for actor, rollout and reference model** - ``actor_rollout_ref.hybrid_engine``: Whether it's a hybrid engine, currently only supports hybrid engine - ``actor_rollout_ref.model.path``: Huggingface model path. This can be either local path or HDFS path. For HDFS path, we provide utils to download it to DRAM and convert the HDFS path to local path. - ``actor_rollout_ref.model.external_libs``: Additional Python packages that need to be imported. Used to register models or tokenizers into the Huggingface system. - ``actor_rollout_ref.model.override_config``: Used to override some of the model's original configurations, mainly dropout - ``actor_rollout_ref.model.enable_gradient_checkpointing``: Whether to enable gradient checkpointing for the actor **Actor model** - ``actor_rollout_ref.actor.strategy``: fsdp or megatron. In this example, we use fsdp backend. - ``actor_rollout_ref.actor.ppo_mini_batch_size``: One sample is split into multiple sub-batches with batch_size=ppo_mini_batch_size for PPO updates - ``actor_rollout_ref.actor.ppo_micro_batch_size``: Similar to gradient accumulation, the micro_batch_size for one forward pass, trading speed for GPU memory - ``actor_rollout_ref.actor.grad_clip``: Gradient clipping for actor updates - ``actor_rollout_ref.actor.clip_ratio``: PPO clip ratio - ``actor_rollout_ref.actor.entropy_coeff``: The weight of entropy when calculating PPO loss - ``actor_rollout_ref.actor.ppo_epochs``: Number of epochs for PPO updates on one set of sampled data - ``actor_rollout_ref.actor.shuffle``: Whether to shuffle data when there are multiple epochs - ``actor_rollout_ref.actor.optim``: Actor's optimizer parameters - ``actor_rollout_ref.actor.fsdp_config``: FSDP config for actor training - ``wrap_policy``: FSDP wrap policy. By default, it uses Huggingface's wrap policy, i.e., wrapping by DecoderLayer - No need to set transformer_layer_cls_to_wrap, so we comment it. - ``*_offload``: Whether to enable parameter, gradient and optimizer offload - Trading speed for GPU memory. **Reference Model** - ``actor_rollout_ref.ref``: FSDP config same as actor. **For models larger than 7B, it's recommended to turn on offload for ref by default** - ``actor_rollout_ref.ref.log_prob_micro_batch_size``: The batch size for one forward pass in the computation of ``ref_log_prob``. **Rollout Model** - ``actor_rollout_ref.rollout.name``: hf/vllm. We use vLLM by default because it's much efficient and our hybrid engine is implemented with vLLM. - Rollout (Auto-regressive) parameters. 
The key should be equal to the property name in vLLM's ``SamplingParams``. - ``temperature``, ``top_k``, ``top_p`` and others: Sampling parameters in ``SamplingParams``. - ``dtype``: Rollout model parameters type. This should be align with the actor model parameter type in FSDP/Megatron backend. - ``gpu_memory_utilization``: The proportion of the remaining GPU memory allocated for kv cache after other models have initialized when using vllm. - ``tensor_model_parallel_size``: TP size for rollout. Only effective for vllm. - ``log_prob_micro_batch_size``: Micro_batch_size (The batch size for one forward pass) for recalculating log_prob. - ``do_sample``: Whether to sample. If set to False, the rollout model will perform greedy sampling. We disable ``do_sample`` during validation. - ``actor_rollout_ref.rollout.ignore_eos``: Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. - ``actor_rollout_ref.rollout.free_cache_engine``: Offload the KVCache after rollout generation stage. Default is True. When set to True, we need to disable the usage of CUDAGraph (set ``enforce_eager`` to True.) - ``actor_rollout_ref.rollout.enforce_eager``: Whether to use CUDAGraph in vLLM generation. Default set to True to disable CUDAGraph. - ``actor_rollout_ref.rollout.load_format``: Which weight loader to use to load the actor model weights to the rollout model. - ``auto``: Use Megatron weight loader. - ``megatron``: Use Megatron weight loader. Deployed with Megatron backend. The input model ``state_dict()`` is already partitioned along TP dimension and already gathered along PP dimension. This weight loader requires that the Rollout model and Actor model's parameters shape and name should be identical. - ``dtensor``: Default solution when using Huggingface weight loader. Deployed with FSDP backend and the state_dict_type is ``StateDictType.SHARDED_STATE_DICT``. Recommend to use this weight loader - ``hf``: Use Huggingface weight loader. Deployed with FSDP backend and the state_dict_type is ``StateDictType.FULL_STATE_DICT``. This solution doesn't need to rewrite the weight loader for each model implemented in vLLM but it results in larger peak memory usage. - ``dummy_hf``, ``dummy_megatron``, ``dummy_dtensor``: Random initialization. .. note:: **NOTED**: In this config field, users only need to select from ``dummy_megatron``, ``dummy_dtensor``, ``dummy_hf`` for rollout initialization and our hybrid engine will select the corresponding weight loader (i.e., ``megatron``, ``dtensor``, ``hf``) during actor/rollout weight synchronization. Critic Model ~~~~~~~~~~~~ Most parameters for Critic are similar to Actor Model. Reward Model ~~~~~~~~~~~~ .. code:: yaml reward_model: enable: False model: input_tokenizer: ${actor_rollout_ref.model.path} # set this to null if the chat template is identical path: ~/models/Anomy-RM-v0.1 external_lib: ${actor_rollout_ref.model.external_lib} fsdp_config: min_num_params: 0 param_offload: False micro_batch_size: 64 max_length: null - ``reward_model.enable``: Whether to enable reward model. If False, we compute the reward only with the user-defined reward functions. In GSM8K and Math examples, we disable reward model. For RLHF alignment example using full_hh_rlhf, we utilize reward model to assess the responses. If False, the following parameters are not effective. - ``reward_model.model`` - ``input_tokenizer``: Input tokenizer. 
If the reward model's chat template is inconsistent with the policy, we need to first decode to plaintext, then apply the RM's chat_template and score with the RM. If the chat_templates are consistent, it can be set to null.

- ``path``: RM's HDFS path or local path. Note that the RM only supports ``AutoModelForSequenceClassification``. Other model types need to define their own ``RewardModelWorker`` and pass it from the code.

Algorithm
~~~~~~~~~

.. code:: yaml

   algorithm:
     gamma: 1.0
     lam: 1.0
     adv_estimator: gae
     kl_penalty: kl  # how to estimate kl divergence
     kl_ctrl:
       type: fixed
       kl_coef: 0.005

- ``gamma``: Discount factor
- ``lam``: Trade-off between bias and variance in the GAE estimator
- ``adv_estimator``: gae. Currently only supports gae, will support GRPO in the future
- ``kl_penalty``: Supports ``kl``, ``abs``, ``mse`` and ``full``; how to calculate the KL divergence between actor and reference policy. For specific options, refer to `core_algos.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/core_algos.py#L192>`_.

Trainer
~~~~~~~

.. code:: yaml

   trainer:
     total_epochs: 30
     project_name: verl_examples
     experiment_name: gsm8k
     logger: ['console', 'wandb']
     nnodes: 1
     n_gpus_per_node: 8
     save_freq: -1
     test_freq: 2
     critic_warmup: 0
     default_hdfs_dir: ~/experiments/gsm8k/ppo/${trainer.experiment_name} # hdfs checkpoint path
     default_local_dir: checkpoints/${trainer.project_name}/${trainer.experiment_name} # local checkpoint path

- ``trainer.total_epochs``: Number of epochs in training.
- ``trainer.project_name``: For wandb
- ``trainer.experiment_name``: For wandb
- ``trainer.logger``: Supports console and wandb
- ``trainer.nnodes``: Number of nodes used in the training.
- ``trainer.n_gpus_per_node``: Number of GPUs per node.
- ``trainer.save_freq``: The frequency (by iteration) to save checkpoints of the actor and critic models.
- ``trainer.test_freq``: The validation frequency (by iteration).
- ``trainer.critic_warmup``: The number of iterations to train the critic model before actual policy learning.
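All of the fields above can also be overridden from the command line without editing the YAML, using the same dotted-key syntax as the run scripts elsewhere in these docs; the override values below are purely illustrative:

.. code:: bash

   python3 -m verl.trainer.main_ppo \
       data.train_batch_size=512 \
       actor_rollout_ref.actor.optim.lr=1e-6 \
       actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \
       algorithm.kl_ctrl.kl_coef=0.001 \
       trainer.n_gpus_per_node=8 \
       trainer.nnodes=1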
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/examples/config.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/config.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 12464 }
GSM8K Example ============= Introduction ------------ In this example, we train an LLM to tackle the GSM8k task. Paper: https://arxiv.org/pdf/2110.14168 Dataset: https://huggingface.co/datasets/gsm8k Note that the original paper mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RLHF agent using a rule-based reward model. Dataset Introduction -------------------- GSM8k is a math problem dataset. The prompt is an elementary school problem. The LLM model is required to answer the math problem. The training set contains 7473 samples and the test set contains 1319 samples. **An example** Prompt Katy makes coffee using teaspoons of sugar and cups of water in the ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups of water, calculate the number of teaspoonfuls of sugar she used. Solution The total ratio representing the ingredients she used to make the coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20\ *120 = <<7/20*\ 120=42>>42 #### 42 Step 1: Prepare dataset ----------------------- .. code:: bash cd examples/data_preprocess python3 gsm8k.py --local_dir ~/data/gsm8k Step 2: Download Model ---------------------- There're three ways to prepare the model checkpoints for post-training: - Download the required models from hugging face .. code:: bash huggingface-cli download deepseek-ai/deepseek-math-7b-instruct --local-dir ~/models/deepseek-math-7b-instruct --local-dir-use-symlinks False - Already store your store model in the local directory or HDFS path. - Also, you can directly use the model name in huggingface (e.g., deepseek-ai/deepseek-math-7b-instruct) in ``actor_rollout_ref.model.path`` and ``critic.model.path`` field in the run script. Noted that users should prepare checkpoints for actor, critic and reward model. [Optional] Step 3: SFT your Model --------------------------------- We provide a SFT Trainer using PyTorch FSDP in `fsdp_sft_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/fsdp_sft_trainer.py>`_. Users can customize their own SFT script using our FSDP SFT Trainer. We also provide various training scripts for SFT on GSM8K dataset in `gsm8k sft directory <https://github.com/volcengine/verl/blob/main/examples/gsm8k/sft/>`_. .. code:: shell set -x torchrun -m verl.trainer.fsdp_sft_trainer \ data.train_files=$HOME/data/gsm8k/train.parquet \ data.val_files=$HOME/data/gsm8k/test.parquet \ data.prompt_key=question \ data.response_key=answer \ data.micro_batch_size=8 \ model.partial_pretrain=deepseek-ai/deepseek-coder-6.7b-instruct \ trainer.default_hdfs_dir=hdfs://user/verl/experiments/gsm8k/deepseek-coder-6.7b-instruct/ \ trainer.project_name=gsm8k-sft \ trainer.experiment_name=gsm8k-sft-deepseek-coder-6.7b-instruct \ trainer.total_epochs=4 \ trainer.logger=['console','wandb'] Step 4: Perform PPO training with your model on GSM8K Dataset ------------------------------------------------------------- - Prepare your own run.sh script. Here's an example for GSM8k dataset and deepseek-llm-7b-chat model. - Users could replace the ``data.train_files`` ,\ ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on their environment. - See :doc:`config` for detailed explaination of each config field. **Reward Model/Function** We use a rule-based reward model. We force the model to produce a final answer following 4 “#” as shown in the solution. 
We extract the final answer from both the solution and model's output using regular expression matching. We compare them and assign a reward of 1 to correct answer, 0.1 to incorrect answer and 0 to no answer. **Training Script** The training script example for FSDP and Megatron-LM backend are stored in examples/ppo_trainer directory. .. code:: bash cd ../ppo_trainer bash run_deepseek7b_llm.sh The script of run_deepseek7b_llm.sh .. code:: bash set -x python3 -m verl.trainer.main_ppo \ data.train_files=~/data/rlhf/gsm8k/train.parquet \ data.val_files=~/data/rlhf/gsm8k/test.parquet \ data.train_batch_size=1024 \ data.val_batch_size=1312 \ data.max_prompt_length=512 \ data.max_response_length=512 \ actor_rollout_ref.model.path=~/models/deepseek-llm-7b-chat \ actor_rollout_ref.actor.optim.lr=1e-6 \ actor_rollout_ref.actor.ppo_mini_batch_size=256 \ actor_rollout_ref.actor.ppo_micro_batch_size=64 \ actor_rollout_ref.actor.fsdp_config.param_offload=False \ actor_rollout_ref.actor.fsdp_config.grad_offload=False \ actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \ actor_rollout_ref.rollout.micro_batch_size=256 \ actor_rollout_ref.rollout.log_prob_micro_batch_size=128 \ actor_rollout_ref.rollout.tensor_model_parallel_size=2 \ actor_rollout_ref.rollout.name=vllm \ actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \ actor_rollout_ref.ref.log_prob_micro_batch_size=128 \ actor_rollout_ref.ref.fsdp_config.param_offload=True \ critic.optim.lr=1e-5 \ critic.model.path=~/models/deepseek-llm-7b-chat \ critic.model.enable_gradient_checkpointing=False \ critic.ppo_micro_batch_size=64 \ critic.model.fsdp_config.param_offload=False \ critic.model.fsdp_config.grad_offload=False \ critic.model.fsdp_config.optimizer_offload=False \ algorithm.kl_ctrl.kl_coef=0.001 \ trainer.critic_warmup=0 \ trainer.logger=['console','wandb'] \ trainer.project_name='verl_example_gsm8k' \ trainer.experiment_name='deepseek_llm_7b_function_rm' \ trainer.n_gpus_per_node=8 \ trainer.nnodes=1 \ trainer.save_freq=-1 \ trainer.total_epochs=15
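For reference, the scoring rule described above can be sketched in a few lines. This is an illustration only; the actual implementation (function name, signature and exact scores) lives in `verl/utils/reward_score/gsm8k.py <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_.

.. code:: python

   import re

   def gsm8k_rule_score(response: str, ground_truth: str) -> float:
       """Sketch of the 1 / 0.1 / 0 scheme described above (illustrative only)."""
       match = re.search(r"#### (\-?[0-9\.\,]+)", response)
       if match is None:
           return 0.0                                   # no "#### <answer>" found
       answer = match.group(1).replace(",", "")
       return 1.0 if answer == ground_truth else 0.1    # correct vs. wrong but well-formatted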
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/examples/gsm8k_example.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/gsm8k_example.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 5986 }
PPO Example Architecture ======================== Let's start with the Proximal Policy Optimization algorithm, which is most widely used algorithm in LLM post-training. The main entry point of the PPO algorithm example is: `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py>`_. In this tutorial, we will go through the code architecture in `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py>`_. Define the data --------------- Users need to preprocess and store the dataset in parquet files. And we implement `RLHFDataset` to load and tokenize the parquet files. For ``RLHFDataset`` (Default), at least 1 fields are required: - ``prompt``: Contains the string prompt We already provide some examples of processing the datasets to parquet files in `data_preprocess directory <https://github.com/volcengine/verl/blob/main/examples/data_preprocess>`_. Currently, we support preprocess of GSM8k, MATH, Hellasage, Full_hh_rlhf datasets. See :doc:`../preparation/prepare_data` for more information. Define the reward functions for different datasets -------------------------------------------------- In this main entry point, the users only need to define their own reward function based on the datasets (or applications) utilized in PPO training. For example, we already provide reward functions for `GSM8k <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_ and `MATH <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py>`_ datasets in the ``_select_rm_score_fn``. In the ``RewardManager``, we will compute the reward score based on the data_source to select corresponding reward functions. For some RLHF datasets (e.g., full_hh_rlhf), the reward model is utilized to assess the responses without any reward functions. In this case, the ``RewardManager`` will return the ``rm_score`` computed by the reward model directly. See `reward functions <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_ for detailed implementation. Define worker classes --------------------- .. code:: python if config.actor_rollout_ref.actor.strategy == 'fsdp': # for FSDP backend assert config.actor_rollout_ref.actor.strategy == config.critic.strategy from verl.workers.fsdp_workers import ActorRolloutRefWorker, CriticWorker from verl.single_controller.ray import RayWorkerGroup ray_worker_group_cls = RayWorkerGroup elif config.actor_rollout_ref.actor.strategy == 'megatron': # for Megatron backend assert config.actor_rollout_ref.actor.strategy == config.critic.strategy from verl.workers.megatron_workers import ActorRolloutRefWorker, CriticWorker from verl.single_controller.ray.megatron import NVMegatronRayWorkerGroup ray_worker_group_cls = NVMegatronRayWorkerGroup # Ray worker class for Megatron-LM else: raise NotImplementedError from verl.trainer.ppo.ray_trainer import ResourcePoolManager, Role role_worker_mapping = { Role.ActorRollout: ActorRolloutRefWorker, Role.Critic: CriticWorker, Role.RefPolicy: ActorRolloutRefWorker } global_pool_id = 'global_pool' resource_pool_spec = { global_pool_id: [config.trainer.n_gpus_per_node] * config.trainer.nnodes, } mapping = { Role.ActorRollout: global_pool_id, Role.Critic: global_pool_id, Role.RefPolicy: global_pool_id, } Step 1: Construct the mapping between roles and workers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A role represents a group of workers in the same process. 
We have pre-defined several roles in `ray_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py#L38>`_. .. code:: python class Role(Enum): """ To create more roles dynamically, you can subclass Role and add new members """ Actor = 0 # This worker only has Actor Rollout = 1 # This worker only has Rollout ActorRollout = 2 # This worker has both actor and rollout, it's a HybridEngine Critic = 3 # This worker only has critic RefPolicy = 4 # This worker only has reference policy RewardModel = 5 # This worker only has reward model ActorRolloutRef = 6 # This worker contains actor, rollout and reference policy simultaneously Step 2: Define the worker class corresponding to this role ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - We have pre-implemented the ``ActorRolloutRefWorker``. Through different configs, it can be a standalone actor, a standalone rollout, an ActorRollout HybridEngine, or an ActorRolloutRef HybridEngine - We also pre-implemented workers for ``Actor``, ``Rollout``, ``Critic``, ``Reward Model`` and ``Reference model`` on two different backend: PyTorch FSDP and Megatron-LM. See `FSDP Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py>`_ and `Megatron-LM Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py>`_ for more information. Step 3: Define resource pool id and resource pool spec ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Resource pool is a division of global GPU resources, ``resource_pool_spec`` is a dict, mapping from id to # of GPUs - In the above example, we defined a global resource pool: global_pool_id, and then put all roles on this one resource pool with all the GPUs in this post-training task. This refers to *co-locate* placement where all the models share the same set of GPUs. - See resource pool and placement for advance usage. Defining reward model/function ------------------------------ .. code:: python # we should adopt a multi-source reward function here # - for rule-based rm, we directly call a reward score # - for model-based rm, we call a model # - for code related prompt, we send to a sandbox if there are test cases # - finally, we combine all the rewards together # - The reward type depends on the tag of the data if config.reward_model.enable: from verl.workers.fsdp_workers import RewardModelWorker role_worker_mapping[Role.RewardModel] = RewardModelWorker mapping[Role.RewardModel] = global_pool_id reward_fn = RewardManager(tokenizer=tokenizer, num_examine=0) # Note that we always use function-based RM for validation val_reward_fn = RewardManager(tokenizer=tokenizer, num_examine=1) resource_pool_manager = ResourcePoolManager(resource_pool_spec=resource_pool_spec, mapping=mapping) Since not all tasks use model-based RM, users need to define here whether it's a model-based RM or a function-based RM - If it's a model-based RM, directly add the ``RewardModel`` role in the resource mapping and add it to the resource pool mapping. - Note that the pre-defined ``RewardModelWorker`` only supports models with the structure of huggingface ``AutoModelForSequenceClassification``. If it's not this model, you need to define your own RewardModelWorker in `FSDP Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py>`_ and `Megatron-LM Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py>`_. 
- If it's a function-based RM, the users are required to specify the reward function for each dataset.

.. code:: python

   def _select_rm_score_fn(data_source):
       if data_source == 'openai/gsm8k':
           return gsm8k.compute_score
       elif data_source == 'lighteval/MATH':
           return math.compute_score
       else:
           raise NotImplementedError

See the reward functions implemented in this `directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/>`_ for more information.

Define, init and run the PPO Trainer
------------------------------------

.. code:: python

   trainer = RayPPOTrainer(config=config,
                           tokenizer=tokenizer,
                           role_worker_mapping=role_worker_mapping,
                           resource_pool_manager=resource_pool_manager,
                           ray_worker_group_cls=ray_worker_group_cls,
                           reward_fn=reward_fn,
                           val_reward_fn=val_reward_fn)
   trainer.init_workers()
   trainer.fit()

- We first initialize the ``RayPPOTrainer`` with the user config, tokenizer and all the above worker mappings, resource pool, worker group and reward functions.
- We then call ``trainer.init_workers()`` to initialize the models on the allocated GPUs (in the resource pool).
- The actual PPO training is executed in ``trainer.fit()``.

veRL can be easily extended to other RL algorithms by reusing the Ray model workers, resource pool and reward functions. See :doc:`extension<../advance/dpo_extension>` for more information.

Details of the ``RayPPOTrainer`` are discussed in :doc:`Ray Trainer<../workers/ray_trainer>`.
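Extending the function-based RM to a new dataset then only requires adding a branch to the ``_select_rm_score_fn`` selector shown above; for example (the dataset name and scoring module below are hypothetical):

.. code:: python

   def _select_rm_score_fn(data_source):
       if data_source == 'openai/gsm8k':
           return gsm8k.compute_score
       elif data_source == 'lighteval/MATH':
           return math.compute_score
       elif data_source == 'my_org/my_dataset':   # hypothetical new dataset
           return my_dataset.compute_score        # your own scoring function
       else:
           raise NotImplementedError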
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/examples/ppo_code_architecture.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/ppo_code_architecture.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 9044 }
.. _algo-baseline-page: Algorithm Baselines =================== GSM8k ------------------ Assuming GSM8k dataset is preprocess via ``python3 examples/data_preprocess/gsm8k.py`` Refer to the table below to reproduce PPO training from different pre-trained models. .. _Huggingface: https://huggingface.co/google/gemma-2-2b-it#benchmark-results .. _SFT Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-sft-0.411.log .. _SFT+PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-ppo-bsz512_4-prompt1024-resp-512-0.640.log .. _wandb: https://api.wandb.ai/links/verl-team/h7ux8602 .. _Qwen Blog: https://qwenlm.github.io/blog/qwen2.5-llm/ .. _PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/Qwen2.5-0.5B-bsz256_2-prompt1024-resp512-0.567.log +----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+ | Model | Method | Test score | Details | +============================+========================+============+=====================+=========================================================================+ | google/gemma-2-2b-it | pretrained checkpoint | 23.9 | `Huggingface`_ | +----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+ | google/gemma-2-2b-it | SFT | 52.06 | `SFT Command and logs`_ | +----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+ | google/gemma-2-2b-it | SFT + PPO | 64.02 | `SFT+PPO Command and logs`_, `wandb`_ | +----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+ | Qwen/Qwen2.5-0.5B-Instruct | pretrained checkpoint | 36.4 | `Qwen Blog`_ | +----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+ | Qwen/Qwen2.5-0.5B-Instruct | PPO | 56.7 | `PPO Command and logs`_ | +----------------------------+------------------------+------------+-----------------------------------------------------------------------------------------------+
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/experiment/ppo.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/experiment/ppo.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3029 }
Frequently Asked Questions
====================================

Ray related
------------

How to add a breakpoint for debugging with distributed Ray?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Please check out the official debugging guide from Ray: https://docs.ray.io/en/latest/ray-observability/ray-distributed-debugger.html

Distributed training
------------------------

How to run multi-node post-training with Ray?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can start a Ray cluster and submit a Ray job, following the official guide from Ray: https://docs.ray.io/en/latest/ray-core/starting-ray.html
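As a concrete illustration of the multi-node workflow, the usual pattern with the Ray CLI looks like the following; the addresses and the submitted training command are placeholders to adapt to your setup:

.. code:: bash

   # on the head node
   ray start --head --port=6379

   # on each worker node, join the cluster
   ray start --address=<head-node-ip>:6379

   # submit the post-training job to the cluster (example command)
   ray job submit --address=http://127.0.0.1:8265 \
       -- python3 -m verl.trainer.main_ppo trainer.nnodes=2 trainer.n_gpus_per_node=8 ...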
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/faq/faq.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/faq/faq.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 798 }
Prepare Data (Parquet) for Post-Training ======================================== Before starting the post-training job, we need to prepare the data for the policy training. The data should be stored in the parquet format. We provide several data preprocess scripts for different datasets, including GSM8K, MATH, HelloSwag, Full_hh_rlhf. To prepare other datasets, we need to follow the following steps: The data preprocess script can be divided into two parts: 1. The first part is the common part, which loads the dataset from huggingface's ``datasets`` package. Then preprocess the datasets with the ``make_map_fn`` and then store in the parquet format. .. code:: python import re import os import datasets from verl.utils.hdfs_io import copy, makedirs import argparse # To extract the solution for each prompts in the dataset # def extract_solution(solution_str): # ... if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--local_dir', default='/opt/tiger/gsm8k') parser.add_argument('--hdfs_dir', default=None) args = parser.parse_args() num_few_shot = 5 data_source = 'openai/gsm8k' dataset = datasets.load_dataset(data_source, 'main') train_dataset = dataset['train'] test_dataset = dataset['test'] # Construct a `def make_map_fn(split)` for the corresponding datasets. # ... train_dataset = train_dataset.map(function=make_map_fn('train'), with_indices=True) test_dataset = test_dataset.map(function=make_map_fn('test'), with_indices=True) local_dir = args.local_dir hdfs_dir = args.hdfs_dir train_dataset.to_parquet(os.path.join(local_dir, 'train.parquet')) test_dataset.to_parquet(os.path.join(local_dir, 'test.parquet')) makedirs(hdfs_dir) copy(src=local_dir, dst=hdfs_dir) 2. The users are required to implement the ``make_map_fn()`` function (as well as the ``extract_solution``) on their own to support different datasets or tasks. We already implemented the data preprocess of GSM8k, MATH, Hellaswag and Full_hh_rlhf datasets. And we take the GSM8k dataset as an example: **GSM8K** In the ``make_map_fn``, each data field should consist of the following 5 fields: 1. ``data_source``: The name of the dataset. To index the corresponding reward function in the ``RewardModule`` 2. ``prompt``: This field should be constructed in the format of huggingface chat_template. The tokenizer in ``RLHFDataset`` will apply chat template and tokenize the prompt. 3. ``ability``: Define the task category. 4. ``reward_model``: Currently, we only utilize the ``ground_truth`` field during evaluation. The ``ground_truth`` is computed by the ``extract_solution`` function. **NOTED** that the implementation of the corresponding reward function should align with this extracted ``ground_truth``. 5. ``extra_info``: Record some information of the current prompt. Not use for now. .. code:: python def extract_solution(solution_str): solution = re.search("#### (\\-?[0-9\\.\\,]+)", solution_str) # extract the solution after #### assert solution is not None final_solution = solution.group(0) final_solution = final_solution.split('#### ')[1].replace(',', '') return final_solution instruction_following = "Let's think step by step and output the final answer after \"####\"." 
# add a row to each data item that represents a unique id def make_map_fn(split): def process_fn(example, idx): question = example.pop('question') question = question + ' ' + instruction_following answer = example.pop('answer') solution = extract_solution(answer) data = { "data_source": data_source, "prompt": [{ "role": "user", "content": question }], "ability": "math", "reward_model": { "style": "rule", "ground_truth": solution }, "extra_info": { 'split': split, 'index': idx } } return data return process_fn
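After running such a preprocessing script, it can be worth sanity-checking the generated parquet files before launching training; for example (paths are illustrative):

.. code:: python

   import os
   import datasets

   train = datasets.Dataset.from_parquet(os.path.expanduser("~/data/gsm8k/train.parquet"))
   print(train)                      # should report the expected number of rows and columns
   example = train[0]
   print(example["data_source"])     # e.g. 'openai/gsm8k'
   print(example["prompt"])          # chat-formatted list with a 'user' message
   print(example["reward_model"])    # contains the extracted ground_truth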
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/preparation/prepare_data.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/preparation/prepare_data.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4335 }
Implement Reward Function for Dataset
======================================

For each dataset, we need to implement a reward function or utilize a reward model to compute the rewards for the generated responses. We already pre-implemented some reward functions in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_.

Currently, we support reward functions for the GSM8k and MATH datasets. For RLHF datasets (e.g., full_hh_rlhf) and code generation (e.g., APPS), we utilize a reward model and a sandbox (to be open-sourced soon) for evaluation, respectively.

RewardManager
-------------

In the entry point of the PPO post-training script `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py#L33>`_, we implement a ``RewardManager`` that utilizes the pre-implemented reward functions to compute the score for each response.

In the ``RewardManager``, we implement a ``__call__`` function to compute the score for each response. All the reward functions are executed by ``compute_score_fn``. The input is a ``DataProto``, which includes:

- ``input_ids``, ``attention_mask``: ``input_ids`` and ``attention_mask`` after applying the chat_template, including prompt and response
- ``responses``: response tokens
- ``ground_truth``: The ground truth string of the current prompt. Stored in ``non_tensor_batch`` in the ``DataProto``, which should be preprocessed in the parquet files.
- ``data_source``: The dataset name of the current prompt. Stored in ``non_tensor_batch`` in the ``DataProto``, which should be preprocessed in the parquet files.

After detokenizing the responses, the response string and the ground truth string are passed to ``compute_score_fn`` to compute the score for each response.

Reward Functions
----------------

We already pre-implemented some reward functions in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_.

- In the `GSM8k example <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_, we force the response to output the final answer after four ####, then use string matching to compare it with the ground truth. If completely correct, score 1 point; if only the format is correct, score 0.1 points; if the format is incorrect, score 0 points.
- In the `MATH example <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py>`_, we follow the implementation in the `lm-evaluation-harness repository <https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/hendrycks_math/utils.py>`_.
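Schematically, the per-response flow described in the ``RewardManager`` section above looks roughly like the sketch below. This is a simplified illustration, not the actual ``RewardManager`` implementation; the exact field layout of the ``DataProto`` (e.g. whether ``ground_truth`` is nested under ``reward_model``) and the placement of token-level rewards differ in the real code.

.. code:: python

   # Simplified sketch of the scoring loop described above (illustrative only).
   def score_batch(data, tokenizer, select_score_fn):
       scores = []
       for i in range(len(data.batch['responses'])):
           # detokenize the generated response
           response_str = tokenizer.decode(data.batch['responses'][i], skip_special_tokens=True)
           # fetch the ground truth and dataset name preprocessed into the parquet files
           ground_truth = data.non_tensor_batch['reward_model'][i]['ground_truth']
           data_source = data.non_tensor_batch['data_source'][i]
           # pick the dataset-specific scoring function and score the response
           compute_score_fn = select_score_fn(data_source)
           scores.append(compute_score_fn(response_str, ground_truth))
       return scores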
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/preparation/reward_function.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/preparation/reward_function.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 2605 }
Installation
============

Requirements
------------

- **Python**: Version >= 3.9
- **CUDA**: Version >= 12.1

veRL supports various backends. Currently, the following configurations are available:

- **FSDP** and **Megatron-LM** (optional) for training.
- **vLLM** and **TGI** for rollout generation; **SGLang** support is coming soon.

Training backends
------------------

We recommend using the **FSDP** backend to investigate, research and prototype different models, datasets and RL algorithms. The guide for using the FSDP backend can be found in `PyTorch FSDP Backend <https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html>`_.

For users who pursue better scalability, we recommend using the **Megatron-LM** backend. Currently, we support Megatron-LM@core_v0.4.0 with some internal patches (soon to be updated to the latest version, directly relying on upstream Megatron-LM). The guide for using the Megatron-LM backend can be found in `Megatron-LM Backend <https://verl.readthedocs.io/en/latest/workers/megatron_workers.html>`_.

Install from docker image
-------------------------

We provide pre-built Docker images for quick setup.

Image and tag: ``verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3``. See files under ``docker/`` if you want to build your own image.

1. Launch the desired Docker image:

.. code:: bash

   docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN <image:tag>

2. Inside the container, install veRL:

.. code:: bash

   # install the nightly version (recommended)
   git clone https://github.com/volcengine/verl && cd verl && pip3 install -e .
   # or install from pypi via `pip3 install verl`

3. Set up Megatron (optional)

If you want to enable training with Megatron, the Megatron code must be added to PYTHONPATH:

.. code:: bash

   cd ..
   git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git
   cp verl/patches/megatron_v4.patch Megatron-LM/
   cd Megatron-LM && git apply megatron_v4.patch
   pip3 install -e .
   export PYTHONPATH=$PYTHONPATH:$(pwd)

You can also get the Megatron code with verl's patch already applied via

.. code:: bash

   git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM

Install from custom environment
---------------------------------

To manage environments, we recommend using conda:

.. code:: bash

   conda create -n verl python==3.9
   conda activate verl

For installing the latest version of veRL, the best way is to clone and install it from source. Then you can modify our code to customize your own post-training jobs.

.. code:: bash

   # install verl together with some lightweight dependencies in setup.py
   git clone https://github.com/volcengine/verl.git
   cd verl
   pip3 install -e .

You can also install veRL using ``pip3 install``:

.. code:: bash

   # directly install from pypi
   pip3 install verl

Dependencies
------------

veRL requires Python >= 3.9 and CUDA >= 12.1.

veRL supports various backends; we currently release FSDP and Megatron-LM for actor training and vLLM for rollout generation.

The following dependencies are required for all backends, PyTorch FSDP and Megatron-LM. The pros, cons and extension guide for using the PyTorch FSDP backend can be found in :doc:`FSDP Workers<../workers/fsdp_workers>`.

..
code:: bash # install torch [or you can skip this step and let vllm to install the correct version for you] pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121 # install vllm pip3 install ray vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1 # flash attention 2 pip3 install flash-attn --no-build-isolation For users who pursue better scalability, we recommend using Megatron-LM backend. Please install the above dependencies first. Currently, we support Megatron-LM\@core_v0.4.0 and we fix some internal issues of Megatron-LM. Here's the additional installation guide (optional). The pros, cons and extension guide for using Megatron-LM backend can be found in :doc:`Megatron-LM Workers<../workers/megatron_workers>`. .. code:: bash # Megatron-LM Backend (optional) # apex pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \ --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" \ git+https://github.com/NVIDIA/apex # transformer engine pip3 install git+https://github.com/NVIDIA/TransformerEngine.git@v1.7 # megatron core v0.4.0: clone and apply the patch # You can also get the patched Megatron code patch via # git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM cd .. git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git cd Megatron-LM cp ../verl/patches/megatron_v4.patch . git apply megatron_v4.patch pip3 install -e . export PYTHONPATH=$PYTHONPATH:$(pwd)
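After either installation path, a quick import check helps confirm that the main components are visible to Python (a minimal smoke test, not an official verification step):

.. code:: bash

   python3 -c "import torch, ray, vllm, verl; print(torch.__version__, ray.__version__, vllm.__version__)"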
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/start/install.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/start/install.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4914 }
.. _quickstart: ========================================================= Quickstart: Post-train a LLM using PPO with GSM8K dataset ========================================================= Post-train a LLM using GSM8K dataset =================================================================== Introduction ------------ .. _hf_dataset_gsm8k: https://huggingface.co/datasets/gsm8k In this example, we train an LLM to tackle the `GSM8k <hf_dataset_gsm8k>`_ task with function-based rewards. [1]_ Prerequisite: - the latest version of ``verl`` and its dependencies installed following the installation guide. Using the docker image is recommended. - an GPU with at least 24 GB HBM Dataset Introduction -------------------- GSM8k is a math problem dataset. The prompt is an elementary school problem. The LLM model is asked to solve the math problem. Below is an example: Prompt Katy makes coffee using teaspoons of sugar and cups of water in the ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups of water, calculate the number of teaspoonfuls of sugar she used. Solution The total ratio representing the ingredients she used to make the coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20\ *120 = <<7/20*\ 120=42>>42 #### 42 Step 1: Prepare the dataset ---------------------------- We preprocess the dataset in parquet format so that (1) it contains necessary fields for computing RL rewards and (2) is faster to read. .. code-block:: bash python3 examples/data_preprocess/gsm8k.py --local_dir ~/data/gsm8k Step 2: Download a model for post-training ------------------------------------------- Usually we recommend starting with an "instruct" model variant so that the model follows instructions. In this example, we start with the ``Qwen2.5-0.5B-Instruct`` model. If you start from a "base" model variant, doing SFT before RL is recommended. Refer to the `sft directory <https://github.com/volcengine/verl/blob/main/examples/gsm8k/sft/>`_ and `SFT Trainer <https://github.com/volcengine/verl/blob/main/verl/trainer/fsdp_sft_trainer.py>`_ for further details. .. code-block:: bash python3 -c "import transformers; transformers.pipeline('text-generation', model='Qwen/Qwen2.5-0.5B-Instruct')" Step 3: Perform PPO training with the instruct model ---------------------------------------------------------------------- **Reward Model/Function** We use a pre-defined rule-based reward model. We force the model to produce a final answer following 4 “#” as shown in the solution. We extract the final answer from both the solution and model's output using regular expression matching. We assign a reward of 1 to correct answer, 0.1 to incorrect answer and 0 to no answer. For mode details, please refer to `verl/utils/reward_score/gsm8k.py <https://github.com/volcengine/verl/blob/v0.1/verl/utils/reward_score/gsm8k.py>`_. **Training Script** Now let's run PPO training with the dataset and model above. [2]_ Set the ``data.train_files`` ,\ ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on your dataset and model names or paths. .. 
code-block:: bash PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \ data.train_files=$HOME/data/gsm8k/train.parquet \ data.val_files=$HOME/data/gsm8k/test.parquet \ data.train_batch_size=256 \ data.val_batch_size=1312 \ data.max_prompt_length=512 \ data.max_response_length=256 \ actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \ actor_rollout_ref.actor.optim.lr=1e-6 \ actor_rollout_ref.actor.ppo_mini_batch_size=64 \ actor_rollout_ref.actor.ppo_micro_batch_size=4 \ actor_rollout_ref.rollout.log_prob_micro_batch_size=8 \ actor_rollout_ref.rollout.tensor_model_parallel_size=1 \ actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \ actor_rollout_ref.ref.log_prob_micro_batch_size=4 \ critic.optim.lr=1e-5 \ critic.model.path=Qwen/Qwen2.5-0.5B-Instruct \ critic.ppo_micro_batch_size=4 \ algorithm.kl_ctrl.kl_coef=0.001 \ trainer.logger=['console'] \ +trainer.val_before_train=False \ trainer.default_hdfs_dir=null \ trainer.n_gpus_per_node=1 \ trainer.nnodes=1 \ trainer.save_freq=10 \ trainer.test_freq=10 \ trainer.total_epochs=15 2>&1 | tee verl_demo.log You are expected to see the following logs, indicating training in progress. The key metric ``val/test_score/openai/gsm8k`` is computed every ``trainer.test_freq`` steps: .. code-block:: bash step:0 - timing/gen:21.470 - timing/ref:4.360 - timing/values:5.800 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.109 - timing/update_critic:15.664 - critic/vf_loss:14.947 - critic/vf_clipfrac:0.000 - critic/vpred_mean:-2.056 - critic/grad_norm:1023.278 - critic/lr(1e-4):0.100 - timing/update_actor:20.314 - actor/entropy_loss:0.433 - actor/pg_loss:-0.005 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:1.992 - actor/lr(1e-4):0.010 - critic/score/mean:0.004 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.004 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.000 - critic/advantages/max:2.360 - critic/advantages/min:-2.280 - critic/returns/mean:0.003 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.045 - critic/values/max:9.500 - critic/values/min:-14.000 - response_length/mean:239.133 - response_length/max:256.000 - response_length/min:77.000 - prompt_length/mean:104.883 - prompt_length/max:175.000 - prompt_length/min:68.000 step:1 - timing/gen:23.020 - timing/ref:4.322 - timing/values:5.953 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.118 - timing/update_critic:15.646 - critic/vf_loss:18.472 - critic/vf_clipfrac:0.384 - critic/vpred_mean:1.038 - critic/grad_norm:942.924 - critic/lr(1e-4):0.100 - timing/update_actor:20.526 - actor/entropy_loss:0.440 - actor/pg_loss:0.000 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.060 - actor/lr(1e-4):0.010 - critic/score/mean:0.000 - critic/score/max:0.000 - critic/score/min:0.000 - critic/rewards/mean:0.000 - critic/rewards/max:0.000 - critic/rewards/min:0.000 - critic/advantages/mean:0.000 - critic/advantages/max:2.702 - critic/advantages/min:-2.616 - critic/returns/mean:0.000 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.280 - critic/values/max:11.000 - critic/values/min:-16.000 - response_length/mean:232.242 - response_length/max:256.000 - response_length/min:91.000 - prompt_length/mean:102.398 - prompt_length/max:185.000 - prompt_length/min:70.000 Checkout :ref:`algo-baseline-page` for full training and validation logs for reference. 
The checkpoint is saved in the following directory by default: ``checkpoints/${trainer.project_name}/${trainer.experiment_name}``

To enable ``wandb`` for experiment tracking, set the following configs:

.. code-block:: bash

   trainer.logger=['console','wandb'] \
   trainer.project_name=$YOUR_PROJECT_NAME \
   trainer.experiment_name=$YOUR_RUN_NAME \

If you encounter out-of-memory issues with less than 32GB of HBM, enabling the following configs would help:

.. code-block:: bash

   actor_rollout_ref.actor.ppo_micro_batch_size=1 \
   critic.ppo_micro_batch_size=1 \

For the full set of configs, please refer to :ref:`config-explain-page` for a detailed explanation and performance tuning.

.. [1] The original paper (https://arxiv.org/pdf/2110.14168) mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RL agent using a rule-based reward model.

.. [2] More training script examples for the FSDP and Megatron-LM backends are stored in the `examples/ppo_trainer <https://github.com/volcengine/verl/tree/main/examples/ppo_trainer>`_ directory.
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/start/quickstart.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/start/quickstart.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 7899 }
PyTorch FSDP Backend
======================

We support the PyTorch FSDP backend by implementing various workers for the actor, critic, reference, rollout and reward models. We also implement the ``FSDPVLLMShardingManager``, which reshards weights between FSDP and vLLM, in `fsdp_vllm.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/hybrid_engine/fsdp_vllm.py>`_.

**Pros**

- Readily supports various models.
- Users only need to implement the corresponding ``dtensor_weight_loader`` for weight synchronization between FSDP and vLLM. With ``hf_weight_loader``, users can directly use any model supported by both HF and vLLM without any code change.
- Easy to organize the forward and backward computation for each model.

**Cons**

- Poor scalability for large-scale models (e.g. Llama 70B and 405B).
- The resharding overhead between actor and rollout can be larger than with the Megatron-LM backend.

Due to its simplicity, we recommend using the FSDP backend for algorithm research and prototyping.

FSDP Workers
--------------

ActorRolloutRefWorker
^^^^^^^^^^^^^^^^^^^^^

Actor/Rollout HybridEngine
''''''''''''''''''''''''''

1. HybridEngine, Actor and Rollout initialization API.

.. code:: python

   @register(dispatch_mode=Dispatch.ONE_TO_ALL)
   def init_model(self):

``ONE_TO_ALL``: when the ``init_model`` function is called from the driver process, each worker (on a GPU) executes the following model initialization process.

The initialization details of HybridEngine, Actor and Rollout are highlighted below:

1. ``DataParallelPPOActor`` implements the simple PPO computation logic when the model is built with FSDP, including computing log probs and updating the model.
2. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM engine so that it executes under SPMD to fit into our ``WorkerGroup`` design.
3. ``FSDPVLLMShardingManager`` is a context manager that performs the actual resharding between actor and rollout. See the `source code <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py#L42>`_ for more information.

2. Generate sequences and recompute log probs

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def generate_sequences(self, prompts: DataProto):

- ``Dispatch.DP_COMPUTE_PROTO``: the data will be dispatched and collected along the DP dimension.
- In this function, the rollout model performs auto-regressive generation and the actor model recomputes the old log prob for the generated response.

3. Update the actor model

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def update_actor(self, data: DataProto):

- Update the actor model weights using the PPO & entropy loss.

ReferenceModel
''''''''''''''

1. Reference model initialization

The reference model is initialized with the same function as the actor model, but without initializing the HybridEngine and optimizer. The model is then also wrapped by ``DataParallelPPOActor``.

2. Compute reference log prob

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def compute_ref_log_prob(self, data: DataProto):

- In this function, the reference model calls the compute-log-prob function in ``DataParallelPPOActor`` to compute the reference log prob.

CriticWorker and RewardWorker
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Model initialization

Quite similar to the reference model. The CriticWorker additionally initializes the optimizer.

2. Compute values for CriticWorker
.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def compute_values(self, data: DataProto):

3. Update critic

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def update_critic(self, data: DataProto):

4. Compute reward

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def compute_rm_score(self, data: DataProto):

HybridShard
------------

We do not yet support FSDP `HybridShard`. To support it, we may need to construct a 2D device mesh and test the corresponding ``dtensor_weight_loader`` and ``hf_weight_loader`` for each model.
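To illustrate how these registered APIs fit together, below is a minimal sketch (not part of verl) of a hypothetical worker exposing one extra method through a transfer protocol. The ``SequenceLengthWorker`` class and ``compute_response_length`` method are illustrative names only; ``register``, ``Dispatch`` and ``DataProto`` are the pieces documented above, and the import paths are inferred from the file locations linked in these docs.

.. code:: python

   from verl.protocol import DataProto
   from verl.single_controller.base.decorator import register, Dispatch

   class SequenceLengthWorker:
       # A real verl worker would inherit from the Worker base class and be
       # launched inside a WorkerGroup; this is only a sketch of the API shape.

       @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
       def compute_response_length(self, data: DataProto) -> DataProto:
           # Each DP rank receives its shard of the batch, computes locally,
           # and the driver collects the per-rank outputs along the DP dimension.
           lengths = data.batch['attention_mask'].sum(dim=-1, keepdim=True)
           # Pack the per-sample result back into a DataProto so it can be collected.
           return DataProto.from_single_dict({'response_length': lengths})

   # Driver side (inside the trainer) the call looks like a local method call,
   # but the data is dispatched/collected according to DP_COMPUTE_PROTO, e.g.
   #   length_output = length_worker_group.compute_response_length(batch)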
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/workers/fsdp_workers.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/fsdp_workers.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4166 }
Megatron-LM Backend
=====================

We support the Megatron backend by implementing various workers for the actor, critic, reference, rollout and reward models. We also implement the ``3DHybridEngine`` using Megatron-LM and vLLM in `megatron_vllm.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/hybrid_engine/megatron_vllm.py>`_.

**Pros**

- Supports 3D parallelism and sequence parallelism for the best scalability and throughput.
- The 3D HybridEngine can significantly reduce peak memory usage and the weight synchronization overhead between actor and rollout.

**Cons**

- Users have to implement their own models for Megatron-LM.
- Users have to implement the corresponding weight_loader to

  - synchronize the model weights between the actor (in Megatron) and the rollout (in vLLM);
  - load weights from checkpoints into the corresponding Megatron-LM model.

Megatron Workers
----------------

MegatronWorker
^^^^^^^^^^^^^^

``MegatronWorker`` is the base class of the different Megatron worker classes. In this class, the ``get_megatron_global_info`` and ``get_megatron_rank_info`` functions retrieve the 3D parallel world size and rank of each ``Worker`` running on a specific GPU. This information is used in the transfer protocols for the Megatron backend.

The following ``Worker`` classes for the different models are used to construct the ``WorkerGroup``.

We implement various APIs for each ``Worker`` class, decorated with ``@register(dispatch_mode=)``. These APIs can be called by the Ray driver process. The data is correctly collected and dispatched according to the ``dispatch_mode`` of each function. The supported dispatch modes (i.e., transfer protocols) can be found in `decorator.py <https://github.com/volcengine/verl/blob/main/verl/single_controller/base/decorator.py>`_.

ActorRolloutRefWorker
^^^^^^^^^^^^^^^^^^^^^

This class is implemented for the Actor/Rollout HybridEngine or for the reference model, to initialize the model and perform computation.

Actor/Rollout HybridEngine
''''''''''''''''''''''''''

1. HybridEngine, Actor and Rollout initialization API.

.. code:: python

   @register(dispatch_mode=Dispatch.ONE_TO_ALL)
   def init_model(self):

``ONE_TO_ALL``: when the ``init_model`` function is called from the driver process, each worker (on a GPU) executes the following model initialization process.

The initialization details of HybridEngine, Actor and Rollout are highlighted below:

1. ``AllGatherPPModel`` holds the memory buffer for both Actor and Rollout and supports weight resharding between actor and rollout.
2. ``MegatronPPOActor`` implements the simple PPO computation logic when the model is built with Megatron, including computing log probs and updating the model.
3. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM engine so that it executes under SPMD to fit into our ``WorkerGroup`` design.
4. ``MegatronVLLMShardingManager`` is a context manager that performs the actual resharding between actor and rollout. See the `source code <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py#L63>`_ for more information.

.. code:: python

   # Initialize the 3D HybridEngine
   hybrid_engine = AllGatherPPModel(model_provider=megatron_actor_model_provider)
   # Fetch the model at current rank
   actor_module = hybrid_engine.this_rank_models
   ...
   # build actor model
   self.actor = MegatronPPOActor(config=self.config.actor,
                                 model_config=self.actor_model_config,
                                 megatron_config=megatron_config,
                                 actor_module=self.actor_module,
                                 actor_optimizer=self.actor_optimizer,
                                 actor_optimizer_config=self.actor_optim_config)

   # build rollout
   # rollout initialization
   rollout = vLLMRollout(actor_module=params,
                         config=self.config.rollout,
                         tokenizer=self.tokenizer,
                         model_hf_config=self.actor_model_config,
                         train_tp=mpu.get_tensor_model_parallel_world_size())
   # perform weight resharding between actor and rollout
   sharding_manager = MegatronVLLMShardingManager(module=self.hybrid_engine,
                                                  inference_engine=rollout.inference_engine,
                                                  model_config=self.actor_model_config,
                                                  layer_name_mapping=layer_name_mapping)
   ...

2. Generate sequences and recompute log probs

.. code:: python

   @register(dispatch_mode=Dispatch.MEGATRON_PP_AS_DP_PROTO)
   def generate_sequences(self, prompts: DataProto):

- ``Dispatch.MEGATRON_PP_AS_DP_PROTO``: the PP dimension of the actor model is regarded as the DP dimension. The driver process then dispatches and collects the data according to this reorganization. This is because, in the HybridEngine, the actor weights, which usually use larger 3D parallel sizes, are gathered along the PP and TP dimensions. Therefore, the corresponding data should be dispatched and collected through the 3D parallel group of the rollout model, rather than the actor model. However, the world_size and rank information can only be retrieved from ``get_megatron_global_info`` and ``get_megatron_rank_info``, which record the 3D information for the actor model. Moreover, the data resharding inside the TP dimension is processed within the HybridEngine.
- In this function, the rollout model performs auto-regressive generation and the actor model recomputes the old log prob for the generated response.

3. Update the actor model

.. code:: python

   @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)
   def update_actor(self, data: DataProto):

- ``Dispatch.MEGATRON_COMPUTE_PROTO``: the user passes data partitioned by the DP dimension. The data is dispatched to all tp/pp ranks within the same dp group, and output data is ultimately collected only from tp=0 and the last pp stage.
- Update the actor model weights using the PPO & entropy loss.

ReferenceModel
''''''''''''''

1. Reference model initialization

The reference model is initialized with the same function as the actor model, but without initializing the HybridEngine and optimizer. The model is then also wrapped by ``MegatronPPOActor``.

2. Compute reference log prob

.. code:: python

   @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)
   def compute_ref_log_prob(self, data: DataProto):

- In this function, the reference model calls the compute-log-prob function in ``MegatronPPOActor`` to compute the reference log prob.

CriticWorker and RewardWorker
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Model initialization

Quite similar to the reference model. The CriticWorker additionally initializes the optimizer.

2. Compute values for CriticWorker

.. code:: python

   @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)
   def compute_values(self, data: DataProto):

3. Update critic

.. code:: python

   @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)
   def update_critic(self, data: DataProto):

4. Compute reward
.. code:: python

   @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO)
   def compute_rm_score(self, data: DataProto):

Context Parallel
----------------

This requires the developer/contributor to implement context parallelism both in Megatron-LM and in the models.
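The transfer protocols above boil down to how the driver splits a batch before the remote call and how it reassembles the per-rank outputs afterwards. The toy sketch below (plain PyTorch, not verl code) illustrates the dispatch/collect idea along the DP dimension; the real Megatron protocols additionally account for the TP/PP ranks as described above.

.. code:: python

   import torch

   def dp_dispatch(batch: torch.Tensor, dp_size: int) -> list[torch.Tensor]:
       """Split a batch into one shard per data-parallel group (driver side)."""
       return list(torch.chunk(batch, chunks=dp_size, dim=0))

   def dp_collect(outputs: list[torch.Tensor]) -> torch.Tensor:
       """Concatenate per-DP-group outputs back into a full batch (driver side)."""
       return torch.cat(outputs, dim=0)

   # Toy usage: 8 prompts dispatched to 4 DP groups, each "worker" doubles its shard.
   batch = torch.arange(8).reshape(8, 1).float()
   shards = dp_dispatch(batch, dp_size=4)
   outputs = [shard * 2 for shard in shards]   # stand-in for a registered worker call
   assert torch.equal(dp_collect(outputs), batch * 2)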
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/workers/megatron_workers.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/megatron_workers.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 7477 }
PPO Ray Trainer
===============

We implement the ``RayPPOTrainer``, a trainer that runs on the driver process on a single CPU/GPU node (by default on CPU). The ``RayPPOTrainer`` includes three core functions for data preparation, WorkerGroup initialization and the PPO training loop.

Data Preparation
----------------

The ``RayPPOTrainer``, as a single process, is responsible for loading a complete batch of samples (prompts) from the dataset and then dispatching them to the different worker groups running on different GPUs.

To generalize the data loading, we implement the ``RLHFDataset`` class to load the preprocessed parquet files, apply chat templates to the prompts, add padding, truncate prompts that exceed the max prompt length, and then tokenize.

.. code:: python

   self.train_dataset = RLHFDataset(parquet_files=self.config.data.train_files,
                                    tokenizer=self.tokenizer,
                                    prompt_key=self.config.data.prompt_key,
                                    max_prompt_length=self.config.data.max_prompt_length,
                                    filter_prompts=True,
                                    return_raw_chat=self.config.data.get('return_raw_chat', False),
                                    truncation='error')

Then, the dataloader iterates over the dataset with the PPO mini batch size.

WorkerGroup Initialization
--------------------------

We first introduce a basic implementation of initializing the ``WorkerGroup`` of the actor model on a given set of GPUs.

.. code:: python

   # max_colocate_count means the number of WorkerGroups (i.e. processes) in each RayResourcePool
   # For the FSDP backend, we recommend max_colocate_count=1, which merges all WorkerGroups into one.
   # For the Megatron backend, we recommend max_colocate_count>1, which can use a different WorkerGroup for different models.
   resource_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes,
                                   use_gpu=True,
                                   max_colocate_count=1)
   # define actor rollout cls to be init on remote
   actor_rollout_cls = RayClassWithInitArgs(cls=ActorRolloutWorker)
   # define actor_rollout worker group
   actor_rollout_worker_group = MegatronRayWorkerGroup(resource_pool=resource_pool,
                                                       ray_cls_with_init=actor_rollout_cls,
                                                       default_megatron_kwargs=config.actor_rollout.megatron)

Different WorkerGroups, like ``actor_rollout_worker_group``, ``critic_worker_group`` and ``ref_worker_group``, live in separate processes in the above implementation.

The driver process can then call the distributed compute functions within ``actor_rollout_worker_group`` and the other roles to construct the RL training loop.

For models colocated on the same set of GPUs, we further provide a fine-grained optimization that merges the ``worker_group`` of different roles into the same process. This optimization saves the redundant CUDA/distributed contexts of different processes.

.. code:: python

   # initialize WorkerGroup
   # NOTE: if you want to use a different resource pool for each role, which can support different parallel sizes,
   # you should not use `create_colocated_worker_cls`. Instead, directly pass a different resource pool to each worker group.
   # See TODO(url) for more information.
all_wg = {} for resource_pool, class_dict in self.resource_pool_to_cls.items(): worker_dict_cls = create_colocated_worker_cls(class_dict=class_dict) wg_dict = self.ray_worker_group_cls(resource_pool=resource_pool, ray_cls_with_init=worker_dict_cls) spawn_wg = wg_dict.spawn(prefix_set=class_dict.keys()) all_wg.update(spawn_wg) if self.use_critic: self.critic_wg = all_wg['critic'] self.critic_wg.init_model() if self.use_reference_policy: self.ref_policy_wg = all_wg['ref'] self.ref_policy_wg.init_model() if self.use_rm: self.rm_wg = all_wg['rm'] self.rm_wg.init_model() # we should create rollout at the end so that vllm can have a better estimation of kv cache memory self.actor_rollout_wg = all_wg['actor_rollout'] self.actor_rollout_wg.init_model() .. note:: For megatron backend, if we merge the ``worker_groups`` into the same processes, all the roles will utilize the same 3D parallel size. To optimize this, we may need to maintain several 3D process groups for each role in the same distributed context. If you want to use different 3D parallel size for different roles, please follow the similar architecture of the first code block to initialize each role's ``worker_group`` PPO Training Loop ----------------- We implement the PPO training loop by calling the functions in worker_group of each role. The input and output data of each function is a ``DataProto`` object implemented in `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_. In the training loop, trainer will dispatch/collect the data to/from different GPUs following the transfer protocols wrapped in the workers' functions. The computation of PPO micro batches is processed in ``update_actor`` and ``update_critic`` functions. To extend to other RLHF algorithms, such as DPO, GRPO, please refer to :doc:`../advance/dpo_extension`. .. code:: python def fit(self): """ The training loop of PPO. The driver process only need to call the compute functions of the worker group through RPC to construct the PPO dataflow. The light-weight advantage computation is done on the driver process. """ from verl.utils.tracking import Tracking from omegaconf import OmegaConf logger = Tracking(project_name=self.config.trainer.project_name, experiment_name=self.config.trainer.experiment_name, default_backend=self.config.trainer.logger, config=OmegaConf.to_container(self.config, resolve=True)) global_steps = 0 # perform validation before training # currently, we only support validation using the reward_function. 
if self.val_reward_fn is not None: val_metrics = self._validate() pprint(f'Initial validation metrics: {val_metrics}') for epoch in range(self.config.trainer.total_epochs): for batch_dict in self.train_dataloader: metrics = {} batch: DataProto = DataProto.from_single_dict(batch_dict) # batch = batch.to('cuda') # pop those keys for generation gen_batch = batch.pop(batch_keys=['input_ids', 'attention_mask', 'position_ids']) # generate a batch with Timer(name='gen', logger=None) as timer: gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch) metrics['timing/gen'] = timer.last batch = batch.union(gen_batch_output) if self.use_reference_policy: # compute reference log_prob with Timer(name='ref', logger=None) as timer: ref_log_prob = self.ref_policy_wg.compute_ref_log_prob(batch) batch = batch.union(ref_log_prob) metrics['timing/ref'] = timer.last # compute values with Timer(name='values', logger=None) as timer: values = self.critic_wg.compute_values(batch) batch = batch.union(values) metrics['timing/values'] = timer.last with Timer(name='adv', logger=None) as timer: # compute scores. Support both model and function-based. # We first compute the scores using reward model. Then, we call reward_fn to combine # the results from reward model and rule-based results. if self.use_rm: # we first compute reward model score reward_tensor = self.rm_wg.compute_rm_score(batch) batch = batch.union(reward_tensor) # we combine with rule-based rm reward_tensor = self.reward_fn(batch) batch.batch['token_level_scores'] = reward_tensor # compute rewards. apply_kl_penalty if available batch, kl_metrics = apply_kl_penalty(batch, kl_ctrl=self.kl_ctrl, kl_penalty=self.config.algorithm.kl_penalty) metrics.update(kl_metrics) # compute advantages, executed on the driver process batch = compute_advantage(batch, self.config.algorithm.gamma, self.config.algorithm.lam, adv_estimator=self.config.algorithm.adv_estimator) metrics['timing/adv'] = timer.last # update critic if self.use_critic: with Timer(name='update_critic', logger=None) as timer: critic_output = self.critic_wg.update_critic(batch) metrics['timing/update_critic'] = timer.last critic_output_metrics = reduce_metrics(critic_output.meta_info['metrics']) metrics.update(critic_output_metrics) # implement critic warmup if self.config.trainer.critic_warmup <= global_steps: # update actor with Timer(name='update_actor', logger=None) as timer: actor_output = self.actor_rollout_wg.update_actor(batch) metrics['timing/update_actor'] = timer.last actor_output_metrics = reduce_metrics(actor_output.meta_info['metrics']) metrics.update(actor_output_metrics) # validate if self.val_reward_fn is not None and (global_steps + 1) % self.config.trainer.test_freq == 0: with Timer(name='testing', logger=None) as timer: val_metrics: dict = self._validate() val_metrics = {f'val/{key}': val for key, val in val_metrics.items()} metrics['timing/testing'] = timer.last metrics.update(val_metrics) # collect metrics data_metrics = compute_data_metrics(batch=batch) metrics.update(data_metrics) # TODO: make a canonical logger that supports various backend logger.log(data=metrics, step=global_steps) if self.config.trainer.save_freq > 0 and (global_steps + 1) % self.config.trainer.save_freq == 0: actor_local_path = os.path.join(self.config.trainer.default_local_dir, 'actor', f'global_step_{global_steps}') actor_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'actor') self.actor_rollout_wg.save_checkpoint(actor_local_path, actor_remote_path) if self.use_critic: 
critic_local_path = os.path.join(self.config.trainer.default_local_dir, 'critic', f'global_step_{global_steps}') critic_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'critic') self.critic_wg.save_checkpoint(critic_local_path, critic_remote_path) global_steps += 1 # perform validation after training if self.val_reward_fn is not None: val_metrics = self._validate() pprint(f'Final validation metrics: {val_metrics}')
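As mentioned in the note above, roles that need different parallel sizes should receive their own resource pools instead of being colocated via ``create_colocated_worker_cls``. A minimal sketch of that variant is shown below (imports omitted as in the snippets above; the ``CriticWorker`` class name and the ``config.critic.megatron`` path are assumptions for illustration):

.. code:: python

   # Sketch: give the actor/rollout and critic roles separate resource pools,
   # so each role can use its own parallel configuration.
   actor_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes,
                                use_gpu=True,
                                max_colocate_count=1)
   critic_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes,
                                 use_gpu=True,
                                 max_colocate_count=1)

   actor_rollout_wg = MegatronRayWorkerGroup(resource_pool=actor_pool,
                                             ray_cls_with_init=RayClassWithInitArgs(cls=ActorRolloutWorker),
                                             default_megatron_kwargs=config.actor_rollout.megatron)
   critic_wg = MegatronRayWorkerGroup(resource_pool=critic_pool,
                                      ray_cls_with_init=RayClassWithInitArgs(cls=CriticWorker),
                                      default_megatron_kwargs=config.critic.megatron)

   # init_model is the ONE_TO_ALL API registered on each worker (see the worker docs above)
   actor_rollout_wg.init_model()
   critic_wg.init_model()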
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/workers/ray_trainer.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/ray_trainer.rst", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 12036 }
# Split Placement Example

Here we introduce how to run the naive implementation of split placement for the PPO algorithm. We will release the complete version of flexible placement in the near future.

For a quickstart, you only need to follow Step 2 to modify the code and then Step 4 to execute the split placement example.

### Step 1: Placing the models on different GPUs

Specify the placement and resource allocation. In the example, we place the actor and reference models in the first half of the GPUs while mapping the critic and reward model (if any) to the second half of the GPUs.

```python
actor_rollout_ref_pool_id = 'actor_rollout_ref_pool'
critic_pool_id = 'critic_pool'
if config.trainer.nnodes // 2 == 0 and config.trainer.n_gpus_per_node // 2 > 0:
    resource_pool_spec = {
        actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes,
        critic_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes,
    }
else:
    resource_pool_spec = {
        actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2),
        critic_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2),
    }
print(f'resource_pool_spec: {resource_pool_spec}')
mapping = {
    Role.ActorRollout: actor_rollout_ref_pool_id,
    Role.Critic: critic_pool_id,
    Role.RefPolicy: actor_rollout_ref_pool_id,
}
mapping[Role.RewardModel] = critic_pool_id
```

### Step 2: Make the models execute asynchronously

Based on the model placement, we need to make the models execute asynchronously. To do so, turn off the `blocking` flag (i.e., `blocking=False`) in our decorator for some model operations. For example, if we want the actor update and critic update to run in parallel, we need to make the following modification in `fsdp_workers.py`:

```python
@register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False)
def update_actor(self, data: DataProto):
    ...

@register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False)
def update_critic(self, data: DataProto):
    ...
```

We can also parallelize the computation of `ref_log_prob`, `values` and `rewards` with the split placement. For simplicity of the tutorial, we only parallelize the actor and critic updates here.

### Step 3: Execute these operations in parallel in the single controller process

To implement the parallel execution of the actor and critic updates, the only thing we need to modify in `ray_trainer.py` is to `get` the concurrent `futures` on the single controller process.

```python
critic_output = critic_output.get()
actor_output = actor_output.get()
```

### Step 4: Run the split placement example

```
bash run_deepseek7b_llm.sh
```
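For context, with `blocking=False` the two update calls in the training loop return futures immediately instead of finished `DataProto` outputs, so the critic and actor updates overlap until both futures are resolved. A minimal sketch of how the corresponding section of the PPO loop in `ray_trainer.py` might then look (illustrative only; the variable names follow the trainer code shown earlier in these docs):

```python
# Both calls return immediately because the worker methods were registered with
# blocking=False, so the critic and actor updates run concurrently on their GPU pools.
critic_output = self.critic_wg.update_critic(batch)
actor_output = self.actor_rollout_wg.update_actor(batch)

# Block on the single controller process until both updates have finished.
critic_output = critic_output.get()
actor_output = actor_output.get()

metrics.update(reduce_metrics(critic_output.meta_info['metrics']))
metrics.update(reduce_metrics(actor_output.meta_info['metrics']))
```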
{ "source": "Jiayi-Pan/TinyZero", "title": "examples/split_placement/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/examples/split_placement/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 2686 }
# Models

Common model zoos such as huggingface/transformers struggle when using PyTorch native model parallelism. Following the design principle of vLLM, we keep the model implementations in verl simple, parallelizable, and highly optimized with packed inputs.

## Adding a New Huggingface Model

### Step 1: Copy the model file from HF to verl
- Add a new file under verl/models/hf
- Copy ONLY the model file from huggingface/transformers/models to verl/models/hf

### Step 2: Modify the model file to use packed inputs
- Remove all the code related to inference (kv cache)
- Modify the inputs to include only
    - input_ids (total_nnz,)
    - cu_seqlens (total_nnz + 1,)
    - max_seqlen_in_batch: int
- Note that this requires using flash attention with a causal mask.

### Step 2.5: Add tests
- Add a test to compare this version against the huggingface version
- Follow the existing infrastructure and add the tests to tests/models/hf

### Step 3: Add a function to apply tensor parallelism
- Please follow
  - https://pytorch.org/docs/stable/distributed.tensor.parallel.html
  - https://pytorch.org/tutorials/intermediate/TP_tutorial.html
- General comments
  - Tensor parallelism in native PyTorch is NOT auto-parallelism. The way it works is to specify, via configs, how model parameters and inputs/outputs are resharded. These configs are then registered as hooks that perform input/output resharding before/after the model forward.

### Step 4: Add a function to apply data parallelism
- Please use FSDP2 APIs
- See the demo here: https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L413

### Step 5: Add a function to apply pipeline parallelism
- Comes in PyTorch 2.4
- Currently only in alpha in the nightly version
- Check torchtitan for more details
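For reference, the sketch below shows what "packed inputs" means here, using the standard flash-attention varlen conventions; it is an illustrative helper, not code taken from verl.

```python
import torch

def pack_inputs(input_ids: torch.Tensor, attention_mask: torch.Tensor):
    """Convert a right-padded (batch, seqlen) batch into packed inputs.

    Illustrative sketch only (standard flash-attention varlen conventions),
    not verl's actual implementation.
    """
    seqlens = attention_mask.sum(dim=-1)                  # real length of each sequence
    packed_input_ids = input_ids[attention_mask.bool()]   # (total_nnz,) all non-pad tokens
    cu_seqlens = torch.zeros(seqlens.numel() + 1, dtype=torch.int32)
    cu_seqlens[1:] = torch.cumsum(seqlens, dim=0)         # cumulative sequence lengths
    max_seqlen_in_batch = int(seqlens.max())
    return packed_input_ids, cu_seqlens, max_seqlen_in_batch

# Example: two sequences of length 3 and 2, padded to length 4
input_ids = torch.tensor([[5, 6, 7, 0], [8, 9, 0, 0]])
attention_mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
print(pack_inputs(input_ids, attention_mask))
```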
{ "source": "Jiayi-Pan/TinyZero", "title": "verl/models/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/verl/models/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1742 }
# Detached Worker

## How to run (only on a single node)

- Start a local ray cluster:
```bash
ray start --head --port=6379
```
- Run the server:
```bash
python3 server.py
```
- In another terminal, run the client:
```bash
python3 client.py
```
{ "source": "Jiayi-Pan/TinyZero", "title": "tests/ray/detached_worker/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/tests/ray/detached_worker/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 241 }
# Dataset Format

## RLHF dataset

We combine all the data sources into a single parquet file. We directly organize the prompts in the chat format so that multi-turn chats can be easily incorporated. In the prompt, we may add instruction-following text to guide the model to output the answers in a particular format so that we can extract them.

Math problems

```json
{
    "data_source": "openai/gsm8k",
    "prompt": [{"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? Let's think step by step and output the final answer after \"####\""}],
    "ability": "math",
    "reward_model": {
        "style": "rule",
        "ground_truth": ["72"]
    }
}
```
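As an illustration, rows in this format could be written to a parquet file with pandas as follows. This is a minimal sketch, not verl's actual preprocessing script.

```python
import pandas as pd

# One row per prompt, following the schema shown above.
rows = [
    {
        "data_source": "openai/gsm8k",
        "prompt": [{"role": "user", "content": "Natalia sold clips to 48 of her friends in April, "
                    "and then she sold half as many clips in May. How many clips did Natalia sell "
                    "altogether in April and May? Let's think step by step and output the final "
                    "answer after \"####\""}],
        "ability": "math",
        "reward_model": {"style": "rule", "ground_truth": ["72"]},
    }
]

pd.DataFrame(rows).to_parquet("train.parquet")  # later loadable by RLHFDataset
```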
{ "source": "Jiayi-Pan/TinyZero", "title": "verl/utils/dataset/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/verl/utils/dataset/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 796 }
# Digit completion

This is an example of solving a digit completion problem. The problem is defined as follows:

The prompt is a sequence of numbers with a fixed difference. The agent's goal is to complete the next N numbers. If the max number is reached, the next number is taken modulo the max number.

For example,
- prompt = [1, 2, 3]
- N = 5
- max_number = 6

The response should be [4, 5, 6, 7%6, 8%6] = [4, 5, 6, 0, 1].

# Environment definition

The core definition of the task is in verl/envs/digit_completion/task.py. It is highly recommended to take a look at it for a better understanding.

# Run experiments

Users are required to specify the config path and config name (and the model config path relative to the current working directory).

```bash
# cd examples/arithmetic_sequence/rl
# Specify the config path and config name (current working dir)
python3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron'

# The default relative path of the model config is 'config/model_config'. If you want to change it, you can override it in ray_megatron.yaml or via:
python3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron' ++model.base_path=config/model_config
```
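To make the task definition concrete, here is a tiny reference sketch (not the code in task.py) that reproduces the example above. Note that the documented example wraps 7 to 0, which corresponds to taking values modulo `max_number + 1`; check verl/envs/digit_completion/task.py for the exact convention.

```python
def complete_sequence(prompt: list[int], n: int, max_number: int) -> list[int]:
    """Continue an arithmetic sequence for n steps, wrapping values so they stay <= max_number."""
    diff = prompt[1] - prompt[0]      # fixed common difference
    start = prompt[-1]
    return [(start + diff * (i + 1)) % (max_number + 1) for i in range(n)]

# Reproduces the documented example: [4, 5, 6, 0, 1]
assert complete_sequence([1, 2, 3], n=5, max_number=6) == [4, 5, 6, 0, 1]
```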
{ "source": "Jiayi-Pan/TinyZero", "title": "tests/e2e/arithmetic_sequence/rl/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/tests/e2e/arithmetic_sequence/rl/README.md", "date": "2025-01-21T16:49:12", "stars": 9907, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1297 }
# Open Source License Attribution Cosmos uses Open Source components. You can find the details of these open-source projects along with license information below, sorted alphabetically. We are grateful to the developers for their contributions to open source and acknowledge these below. ## Better-Profanity - [MIT License](https://github.com/snguyenthanh/better_profanity/blob/master/LICENSE) ``` Copyright (c) 2018 The Python Packaging Authority Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ## FFmpeg - [FFMPEG License](https://github.com/FFmpeg/FFmpeg/blob/master/LICENSE.md) ``` # License Most files in FFmpeg are under the GNU Lesser General Public License version 2.1 or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to FFmpeg. Some optional parts of FFmpeg are licensed under the GNU General Public License version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of these parts are used by default, you have to explicitly pass `--enable-gpl` to configure to activate them. In this case, FFmpeg's license changes to GPL v2+. Specifically, the GPL parts of FFmpeg are: - libpostproc - optional x86 optimization in the files - `libavcodec/x86/flac_dsp_gpl.asm` - `libavcodec/x86/idct_mmx.c` - `libavfilter/x86/vf_removegrain.asm` - the following building and testing tools - `compat/solaris/make_sunver.pl` - `doc/t2h.pm` - `doc/texi2pod.pl` - `libswresample/tests/swresample.c` - `tests/checkasm/*` - `tests/tiny_ssim.c` - the following filters in libavfilter: - `signature_lookup.c` - `vf_blackframe.c` - `vf_boxblur.c` - `vf_colormatrix.c` - `vf_cover_rect.c` - `vf_cropdetect.c` - `vf_delogo.c` - `vf_eq.c` - `vf_find_rect.c` - `vf_fspp.c` - `vf_histeq.c` - `vf_hqdn3d.c` - `vf_kerndeint.c` - `vf_lensfun.c` (GPL version 3 or later) - `vf_mcdeint.c` - `vf_mpdecimate.c` - `vf_nnedi.c` - `vf_owdenoise.c` - `vf_perspective.c` - `vf_phase.c` - `vf_pp.c` - `vf_pp7.c` - `vf_pullup.c` - `vf_repeatfields.c` - `vf_sab.c` - `vf_signature.c` - `vf_smartblur.c` - `vf_spp.c` - `vf_stereo3d.c` - `vf_super2xsai.c` - `vf_tinterlace.c` - `vf_uspp.c` - `vf_vaguedenoiser.c` - `vsrc_mptestsrc.c` Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then the configure parameter `--enable-version3` will activate this licensing option for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts, `COPYING.GPLv3` to learn the exact legal terms that apply in this case. 
There are a handful of files under other licensing terms, namely: * The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and `libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for licensing details. Specifically note that you must credit the IJG in the documentation accompanying your program if you only distribute executables. You must also indicate any changes including additions and deletions to those three files in the documentation. * `tests/reference.pnm` is under the expat license. ## External libraries FFmpeg can be combined with a number of external libraries, which sometimes affect the licensing of binaries resulting from the combination. ### Compatible libraries The following libraries are under GPL version 2: - avisynth - frei0r - libcdio - libdavs2 - librubberband - libvidstab - libx264 - libx265 - libxavs - libxavs2 - libxvid When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by passing `--enable-gpl` to configure. The following libraries are under LGPL version 3: - gmp - libaribb24 - liblensfun When combining them with FFmpeg, use the configure option `--enable-version3` to upgrade FFmpeg to the LGPL v3. The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache License 2.0. That license is incompatible with the LGPL v2.1 and the GPL v2, but not with version 3 of those licenses. So to combine these libraries with FFmpeg, the license version needs to be upgraded by passing `--enable-version3` to configure. The smbclient library is under the GPL v3, to combine it with FFmpeg, the options `--enable-gpl` and `--enable-version3` have to be passed to configure to upgrade FFmpeg to the GPL v3. ### Incompatible libraries There are certain libraries you can combine with FFmpeg whose licenses are not compatible with the GPL and/or the LGPL. If you wish to enable these libraries, even in circumstances that their license may be incompatible, pass `--enable-nonfree` to configure. This will cause the resulting binary to be unredistributable. The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are incompatible with the GPLv2 and v3. To the best of our knowledge, they are compatible with the LGPL. ``` ## Hydra-core [MIT License](https://github.com/facebookresearch/hydra/blob/main/LICENSE) ``` MIT License Copyright (c) Facebook, Inc. and its affiliates. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
``` ## ImageIo - [BSD 2-Clause "Simplified" License](https://github.com/imageio/imageio/blob/master/LICENSE) ``` Copyright (c) 2014-2022, imageio developers All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ``` ## Iopath - [MIT License](https://github.com/facebookresearch/iopath/blob/main/LICENSE) ``` MIT License Copyright (c) Facebook, Inc. and its affiliates. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ## Loguru - [MIT License](https://github.com/Delgan/loguru/blob/master/LICENSE) ``` MIT License Copyright (c) 2017 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ## Mediapy - [Apache License 2.0](https://github.com/google/mediapy/blob/main/LICENSE) ``` Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ## Nltk - [Apache License 2.0](https://github.com/nltk/nltk/blob/develop/LICENSE.txt) ``` Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ## PEFT - [Apache License 2.0](https://github.com/huggingface/peft/blob/main/LICENSE) ``` Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ## Pillow - [MIT License](https://github.com/python-pillow/Pillow/blob/main/LICENSE) ``` The Python Imaging Library (PIL) is Copyright © 1997-2011 by Secret Labs AB Copyright © 1995-2011 by Fredrik Lundh and contributors Pillow is the friendly PIL fork. It is Copyright © 2010 by Jeffrey A. Clark and contributors Like PIL, Pillow is licensed under the open source MIT-CMU License: By obtaining, using, and/or copying this software and/or its associated documentation, you agree that you have read, understood, and will comply with the following terms and conditions: Permission to use, copy, modify and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies, and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Secret Labs AB or the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ``` ## PyAV - [BSD 3-Clause "New" or "Revised" License](https://github.com/PyAV-Org/PyAV/blob/main/LICENSE.txt) ``` Copyright retained by original committers. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the project nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
``` ## Pytorch_Retinaface - [MIT License](https://github.com/biubug6/Pytorch_Retinaface/blob/master/LICENSE.MIT) ``` MIT License Copyright (c) 2019 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ## Sentencepiece - [Apache License 2.0](https://github.com/google/sentencepiece/blob/master/LICENSE) ``` Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ## Termcolor - [MIT License](https://github.com/termcolor/termcolor/blob/main/COPYING.txt) ``` Copyright (c) 2008-2011 Volvox Development Team Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ## Transformers [Apache License 2.0](https://github.com/huggingface/transformers/blob/main/LICENSE) ``` Copyright 2018- The Hugging Face team. All rights reserved. Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ```
{ "source": "NVIDIA/Cosmos", "title": "ATTRIBUTIONS.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/ATTRIBUTIONS.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 77232 }
# How to Contribute

We'd love to receive your patches and contributions. Please keep your PRs as drafts until you would like us to review them.

## Code Reviews

All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult [GitHub Help](https://help.github.com/articles/about-pull-requests/) for more information on using pull requests.

## Pipeline

Run the linter before submitting your pull request, and make sure the CI/CD pipeline is green before removing the draft designation.

```bash
./cosmos1/scripts/format.sh
```

## Signing Your Work

* We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or that you have the right to submit it under the same or a compatible license.

* Any contribution that contains commits that are not signed off will not be accepted.

* To sign off on a commit, use the `--signoff` (or `-s`) option when committing your changes:

  ```bash
  $ git commit -s -m "Add cool feature."
  ```

  This will append the following to your commit message:

  ```
  Signed-off-by: Your Name <your@email.com>
  ```

* Full text of the DCO:

  ```
  Developer Certificate of Origin
  Version 1.1

  Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
  1 Letterman Drive
  Suite D4700
  San Francisco, CA, 94129

  Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
  ```

  ```
  Developer's Certificate of Origin 1.1

  By making a contribution to this project, I certify that:

  (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or

  (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or

  (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.

  (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
  ```
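If you only remember the sign-off requirement after the fact, standard Git options can add the `Signed-off-by` trailer retroactively. This is a generic Git sketch rather than a Cosmos-specific workflow; the commit count below is just an example, and rewriting history requires a force-push of your branch:

```bash
# Add a sign-off to the most recent commit without changing its message
git commit --amend --no-edit --signoff

# Add sign-offs to the last three commits (rewrites history; force-push afterwards)
git rebase --signoff HEAD~3
```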
{ "source": "NVIDIA/Cosmos", "title": "CONTRIBUTING.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/CONTRIBUTING.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 2669 }
# Cosmos Installation

We have only tested the installation with Ubuntu 24.04, 22.04, and 20.04.

1. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).

2. Clone the repository.

   ```bash
   git clone git@github.com:NVIDIA/Cosmos.git
   cd Cosmos
   ```

3. Build a Docker image using `Dockerfile` and run the Docker container.

   ```bash
   docker build -t cosmos .
   docker run -d --name cosmos_container --gpus all --ipc=host -it -v $(pwd):/workspace cosmos
   docker attach cosmos_container
   ```
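Before building the image, it can be worth confirming that the NVIDIA Container Toolkit actually exposes your GPUs to Docker. The CUDA base image tag below is only an illustrative choice, not something the Cosmos docs prescribe:

```bash
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```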
{ "source": "NVIDIA/Cosmos", "title": "INSTALL.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/INSTALL.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 560 }
![Cosmos Logo](assets/cosmos-logo.png)

--------------------------------------------------------------------------------

### [Website](https://www.nvidia.com/en-us/ai/cosmos/) | [HuggingFace](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) | [GPU-free Preview](https://build.nvidia.com/explore/discover) | [Paper](https://arxiv.org/abs/2501.03575) | [Paper Website](https://research.nvidia.com/labs/dir/cosmos1/)

[NVIDIA Cosmos](https://www.nvidia.com/cosmos/) is a developer-first world foundation model platform designed to help Physical AI developers build their Physical AI systems better and faster. Cosmos contains

1. pre-trained models, available via [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/), which allows free commercial use of the models
2. training scripts under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0), offered through the [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo) for post-training the models for various downstream Physical AI applications

Details of the platform are described in the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai). Preview access is available at [build.nvidia.com](https://build.nvidia.com).

## Key Features

- [Pre-trained Diffusion-based world foundation models](cosmos1/models/diffusion/README.md) for Text2World and Video2World generation, where a user can generate visual simulations based on text prompts and video prompts.
- [Pre-trained Autoregressive-based world foundation models](cosmos1/models/autoregressive/README.md) for Video2World generation, where a user can generate visual simulations based on video prompts and optional text prompts.
- [Video tokenizers](cosmos1/models/tokenizer) for tokenizing videos into continuous tokens (latent vectors) and discrete tokens (integers) efficiently and effectively.
- Video curation pipeline for building your own video dataset. [Coming soon]
- [Post-training scripts](cosmos1/models/POST_TRAINING.md) via NeMo Framework to post-train the pre-trained world foundation models for various Physical AI setups.
- Pre-training scripts via NeMo Framework for building your own world foundation model. [[Diffusion](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/diffusion)] [[Autoregressive](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/multimodal_autoregressive)] [[Tokenizer](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/diffusion/vae)].
## Model Family

| Model name | Description | Try it out |
|------------|----------|----------|
| [Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail) | Guardrail contains pre-Guard and post-Guard for safe use | Embedded in model inference scripts |

## Example Usage

### Inference

Follow the [Cosmos Installation Guide](INSTALL.md) to set up the Docker environment. For inference with the pretrained models, please refer to [Cosmos Diffusion Inference](cosmos1/models/diffusion/README.md) and [Cosmos Autoregressive Inference](cosmos1/models/autoregressive/README.md).

The code snippet below provides a gist of the inference usage.

```bash
PROMPT="A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. \
The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. \
A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, \
suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. \
The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of \
field that keeps the focus on the robot while subtly blurring the background for a cinematic effect."

# Example using 7B model
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
    --checkpoint_dir checkpoints \
    --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \
    --prompt "$PROMPT" \
    --offload_prompt_upsampler \
    --video_save_name Cosmos-1.0-Diffusion-7B-Text2World
```

<video src="https://github.com/user-attachments/assets/db7bebfe-5314-40a6-b045-4f6ce0a87f2a">
Your browser does not support the video tag.
</video>

We also offer [multi-GPU inference](cosmos1/models/diffusion/nemo/inference/README.md) support for Diffusion Text2World WFM models through the NeMo Framework.

### Post-training

NeMo Framework provides GPU-accelerated general post-training for both [diffusion](cosmos1/models/diffusion/nemo/post_training/README.md) and [autoregressive](cosmos1/models/autoregressive/nemo/post_training/README.md) models, with other types of post-training coming soon.

## License and Contact

This project will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.

NVIDIA Cosmos source code is released under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0). NVIDIA Cosmos models are released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [cosmos-license@nvidia.com](mailto:cosmos-license@nvidia.com).
{ "source": "NVIDIA/Cosmos", "title": "README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 7207 }
# Release Cadence

| Version | Description | Date |
|------------|----------|----------|
| [v1.0](release_notes/v0p1.md) | Initial diffusion and autoregressive WFMs release | 2025-01-06 |
| [v0.1](release_notes/v0p1.md) | Initial tokenizer release | 2024-11-06 |
{ "source": "NVIDIA/Cosmos", "title": "RELEASE.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/RELEASE.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 263 }
# Checkpoint directory

Follow our instructions for downloading checkpoints in [Cosmos Diffusion Inference](../cosmos1/models/diffusion/README.md#download-checkpoints) and [Cosmos Autoregressive Inference](../cosmos1/models/autoregressive/README.md). Cosmos checkpoints will be downloaded to this directory.
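As a rough sketch of how a single checkpoint can land in this directory via the Hugging Face Hub CLI (the download instructions linked above are the authoritative path, and the sub-directory layout here is inferred from the `--checkpoint_dir`/`--diffusion_transformer_dir` flags used by the inference scripts):

```bash
# Log in first if the model requires accepting a license on Hugging Face
huggingface-cli login

# Download one model into checkpoints/<model-name>
huggingface-cli download nvidia/Cosmos-1.0-Diffusion-7B-Text2World \
    --local-dir checkpoints/Cosmos-1.0-Diffusion-7B-Text2World
```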
{ "source": "NVIDIA/Cosmos", "title": "checkpoints/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/checkpoints/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 307 }
# Release note

- Cosmos 0.1 was released with the [Cosmos Tokenizer Webpage](https://research.nvidia.com/labs/dir/cosmos-tokenizer/).
- 10 tokenizers were released on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) as shown in the table below.
- Inference scripts for the models were released in the [Cosmos Tokenizer repository](https://github.com/NVIDIA/Cosmos-Tokenizer).

## Released Models

| Item | Model name | Description | Try it out |
|--|------------|----------|----------|
|1| [Cosmos-0.1-Tokenizer-CI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI8x8) | Continuous image tokenizer with 8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|2| [Cosmos-0.1-Tokenizer-CI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI16x16) | Continuous image tokenizer with 16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|3| [Cosmos-0.1-Tokenizer-DI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI8x8) | Discrete image tokenizer with 8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|4| [Cosmos-0.1-Tokenizer-DI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI16x16) | Discrete image tokenizer with 16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|5| [Cosmos-0.1-Tokenizer-CV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV4x8x8) | Continuous video tokenizer with 4x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|6| [Cosmos-0.1-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8) | Continuous video tokenizer with 8x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|7| [Cosmos-0.1-Tokenizer-CV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x16x16) | Continuous video tokenizer with 8x16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|8| [Cosmos-0.1-Tokenizer-DV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV4x8x8) | Discrete video tokenizer with 4x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|9| [Cosmos-0.1-Tokenizer-DV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x8x8) | Discrete video tokenizer with 8x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|10| [Cosmos-0.1-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x16x16) | Discrete video tokenizer with 8x16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
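For intuition on the naming: the suffix encodes the temporal x spatial x spatial compression factors, so, ignoring any boundary or causal-frame handling in the released models, a CV8x8x8 tokenizer maps a 32-frame 1024x1024 clip to roughly 4x128x128 latent positions, a 512x (8*8*8) reduction in spatio-temporal positions. This is a back-of-the-envelope illustration, not an exact statement about the models' frame handling.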
{ "source": "NVIDIA/Cosmos", "title": "release_notes/v0p1.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/release_notes/v0p1.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 3021 }
# Release Notes

# [02/10/2025](https://github.com/NVIDIA/Cosmos/commit/868ff171b9d676c53e094c4324a45a5f06d749e2)

- Cosmos Tokenizer inference and post-training support
- Cosmos Video2World post-training support

# [01/27/2025](https://github.com/NVIDIA/Cosmos/commit/c82c9dc6f9a2f046033d0a26ec525bc389b641ef)

- Stability and safety improvements

# [01/09/2025](https://github.com/NVIDIA/Cosmos/commit/a6e2fdd49053ae75836cedc2a99c7c84bc1c8c1b)

- Support for [General Post-Training](../cosmos1/models/POST_TRAINING.md) through NeMo

# [01/06/2025](https://github.com/NVIDIA/Cosmos/commit/00d50f897a111069d43386e626aecb2167259bca)

- Initial release of Cosmos 1.0 along with the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai)
- 13 models were released on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) as shown in the table below.
- Inference scripts for the models were released in the [Cosmos repository](https://github.com/NVIDIA/Cosmos).

| Item | Model name | Description | Try it out |
|--|------------|----------|----------|
|1| [Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|2| [Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|3| [Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|4| [Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|5| [Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|6| [Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|7| [Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|8| [Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|9| [Cosmos-1.0-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8) | Continuous video tokenizer with 8x8x8 compression ratio | [Inference](cosmos1/models/diffusion/README.md) |
|10| [Cosmos-1.0-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-DV8x16x16) | Discrete video tokenizer with 8x16x16 compression ratio | [Inference](cosmos1/models/autoregressive/README.md) |
|11| [Cosmos-1.0-PromptUpsampler-12B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Prompt-Upsampler-12B-Text2World) | Prompt upsampler for Text2World | [Inference](cosmos1/models/diffusion/README.md) |
|12| [Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8) | Diffusion decoder for enhancing Cosmos 1.0 autoregressive WFMs' outputs | [Inference](cosmos1/models/autoregressive/README.md) |
|13| [Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail) | Guardrail contains pre-Guard and post-Guard for safe use | Embedded in model inference scripts |
{ "source": "NVIDIA/Cosmos", "title": "release_notes/v1p0.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/release_notes/v1p0.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 3886 }
# Cosmos Post-training

In the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai), we discuss several post-training examples of Cosmos pre-trained World Foundation Models (WFMs) for various Physical AI tasks, including

- General Post-Training: Fine-tune the WFM to generate a target distribution of videos based on a custom dataset. The target distribution could include a specific camera spec or a specific domain such as a factory.
- Instruction Control: Post-trains models for robotic manipulation to predict videos based on textual instructions, enabling robots to visually simulate tasks like folding clothes or picking up objects.
- Action Control: Post-trains models for robotic manipulation to predict the next visual frame based on action vectors, simulating robotic tasks like object handling or movement planning.
- Camera Control: Adds camera pose conditioning to generate 3D-consistent video simulations from single images, enabling joystick-like navigation in virtual environments.
- Multi-View Generation: Post-trains models for autonomous vehicles to generate synchronized multi-view videos from text prompts, simulating driving scenarios with multiple camera perspectives.
- Multi-View Generation with Vehicle Trajectory Control: Extends multi-view generation by incorporating trajectory inputs, enabling precise simulation of driving environments for autonomous vehicles, adhering to specified paths.

Except for instruction control, where the WFM is post-trained on a dataset of instruction-video pairs, all other cases require minor modifications of the network architectures.

Post-training tasks will be supported by the NeMo Framework. In this initial release, we provide post-training scripts for the general post-training of both diffusion and autoregressive WFMs. Scripts for the other post-training tasks will be provided in a future release.

## Post-training Support Matrix

| Post-training Task | Diffusion WFM | Autoregressive WFM |
|---------------------|---------------|--------------------|
| General post-training | [Supported](../models/diffusion/nemo/post_training/README.md) | [Supported](../models/autoregressive/nemo/post_training/README.md) |
| Instruction control | Coming soon | Coming soon |
| Action control | Coming soon | Coming soon |
| Camera control | Coming soon | Coming soon |
| Multi-view generation | Coming soon | Coming soon |
| Multi-view generation with vehicle trajectory control | Coming soon | Coming soon |
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/POST_TRAINING.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/POST_TRAINING.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 2533 }
# Cosmos Autoregressive-based World Foundation Models ## Table of Contents - [Getting Started](#getting-started) - [Set Up Docker Environment](#set-up-docker-environment) - [Download Checkpoints](#download-checkpoints) - [Usage](#usage) - [Model Types](#model-types) - [Single and Batch Generation](#single-and-batch-generation) - [Sample Commands](#sample-commands) - [Base Models (4B/12B)](#base-basepy-4b-and-12b) - [Video2World Models (5B/13B)](#video2world-video2worldpy-5b-and-13b) - [Arguments](#arguments) - [Common Parameters](#common-parameters) - [Base Specific Parameters](#base-specific-parameters) - [Video2World Specific Parameters](#video2world-specific-parameters) - [Safety Features](#safety-features) This page details the steps for using the Cosmos autoregressive-based world foundation models. ## Getting Started ### Set Up Docker Environment Follow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker. ### Download Checkpoints 1. Generate a [Hugging Face](https://huggingface.co/settings/tokens) access token. Set the access token to 'Read' permission (default is 'Fine-grained'). 2. Log in to Hugging Face with the access token: ```bash huggingface-cli login ``` 3. Download the Cosmos model weights from [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6): ```bash PYTHONPATH=$(pwd) python cosmos1/scripts/download_autoregressive.py --model_sizes 4B 5B 12B 13B ``` 4. The downloaded files should be in the following structure: ``` checkpoints/ ├── Cosmos-1.0-Autoregressive-4B │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Autoregressive-5B-Video2World │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Autoregressive-12B │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Autoregressive-13B-Video2World │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Tokenizer-CV8x8x8 │ ├── decoder.jit │ ├── encoder.jit │ └── mean_std.pt ├── Cosmos-1.0-Tokenizer-DV8x16x16 │ ├── decoder.jit │ └── encoder.jit ├── Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8 │ ├── aux_vars.pt │ └── model.pt └── Cosmos-1.0-Guardrail ├── aegis/ ├── blocklist/ ├── face_blur_filter/ └── video_content_safety_filter/ ``` ## Usage ### Model Types There are two model types available for autoregressive world generation: 1. **Base**: Supports world generation from image/video input * Models: `Cosmos-1.0-Autoregressive-4B` and `Cosmos-1.0-Autoregressive-12B` * Inference script: [base.py](/cosmos1/models/autoregressive/inference/base.py) 2. **Video2World**: Supports world generation from image/video input and text input * Models: `Cosmos-1.0-Autoregressive-5B-Video2World` and `Cosmos-1.0-Autoregressive-13B-Video2World` * Inference script: [video2world.py](/cosmos1/models/autoregressive/inference/video2world.py) Our models now support video extension up to 33 frames. Starting from either a single image or a 9-frame video input, they can generate the remaining frames to reach the 33-frame length (generating 32 or 24 frames, respectively). We have evaluated all eight possible configurations (4 models × 2 vision input types: image or video) using 100 test videos on physical AI topics. 
Below are the failure rates for each configuration: | Model | Image input | Video input (9 frames) | |:------------------------------------------|:--------------:|:-------------------------:| | Cosmos-1.0-Autoregressive-4B | 15% | 1% | | Cosmos-1.0-Autoregressive-5B-Video2World | 7% | 2% | | Cosmos-1.0-Autoregressive-12B | 2% | 1% | | Cosmos-1.0-Autoregressive-13B-Video2World | 3% | 0% | We define failure cases as videos with severe distortions, such as: * Sudden appearance of large unexpected objects * Video degrading to a single solid color Note that the following are not considered failures in our analysis: * Static video frames * Minor object distortions or artifacts ### Single and Batch Generation We support both single and batch video generation. For generating a single video, `base` mode requires the input argument `--input_image_or_video_path` (image/video input), while `video2world` mode requires both `--input_image_or_video_path` (image/video input) and `--prompt` (text input). Note that our model only works with 1024x640 resolution videos. If the input image/video is not in this resolution, it will be resized and cropped. For generating a batch of videos, both `base` and `video2world` require `--batch_input_path` (path to a JSONL file). For `base`, the JSONL file should contain one visual input per line in the following format, where each line must contain a "visual_input" field: ```json {"visual_input": "path/to/video1.mp4"} {"visual_input": "path/to/video2.mp4"} ``` For `video2world`, each line in the JSONL file must contain both "prompt" and "visual_input" fields: ```json {"prompt": "prompt1", "visual_input": "path/to/video1.mp4"} {"prompt": "prompt2", "visual_input": "path/to/video2.mp4"} ``` ### Sample Commands There are two main demo scripts for autoregressive world generation: `base.py` and `video2world.py`. Below you will find sample commands for single and batch generation, as well as commands for running with low-memory GPUs using model offloading. We also provide a memory usage table comparing different offloading strategies to help with configuration. #### Base (base.py): 4B and 12B Generates world from image/video input. The `input_type` argument can be either `video` or `image`. We have tuned the sampling parameters `top_p` and `temperature` to achieve the best performance. Please use the provided values in the command examples. Note that the command examples below all use video input. If you want to use image input, please change the `input_type` to `image`. 
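Before running the batch commands in the subsections below, the JSONL input files described in the previous section can be assembled programmatically. A minimal sketch follows; the file names, video paths, and prompts are placeholders rather than the repository's own assets.

```python
import json

# Placeholder rows; replace the paths and prompts with your own data.
base_rows = [
    {"visual_input": "path/to/video1.mp4"},
    {"visual_input": "path/to/video2.mp4"},
]
video2world_rows = [
    {"prompt": "prompt1", "visual_input": "path/to/video1.mp4"},
    {"prompt": "prompt2", "visual_input": "path/to/video2.mp4"},
]

def write_jsonl(rows, path):
    """Write one JSON object per line, as expected by --batch_input_path."""
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

write_jsonl(base_rows, "base.jsonl")
write_jsonl(video2world_rows, "video2world.jsonl")
```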
##### Single Generation ```bash # Example using 4B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \ --input_type=video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --video_save_name=Cosmos-1.0-Autoregressive-4B \ --ar_model_dir=Cosmos-1.0-Autoregressive-4B \ --top_p=0.8 \ --temperature=1.0 # Example for low-memory GPUs using 4B model with model offloading CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \ --input_type=video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --video_save_name=Cosmos-1.0-Autoregressive-4B \ --ar_model_dir=Cosmos-1.0-Autoregressive-4B \ --top_p=0.8 \ --temperature=1.0 \ --offload_guardrail_models \ --offload_diffusion_decoder \ --offload_ar_model \ --offload_tokenizer # Example using 12B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \ --input_type=video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --video_save_name=Cosmos-1.0-Autoregressive-12B \ --ar_model_dir=Cosmos-1.0-Autoregressive-12B \ --top_p=0.9 \ --temperature=1.0 # Example for low-memory GPUs using 12B model with model offloading CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \ --input_type=video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --video_save_name=Cosmos-1.0-Autoregressive-12B \ --ar_model_dir=Cosmos-1.0-Autoregressive-12B \ --top_p=0.9 \ --temperature=1.0 \ --offload_guardrail_models \ --offload_diffusion_decoder \ --offload_ar_model \ --offload_tokenizer ``` ##### Batch Generation ```bash # Example using 4B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \ --input_type=video \ --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/base.jsonl \ --video_save_folder=outputs/Cosmos-1.0-Autoregressive-4B \ --ar_model_dir=Cosmos-1.0-Autoregressive-4B \ --top_p=0.8 \ --temperature=1.0 # Example using 12B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \ --input_type=video \ --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/base.jsonl \ --video_save_folder=outputs/Cosmos-1.0-Autoregressive-12B \ --ar_model_dir=Cosmos-1.0-Autoregressive-12B \ --top_p=0.9 \ --temperature=1.0 ``` ##### Example Output Here is an example output video generated using base.py with image input, using `Cosmos-1.0-Autoregressive-12B`: <video src="https://github.com/user-attachments/assets/634403a5-1873-42d7-8dd0-eb7fb4ac8cf4"> Your browser does not support the video tag. </video> The input image used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.jpg`. The image is from [BDD dataset](http://bdd-data.berkeley.edu/). Here is an example output video generated using base.py with 9-frame video input, using `Cosmos-1.0-Autoregressive-12B`: <video src="https://github.com/user-attachments/assets/1a3ff099-87d7-41e8-b149-a25cfcd4f40b"> Your browser does not support the video tag. </video> The input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`. ##### Inference Time and GPU Memory Usage These numbers may vary based on system specifications and are provided for reference only. 
| Offloading Strategy | Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B | |-------------|---------|---------| | No offloading | 31.3 GB | 47.5 GB | | Guardrails | 28.9 GB | 45.2 GB | | Guardrails & Diffusion decoder | 28.5 GB | 43.1 GB | | Guardrails & Diffusion decoder & Tokenizer | 27.3 GB | 42.9 GB | | Guardrails & Diffusion decoder & Tokenizer & AR model | 18.7 GB | 27.4 GB | End-to-end inference runtime on one H100 without offloading and after model initialization: | Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B | |---------|---------| | ~62 seconds | ~119 seconds | #### Video2World (video2world.py): 5B and 13B Generates world from image/video and text input. The `input_type` argument can be either `text_and_video` or `text_and_image`. We have tuned the sampling parameters `top_p` and `temperature` to achieve the best performance. Please use the provided values in the command examples. Note that the command examples below all use video input. If you want to use image input, please change the `input_type` to `text_and_image`. ##### Single Generation ```bash # Example using 5B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \ --input_type=text_and_video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \ --video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \ --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \ --top_p=0.7 \ --temperature=1.0 # Example for low-memory GPUs using 5B model with model offloading CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \ --input_type=text_and_video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \ --video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \ --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \ --top_p=0.7 \ --temperature=1.0 \ --offload_guardrail_models \ --offload_diffusion_decoder \ --offload_ar_model \ --offload_tokenizer \ --offload_text_encoder_model # Example using 13B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \ --input_type=text_and_video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \ --video_save_name=Cosmos-1.0-Autoregressive-13B-Video2World \ --ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \ --top_p=0.8 \ --temperature=1.0 \ --offload_guardrail_models # Example for low-memory GPUs using 13B model with model offloading CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \ --input_type=text_and_video \ --input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \ --prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." 
\ --video_save_name=Cosmos-1.0-Autoregressive-13B-Video2World \ --ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \ --top_p=0.8 \ --temperature=1.0 \ --offload_guardrail_models \ --offload_diffusion_decoder \ --offload_ar_model \ --offload_tokenizer \ --offload_text_encoder_model ``` ##### Batch Generation ```bash # Example using 5B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \ --input_type=text_and_video \ --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/video2world.jsonl \ --video_save_folder=outputs/Cosmos-1.0-Autoregressive-5B-Video2World \ --ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \ --top_p=0.7 \ --temperature=1.0 # Example using 13B model CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \ --input_type=text_and_video \ --batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/video2world.jsonl \ --video_save_folder=outputs/Cosmos-1.0-Autoregressive-13B-Video2World \ --ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \ --top_p=0.8 \ --temperature=1.0 \ --offload_guardrail_models ``` ##### Example Output Here is an example output video generated using video2world.py with image input, using `Cosmos-1.0-Autoregressive-13B-Video2World`: <video src="https://github.com/user-attachments/assets/869f3b81-fabd-462e-a545-c04cdd9c1d22"> Your browser does not support the video tag. </video> The input image used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.jpg`. The prompt for generating the video is: ``` A driving video captures a serene urban street scene on a sunny day. The camera is mounted on the dashboard of a moving vehicle, providing a first-person perspective as it travels down a two-lane road. The street is lined with parked cars on both sides, predominantly black and silver sedans and SUVs. The road is flanked by a mix of residential and commercial buildings, with a prominent red-brick building on the left side, featuring multiple windows and a flat roof. The sky is clear with a few scattered clouds, casting soft shadows on the street. Trees with lush green foliage line the right side of the road, providing a natural contrast to the urban environment. The camera remains steady, maintaining a consistent forward motion, suggesting a leisurely drive. Traffic is light, with a few vehicles moving in the opposite direction, including a black sedan and a yellow taxi. Street signs are visible, including a no-parking sign on the right. The overall atmosphere is calm and peaceful, with no pedestrians visible, emphasizing the focus on the drive and the surrounding urban landscape. ``` Here is an example output video generated using video2world.py with 9-frame video input, using `Cosmos-1.0-Autoregressive-13B-Video2World`: <video src="https://github.com/user-attachments/assets/81840e1c-624b-4b01-9240-ab7db3722e58"> Your browser does not support the video tag. </video> The input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`. The prompt for generating the video is: ``` A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions. ``` ##### Inference Time and GPU Memory Usage These numbers may vary based on system specifications and are provided for reference only. 
| Offloading Strategy | Cosmos-1.0-Autoregressive-5B-Video2World | Cosmos-1.0-Autoregressive-13B-Video2World | |-------------|---------|---------| | No offloading | 66.2 GB | > 80 GB | | Guardrails | 58.7 GB | 76.6 GB | | Guardrails & T5 encoder | 41.3 GB | 58.0 GB | | Guardrails & T5 encoder & Diffusion decoder | 29.0 GB | 46.9 GB | | Guardrails & T5 encoder & Diffusion decoder & Tokenizer | 28.8 GB | 46.7 GB | | Guardrails & T5 encoder & Diffusion decoder & Tokenizer & AR model | 21.1 GB | 30.9 GB | End-to-end inference runtime on one H100 with no offloading for 5B model and guardrail offloading for 13B, after model initialization: | Cosmos-1.0-Autoregressive-5B-Video2World | Cosmos-1.0-Autoregressive-13B-Video2World | |---------|---------| | ~73 seconds | ~150 seconds | ### Arguments #### Common Parameters | Parameter | Description | Default | |-----------|-------------|---------| | `--checkpoint_dir` | Directory containing model weights | "checkpoints" | | `--video_save_name` | Output video filename for single video generation | "output" | | `--video_save_folder` | Folder where all output videos are stored | "outputs/" | | `--input_image_or_video_path` | Input image or video path. Required for single video generation | None | | `--batch_input_path` | Folder containing input images or videos. Required for batch video generation | None | | `--num_input_frames` | Number of input frames to use for Video2World prediction | 9 | | `--temperature` | Temperature used while sampling | 1.0 (recommend using values in sample commands provided) | | `--top_p` | Top-p value for top-p sampling | 0.8 (recommend using values in sample commands provided) | | `--seed` | Random seed | 0 | | `--disable_diffusion_decoder` | When set to True, use discrete tokenizer to decode discrete tokens to video. Otherwise, use diffusion decoder to decode video | False | | `--offload_guardrail_models` | Offload guardrail models after inference, used for low-memory GPUs | False | | `--offload_diffusion_decoder` | Offload diffusion decoder after inference, used for low-memory GPUs | False | | `--offload_ar_model` | Offload AR model after inference, used for low-memory GPUs | False | | `--offload_prompt_upsampler` | Offload prompt upsampler after inference, used for low-memory GPUs | False | #### Base Specific Parameters | Parameter | Description | Default | |-----------|-------------|---------| | `--ar_model_dir` | Directory containing AR model weight | "Cosmos-1.0-Autoregressive-4B" | | `--input_type` | Input type, either `video` or `image` | "video" | #### Video2World Specific Parameters | Parameter | Description | Default | |-----------|-------------|---------| | `--ar_model_dir` | Directory containing AR model weight | "Cosmos-1.0-Autoregressive-4B" | | `--input_type` | Input type, either `text_and_video` or `text_and_image` | "text_and_video" | | `--prompt` | Text prompt for single video generation. Required for single video generation | None | | `--input_prompts_path` | Path to JSONL file for batch video generation. Required for batch video generation | None | | `--offload_text_encoder_model` | Offload text encoder after inference, used for low-memory GPUs | False | ### Safety Features The model uses a built-in safety guardrail system that cannot be disabled. Generating human faces is not allowed and will be blurred by the guardrail. For more information, check out the [Cosmos Guardrail Documentation](../guardrail/README.md).
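As a rough aid for choosing among the offloading flags documented above, the sketch below encodes the reference memory table for the base models and returns the smallest flag set whose documented peak memory fits a given GPU budget. The helper itself is hypothetical; the GB figures are the reference numbers from the table and will vary by system.

```python
# Documented peak memory (GB) for base.py runs, from the table above.
MEMORY_TABLE_GB = {
    # flags -> (Cosmos-1.0-Autoregressive-4B, Cosmos-1.0-Autoregressive-12B)
    (): (31.3, 47.5),
    ("--offload_guardrail_models",): (28.9, 45.2),
    ("--offload_guardrail_models", "--offload_diffusion_decoder"): (28.5, 43.1),
    ("--offload_guardrail_models", "--offload_diffusion_decoder",
     "--offload_tokenizer"): (27.3, 42.9),
    ("--offload_guardrail_models", "--offload_diffusion_decoder",
     "--offload_tokenizer", "--offload_ar_model"): (18.7, 27.4),
}

def suggest_offload_flags(gpu_memory_gb, use_12b=False):
    """Return the smallest set of offload flags whose documented peak memory fits."""
    column = 1 if use_12b else 0
    for flags, peaks in MEMORY_TABLE_GB.items():
        if peaks[column] <= gpu_memory_gb:
            return list(flags)
    raise ValueError("Even full offloading exceeds the available GPU memory.")

print(suggest_offload_flags(24.0))                 # 4B model on a 24 GB GPU
print(suggest_offload_flags(48.0, use_12b=True))   # 12B model on a 48 GB GPU
```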
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/autoregressive/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 20314 }
# Cosmos Diffusion-based World Foundation Models ## Table of Contents - [Getting Started](#getting-started) - [Set Up Docker Environment](#set-up-docker-environment) - [Download Checkpoints](#download-checkpoints) - [Usage](#usage) - [Model Types](#model-types) - [Single and Batch Generation](#single-and-batch-generation) - [Sample Commands](#sample-commands) - [Text2World](#text2world-text2worldpy-7b-and-14b) - [Video2World](#video2world-video2worldpy-7b-and-14b) - [Arguments](#arguments) - [Common Parameters](#common-parameters) - [Text2World Specific Parameters](#text2world-specific-parameters) - [Video2World Specific Parameters](#video2world-specific-parameters) - [Safety Features](#safety-features) - [Prompting Instructions](#prompting-instructions) This page details the steps for using the Cosmos diffusion-based world foundation models. ## Getting Started ### Set Up Docker Environment Follow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker. ### Download Checkpoints 1. Generate a [Hugging Face](https://huggingface.co/settings/tokens) access token. Set the access token to 'Read' permission (default is 'Fine-grained'). 2. Log in to Hugging Face with the access token: ```bash huggingface-cli login ``` 3. Request access to Mistral AI's Pixtral-12B model by clicking on `Agree and access repository` on [Pixtral's Hugging Face model page](https://huggingface.co/mistralai/Pixtral-12B-2409). This step is required to use Pixtral 12B for the Video2World prompt upsampling task. 4. Download the Cosmos model weights from [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6): ```bash PYTHONPATH=$(pwd) python cosmos1/scripts/download_diffusion.py --model_sizes 7B 14B --model_types Text2World Video2World ``` 5. The downloaded files should be in the following structure: ``` checkpoints/ ├── Cosmos-1.0-Diffusion-7B-Text2World │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Diffusion-14B-Text2World │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Diffusion-7B-Video2World │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Diffusion-14B-Video2World │ ├── model.pt │ └── config.json ├── Cosmos-1.0-Tokenizer-CV8x8x8 │ ├── decoder.jit │ ├── encoder.jit │ └── mean_std.pt ├── Cosmos-1.0-Prompt-Upsampler-12B-Text2World │ ├── model.pt │ └── config.json ├── Pixtral-12B │ ├── model.pt │ ├── config.json └── Cosmos-1.0-Guardrail ├── aegis/ ├── blocklist/ ├── face_blur_filter/ └── video_content_safety_filter/ ``` ## Usage ### Model Types There are two model types available for diffusion world generation: 1. **Text2World**: Supports world generation from text input * Models: `Cosmos-1.0-Diffusion-7B-Text2World` and `Cosmos-1.0-Diffusion-14B-Text2World` * Inference script: [text2world.py](/cosmos1/models/diffusion/inference/text2world.py) 2. **Video2World**: Supports world generation from text and image/video input * Models: `Cosmos-1.0-Diffusion-7B-Video2World` and `Cosmos-1.0-Diffusion-14B-Video2World` * Inference script: [video2world.py](/cosmos1/models/diffusion/inference/video2world.py) ### Single and Batch Generation We support both single and batch video generation. For generating a single video, `Text2World` mode requires the input argument `--prompt` (text input). `Video2World` mode requires `--input_image_or_video_path` (image/video input). Additionally for Video2World, if the prompt upsampler is disabled, a text prompt must also be provided using the `--prompt` argument. 
For generating a batch of videos, both `Text2World` and `Video2World` require `--batch_input_path` (path to a JSONL file). For `Text2World`, the JSONL file should contain one prompt per line in the following format, where each line must contain a "prompt" field: ```json {"prompt": "prompt1"} {"prompt": "prompt2"} ``` For `Video2World`, each line in the JSONL file must contain a "visual_input" field: ```json {"visual_input": "path/to/video1.mp4"} {"visual_input": "path/to/video2.mp4"} ``` If you disable the prompt upsampler by setting the `--disable_prompt_upsampler` flag, each line in the JSONL file will need to include both "prompt" and "visual_input" fields. ```json {"prompt": "prompt1", "visual_input": "path/to/video1.mp4"} {"prompt": "prompt2", "visual_input": "path/to/video2.mp4"} ``` ### Sample Commands There are two main demo scripts for diffusion world generation: `text2world.py` and `video2world.py`. Below you will find sample commands for single and batch generation, as well as commands for running with low-memory GPUs using model offloading. We also provide a memory usage table comparing different offloading strategies to help with configuration. #### Text2World (text2world.py): 7B and 14B Generates world from text input. ##### Single Generation ```bash PROMPT="A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. \ The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. \ A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, \ suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. \ The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of \ field that keeps the focus on the robot while subtly blurring the background for a cinematic effect." # Example using 7B model PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \ --prompt "$PROMPT" \ --offload_prompt_upsampler \ --video_save_name Cosmos-1.0-Diffusion-7B-Text2World # Example using the 7B model on low-memory GPUs with model offloading. The speed is slower if using batch generation. 
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \ --prompt "$PROMPT" \ --video_save_name Cosmos-1.0-Diffusion-7B-Text2World_memory_efficient \ --offload_tokenizer \ --offload_diffusion_transformer \ --offload_text_encoder_model \ --offload_prompt_upsampler \ --offload_guardrail_models # Example using 14B model with prompt upsampler offloading (required on H100) PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-14B-Text2World \ --prompt "$PROMPT" \ --video_save_name Cosmos-1.0-Diffusion-14B-Text2World \ --offload_prompt_upsampler \ --offload_guardrail_models ``` ##### Batch Generation ```bash # Example using 7B model PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \ --batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/text2world.jsonl \ --video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Text2World \ --offload_prompt_upsampler ``` ##### Example Output Here is an example output video generated using text2world.py, using `Cosmos-1.0-Diffusion-7B-Text2World`: <video src="https://github.com/user-attachments/assets/db7bebfe-5314-40a6-b045-4f6ce0a87f2a"> Your browser does not support the video tag. </video> The upsampled prompt used to generate the video is: ``` In a sprawling, meticulously organized warehouse, a sleek humanoid robot stands sentinel amidst towering shelves brimming with neatly stacked cardboard boxes. The robot's metallic body, adorned with intricate joints and a glowing blue chest light, radiates an aura of advanced technology, its design a harmonious blend of functionality and futuristic elegance. The camera captures this striking figure in a static, wide shot, emphasizing its poised stance against the backdrop of industrial wooden pallets. The lighting is bright and even, casting a warm glow that accentuates the robot's form, while the shallow depth of field subtly blurs the rows of boxes, creating a cinematic depth that draws the viewer into this high-tech realm. The absence of human presence amplifies the robot's solitary vigil, inviting contemplation of its purpose within this vast, organized expanse. ``` If you disable the prompt upsampler by using the `--disable_prompt_upsampler` flag, the output video will be generated using the original prompt: <video src="https://github.com/user-attachments/assets/b373c692-9900-4e73-80c2-4016caa47a82"> Your browser does not support the video tag. </video> The original prompt is: ``` A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of field that keeps the focus on the robot while subtly blurring the background for a cinematic effect. ``` Note that the robot face could be blurred sometimes by the guardrail in this example. 
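For the `Text2World` batch mode shown above, the prompts file is again a JSONL with one `prompt` field per line. A minimal sketch for assembling it; the prompts and output path here are placeholders.

```python
import json

# Placeholder prompts; one JSON object per line as expected by --batch_input_path.
prompts = [
    "A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes.",
    "A first-person driving video along a straight highway on a sunny day.",
]

with open("text2world.jsonl", "w") as f:
    for prompt in prompts:
        f.write(json.dumps({"prompt": prompt}) + "\n")
```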
##### Inference Time and GPU Memory Usage The numbers provided below may vary depending on system specs and are for reference only. We report the maximum observed GPU memory usage during end-to-end inference. Additionally, we offer a series of model offloading strategies to help users manage GPU memory usage effectively. For GPUs with limited memory (e.g., RTX 3090/4090 with 24 GB memory), we recommend fully offloading all models. For higher-end GPUs, users can select the most suitable offloading strategy considering the numbers provided below. | Offloading Strategy | 7B Text2World | 14B Text2World | |-------------|---------|---------| | Offload prompt upsampler | 74.0 GB | > 80.0 GB | | Offload prompt upsampler & guardrails | 57.1 GB | 70.5 GB | | Offload prompt upsampler & guardrails & T5 encoder | 38.5 GB | 51.9 GB | | Offload prompt upsampler & guardrails & T5 encoder & tokenizer | 38.3 GB | 51.7 GB | | Offload prompt upsampler & guardrails & T5 encoder & tokenizer & diffusion model | 24.4 GB | 39.0 GB | The table below presents the end-to-end inference runtime on a single H100 GPU, excluding model initialization time. | 7B Text2World (offload prompt upsampler) | 14B Text2World (offload prompt upsampler, guardrails) | |---------|---------| | ~380 seconds | ~590 seconds | #### Video2World (video2world.py): 7B and 14B Generates world from text and image/video input. ##### Single Generation Note that our prompt upsampler is enabled by default for Video2World, and it will generate the prompt from the input image/video. If the prompt upsampler is disabled, you can provide a prompt manually using the `--prompt` flag. ```bash # Example using the 7B model PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \ --input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \ --num_input_frames 1 \ --video_save_name Cosmos-1.0-Diffusion-7B-Video2World \ --offload_prompt_upsampler # Example using the 7B model on low-memory GPUs with model offloading. The speed is slower if using batch generation. 
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \ --input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \ --num_input_frames 1 \ --video_save_name Cosmos-1.0-Diffusion-7B-Video2World_memory_efficient \ --offload_tokenizer \ --offload_diffusion_transformer \ --offload_text_encoder_model \ --offload_prompt_upsampler \ --offload_guardrail_models # Example using 14B model with prompt upsampler offloading (required on H100) PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-14B-Video2World \ --input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \ --num_input_frames 1 \ --video_save_name Cosmos-1.0-Diffusion-14B-Video2World \ --offload_prompt_upsampler \ --offload_guardrail_models ``` ##### Batch Generation ```bash # Example using 7B model with 9 input frames PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \ --batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/video2world_ps.jsonl \ --video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Video2World \ --offload_prompt_upsampler \ --num_input_frames 9 # Example using 7B model with 9 input frames without prompt upsampler, using 'prompt' field in the JSONL file PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \ --checkpoint_dir checkpoints \ --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \ --batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/video2world_wo_ps.jsonl \ --video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Video2World_wo_ps \ --disable_prompt_upsampler \ --num_input_frames 9 ``` ##### Example Output Here is an example output video generated using video2world.py, using `Cosmos-1.0-Diffusion-14B-Video2World`: <video src="https://github.com/user-attachments/assets/a840a338-5090-4f50-9790-42b7ede86ba6"> Your browser does not support the video tag. </video> The upsampled prompt (generated by the prompt upsampler) used to generate the video is: ``` The video depicts a long, straight highway stretching into the distance, flanked by metal guardrails. The road is divided into multiple lanes, with a few vehicles visible in the far distance. The surrounding landscape features dry, grassy fields on one side and rolling hills on the other. The sky is mostly clear with a few scattered clouds, suggesting a bright, sunny day. ``` ##### Inference Time and GPU Memory Usage The numbers provided below may vary depending on system specs and are for reference only. 
| Offloading Strategy | 7B Video2World | 14B Video2World | |----------------------------------------------------------------------------------|---------|---------| | Offload prompt upsampler | 76.5 GB | > 80.0 GB | | Offload prompt upsampler & guardrails | 59.9 GB | 73.3 GB | | Offload prompt upsampler & guardrails & T5 encoder | 41.3 GB | 54.8 GB | | Offload prompt upsampler & guardrails & T5 encoder & tokenizer | 41.1 GB | 54.5 GB | | Offload prompt upsampler & guardrails & T5 encoder & tokenizer & diffusion model | 27.3 GB | 39.0 GB | The following table shows the end-to-end inference runtime on a single H100 GPU, excluding model initialization time: | 7B Video2World (offload prompt upsampler) | 14B Video2World (offload prompt upsampler, guardrails) | |---------|---------| | ~383 seconds | ~593 seconds | ### Arguments #### Common Parameters | Parameter | Description | Default | |-----------|-------------|---------| | `--checkpoint_dir` | Directory containing model weights | "checkpoints" | | `--tokenizer_dir` | Directory containing tokenizer weights | "Cosmos-1.0-Tokenizer-CV8x8x8" | | `--video_save_name` | Output video filename for single video generation | "output" | | `--video_save_folder` | Output directory for batch video generation | "outputs/" | | `--prompt` | Text prompt for single video generation. Required for single video generation. | None | | `--batch_input_path` | Path to JSONL file for batch video generation. Required for batch video generation. | None | | `--negative_prompt` | Negative prompt for improved quality | "The video captures a series of frames showing ugly scenes..." | | `--num_steps` | Number of diffusion sampling steps | 35 | | `--guidance` | CFG guidance scale | 7.0 | | `--num_video_frames` | Number of frames to generate | 121 | | `--height` | Output video height | 704 | | `--width` | Output video width | 1280 | | `--fps` | Frames per second | 24 | | `--seed` | Random seed | 1 | | `--disable_prompt_upsampler` | Disable automatic prompt enhancement | False | | `--offload_diffusion_transformer` | Offload DiT model after inference, used for low-memory GPUs | False | | `--offload_tokenizer` | Offload VAE model after inference, used for low-memory GPUs | False | | `--offload_text_encoder_model` | Offload text encoder after inference, used for low-memory GPUs | False | | `--offload_prompt_upsampler` | Offload prompt upsampler after inference, used for low-memory GPUs | False | | `--offload_guardrail_models` | Offload guardrail models after inference, used for low-memory GPUs | False | Note: we support various aspect ratios, including 1:1 (960x960 for height and width), 4:3 (960x704), 3:4 (704x960), 16:9 (1280x704), and 9:16 (704x1280). The frame rate is also adjustable within a range of 12 to 40 fps. The current version of the model only supports 121 frames. 
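The resolution, frame-rate, and frame-count constraints in the note above can be checked up front before launching a long generation job. The small sketch below is hypothetical; the allowed values are taken directly from that note.

```python
# Supported (width, height) pairs per aspect ratio, from the note above.
SUPPORTED_RESOLUTIONS = {
    "1:1": (960, 960),
    "4:3": (960, 704),
    "3:4": (704, 960),
    "16:9": (1280, 704),
    "9:16": (704, 1280),
}

def validate_generation_args(width, height, fps, num_video_frames):
    """Raise ValueError if the arguments fall outside the documented constraints."""
    if (width, height) not in SUPPORTED_RESOLUTIONS.values():
        raise ValueError(f"Unsupported resolution {width}x{height}; "
                         f"choose one of {sorted(SUPPORTED_RESOLUTIONS.values())}")
    if not 12 <= fps <= 40:
        raise ValueError("fps must be between 12 and 40")
    if num_video_frames != 121:
        raise ValueError("the current model version only supports 121 frames")

validate_generation_args(width=1280, height=704, fps=24, num_video_frames=121)
```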
#### Text2World Specific Parameters | Parameter | Description | Default | |-----------|-------------|---------| | `--diffusion_transformer_dir` | Directory containing DiT weights | "Cosmos-1.0-Diffusion-7B-Text2World" | | `--prompt_upsampler_dir` | Directory containing prompt upsampler weights | "Cosmos-1.0-Prompt-Upsampler-12B-Text2World" | | `--word_limit_to_skip_upsampler` | Skip prompt upsampler for better robustness if the number of words in the prompt is greater than this value | 250 | #### Video2World Specific Parameters | Parameter | Description | Default | |-----------|-------------|---------| | `--diffusion_transformer_dir` | Directory containing DiT weights | "Cosmos-1.0-Diffusion-7B-Video2World" | | `--prompt_upsampler_dir` | Directory containing prompt upsampler weights | "Pixtral-12B" | | `--input_image_or_video_path` | Input video/image path for single video generation. Required for single video generation. | None | | `--num_input_frames` | Number of video frames (1 or 9) | 1 | ### Safety Features The model uses a built-in safety guardrail system that cannot be disabled. Generating human faces is not allowed and will be blurred by the guardrail. For more information, check out the [Cosmos Guardrail Documentation](../guardrail/README.md). ### Prompting Instructions The input prompt is the most important parameter under the user's control when interacting with the model. Providing rich and descriptive prompts can positively impact the output quality of the model, whereas short and poorly detailed prompts can lead to subpar video generation. Here are some recommendations to keep in mind when crafting text prompts for the model: 1. **Describe a single, captivating scene**: Focus on a single scene to prevent the model from generating videos with unnecessary shot changes. 2. **Limit camera control instructions**: The model doesn't handle prompts involving camera control well, as this feature is still under development. 3. **Prompt upsampler limitations**: The current version of the prompt upsampler may sometimes deviate from the original intent of your prompt, adding unwanted details. If this happens, you can disable the upsampler with the --disable_prompt_upsampler flag and edit your prompt manually. We recommend using prompts of around 120 words for optimal quality. #### Cosmos-1.0-Prompt-Upsampler The prompt upsampler automatically expands brief prompts into more detailed descriptions (Text2World) or generates detailed prompts based on input images (Video2World). ##### Text2World When enabled (default), the upsampler will: 1. Take your input prompt 2. Process it through a finetuned Mistral model to generate a more detailed description 3. Use the expanded description for video generation This can help generate better quality videos by providing more detailed context to the video generation model. To disable this feature, use the `--disable_prompt_upsampler` flag. ##### Video2World When enabled (default), the upsampler will: 1. Take your input image or video 2. Process it through a Pixtral model to generate a detailed description 3. Use the generated description for video generation Please note that the Video2World prompt upsampler does not consider any user-provided text prompt. To disable this feature, use the `--disable_prompt_upsampler` flag.
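To see how a prompt interacts with the `--word_limit_to_skip_upsampler` behaviour described above, a quick word-count check can be run before launching a job. This is only a sketch: the threshold is the documented default of 250, and the 120-word recommendation comes from the prompting tips above.

```python
def check_prompt(prompt, word_limit_to_skip_upsampler=250):
    """Report prompt length relative to the documented upsampler word limit."""
    n_words = len(prompt.split())
    print(f"Prompt has {n_words} words.")
    if n_words > word_limit_to_skip_upsampler:
        print("The Text2World prompt upsampler would be skipped for this prompt.")
    elif n_words < 120:
        print("Consider adding detail; prompts of around 120 words are recommended.")

check_prompt("A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked boxes.")
```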
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/diffusion/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 21295 }
# Cosmos Guardrail This page outlines a set of tools to ensure content safety in Cosmos. For implementation details, please consult the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai). ## Overview Our guardrail system consists of two stages: pre-Guard and post-Guard. Cosmos pre-Guard models are applied to text input, including input prompts and upsampled prompts. * Blocklist: a keyword list checker for detecting harmful keywords * Aegis: an LLM-based approach for blocking harmful prompts Cosmos post-Guard models are applied to video frames generated by Cosmos models. * Video Content Safety Filter: a classifier trained to distinguish between safe and unsafe video frames * Face Blur Filter: a face detection and blurring module ## Usage Cosmos Guardrail models are integrated into the diffusion and autoregressive world generation pipelines in this repo. Check out the [Cosmos Diffusion Documentation](../diffusion/README.md) and [Cosmos Autoregressive Documentation](../autoregressive/README.md) to download the Cosmos Guardrail checkpoints and run the end-to-end demo scripts with our Guardrail models.
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/guardrail/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/guardrail/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 1187 }
<!-- # SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved. # SPDX-License-Identifier: Apache-2.0 # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. --> # Cosmos Tokenizer: A suite of image and video neural tokenizers. ### [Website](https://research.nvidia.com/labs/dir/cosmos-tokenizer) | [Paper](https://arxiv.org/abs/2501.03575) | [NVIDIA Cosmos](https://www.nvidia.com/en-us/ai/cosmos/) | [NVIDIA Blog](https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/) | [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6) | [YouTube](https://youtu.be/Soy_myOfWIU) | [TokenBench](https://github.com/NVlabs/TokenBench) We present [**NVIDIA Cosmos Tokenizer**](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer), a suite of image and video tokenizers that advances the state-of-the-art in visual tokenization, paving the way for scalable, robust and efficient development of large auto-regressive transformers (such as LLMs) or diffusion generators. Cosmos Tokenizer is the core component of the [**NVIDIA Cosmos**](https://github.com/NVIDIA/Cosmos), a developer-first video foundation model platform designed to help Physical AI developers build their Physical AI systems better and faster. Please check out our [demo video](https://youtu.be/Soy_myOfWIU). | | Continuous ( C ) | Discrete ( D ) | | ------------------|---------------------|---------------------| | **Images ( I )** | Cosmos-Tokenizer-CI | Cosmos-Tokenizer-DI | | **Videos ( V )** | Cosmos-Tokenizer-CV | Cosmos-Tokenizer-DV | <video src="https://github.com/user-attachments/assets/a40b0cc0-17dc-42e9-a97c-fe1c8bb03548" controls poster="https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/assets/cosmos-tokenizer.jpg?raw=true"> Your browser does not support the video tag. </video> Given an image or video, Cosmos Tokenizer outputs either continuous latents or discrete tokens. Cosmos Tokenizer achieves spatial compression rates of 8x or 16x and temporal compression factors of 4x or 8x, resulting in a total compression factor of up to 2048x (=8x16x16). Cosmos Tokenizer delivers 8x more total compression than state-of-the-art (SOTA) methods, while simultaneously maintaining higher image quality and running up to 12x faster than the best available SOTA tokenizers. 
![Arch](assets/arch_diagram.jpg) ## Web Demo * Image Tokenization [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nvidia/Cosmos/blob/main/cosmos1/models/tokenizer/notebook/Image_Tokenization.ipynb) * Video Tokenization [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nvidia/Cosmos/blob/main/cosmos1/models/tokenizer/notebook/Video_Tokenization.ipynb) ## Licenses - **Models**: The models are licensed under [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf). Under the NVIDIA Open Model License, NVIDIA confirms: - Models are commercially usable. - You are free to create and distribute Derivative Models. - NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models. - **GitHub Code**: This repository is licensed under the [Apache 2.0 license](https://github.com/NVIDIA/Cosmos/blob/main/LICENSE). ## Installation Follow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker. ## Download Pre-trained Checkpoints from Hugging Face We host 12 Cosmos-Tokenizer models on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6), with the following model names. You can use this snippet to download: ```python from huggingface_hub import login, snapshot_download import os login(token="<YOUR-HF-TOKEN>", add_to_git_credential=True) model_names = [ "Cosmos-0.1-Tokenizer-CI8x8", "Cosmos-0.1-Tokenizer-CI16x16", "Cosmos-0.1-Tokenizer-CV4x8x8", "Cosmos-0.1-Tokenizer-CV8x8x8", "Cosmos-0.1-Tokenizer-CV8x16x16", "Cosmos-0.1-Tokenizer-DI8x8", "Cosmos-0.1-Tokenizer-DI16x16", "Cosmos-0.1-Tokenizer-DV4x8x8", "Cosmos-0.1-Tokenizer-DV8x8x8", "Cosmos-0.1-Tokenizer-DV8x16x16", "Cosmos-1.0-Tokenizer-CV8x8x8", "Cosmos-1.0-Tokenizer-DV8x16x16", ] for model_name in model_names: hf_repo = "nvidia/" + model_name local_dir = "checkpoints/" + model_name print(f"downloading {model_name}...") snapshot_download(repo_id=hf_repo, local_dir=local_dir) ``` Under the checkpoint repository `checkpoints/{model_name}`, we provide the encoder, decoder and the full autoencoder JIT models. ```bash ├── Cosmos-1.0-Tokenizer-CV8x8x8/ │ ├── encoder.jit │ ├── decoder.jit │ ├── autoencoder.jit ``` ## Running the codes You can use the following example commands to encode and decode images or videos. <br /> For each, the same command works for both continuous and discrete tokenization. Simply provide the proper JIT-compiled ckpt to `checkpoint_enc`, `checkpoint_dec`, or the full autoencoder ckpt to `checkpoint`. 
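After running the download snippet above, you may want to confirm that the expected JIT files are actually in place before moving on to the examples below. A minimal sketch; the file names follow the documented checkpoint layout, and `autoencoder.jit` may not be present for every model.

```python
import os

# Per-model JIT artifacts from the layout shown above.
EXPECTED_FILES = ["encoder.jit", "decoder.jit", "autoencoder.jit"]

model_name = "Cosmos-1.0-Tokenizer-CV8x8x8"
checkpoint_dir = os.path.join("checkpoints", model_name)

for filename in EXPECTED_FILES:
    path = os.path.join(checkpoint_dir, filename)
    status = "ok" if os.path.isfile(path) else "MISSING"
    print(f"{path}: {status}")
```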
### Encoding into Continuous Latent Space ```python import torch from cosmos1.models.tokenizer.inference.video_lib import CausalVideoTokenizer model_name = "Cosmos-0.1-Tokenizer-CV4x8x8" input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) # [B, C, T, H, W] encoder = CausalVideoTokenizer(checkpoint_enc=f'checkpoints/{model_name}/encoder.jit') (latent,) = encoder.encode(input_tensor) torch.testing.assert_close(latent.shape, (1, 16, 3, 64, 64)) # The input tensor can be reconstructed by the decoder as: decoder = CausalVideoTokenizer(checkpoint_dec=f'checkpoints/{model_name}/decoder.jit') reconstructed_tensor = decoder.decode(latent) torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape) ``` The `latent` will have the shape `(1, 16, 3, 64, 64)`, where the first of the three latents represents the first frame, and C=16 is the number of channels of the latent. ### Encoding into Discrete Tokens ```python import torch from cosmos1.models.tokenizer.inference.video_lib import CausalVideoTokenizer model_name = "Cosmos-0.1-Tokenizer-DV4x8x8" input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) # [B, C, T, H, W] encoder = CausalVideoTokenizer(checkpoint_enc=f'checkpoints/{model_name}/encoder.jit') (indices, codes) = encoder.encode(input_tensor) torch.testing.assert_close(indices.shape, (1, 3, 64, 64)) torch.testing.assert_close(codes.shape, (1, 6, 3, 64, 64)) # The input tensor can be reconstructed by the decoder as: decoder = CausalVideoTokenizer(checkpoint_dec=f'checkpoints/{model_name}/decoder.jit') reconstructed_tensor = decoder.decode(indices) torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape) ``` The `indices` will have the shape `(1, 3, 64, 64)` and contain integral values in the range `[1..64K]`, where the first of the three integral maps represents the first frame. The `codes` will contain the pre-quantization continuous latent with shape `(1, 6, 3, 64, 64)`, where C=6 represents the number of FSQ levels. ## Torchscript (PyTorch JIT) Inference APIs The following instructions run the various tokenizer on the example image and video provided in `cosmos1/models/tokenizer/test_data/`. - Autoencoding images. Accepts an input image, and outputs a reconstruction of the image obtained by decoding the encoded latents. ```bash # Autoencoding images using `Cosmos-CI` with a compression rate of 8x8. model_name="Cosmos-0.1-Tokenizer-CI8x8" python3 -m cosmos1.models.tokenizer.inference.image_cli \ --image_pattern 'cosmos1/models/tokenizer/test_data/image.png' \ --checkpoint_enc checkpoints/${model_name}/encoder.jit \ --checkpoint_dec checkpoints/${model_name}/decoder.jit ``` If `--output_dir` is not specified, you can find the reconstructed image at `cosmos1/models/tokenizer/test_data/reconstructions/image.png`. - Autoencoding videos. Accepts an input video, and outputs a reconstruction of the video obtained by decoding the encoded latents. ```bash # Autoencoding videos using `Cosmos-DV` with a compression rate of 4x8x8. model_name="Cosmos-0.1-Tokenizer-DV4x8x8" python3 -m cosmos1.models.tokenizer.inference.video_cli \ --video_pattern 'cosmos1/models/tokenizer/test_data/video.mp4' \ --checkpoint_enc checkpoints/${model_name}/encoder.jit \ --checkpoint_dec checkpoints/${model_name}/decoder.jit ``` If `--output_dir` is not specified, then you can find the reconstructed video at `cosmos1/models/tokenizer/test_data/reconstructions/video.mp4`. 
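Returning briefly to the discrete tokenizer above: the snippet below works out the token budget implied by the `(1, 3, 64, 64)` index map and shows how a 6-level FSQ quantizer reaches a roughly 64K vocabulary. The specific level choice `(8, 8, 8, 5, 5, 5)` is only an illustrative assumption (it is a common configuration that yields exactly 64,000 codes); the README itself guarantees only 6 levels and a `[1..64K]` index range.

```python
import math

# Discrete tokens produced for the 9-frame, 512x512 input with DV4x8x8:
t_latent, h_latent, w_latent = 3, 64, 64
tokens_per_clip = t_latent * h_latent * w_latent
print(tokens_per_clip)  # 12288 integer tokens per clip feed the autoregressive model

# FSQ: each of the 6 pre-quantization channels is snapped to a small set of levels;
# the codebook size is the product of the per-channel level counts.
levels = (8, 8, 8, 5, 5, 5)  # illustrative choice
vocab_size = math.prod(levels)
print(vocab_size)  # 64000, consistent with the [1..64K] index range
```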
## PyTorch Inference APIs

To run the tokenizers in native PyTorch, append your commands with `--mode=torch`. <br />
In PyTorch mode, the model is constructed from the native network definition scripts, which requires providing additional arguments to configure the model for instantiation.

For example, to instantiate a `Cosmos-DI` with a spatial compression factor of 8, append the following command line arguments:

- `--mode=torch`
- `--tokenizer_type=DI`
- `--spatial_compression=8`

Note that the `--checkpoint_enc`, `--checkpoint_dec`, and `--checkpoint` should still refer to JIT files. <br />
The necessary `state_dict`s will be extracted from the loaded JIT models to initialize the weights of the constructed native PyTorch model.

```bash
# Autoencoding images using `Cosmos-DI` with a compression rate of 8x8.
model_name="Cosmos-0.1-Tokenizer-DI8x8"
python3 -m cosmos1.models.tokenizer.inference.image_cli \
    --image_pattern 'cosmos1/models/tokenizer/test_data/*.png' \
    --mode=torch \
    --tokenizer_type=DI \
    --spatial_compression=8 \
    --checkpoint_enc checkpoints/${model_name}/encoder.jit \
    --checkpoint_dec checkpoints/${model_name}/decoder.jit
```

To instantiate a `Cosmos-CV` with a temporal compression factor of 8 and a spatial compression factor of 8, append the following command line arguments:

- `--mode=torch`
- `--tokenizer_type=CV`
- `--temporal_compression=8`
- `--spatial_compression=8`

```bash
# Autoencoding videos using `Cosmos-CV` with a compression rate of 8x8x8.
model_name="Cosmos-1.0-Tokenizer-CV8x8x8"
python3 -m cosmos1.models.tokenizer.inference.video_cli \
    --video_pattern 'cosmos1/models/tokenizer/test_data/*.mp4' \
    --mode=torch \
    --tokenizer_type=CV \
    --temporal_compression=8 \
    --spatial_compression=8 \
    --checkpoint_enc checkpoints/${model_name}/encoder.jit \
    --checkpoint_dec checkpoints/${model_name}/decoder.jit
```

## Inference & dataset tokenization with NeMo (JIT/TensorRT)

TensorRT inference is coming soon and will be available in the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers).

### JIT inference

Please install NeMo from the GitHub `main` branch following the instructions [here](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#pip-from-a-source-branch).

Run the following code to tokenize the video:

```python
import torch
from nemo.collections.common.video_tokenizers.cosmos_vision_tokenizer import CausalVideoTokenizer

model_name = "Cosmos-0.1-Tokenizer-CV4x8x8"
model = CausalVideoTokenizer.from_pretrained(model_name)

input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16)
(latent, ) = model.encode(input_tensor)
```

### Dataset tokenization and multimodal model training

Please see the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers) for additional examples to create multimodal training datasets with the Cosmos Tokenizer.

## Evaluation

Quantitative comparison of our tokenizer and previous tokenizers on the DAVIS (Perazzi et al., 2016) dataset. Cosmos Tokenizer achieves state-of-the-art results. Even at higher compression rates (8x8x8 and 8x16x16), Cosmos Tokenizer outperforms previous methods, demonstrating an excellent compression-quality trade-off.

![Arch](assets/Davis-results.jpg)

## Performance

Comparison of parameter counts and average encoding and decoding times per image or per video frame on a single A100 80GB GPU.
Cosmos Tokenizer achieves 2x to 12x faster speeds than previous methods while maintaining the smallest model sizes, demonstrating high tokenization efficiency.

![Arch](assets/Performance.jpg)

## [TokenBench](https://github.com/NVlabs/TokenBench)

TokenBench is a comprehensive benchmark that we have curated to standardize the evaluation of [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer). It covers a wide variety of domains, including robotic manipulation, driving, egocentric, and web videos. It consists of high-resolution, long-duration videos, and is designed to benchmark video tokenizers. We have made TokenBench publicly available at [github.com/NVlabs/TokenBench](https://github.com/NVlabs/TokenBench).

## Core Contributors

Fitsum Reda, Jinwei Gu, Xian Liu, Songwei Ge, Ting-Chun Wang, Haoxiang Wang, Ming-Yu Liu

## Citation

If you find Cosmos Tokenizer useful in your work, please acknowledge it appropriately by citing:

```
@article{agarwal2025cosmos,
  title={Cosmos World Foundation Model Platform for Physical AI},
  author={NVIDIA et al.},
  journal={arXiv preprint arXiv:2501.03575},
  year={2025}
}
```

## Acknowledgments

We would like to acknowledge the following projects, from which parts of the code in the [cosmos1/models/tokenizer/modules](cosmos1/models/tokenizer/modules) folder are derived:
- [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
- [lucidrains/magvit2-pytorch](https://github.com/lucidrains/magvit2-pytorch)
- [lucidrains/vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch)
- [CompVis/taming-transformers](https://github.com/CompVis/taming-transformers)
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/tokenizer/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 14604 }
# Cosmos Tokenizer: NeMo Framework Finetuning User Guide

Post-train the Cosmos Tokenizer using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) to more accurately model previously unseen scenarios in your customer data, particularly for self-driving applications. By adapting the Cosmos Tokenizer to the specific characteristics and complexities of your in-house video content, you equip it to handle unique visual and temporal patterns that may have been missed during its initial pre-training. This enhanced modeling capability is essential for downstream diffusion models, which rely on the Tokenizer's output to generate realistic physical scenes, ultimately boosting the performance and safety of your self-driving car systems.

## Model Support Matrix

The NeMo Framework currently supports the following Cosmos Tokenizer models. Review the available models for post-training.

| Model Name | Model Ckpt |
|-------------------------|----------------------------|
| Cosmos-1.0-Tokenizer-CV8x8x8 | [HF Download](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8) |
| Cosmos-1.0-Tokenizer-DV8x16x16 | [HF Download](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-DV8x16x16) |

For optimal performance, we recommend utilizing GPUs such as the H100-80GB or A100-80GB.

Note: Have a use case that would benefit from an alternative tokenizer? We'd love to hear from you. You can submit a request via a GitHub issue.

## Post-Training Support Matrix

Cosmos Tokenizer can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:

| Post-training Task | Support Status |
|-------------------------|--------------------|
| General post-training and validation | **Supported** |

## Prerequisites

### 1. Review General Requirements

- System Configuration
  - **NVIDIA GPU and driver**: Ensure you have access to an 80GB H100 or A100 to run the model(s).
  - **Containerization Platform**: We recommend using NVIDIA [NeMo Docker](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo/tags) Runtime (alternatively, you may use NVIDIA enroot).
- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.
- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.

### 2. Clone the Cosmos Repository

```bash
git clone git@github.com:NVIDIA/Cosmos.git
```

### 3. Start the Container

The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Tokenizer models.

Run the following command to download and start the container:

```bash
docker run --ipc=host -it --gpus=all \
    -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \
    nvcr.io/nvidia/nemo:24.12.01 bash
```

### 4. Download Checkpoints

Follow the links provided in the Model Support Matrix to download the Cosmos Tokenizer checkpoints from Hugging Face. Detailed instructions for the download process are available on the Hugging Face page.

## Post-train

Post-training a Cosmos Tokenizer enables you to train the model to compress videos that are more specific to your Physical AI use case.

There are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.

### 1. Prepare a Dataset

The first step is to prepare your dataset.
Organize your data into a folder containing multiple video tars, each containing MP4-format videos (preferably at least 720p resolution). The recommended folder structure is as follows:

- `000000.tar`
  - `1.mp4`
  - `2.mp4`
- `000001.tar`
  - `3.mp4`
  - `4.mp4`

Here, `000000.tar` and `000001.tar` represent separate shards, and you may include additional shards as needed.

### 2. Preprocess the Data

The second step is to index the webdataset with [energon](https://github.com/NVIDIA/Megatron-Energon). Navigate to the dataset directory and run the following command:

```bash
energon prepare . --num-workers 8 --shuffle-tars
```

Interactively select the dataset type `ImageWebdataset` and specify the type `mp4`. Below is an example of the interactive setup:

```
Found 2925 tar files in total. The first and last ones are:
- 000000.tar
- 002924.tar
If you want to exclude some of them, cancel with ctrl+c and specify an exclude filter in the command line.
Please enter a desired train/val/test split like "0.5, 0.2, 0.3" or "8,1,1": 99,1,0
Indexing shards  [####################################]  2925/2925
Sample 0, keys:
 - mp4
Sample 1, keys:
 - mp4
Found the following part types in the dataset: mp4
Do you want to create a dataset.yaml interactively? [Y/n]:
The following dataset classes are available:
0. CaptioningWebdataset
1. CrudeWebdataset
2. ImageClassificationWebdataset
3. ImageWebdataset
4. InterleavedWebdataset
5. MultiChoiceVQAWebdataset
6. OCRWebdataset
7. SimilarityInterleavedWebdataset
8. TextWebdataset
9. VQAOCRWebdataset
10. VQAWebdataset
11. VidQAWebdataset
Please enter a number to choose a class: 3
The dataset you selected uses the following sample type:

@dataclass
class ImageSample(Sample):
    """Sample type for an image, e.g. for image reconstruction."""

    #: The input image tensor in the shape (C, H, W)
    image: torch.Tensor

Do you want to set a simple field_map[Y] (or write your own sample_loader [n])? [Y/n]:

For each field, please specify the corresponding name in the WebDataset.
Available types in WebDataset: mp4
Leave empty for skipping optional field
You may also access json fields e.g. by setting the field to: json[field][field]
You may also specify alternative fields e.g. by setting to: jpg,png
Please enter the field_map for ImageWebdataset:
Please enter a webdataset field name for 'image' (<class 'torch.Tensor'>):
That type doesn't exist in the WebDataset. Please try again.
Please enter a webdataset field name for 'image' (<class 'torch.Tensor'>): mp4
Done
```

### 3. Post-train the Model

The third step is to post-train the Cosmos tokenizer using the NeMo Framework.

#### Run the Post-training Script

Complete the following steps to post-train the Cosmos-1.0-Tokenizer-CV8x8x8 tokenizer.

1. Install the dependencies under `cosmos1/models/tokenizer/nemo`:

```bash
pip install megatron-energon==4.0.0 pyav
pip install git+https://github.com/NVIDIA/NeMo-Run.git
pip install moviepy==1.0.3 imageio

# switch to the NeMo branch supporting tokenizer post-training
cd /opt/NeMo && git fetch origin cosmos_tokenizer && git checkout cosmos_tokenizer
```

2. Run the following command to post-train Cosmos-1.0-Tokenizer-CV8x8x8:

```bash
export CKPT_PTH="<path/to/your/HF/checkpoints/folder>"
export DATA="<path/to/your/data>"

# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-tokenizer-nemo-post-training"
export WANDB_RUN_ID="cosmos_tokenizer_cv8x8x8_post_training"

torchrun --nproc-per-node 8 cosmos1/models/tokenizer/nemo/train_tokenizer.py --yes \
  data.path=$DATA \
  model.jit_ckpt_pth=$CKPT_PTH \
  model.model="Cosmos-1.0-Tokenizer-CV8x8x8"
```

##### Configurable Hyperparameters

For a comprehensive list of configurable hyperparameters, please refer to the `train_tokenizer.py` script. The script supports four major configuration components:

1. **model**: Select a model for post-training and pass the model checkpoint.
2. **data**: Define the batch size and dataloader-related hyperparameters.
3. **trainer**: Define the training loop.
4. **optim**: Specify the post-training optimizer hyperparameters.

You can configure any hyperparameter of these four components by setting the value in the launch script using the following format:

```bash
model.jit_ckpt_pth=<your/desired/path> trainer.max_epochs=<your/desired/epochs>
```

Adjust the values as needed to suit your training requirements. After a few hundred iterations, you should observe that the `loss` reported in Weights & Biases (`wandb`) starts decreasing.

<p align="center">
  <img src="./assets/loss.png" alt="Training loss curve" width="50%">
</p>
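Putting the override syntax together with the launch command above, a run with a longer schedule might look like the following; only keys that appear elsewhere in this guide are used, and the value of `trainer.max_epochs` is purely illustrative:

```bash
# Post-train CV8x8x8 with an explicit epoch budget (example values only).
torchrun --nproc-per-node 8 cosmos1/models/tokenizer/nemo/train_tokenizer.py --yes \
  data.path=$DATA \
  model.jit_ckpt_pth=$CKPT_PTH \
  model.model="Cosmos-1.0-Tokenizer-CV8x8x8" \
  trainer.max_epochs=10
```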
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/tokenizer/nemo/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/nemo/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 8312 }
# Cosmos Autoregressive-based World Foundation Models: NeMo Framework User Guide Learn how to [run inference](#run-inference) with Cosmos Autoregressive-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide. ## Model Support Matrix The NeMo Framework supports the following Cosmos Autoregressive (AR) models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case. | Model Name | Model Status | Compute Requirements for Inference | Multi-GPU Support | |----------------------------------------------|------------------|------------------------------------------|---------| | Cosmos-1.0-Autoregressive-4B | **Supported** | 1 NVIDIA GPU* | **Coming Soon** | | Cosmos-1.0-Autoregressive-12B | **Supported** | 1 NVIDIA GPU* | **Coming Soon** | | Cosmos-1.0-Autoregressive-5B-Video2World | **Supported** | 1 NVIDIA GPU* | **Coming Soon** | | Cosmos-1.0-Autoregressive-13B-Video2World | **Supported** | 1 NVIDIA GPU* | **Coming Soon** | **\*** `H100-80GB` or `A100-80GB` GPUs are recommended. ## Post-Training Inference Support Matrix Cosmos Autoregressive-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks: | Post-training Task | Inference Support Status | |-------------------------|--------------------| | General post-training | **Supported** | | Instruction control | **Supported** | | Action control | **Coming Soon** | | Camera control | **Coming Soon** | | Multi-view generation | **Coming Soon** | | Multi-view generation with vehicle trajectory control | **Coming Soon** | ## Prerequisites ### 1. Review General Requirements - System Configuration - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix. - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot). - Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference. - Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking. ### 2. Clone the Cosmos Repository ```bash git clone git@github.com:NVIDIA/Cosmos.git ``` ### 3. Start the Container The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos AR models. Run the following command to download and start the container: ```bash docker run --ipc=host -it --gpus=all \ -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \ nvcr.io/nvidia/nemo:25.02.rc1 bash ``` ### 4. Download Checkpoints To help you get started, we've provided a [download script](../download_autoregressive_nemo.py) to get the Cosmos Autoregressive checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework. 1. Set the following environment variables: ```bash # You must set HF_HOME before running this script. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" ``` 2. 
Run the following command to download the models: ```bash cd /workspace/Cosmos python cosmos1/models/autoregressive/nemo/download_autoregressive_nemo.py ``` ## Run Inference Running inference with Cosmos AR models lets you predict video frames and generate a new video that continues the scene from a given input video. In this guide, we'll use this [example inference script](./general.py) to tokenize the input video into a sequence of tokens, which serve as prompts for the model. The model then generates new tokens representing the next set of frames. Finally, the new tokens are decoded back into video format. Only the last 9 frames of the input video are used to generate the next 24 frames. ### Run the Inference Script with Base Models #### 4B and 12B Models Complete the following steps to run inference on the 4B model. 1. Set the following environment variables: ```bash # Install required packages pip install --no-cache-dir imageio[ffmpeg] pyav iopath better_profanity peft git+https://github.com/NVlabs/Pytorch_Retinaface.git@b843f45 export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Path to the the mp4 file (In git-lfs) export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4 ``` 2. Run the following command: ```bash cd /workspace/Cosmos git lfs pull $INPUT_DATA NVTE_FLASH_ATTN=1 \ NVTE_FUSED_ATTN=0 \ NVTE_UNFUSED_ATTN=0 \ torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \ --input_image_or_video_path $INPUT_DATA \ --video_save_name "Cosmos-1.0-Autoregressive-4B.mp4" \ --ar_model_dir nvidia/Cosmos-1.0-Autoregressive-4B ``` #### 5B and 13B Models Complete the following steps to run inference on the 5B model. 1. Set the following environment variables: ```bash # Install required packages pip install --no-cache-dir imageio[ffmpeg] pyav iopath better_profanity peft git+https://github.com/NVlabs/Pytorch_Retinaface.git@b843f45 export HF_TOKEN=<YOUR HF TOKEN> export HF_HOME="<path/to/store/checkpoints>" # Path to the the mp4 file (In git-lfs) export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4 ``` 2. Run the following command: ```bash cd /workspace/Cosmos git lfs pull $INPUT_DATA NVTE_FLASH_ATTN=1 \ NVTE_FUSED_ATTN=0 \ NVTE_UNFUSED_ATTN=0 \ python3 cosmos1/models/autoregressive/nemo/inference/video2world.py \ --input_type video \ --input_image_or_video_path 'cosmos1/models/autoregressive/assets/v1p0/input.mp4' \ --prompt "A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \ --disable_diffusion_decoder \ --ar_model_dir nvidia/Cosmos-1.0-Autoregressive-5B-Video2World ``` ### Run the Inference Script with Post-trained Models You must [create a post-trained model](../post_training/README.md) before completing this section. #### 4B and 12B Models Complete the following steps to generate a new output video using a post-trained Base model. 1. Set the following environment variables: ```bash export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Inference with post-trained model. # NOTE: Dont use the checkpoint with -last suffix. export NEMO_CHECKPOINT=./logs/default/checkpoints/epoch\=0-step\=9 # Path to the the mp4 file (In git-lfs) export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4 ``` 2. 
Run the following command: ```bash cd /workspace/Cosmos git lfs pull $INPUT_DATA # change --ar_model_dir to a post-trained checkpoint under ./logs/default/checkpoints/ NVTE_FLASH_ATTN=1 \ NVTE_FUSED_ATTN=0 \ NVTE_UNFUSED_ATTN=0 \ torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \ --input_image_or_video_path $INPUT_DATA \ --video_save_name "Cosmos-1.0-Autoregressive-4B.mp4" \ --ar_model_dir "$NEMO_CHECKPOINT" ``` #### 5B and 13B Models Complete the following steps to generate a new output video using a post-trained Video2World model. 1. Set the following environment variables: ```bash export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Inference with post-trained model. # NOTE: Dont use the checkpoint with -last suffix. export NEMO_CHECKPOINT=./logs/default/checkpoints/epoch\=2-step\=9-last # Path to the the mp4 file (In git-lfs) export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4 ``` 2. Run the following command: ```bash cd /workspace/Cosmos git lfs pull $INPUT_DATA # change --ar_model_dir to a post-trained checkpoint under ./logs/default/checkpoints/ NVTE_FLASH_ATTN=1 \ NVTE_FUSED_ATTN=0 \ NVTE_UNFUSED_ATTN=0 \ python3 cosmos1/models/autoregressive/nemo/inference/video2world.py \ --input_image_or_video_path $INPUT_DATA \ --video_save_name "Cosmos-1.0-Autoregressive-5B-Video2World.mp4" \ --prompt "A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \ --ar_model_dir "$NEMO_CHECKPOINT" ``` #### Example Output The following output is an example video generated from the post-trained model using [`general.py`](./general.py): <video src="https://github.com/user-attachments/assets/e744a5a4-2ce0-4de3-9497-7152b25c9022"> Your browser doesn't support the video tag. </video> Generated videos are saved at the location configured in the `--video_save_name` parameter. The input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`. > **Disclaimer**: > The post-training example in this documentation is a demonstration of general post-training and not a guaranteed recipe for success. Post-training outcomes depend heavily on the quality and diversity of the dataset. To achieve good results, ensure your dataset is clean, well-structured, diverse, and properly labeled. Poorly prepared data can lead to issues like overfitting, bias, or poor performance. Carefully curate your dataset to reflect the desired use case for reliable results. ### Configuration Options The following table details the parameters that can be modified for accelerated inference with NeMo. 
You can adjust these parameters to optimize performance based on your specific requirements.

| Parameter | Description | Default |
|--------------------------------|---------------------------------------------------------------------------------|---------|
| `--input_type` | The input type (image or video) | `video` |
| `--input_image_or_video_path` | Path to the input video to run inference on | `cosmos1/models/autoregressive/assets/v1p0/input.mp4` |
| `--video_save_name` | Path to the generated video | `./nemo_generated_video.mp4` |
| `--ar_model_dir` | Model name or path to the model, e.g. `nvidia/Cosmos-1.0-Autoregressive-4B` or `nvidia/Cosmos-1.0-Autoregressive-12B` | `nvidia/Cosmos-1.0-Autoregressive-4B` |
| `--encoder_path` | Path to the encoder | `nvidia/Cosmos-1.0-Tokenizer-DV8x16x16` |
| `--decoder_path` | Path to the decoder | `nvidia/Cosmos-1.0-Tokenizer-DV8x16x16` |
| `--guardrail_dir` | Path to the guardrails | `nvidia/Cosmos-1.0-Guardrail` |
| `--top_p` | Top-p sampling parameter | `0.9` |
| `--temperature` | Sampling temperature | `1` |
| `--disable_diffusion_decoder` | Disables running the diffusion decoder on the generated result | `False` |
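As a worked example of the table above, the command below combines several of these flags on the 12B base model; the sampling values simply restate the defaults and are shown only to illustrate the syntax:

```bash
# Base-model inference with explicit sampling controls and a custom output path.
cd /workspace/Cosmos
git lfs pull $INPUT_DATA

NVTE_FLASH_ATTN=1 \
NVTE_FUSED_ATTN=0 \
NVTE_UNFUSED_ATTN=0 \
torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \
  --input_type video \
  --input_image_or_video_path $INPUT_DATA \
  --video_save_name "./nemo_generated_video.mp4" \
  --ar_model_dir nvidia/Cosmos-1.0-Autoregressive-12B \
  --top_p 0.9 \
  --temperature 1.0
```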
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/autoregressive/nemo/inference/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/inference/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 11866 }
# Cosmos Autoregressive-based World Foundation Models: NeMo Framework User Guide Learn how to [post-train](#post-train) Cosmos Autoregressive-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide. ## Model Support Matrix The NeMo Framework supports the following Cosmos Autoregressive (AR) models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case. | Model Name | Model Status | Compute Requirements for Post-Training | |-------------------------|----------------------------|-------------------------------------------| | Cosmos-1.0-Autoregressive-4B | **Supported** | 2 NVIDIA GPUs* | | Cosmos-1.0-Autoregressive-12B | **Supported** | 8 NVIDIA GPUs* | | Cosmos-1.0-Autoregressive-5B-Video2World | **Supported** | 2 NVIDIA GPUs* | | Cosmos-1.0-Autoregressive-13B-Video2World | **Supported** | 8 NVIDIA GPUs* | **\*** `H100-80GB` or `A100-80GB` GPUs are recommended. ## Post-Training Support Matrix Cosmos Autoregressive-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks: | Post-training Task | Support Status | |-------------------------|--------------------| | General post-training | **Supported** | | Instruction control | **Coming Soon** | | Action control | **Coming Soon** | | Camera control | **Coming Soon** | | Multi-view generation | **Coming Soon** | | Multi-view generation with vehicle trajectory control | **Coming Soon** | ## Prerequisites ### 1. Review General Requirements - System Configuration - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix. - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot). - Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference. - Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking. ### 2. Clone the Cosmos Repository ```bash git clone git@github.com:NVIDIA/Cosmos.git ``` ### 3. Start the Container The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos AR models. Run the following command to download and start the container: ```bash docker run --ipc=host -it --gpus=all \ -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \ nvcr.io/nvidia/nemo:25.02.rc1 bash ``` ### 4. Download Checkpoints To help you get started, we've provided a [download script](../download_autoregressive_nemo.py) to get the Cosmos Autoregressive checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework. 1. Set the following environment variables: ```bash # You must set HF_HOME before running this script. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" ``` 2. 
Run the following command to download the models:

```bash
cd /workspace/Cosmos
python cosmos1/models/autoregressive/nemo/download_autoregressive_nemo.py
```

## Post-train

Post-training a Cosmos Autoregressive-based WFM enables you to train the model to generate videos using frame predictions that are more specific to your Physical AI use case. For example, if you want to generate action sequences for a specific robot, you can post-train the model to generate videos that are more aligned with typical actions/outcomes for that robot.

There are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.

### 1. Prepare a Dataset

The first step is to prepare a dataset. Post-training a Cosmos-1.0-Autoregressive model enables you to get better video-frame predictions for your specific use case.

You must provide a folder containing a collection of videos in **MP4 format**, preferably 720p. In this guide, we'll use the sample videos located in the `cosmos1/models/autoregressive/assets/v1p0/batch_inputs` directory.

### 2. Preprocess Data

#### 4B and 12B Models

The second step is to preprocess the data to create an [indexed dataset](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/datasets). The `IndexedDataset` class is the lowest-level data interface in Megatron Core and creates a `.bin` and `.idx` file.

Before proceeding, ensure all videos are in **RGB format**. Complete the following steps to preprocess the data.

1. Set the following environment variables:

```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"

# Path to Raw mp4 videos.
export RAW_DATA="cosmos1/models/autoregressive/assets/v1p0/batch_inputs"

# Path to Processed Dataset.
export OUTPUT_PREFIX="./indexed_videos"
```

2. Run the following command to preprocess the data:

```bash
cd /workspace/Cosmos
git lfs pull --include=$RAW_DATA

python cosmos1/models/autoregressive/nemo/post_training/prepare_dataset.py \
  --input_videos_dir $RAW_DATA \
  --output_prefix $OUTPUT_PREFIX
```

Executing the [data preprocessing script](./prepare_dataset.py) for the base model generates the following files for each video:

- **`[i].idx` File**: This file contains metadata at the dataset level:
  - **Index Header**: Ensures backward compatibility.
  - **Index Version**: Maintains backward compatibility.
  - **Data Type Code**: Numeric code indicating the data type used in the data file.
  - **Sequence Count**: Total number of sequences in the dataset.
  - **Document Count**: Total number of documents in the dataset.
- **`[i].bin` File**: This file includes metadata at the document and sequence levels:
  - **Elements per Sequence**: Number of elements in each sequence.
  - **Byte Offset per Sequence**: Pointer indicating the start of each sequence.
  - **Sequence Index Range**: Consecutive index range `[...)` for each document.

#### 5B and 13B Models

The second step is to preprocess the data to precompute the text and video embeddings for fine-tuning.

Before proceeding, ensure all videos are in **RGB format**. Complete the following steps to preprocess the data.

1. Set the following environment variables:

```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"

# Path to Raw mp4 videos.
export RAW_DATA="cosmos1/models/autoregressive/assets/v1p0/batch_inputs"

# Path to Processed Dataset.
export OUTPUT_PREFIX="./indexed_videos"
```

2. Run the following command to preprocess the data:
```bash
cd /workspace/Cosmos
git lfs pull --include=$RAW_DATA

python3 cosmos1/models/autoregressive/nemo/post_training/video2world_prepare_dataset.py \
  --input_jsonl $RAW_DATA/video2world.jsonl \
  --output_dir $OUTPUT_PREFIX
```

Executing the [data preprocessing script](./video2world_prepare_dataset.py) generates the following files for each video:

- **`[i].pt` File**: This file contains the video tokens or prompt embeddings:
  - The file name has the format `<train/test/val>_<prompt/video>_<idx>.pt`.
- **`[i]metadata.json` File**: This file includes metadata:
  - It records the number of train, test, and validation samples.

### 3. Post-train the Model

The third step is to post-train the model. This step uses NeMo Framework's data and model parallelism capabilities to train the model on the post-training samples. This is accomplished by utilizing Tensor Parallelism.

- **Tensor Parallelism**: Spreads the parameter tensors of individual layers across GPUs.

#### Run the Post-training Script

##### 4B and 12B Models

Complete the following steps to post-train the Cosmos-1.0-Autoregressive-4B model.

1. Set the following environment variables:

```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"

# Number of GPU devices available for post-training. At least 2 for 4B and 8 for 12B.
export NUM_DEVICES=2

# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-autoregressive-nemo-finetuning"
export WANDB_RUN_ID="cosmos_autoregressive_4b_finetune"
```

2. Run the following command for Cosmos-1.0-Autoregressive-4B post-training:

```bash
torchrun --nproc-per-node $NUM_DEVICES cosmos1/models/autoregressive/nemo/post_training/general.py \
  --data_path $OUTPUT_PREFIX \
  --split_string 4,1,1 \
  --log_dir ./logs \
  --max_steps 10 --save_every_n_steps 5 \
  --tensor_model_parallel_size $NUM_DEVICES \
  --model_path nvidia/Cosmos-1.0-Autoregressive-4B
```

3. You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).

##### 5B and 13B Models

Complete the following steps to post-train the Cosmos-1.0-Autoregressive-5B model.

1. Set the following environment variables:

```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"

# Number of GPU devices available for post-training. At least 4 for 5B and 8 for 13B.
export NUM_DEVICES=4

# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-autoregressive-nemo-finetuning"
export WANDB_RUN_ID="cosmos_autoregressive_5b_finetune"
```

2. Run the following command for Cosmos-1.0-Autoregressive-5B-Video2World post-training:

```bash
torchrun --nproc-per-node $NUM_DEVICES \
  cosmos1/models/autoregressive/nemo/post_training/video2world_finetuning.py \
  --data_path $OUTPUT_PREFIX \
  --log_dir ./logs \
  --max_steps 10 --save_every_n_steps 5 \
  --tensor_model_parallel_size $NUM_DEVICES \
  --model_path nvidia/Cosmos-1.0-Autoregressive-5B-Video2World
```

3. You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).
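For reference, the same Video2World recipe scales to the 13B checkpoint by raising the device count and the tensor-parallel size to 8 (per the support matrix) and swapping the model path; this is a sketch, assuming the 13B checkpoint has been downloaded by the script above:

```bash
# Post-train Cosmos-1.0-Autoregressive-13B-Video2World across 8 GPUs.
export NUM_DEVICES=8

torchrun --nproc-per-node $NUM_DEVICES \
  cosmos1/models/autoregressive/nemo/post_training/video2world_finetuning.py \
  --data_path $OUTPUT_PREFIX \
  --log_dir ./logs \
  --max_steps 10 --save_every_n_steps 5 \
  --tensor_model_parallel_size $NUM_DEVICES \
  --model_path nvidia/Cosmos-1.0-Autoregressive-13B-Video2World
```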
#### Configuration Options

Before getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements.

| Parameter | Description | Default |
|---|---|---|
| `--data_path` | Specifies the location of your preprocessed dataset. Ensure this path points to the directory containing your `.bin` and `.idx` files. | `/path/to/data` |
| `--model_path` | Specifies the Cosmos model to run post-training on. | `nvidia/Cosmos-1.0-Autoregressive-4B` |
| `--index_mapping_dir` | Specifies the directory to store the indexed dataset. | `./index_mapping` |
| `--log_dir` | Specifies the directory to store the logs and checkpoints. | `./log_dir` |
| `--split_string` | Specifies the data split ratios for training, validation, and testing. (Only valid for the base models, 4B and 12B.) | `4,1,1` |
| `--tensor_model_parallel_size` | Controls the number of GPUs used for model parallelism. Increase this number to scale up, ensuring your hardware can support the additional load. | `2` |
| `--max_steps` | Defines the total number of training steps. Adjust based on training duration and storage capacity. | `100` |
| `--save_every_n_steps` | Defines how often checkpoints are saved. Adjust based on training duration and storage capacity. | `10` |
| `--global_batch_size` | Sets the global batch size. Tweak to optimize memory usage and training speed; larger batch sizes may improve convergence but require more memory. | `2` |
| `--micro_batch_size` | Sets the per-GPU micro batch size. Tweak to optimize memory usage and training speed; larger batch sizes may improve convergence but require more memory. | `1` |
| `--lr` | Sets the learning rate. A common starting point is `5e-5`, but this can be adjusted based on model performance and convergence behavior. | `5e-5` |
| `--max_epochs` | The maximum number of epochs to run during post-training. | `10` |
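For instance, a general post-training run that exercises the batch-size, learning-rate, and checkpointing knobs from the table might be launched as follows (the values are placeholders that restate the defaults, not tuned recommendations):

```bash
# General post-training with batch sizes, learning rate, and checkpoint cadence set explicitly.
torchrun --nproc-per-node $NUM_DEVICES cosmos1/models/autoregressive/nemo/post_training/general.py \
  --data_path $OUTPUT_PREFIX \
  --split_string 4,1,1 \
  --log_dir ./logs \
  --tensor_model_parallel_size $NUM_DEVICES \
  --model_path nvidia/Cosmos-1.0-Autoregressive-4B \
  --global_batch_size 2 \
  --micro_batch_size 1 \
  --lr 5e-5 \
  --max_steps 100 \
  --save_every_n_steps 10
```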
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/autoregressive/nemo/post_training/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/post_training/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 12643 }