Dataset columns: text (string, 7 to 324k chars), id (string, 14 to 166 chars), metadata (dict), __index_level_0__ (int64, 0 to 463)
# Builds CPU-only Docker image of PyTorch
# Uses multi-staged approach to reduce size

# Stage 1
FROM python:3.8-slim as compile-image

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update
RUN apt-get install -y --no-install-recommends \
    build-essential \
    git \
    gcc

# Setup virtual environment for Docker
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv ${VIRTUAL_ENV}
# Make sure we use the virtualenv
ENV PATH="${VIRTUAL_ENV}/bin:$PATH"

WORKDIR /workspace
# Install specific CPU torch wheel to save on space
RUN python3 -m pip install --upgrade --no-cache-dir pip
RUN python3 -m pip install --no-cache-dir \
    jupyter \
    git+https://github.com/huggingface/accelerate#egg=accelerate[testing,test_trackers] \
    --extra-index-url https://download.pytorch.org/whl/cpu

# Stage 2
FROM python:3.8-slim AS build-image
COPY --from=compile-image /opt/venv /opt/venv
RUN useradd -ms /bin/bash user
USER user

# Make sure we use the virtualenv
ENV PATH="/opt/venv/bin:$PATH"

CMD ["/bin/bash"]
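The image above installs a CPU-only torch wheel (via the `--extra-index-url https://download.pytorch.org/whl/cpu` index) into a virtual environment that is then copied into the slim runtime stage. As a minimal sketch, not part of the repository, the following script could be run inside the built container to confirm that the CPU-only build is in use; the image tag in the usage note below is hypothetical.

```python
# smoke_test.py: illustrative sanity check for the CPU-only image (not from the repo)
import torch
import accelerate
from accelerate import Accelerator

print(f"torch {torch.__version__}, accelerate {accelerate.__version__}")

# The wheel from https://download.pytorch.org/whl/cpu is built without CUDA support.
assert not torch.cuda.is_available(), "expected a CPU-only torch build"

# Accelerator falls back to CPU without any extra configuration.
accelerator = Accelerator()
print(f"accelerator device: {accelerator.device}")  # expected: cpu
```

Assuming the image was built and tagged as, say, `accelerate-cpu`, the script could be run with `docker run --rm -v "$PWD":/workspace accelerate-cpu python smoke_test.py`.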
accelerate/docker/accelerate-cpu/Dockerfile/0
{ "file_path": "accelerate/docker/accelerate-cpu/Dockerfile", "repo_id": "accelerate", "token_count": 380 }
0
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Accelerate's internal mechanisms Internally, ๐Ÿค— Accelerate works by first analyzing the environment in which the script is launched to determine which kind of distributed setup is used, how many different processes there are and which one the current script is in. All that information is stored in the [`~AcceleratorState`]. This class is initialized the first time you instantiate an [`~Accelerator`] as well as performing any specific initialization your distributed setup needs. Its state is then uniquely shared through all instances of [`~state.AcceleratorState`]. (The same can also be done with the [`PartialState`], a more barebones version it inherits) Then, when calling [`~Accelerator.prepare`], the library: - wraps your model(s) in the container adapted for the distributed setup, - wraps your optimizer(s) in an [`~optimizer.AcceleratedOptimizer`], - wraps your scheduler(s) in an [`~scheduler.AcceleratedScheduler`] - creates a new version of your dataloader(s) in a [`~data_loader.DataLoaderShard`] or [`~data_loader.DataLoaderDispatcher`] While the model(s), optimizer(s), and scheduler(s) are just put in simple wrappers, the dataloader(s) are re-created. This is mostly because PyTorch does not let the user change the `batch_sampler` of a dataloader once it's been created and the library handles the sharding of your data between processes by changing that `batch_sampler` to yield every other `num_processes` batches (if enabled). The [`~data_loader.DataLoaderShard`] subclasses `DataLoader` to add the following functionality: - it synchronizes the appropriate random number generator of all processes at each new iteration, to ensure any randomization (like shuffling) is done the exact same way across processes. - it puts the batches on the proper device before yielding them (unless you have opted out of `device_placement=True`). The [`~data_loader.DataLoaderDispatcher`] subclasses differs from the [`~data_loader.DataLoaderShard`] in that when iterating through the `DataLoader`, the data is all starting from process 0 and *then* split and sent off to each process rather than it happening at the dataset level. The random number generator synchronization will by default synchronize: - the `generator` attribute of a given sampler (like the PyTorch `RandomSampler`) for PyTorch >= 1.6 - the main random number generator in PyTorch <=1.5.1 You can choose which random number generator(s) to synchronize with the `rng_types` argument of the main [`Accelerator`]. In PyTorch >= 1.6, it is recommended to rely on a local `generator` to avoid setting the same seed in the main random number generator in all processes. 
<Tip warning={true}> Synchronization of the main torch (or CUDA or XLA) random number generator will affect any other potential random artifacts you could have in your dataset (like random data augmentation) in the sense that all processes will get the same random numbers from the torch random modules (so will apply the same random data augmentation if it's controlled by torch). </Tip> <Tip> The randomization part of your custom sampler, batch sampler or iterable dataset should be done using a local `torch.Generator` object (in PyTorch >= 1.6), see the traditional `RandomSampler`, as an example. </Tip> For more details about the internals, see the [Internals page](package_reference/torch_wrappers).
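As a concrete illustration of the recommendation above, here is a minimal sketch (not taken from this page) of a dataloader whose shuffling is driven by a local `torch.Generator`, with `rng_types` passed explicitly to the `Accelerator`:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset
from accelerate import Accelerator

dataset = TensorDataset(torch.arange(100).float())

# A local generator keeps shuffling reproducible without touching the global torch RNG;
# Accelerate synchronizes this generator across processes at each new iteration.
generator = torch.Generator().manual_seed(42)
sampler = RandomSampler(dataset, generator=generator)
dataloader = DataLoader(dataset, batch_size=8, sampler=sampler)

# "generator" is already the default rng_type for PyTorch >= 1.6; it is passed explicitly here for clarity.
accelerator = Accelerator(rng_types=["generator"])
dataloader = accelerator.prepare(dataloader)

for batch in dataloader:
    ...  # every process sees the same shuffling decisions, but only its own shard of batches
    break
```

Because the generator is local to the sampler, synchronizing it across processes does not interfere with any torch-controlled randomness elsewhere, such as data augmentation.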
accelerate/docs/source/concept_guides/internal_mechanism.md/0
{ "file_path": "accelerate/docs/source/concept_guides/internal_mechanism.md", "repo_id": "accelerate", "token_count": 1096 }
1
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Megatron-LM [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) enables training large transformer language models at scale. It provides efficient tensor, pipeline and sequence based model parallelism for pre-training transformer based Language Models such as [GPT](https://arxiv.org/abs/2005.14165) (Decoder Only), [BERT](https://arxiv.org/pdf/1810.04805.pdf) (Encoder Only) and [T5](https://arxiv.org/abs/1910.10683) (Encoder-Decoder). For detailed information and how things work behind the scene please refer the github [repo](https://github.com/NVIDIA/Megatron-LM). ## What is integrated? Accelerate integrates following feature of Megatron-LM to enable large scale pre-training/finetuning of BERT (Encoder), GPT (Decoder) or T5 models (Encoder and Decoder): a. **Tensor Parallelism (TP)**: Reduces memory footprint without much additional communication on intra-node ranks. Each tensor is split into multiple chunks with each shard residing on separate GPU. At each step, the same mini-batch of data is processed independently and in parallel by each shard followed by syncing across all GPUs (`all-reduce` operation). In a simple transformer layer, this leads to 2 `all-reduces` in the forward path and 2 in the backward path. For more details, please refer research paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) and this section of ๐Ÿค— blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#tensor-parallelism). b. **Pipeline Parallelism (PP)**: Reduces memory footprint and enables large scale training via inter-node parallelization. Reduces the bubble of naive PP via PipeDream-Flush schedule/1F1B schedule and Interleaved 1F1B schedule. Layers are distributed uniformly across PP stages. For example, if a model has `24` layers and we have `4` GPUs for pipeline parallelism, each GPU will have `6` layers (24/4). For more details on schedules to reduce the idle time of PP, please refer to the research paper [Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM](https://arxiv.org/pdf/2104.04473.pdf) and this section of ๐Ÿค— blogpost [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#pipeline-parallelism). c. **Sequence Parallelism (SP)**: Reduces memory footprint without any additional communication. Only applicable when using TP. It reduces activation memory required as it prevents the same copies to be on the tensor parallel ranks post `all-reduce` by replacing then with `reduce-scatter` and `no-op` operation would be replaced by `all-gather`. As `all-reduce = reduce-scatter + all-gather`, this saves a ton of activation memory at no added communication cost. 
To put it simply, it shards the outputs of each transformer layer along sequence dimension, e.g., if the sequence length is `1024` and the TP size is `4`, each GPU will have `256` tokens (1024/4) for each sample. This increases the batch size that can be supported for training. For more details, please refer to the research paper [Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf). d. **Data Parallelism (DP)** via Distributed Optimizer: Reduces the memory footprint by sharding optimizer states and gradients across DP ranks (versus the traditional method of replicating the optimizer state across data parallel ranks). For example, when using Adam optimizer with mixed-precision training, each parameter accounts for 12 bytes of memory. This gets distributed equally across the GPUs, i.e., each parameter would account for 3 bytes (12/4) if we have 4 GPUs. For more details, please refer the research paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/pdf/1910.02054.pdf) and following section of ๐Ÿค— blog [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed#zero-data-parallelism). e. **Selective Activation Recomputation**: Reduces the memory footprint of activations significantly via smart activation checkpointing. It doesn't store activations occupying large memory while being fast to recompute thereby achieving great tradeoff between memory and recomputation. For example, for GPT-3, this leads to 70% reduction in required memory for activations at the expense of only 2.7% FLOPs overhead for recomputation of activations. For more details, please refer to the research paper [Reducing Activation Recomputation in Large Transformer Models](https://arxiv.org/pdf/2205.05198.pdf). f. **Fused Kernels**: Fused Softmax, Mixed Precision Fused Layer Norm and Fused gradient accumulation to weight gradient computation of linear layer. PyTorch JIT compiled Fused GeLU and Fused Bias+Dropout+Residual addition. g. **Support for Indexed datasets**: Efficient binary format of datasets for large scale training. Support for the `mmap`, `cached` index file and the `lazy` loader format. h. **Checkpoint reshaping and interoperability**: Utility for reshaping Megatron-LM checkpoints of variable tensor and pipeline parallel sizes to the beloved ๐Ÿค— Transformers sharded checkpoints as it has great support with plethora of tools such as ๐Ÿค— Accelerate Big Model Inference, Megatron-DeepSpeed Inference etc. Support is also available for converting ๐Ÿค— Transformers sharded checkpoints to Megatron-LM checkpoint of variable tensor and pipeline parallel sizes for large scale training. ## Pre-Requisites You will need to install the latest pytorch, cuda, nccl, and NVIDIA [APEX](https://github.com/NVIDIA/apex#quick-start) releases and the nltk library. See [documentation](https://github.com/NVIDIA/Megatron-LM#setup) for more details. Another way to setup the environment is to pull an NVIDIA PyTorch Container that comes with all the required installations from NGC. Below is a step-by-step method to set up the conda environment: 1. Create a virtual environment ``` conda create --name ml ``` 2. Assuming that the machine has CUDA 11.3 installed, installing the corresponding PyTorch GPU Version ``` conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch ``` 3. 
Install Nvidia APEX ``` git clone https://github.com/NVIDIA/apex cd apex pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ cd .. ``` 4. Installing Megatron-LM ``` pip install git+https://github.com/huggingface/Megatron-LM.git ``` ## Accelerate Megatron-LM Plugin Important features are directly supported via the `accelerate config` command. An example of the corresponding questions for using Megatron-LM features is shown below: ```bash :~$ accelerate config --config_file "megatron_gpt_config.yaml" In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0 Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2 How many different machines will you use (use more than 1 for multi-node training)? [1]: Do you want to use DeepSpeed? [yes/NO]: Do you want to use FullyShardedDataParallel? [yes/NO]: Do you want to use Megatron-LM ? [yes/NO]: yes What is the Tensor Parallelism degree/size? [1]:2 Do you want to enable Sequence Parallelism? [YES/no]: What is the Pipeline Parallelism degree/size? [1]:2 What is the number of micro-batches? [1]:2 Do you want to enable selective activation recomputation? [YES/no]: Do you want to use distributed optimizer which shards optimizer state and gradients across data parallel ranks? [YES/no]: What is the gradient clipping value based on global L2 Norm (0 to disable)? [1.0]: How many GPU(s) should be used for distributed training? [1]:4 Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: bf16 ``` The resulting config is shown below: ``` ~$ cat megatron_gpt_config.yaml compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: MEGATRON_LM downcast_bf16: 'no' fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main megatron_lm_config: megatron_lm_gradient_clipping: 1.0 megatron_lm_num_micro_batches: 2 megatron_lm_pp_degree: 2 megatron_lm_recompute_activations: true megatron_lm_sequence_parallelism: true megatron_lm_tp_degree: 2 megatron_lm_use_distributed_optimizer: true mixed_precision: bf16 num_machines: 1 num_processes: 4 rdzv_backend: static same_network: true use_cpu: false ``` We will take the example of GPT pre-training. The minimal changes required to the official `run_clm_no_trainer.py` to use Megatron-LM are as follows: 1. As Megatron-LM uses its own implementation of Optimizer, the corresponding scheduler compatible with it needs to be used. As such, support for only the Megatron-LM's scheduler is present. User will need to create `accelerate.utils.MegatronLMDummyScheduler`. Example is given below: ```python from accelerate.utils import MegatronLMDummyScheduler if accelerator.distributed_type == DistributedType.MEGATRON_LM: lr_scheduler = MegatronLMDummyScheduler( optimizer=optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps, ) else: lr_scheduler = get_scheduler( name=args.lr_scheduler_type, optimizer=optimizer, num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps, num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, ) ``` 2. Getting the details of the total batch size now needs to be cognization of tensor and pipeline parallel sizes. 
Example of getting the effective total batch size is shown below: ```python if accelerator.distributed_type == DistributedType.MEGATRON_LM: total_batch_size = accelerator.state.megatron_lm_plugin.global_batch_size else: total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps ``` 3. When using Megatron-LM, the losses are already averaged across the data parallel group ```python if accelerator.distributed_type == DistributedType.MEGATRON_LM: losses.append(loss) else: losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size))) if accelerator.distributed_type == DistributedType.MEGATRON_LM: losses = torch.tensor(losses) else: losses = torch.cat(losses) ``` 4. For Megatron-LM, we need to save the model using `accelerator.save_state` ```python if accelerator.distributed_type == DistributedType.MEGATRON_LM: accelerator.save_state(args.output_dir) else: unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained( args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save ) ``` That's it! We are good to go ๐Ÿš€. Please find the example script in the examples folder at the path `accelerate/examples/by_feature/megatron_lm_gpt_pretraining.py`. Let's run it for `gpt-large` model architecture using 4 A100-80GB GPUs. ```bash accelerate launch --config_file megatron_gpt_config.yaml \ examples/by_feature/megatron_lm_gpt_pretraining.py \ --config_name "gpt2-large" \ --tokenizer_name "gpt2-large" \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --block_size 1024 \ --learning_rate 5e-5 \ --per_device_train_batch_size 24 \ --per_device_eval_batch_size 24 \ --num_train_epochs 5 \ --with_tracking \ --report_to "wandb" \ --output_dir "awesome_model" ``` Below are some important excerpts from the output logs: ```bash Loading extension module fused_dense_cuda... >>> done with compiling and loading fused kernels. Compilation time: 3.569 seconds > padded vocab (size: 50257) with 175 dummy tokens (new size: 50432) Building gpt model in the pre-training mode. The Megatron LM model weights are initialized at random in `accelerator.prepare`. Please use `accelerator.load_checkpoint` to load a pre-trained checkpoint matching the distributed setup. Preparing dataloader Preparing dataloader Preparing model > number of parameters on (tensor, pipeline) model parallel rank (1, 0): 210753280 > number of parameters on (tensor, pipeline) model parallel rank (1, 1): 209445120 > number of parameters on (tensor, pipeline) model parallel rank (0, 0): 210753280 > number of parameters on (tensor, pipeline) model parallel rank (0, 1): 209445120 Preparing optimizer Preparing scheduler > learning rate decay style: linear 10/10/2022 22:57:22 - INFO - __main__ - ***** Running training ***** 10/10/2022 22:57:22 - INFO - __main__ - Num examples = 2318 10/10/2022 22:57:22 - INFO - __main__ - Num Epochs = 5 10/10/2022 22:57:22 - INFO - __main__ - Instantaneous batch size per device = 24 10/10/2022 22:57:22 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 48 10/10/2022 22:57:22 - INFO - __main__ - Gradient Accumulation steps = 1 10/10/2022 22:57:22 - INFO - __main__ - Total optimization steps = 245 20%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ– | 49/245 [01:04<04:09, 1.27s/it] 10/10/2022 22:58:29 - INFO - __main__ - epoch 0: perplexity: 1222.1594275215962 eval_loss: 7.10837459564209 40%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Š | 98/245 [02:10<03:07, 1.28s/it] 10/10/2022 22:59:35 - INFO - __main__ - epoch 1: perplexity: 894.5236583794557 eval_loss: 6.796291351318359 60%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Œ | 147/245 [03:16<02:05, 1.28s/it] 10/10/2022 23:00:40 - INFO - __main__ - epoch 2: perplexity: 702.8458788508042 eval_loss: 6.555137634277344 80%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–Š | 196/245 [04:22<01:02, 1.28s/it] 10/10/2022 23:01:46 - INFO - __main__ - epoch 3: perplexity: 600.3220028695281 eval_loss: 6.39746618270874 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 245/245 [05:27<00:00, 1.28s/it] ``` There are a large number of other options/features that one can set using `accelerate.utils.MegatronLMPlugin`. ## Advanced features to leverage writing custom train step and Megatron-LM Indexed Datasets For leveraging more features, please go through below details. 1. Below is an example of changes required to customize the Train Step while using Megatron-LM. You will implement the `accelerate.utils.AbstractTrainStep` or inherit from their corresponding children `accelerate.utils.GPTTrainStep`, `accelerate.utils.BertTrainStep` or `accelerate.utils.T5TrainStep`. ```python from accelerate.utils import MegatronLMDummyScheduler, GPTTrainStep, avg_losses_across_data_parallel_group # Custom loss function for the Megatron model class GPTTrainStepWithCustomLoss(GPTTrainStep): def __init__(self, megatron_args, **kwargs): super().__init__(megatron_args) self.kwargs = kwargs def get_loss_func(self): def loss_func(inputs, loss_mask, output_tensor): batch_size, seq_length = output_tensor.shape losses = output_tensor.float() loss_mask = loss_mask.view(-1).float() loss = losses.view(-1) * loss_mask # Resize and average loss per sample loss_per_sample = loss.view(batch_size, seq_length).sum(axis=1) loss_mask_per_sample = loss_mask.view(batch_size, seq_length).sum(axis=1) loss_per_sample = loss_per_sample / loss_mask_per_sample # Calculate and scale weighting weights = torch.stack([(inputs == kt).float() for kt in self.kwargs["keytoken_ids"]]).sum(axis=[0, 2]) weights = 1.0 + self.kwargs["alpha"] * weights # Calculate weighted average weighted_loss = (loss_per_sample * weights).mean() # Reduce loss across data parallel groups averaged_loss = avg_losses_across_data_parallel_group([weighted_loss]) return weighted_loss, {"lm loss": averaged_loss[0]} return loss_func def get_forward_step_func(self): def forward_step(data_iterator, model): """Forward step.""" # Get the batch. 
tokens, labels, loss_mask, attention_mask, position_ids = self.get_batch(data_iterator) output_tensor = model(tokens, position_ids, attention_mask, labels=labels) return output_tensor, partial(self.loss_func, tokens, loss_mask) return forward_step def main(): # Custom loss function for the Megatron model keytoken_ids = [] keywords = ["plt", "pd", "sk", "fit", "predict", " plt", " pd", " sk", " fit", " predict"] for keyword in keywords: ids = tokenizer([keyword]).input_ids[0] if len(ids) == 1: keytoken_ids.append(ids[0]) accelerator.print(f"Keytoken ids: {keytoken_ids}") accelerator.state.megatron_lm_plugin.custom_train_step_class = GPTTrainStepWithCustomLoss accelerator.state.megatron_lm_plugin.custom_train_step_kwargs = { "keytoken_ids": keytoken_ids, "alpha": 0.25, } ``` 2. For using the Megatron-LM datasets, a few more changes are required. Dataloaders for these datasets are available only on rank 0 of each tensor parallel group. As such, there are rank where dataloader won't be available and this requires tweaks to the training loop. Being able to do all this shows how flexible and extensible ๐Ÿค— Accelerate is. The changes required are as follows. a. For Megatron-LM indexed datasets, we need to use `MegatronLMDummyDataLoader` and pass the required dataset args to it such as `data_path`, `seq_length` etc. See [here](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py#L804) for the list of available args. ```python from accelerate.utils import MegatronLMDummyDataLoader megatron_dataloader_config = { "data_path": args.data_path, "splits_string": args.splits_string, "seq_length": args.block_size, "micro_batch_size": args.per_device_train_batch_size, } megatron_dataloader = MegatronLMDummyDataLoader(**megatron_dataloader_config) accelerator.state.megatron_lm_plugin.megatron_dataset_flag = True ``` b. `megatron_dataloader` is repeated 3 times to get training, validation and test dataloaders as per the `args.splits_string` proportions ```python model, optimizer, lr_scheduler, train_dataloader, eval_dataloader, _ = accelerator.prepare( model, optimizer, lr_scheduler, megatron_dataloader, megatron_dataloader, megatron_dataloader ) ``` c. Changes to training and evaluation loops as dataloader is only available on tensor parallel ranks 0 So, we need to iterate only if the dataloader isn't `None` else provide empty dict As such, we loop using `while` loop and break when `completed_steps` is equal to `args.max_train_steps` This is similar to the Megatron-LM setup wherein user has to provide `max_train_steps` when using Megaton-LM indexed datasets. This displays how flexible and extensible ๐Ÿค— Accelerate is. ```python while completed_steps < args.max_train_steps: model.train() batch = next(train_dataloader) if train_dataloader is not None else {} outputs = model(**batch) loss = outputs.loss ... if completed_steps % eval_interval == 0: eval_completed_steps = 0 losses = [] while eval_completed_steps < eval_iters: model.eval() with torch.no_grad(): batch = next(eval_dataloader) if eval_dataloader is not None else {} outputs = model(**batch) ``` ## Utility for Checkpoint reshaping and interoperability 1. The scripts for these are present in ๐Ÿค— Transformers library under respective models. Currently, it is available for GPT model [checkpoint_reshaping_and_interoperability.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py) 2. 
Below is an example of converting a Megatron-LM checkpoint to a universal 🤗 Transformers sharded checkpoint. ```bash python checkpoint_reshaping_and_interoperability.py \ --convert_checkpoint_from_megatron_to_transformers \ --load_path "gpt/iter_0005000" \ --save_path "gpt/trfs_checkpoint" \ --max_shard_size "200MB" \ --tokenizer_name "gpt2" \ --print-checkpoint-structure ``` 3. Conversion of a checkpoint from Transformers to Megatron-LM with `tp_size=2`, `pp_size=2` and `dp_size=2`. ```bash python checkpoint_utils/megatron_gpt2/checkpoint_reshaping_and_interoperability.py \ --load_path "gpt/trfs_checkpoint" \ --save_path "gpt/megatron_lm_checkpoint" \ --target_tensor_model_parallel_size 2 \ --target_pipeline_model_parallel_size 2 \ --target_data_parallel_size 2 \ --target_params_dtype "bf16" \ --make_vocab_size_divisible_by 128 \ --use_distributed_optimizer \ --print-checkpoint-structure ``` ## Megatron-LM GPT models support returning logits and `megatron_generate` function for text generation 1. Returning logits requires setting `return_logits=True` in `MegatronLMPlugin` as shown below. These will be available on the last stage of the pipeline. ```python megatron_lm_plugin = MegatronLMPlugin(return_logits=True) ``` 2. `megatron_generate` method for Megatron-LM GPT model: This uses tensor and pipeline parallelism to complete generations for a batch of inputs when using greedy decoding with/without top_k/top_p sampling, and for individual prompt inputs when using beam search decoding. Only a subset of the features of Transformers' `generate` is supported. This helps in using large models via tensor and pipeline parallelism for generation (key-value caching and fused kernels are used by default). This requires the data parallel size to be 1, and sequence parallelism and activation checkpointing to be disabled. It also requires specifying the paths to the tokenizer's vocab and merges files. The example below shows how to configure and use the `megatron_generate` method for the Megatron-LM GPT model. 
```python # specifying tokenizer's vocab and merges file vocab_file = os.path.join(args.resume_from_checkpoint, "vocab.json") merge_file = os.path.join(args.resume_from_checkpoint, "merges.txt") other_megatron_args = {"vocab_file": vocab_file, "merge_file": merge_file} megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args) # inference using `megatron_generate` functionality tokenizer.pad_token = tokenizer.eos_token max_new_tokens = 64 batch_texts = [ "Are you human?", "The purpose of life is", "The arsenal was constructed at the request of", "How are you doing these days?", ] batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True) # top-p sampling generated_tokens = model.megatron_generate( batch_encodings["input_ids"], batch_encodings["attention_mask"], max_new_tokens=max_new_tokens, top_p=0.8, top_p_decay=0.5, temperature=0.9, ) decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy()) accelerator.print(decoded_preds) # top-k sampling generated_tokens = model.megatron_generate( batch_encodings["input_ids"], batch_encodings["attention_mask"], max_new_tokens=max_new_tokens, top_k=50, temperature=0.9, ) decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy()) accelerator.print(decoded_preds) # adding `bos` token at the start generated_tokens = model.megatron_generate( batch_encodings["input_ids"], batch_encodings["attention_mask"], max_new_tokens=max_new_tokens, add_BOS=True ) decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy()) accelerator.print(decoded_preds) # beam search => only takes single prompt batch_texts = ["The purpose of life is"] batch_encodings = tokenizer(batch_texts, return_tensors="pt", padding=True) generated_tokens = model.megatron_generate( batch_encodings["input_ids"], batch_encodings["attention_mask"], max_new_tokens=max_new_tokens, num_beams=20, length_penalty=1.5, ) decoded_preds = tokenizer.batch_decode(generated_tokens.cpu().numpy()) accelerator.print(decoded_preds) ``` 3. An end-to-end example of using `megatron_generate` method for Megatron-LM GPT model is available at [megatron_gpt2_generation.py](https://github.com/pacman100/accelerate-megatron-test/blob/main/src/inference/megatron_gpt2_generation.py) with config file [megatron_lm_gpt_generate_config.yaml](https://github.com/pacman100/accelerate-megatron-test/blob/main/src/Configs/megatron_lm_gpt_generate_config.yaml). The bash script with accelerate launch command is available at [megatron_lm_gpt_generate.sh](https://github.com/pacman100/accelerate-megatron-test/blob/main/megatron_lm_gpt_generate.sh). The output logs of the script are available at [megatron_lm_gpt_generate.log](https://github.com/pacman100/accelerate-megatron-test/blob/main/output_logs/megatron_lm_gpt_generate.log). ## Support for ROPE and ALiBi Positional embeddings and Multi-Query Attention 1. For ROPE/ALiBi attention, pass `position_embedding_type` with `("absolute" | "rotary" | "alibi")` to `MegatronLMPlugin` as shown below. ```python other_megatron_args = {"position_embedding_type": "alibi"} megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args) ``` 2. For Multi-Query Attention, pass `attention_head_type` with `("multihead" | "multiquery")` to `MegatronLMPlugin` as shown below. ```python other_megatron_args = {"attention_head_type": "multiquery"} megatron_lm_plugin = MegatronLMPlugin(other_megatron_args=other_megatron_args) ``` ## Caveats 1. Supports Transformers GPT2, Megatron-BERT and T5 models. 
This covers decoder-only, encoder-only and encoder-decoder model classes. 2. Only the loss is returned from the model forward pass as there is a quite complex interplay of pipeline, tensor and data parallelism behind the scenes. The `model(**batch_data)` call returns loss(es) averaged across the data parallel ranks. This is fine for most cases wherein pre-training jobs are run using Megatron-LM features and you can easily compute the `perplexity` using the loss. For the GPT model, returning logits in addition to loss(es) is supported. These logits aren't gathered across data parallel ranks. Use `accelerate.utils.gather_across_data_parallel_groups` to gather logits across data parallel ranks. These logits along with labels can be used for computing various performance metrics. 3. The main process is the last rank as the losses/logits are available in the last stage of the pipeline. `accelerator.is_main_process` and `accelerator.is_local_main_process` return `True` for the last rank when using the Megatron-LM integration. 4. In the `accelerator.prepare` call, a Megatron-LM model corresponding to a given Transformers model is created with random weights. Please use `accelerator.load_state` to load the Megatron-LM checkpoint with matching TP, PP and DP partitions. 5. Currently, checkpoint reshaping and interoperability support is only available for GPT. Soon it will be extended to BERT and T5. 6. `gradient_accumulation_steps` needs to be 1. When using Megatron-LM, the number of micro-batches in the pipeline parallelism setting is synonymous with gradient accumulation. 7. When using Megatron-LM, use `accelerator.save_state` and `accelerator.load_state` for saving and loading checkpoints. 8. Below is the mapping from Megatron-LM model architectures to the equivalent 🤗 Transformers model architectures. Only these 🤗 Transformers model architectures are supported. a. Megatron-LM [BertModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/bert_model.py) : 🤗 Transformers models with `megatron-bert` in config's model type, e.g., [MegatronBERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert) b. Megatron-LM [GPTModel](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py) : 🤗 Transformers models with `gpt2` in config's model type, e.g., [OpenAI GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2) c. Megatron-LM [T5Model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py) : 🤗 Transformers models with `t5` in config's model type, e.g., [T5](https://huggingface.co/docs/transformers/model_doc/t5) and [MT5](https://huggingface.co/docs/transformers/model_doc/mt5)
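Expanding on caveat 2, below is an illustrative sketch of gathering GPT logits across data parallel ranks before computing metrics. The `model` and `batch` names and the unpacking of the forward output are assumptions made for the example rather than something specified above; adapt them to the actual return structure in your setup.

```python
import torch
from accelerate.utils import gather_across_data_parallel_groups

model.eval()
with torch.no_grad():
    # Assumption: with `return_logits=True`, the forward pass also exposes logits
    # (only on the last pipeline stage); adjust the unpacking to the real output format.
    loss, logits = model(**batch)

# Logits live on each data parallel rank separately; gather them before computing metrics.
all_logits = gather_across_data_parallel_groups(logits)
all_labels = gather_across_data_parallel_groups(batch["labels"])
predictions = all_logits.argmax(dim=-1)
accuracy = (predictions == all_labels).float().mean()
```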
accelerate/docs/source/usage_guides/megatron_lm.md/0
{ "file_path": "accelerate/docs/source/usage_guides/megatron_lm.md", "repo_id": "accelerate", "token_count": 9557 }
2
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from transformers import AutoModelForCausalLM, AutoTokenizer from accelerate import PartialState, prepare_pippy # sdpa implementation which is the default torch>2.1.2 fails with the tracing + attention mask kwarg # with attn_implementation="eager" mode, the forward is very slow for some reason model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-chat-hf", low_cpu_mem_usage=True, attn_implementation="sdpa" ) model.eval() # Input configs # Create example inputs for the model tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf") prompts = ("I would like to", "I really like to", "The weather is") # bs = 3 tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(prompts, return_tensors="pt", padding=True) # Create a pipeline stage from the model # Using `auto` is equivalent to letting `device_map="auto"` figure # out device mapping and will also split the model according to the # number of total GPUs available if it fits on one GPU model = prepare_pippy(model, split_points="auto", example_args=inputs) # You can pass `gather_output=True` to have the output from the model # available on all GPUs # model = prepare_pippy(model, split_points="auto", example_args=(input,), gather_output=True) # currently we don't support `model.generate` # output = model.generate(**inputs, max_new_tokens=1) with torch.no_grad(): output = model(**inputs) # The outputs are only on the final process by default if PartialState().is_last_process: next_token_logits = output[0][:, -1, :] next_token = torch.argmax(next_token_logits, dim=-1) print(tokenizer.batch_decode(next_token))
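As an illustrative variant of the script above (not part of the example file), if the `prepare_pippy` call were given `gather_output=True`, the pipeline output would be available on every process, so each rank could decode the greedy next token locally:

```python
# Variant of the call in the script above: gather the output on all processes.
model = prepare_pippy(model, split_points="auto", example_args=inputs, gather_output=True)

with torch.no_grad():
    output = model(**inputs)

# Every rank now holds the logits, so the decode step no longer needs the
# `is_last_process` guard used in the original script.
next_token_logits = output[0][:, -1, :]
next_token = torch.argmax(next_token_logits, dim=-1)
print(f"rank {PartialState().process_index}: {tokenizer.batch_decode(next_token)}")
```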
accelerate/examples/inference/llama.py/0
{ "file_path": "accelerate/examples/inference/llama.py", "repo_id": "accelerate", "token_count": 695 }
3
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import contextlib import functools import json import math import os import re import shutil import sys import warnings from collections import OrderedDict from contextlib import contextmanager from functools import partial from types import MethodType from typing import Any, Callable, Union import torch import torch.utils.hooks as hooks from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state from .data_loader import DataLoaderDispatcher, prepare_data_loader, skip_first_batches from .hooks import AlignDevicesHook from .logging import get_logger from .optimizer import AcceleratedOptimizer from .scheduler import AcceleratedScheduler from .state import AcceleratorState, GradientState, PartialState from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers from .utils import ( MODEL_NAME, SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME, AutocastKwargs, DataLoaderConfiguration, DeepSpeedPlugin, DistributedDataParallelKwargs, DistributedType, DynamoBackend, FP8RecipeKwargs, FullyShardedDataParallelPlugin, GradientAccumulationPlugin, GradScalerKwargs, InitProcessGroupKwargs, KwargsHandler, LoggerType, MegatronLMPlugin, PrecisionType, ProjectConfiguration, RNGType, TorchDynamoPlugin, check_os_kernel, clean_state_dict_for_safetensors, compare_versions, convert_model, convert_outputs_to_fp32, extract_model_from_parallel, gather, gather_object, get_mixed_precision_context_manager, get_pretty_name, has_transformer_engine_layers, is_bf16_available, is_deepspeed_available, is_fp8_available, is_ipex_available, is_megatron_lm_available, is_msamp_available, is_npu_available, is_torch_version, is_torch_xla_available, is_xpu_available, load_fsdp_model, load_fsdp_optimizer, pad_across_processes, parse_choice_from_env, recursively_apply, reduce, release_memory, save, save_fsdp_model, save_fsdp_optimizer, shard_checkpoint, wait_for_everyone, ) from .utils.constants import FSDP_PYTORCH_VERSION from .utils.modeling import get_state_dict_offloaded_model from .utils.other import is_compiled_module if is_deepspeed_available(): from .utils import ( DeepSpeedEngineWrapper, DeepSpeedOptimizerWrapper, DeepSpeedSchedulerWrapper, DummyOptim, DummyScheduler, ) if is_fp8_available(): import transformer_engine.common.recipe as te_recipe from transformer_engine.pytorch import fp8_autocast if is_megatron_lm_available(): from .utils import ( MegatronEngine, MegatronLMDummyDataLoader, MegatronLMDummyScheduler, MegatronLMOptimizerWrapper, MegatronLMSchedulerWrapper, megatron_lm_initialize, megatron_lm_prepare_data_loader, megatron_lm_prepare_model, megatron_lm_prepare_optimizer, megatron_lm_prepare_scheduler, ) from torch.distributed.algorithms.join import Join if is_torch_xla_available(): import torch_xla.amp as xamp import torch_xla.core.xla_model as xm import torch_xla.distributed.xla_multiprocessing 
as xmp if is_npu_available(check_device=False): import torch_npu # noqa: F401 try: from torch.optim.lr_scheduler import LRScheduler except ImportError: from torch.optim.lr_scheduler import _LRScheduler as LRScheduler logger = get_logger(__name__) # Sentinel values for defaults _split_batches = object() _dispatch_batches = object() _even_batches = object() _use_seedable_sampler = object() class Accelerator: """ Creates an instance of an accelerator for distributed training (on multi-GPU, TPU) or mixed precision training. Args: device_placement (`bool`, *optional*, defaults to `True`): Whether or not the accelerator should put objects on device (tensors yielded by the dataloader, model, etc...). mixed_precision (`str`, *optional*): Whether or not to use mixed precision training. Choose from 'no','fp16','bf16 or 'fp8'. Will default to the value in the environment variable `ACCELERATE_MIXED_PRECISION`, which will use the default value in the accelerate config of the current system or the flag passed with the `accelerate.launch` command. 'fp8' requires the installation of transformers-engine. gradient_accumulation_steps (`int`, *optional*, default to 1): The number of steps that should pass before gradients are accumulated. A number > 1 should be combined with `Accelerator.accumulate`. If not passed, will default to the value in the environment variable `ACCELERATE_GRADIENT_ACCUMULATION_STEPS`. Can also be configured through a `GradientAccumulationPlugin`. cpu (`bool`, *optional*): Whether or not to force the script to execute on CPU. Will ignore GPU available if set to `True` and force the execution on one process only. dataloader_config (`DataLoaderConfiguration`, *optional*): A configuration for how the dataloaders should be handled in distributed scenarios. deepspeed_plugin ([`~utils.DeepSpeedPlugin`], *optional*): Tweak your DeepSpeed related args using this argument. This argument is optional and can be configured directly using *accelerate config* fsdp_plugin ([`~utils.FullyShardedDataParallelPlugin`], *optional*): Tweak your FSDP related args using this argument. This argument is optional and can be configured directly using *accelerate config* megatron_lm_plugin ([`~utils.MegatronLMPlugin`], *optional*): Tweak your MegatronLM related args using this argument. This argument is optional and can be configured directly using *accelerate config* rng_types (list of `str` or [`~utils.RNGType`]): The list of random number generators to synchronize at the beginning of each iteration in your prepared dataloaders. Should be one or several of: - `"torch"`: the base torch random number generator - `"cuda"`: the CUDA random number generator (GPU only) - `"xla"`: the XLA random number generator (TPU only) - `"generator"`: the `torch.Generator` of the sampler (or batch sampler if there is no sampler in your dataloader) or of the iterable dataset (if it exists) if the underlying dataset is of that type. Will default to `["torch"]` for PyTorch versions <=1.5.1 and `["generator"]` for PyTorch versions >= 1.6. log_with (list of `str`, [`~utils.LoggerType`] or [`~tracking.GeneralTracker`], *optional*): A list of loggers to be setup for experiment tracking. Should be one or several of: - `"all"` - `"tensorboard"` - `"wandb"` - `"comet_ml"` If `"all"` is selected, will pick up all available trackers in the environment and initialize them. Can also accept implementations of `GeneralTracker` for custom trackers, and can be combined with `"all"`. 
project_config ([`~utils.ProjectConfiguration`], *optional*): A configuration for how saving the state can be handled. project_dir (`str`, `os.PathLike`, *optional*): A path to a directory for storing data such as logs of locally-compatible loggers and potentially saved checkpoints. step_scheduler_with_optimizer (`bool`, *optional`, defaults to `True`): Set `True` if the learning rate scheduler is stepped at the same time as the optimizer, `False` if only done under certain circumstances (at the end of each epoch, for instance). kwargs_handlers (list of [`~utils.KwargsHandler`], *optional*) A list of [`~utils.KwargsHandler`] to customize how the objects related to distributed training or mixed precision are created. See [kwargs](kwargs) for more information. dynamo_backend (`str` or [`~utils.DynamoBackend`], *optional*, defaults to `"no"`): Set to one of the possible dynamo backends to optimize your training with torch dynamo. gradient_accumulation_plugin ([`~utils.GradientAccumulationPlugin`], *optional*): A configuration for how gradient accumulation should be handled, if more tweaking than just the `gradient_accumulation_steps` is needed. **Available attributes:** - **device** (`torch.device`) -- The device to use. - **distributed_type** ([`~utils.DistributedType`]) -- The distributed training configuration. - **local_process_index** (`int`) -- The process index on the current machine. - **mixed_precision** (`str`) -- The configured mixed precision mode. - **num_processes** (`int`) -- The total number of processes used for training. - **optimizer_step_was_skipped** (`bool`) -- Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which case the learning rate should not be changed. - **process_index** (`int`) -- The overall index of the current process among all processes. - **state** ([`~state.AcceleratorState`]) -- The distributed setup state. - **sync_gradients** (`bool`) -- Whether the gradients are currently being synced across all processes. - **use_distributed** (`bool`) -- Whether the current configuration is for distributed training. 
""" def __init__( self, device_placement: bool = True, split_batches: bool = _split_batches, mixed_precision: PrecisionType | str | None = None, gradient_accumulation_steps: int = 1, cpu: bool = False, dataloader_config: DataLoaderConfiguration | None = None, deepspeed_plugin: DeepSpeedPlugin | None = None, fsdp_plugin: FullyShardedDataParallelPlugin | None = None, megatron_lm_plugin: MegatronLMPlugin | None = None, rng_types: list[str | RNGType] | None = None, log_with: str | LoggerType | GeneralTracker | list[str | LoggerType | GeneralTracker] | None = None, project_dir: str | os.PathLike | None = None, project_config: ProjectConfiguration | None = None, gradient_accumulation_plugin: GradientAccumulationPlugin | None = None, dispatch_batches: bool | None = _dispatch_batches, even_batches: bool = _even_batches, use_seedable_sampler: bool = _use_seedable_sampler, step_scheduler_with_optimizer: bool = True, kwargs_handlers: list[KwargsHandler] | None = None, dynamo_backend: DynamoBackend | str | None = None, ): self.trackers = [] if project_config is not None: self.project_configuration = project_config else: self.project_configuration = ProjectConfiguration(project_dir=project_dir) if project_dir is not None and self.project_dir is None: self.project_configuration.set_directories(project_dir) if mixed_precision is not None: mixed_precision = str(mixed_precision) if mixed_precision not in PrecisionType: raise ValueError( f"Unknown mixed_precision mode: {mixed_precision}. Choose between {PrecisionType.list()}" ) dynamo_plugin = TorchDynamoPlugin() if dynamo_backend is None else TorchDynamoPlugin(backend=dynamo_backend) if deepspeed_plugin is None: # init from env variables deepspeed_plugin = ( DeepSpeedPlugin() if os.environ.get("ACCELERATE_USE_DEEPSPEED", "false") == "true" else None ) else: assert isinstance( deepspeed_plugin, DeepSpeedPlugin ), "`deepspeed_plugin` must be an `accelerate.utils.DeepSpeedPlugin` object." os.environ["ACCELERATE_USE_DEEPSPEED"] = "true" # use DeepSpeed if plugin is provided if deepspeed_plugin: if not is_deepspeed_available(): raise ImportError("DeepSpeed is not installed => run `pip install deepspeed` or build it from source.") if compare_versions("deepspeed", "<", "0.9.3"): raise ImportError("DeepSpeed version must be >= 0.9.3. 
Please update DeepSpeed.") mixed_precision = ( os.environ.get("ACCELERATE_MIXED_PRECISION", "no") if mixed_precision is None else mixed_precision ) deepspeed_plugin.set_mixed_precision(mixed_precision) deepspeed_plugin.set_deepspeed_weakref() if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true" or isinstance( fsdp_plugin, FullyShardedDataParallelPlugin ): if is_torch_version("<", FSDP_PYTORCH_VERSION): raise ValueError(f"FSDP requires PyTorch >= {FSDP_PYTORCH_VERSION}") if fsdp_plugin is None: # init from env variables fsdp_plugin = ( FullyShardedDataParallelPlugin() if os.environ.get("ACCELERATE_USE_FSDP", "false") == "true" else None ) else: if not isinstance(fsdp_plugin, FullyShardedDataParallelPlugin): raise TypeError("`fsdp_plugin` must be a FullyShardedDataParallelPlugin object.") os.environ["ACCELERATE_USE_FSDP"] = "true" # use FSDP if plugin is provided if megatron_lm_plugin is None: # init from env variables megatron_lm_plugin = ( MegatronLMPlugin() if os.environ.get("ACCELERATE_USE_MEGATRON_LM", "false") == "true" else None ) else: if not isinstance(megatron_lm_plugin, MegatronLMPlugin): raise TypeError("`megatron_lm_plugin` must be a MegatronLMPlugin object.") os.environ["ACCELERATE_USE_MEGATRON_LM"] = "true" # use MegatronLM if plugin is provided if megatron_lm_plugin: if not is_megatron_lm_available(): raise ImportError("Megatron is not installed. please build it from source.") # Kwargs handlers self.ddp_handler = None self.scaler_handler = None self.init_handler = None self.fp8_recipe_handler = None self.autocast_handler = None if kwargs_handlers is not None: for handler in kwargs_handlers: assert isinstance( handler, KwargsHandler ), f"Unsupported kwargs handler passed: {handler}, must be one that inherits `accelerate.utils.KwargsHandler`." 
if isinstance(handler, DistributedDataParallelKwargs): if self.ddp_handler is not None: raise ValueError("You can only pass one `DistributedDataParallelKwargs` in `kwargs_handler`.") else: self.ddp_handler = handler elif isinstance(handler, GradScalerKwargs): if self.scaler_handler is not None: raise ValueError("You can only pass one `GradScalerKwargs` in `kwargs_handler`.") else: self.scaler_handler = handler elif isinstance(handler, InitProcessGroupKwargs): if self.init_handler is not None: raise ValueError("You can only pass one `InitProcessGroupKwargs` in `kwargs_handler`.") else: self.init_handler = handler elif isinstance(handler, FP8RecipeKwargs): if self.fp8_recipe_handler is not None: raise ValueError("You can only pass one `FP8RecipeKwargs` in `kwargs_handler`.") else: self.fp8_recipe_handler = handler elif isinstance(handler, AutocastKwargs): if self.autocast_handler is not None: raise ValueError("You can only pass one `AutocastKwargs` in `kwargs_handler`.") else: self.autocast_handler = handler kwargs = self.init_handler.to_kwargs() if self.init_handler is not None else {} self.state = AcceleratorState( mixed_precision=mixed_precision, cpu=cpu, dynamo_plugin=dynamo_plugin, deepspeed_plugin=deepspeed_plugin, fsdp_plugin=fsdp_plugin, megatron_lm_plugin=megatron_lm_plugin, _from_accelerator=True, **kwargs, ) if self.fp8_recipe_handler is None and self.state.mixed_precision == "fp8": self.fp8_recipe_handler = FP8RecipeKwargs(backend="MSAMP" if is_msamp_available() else "TE") trackers = filter_trackers(log_with, self.logging_dir) if len(trackers) < 1 and log_with is not None: warnings.warn(f"`log_with={log_with}` was passed but no supported trackers are currently installed.") self.log_with = trackers if ( (mixed_precision != "bf16") and getattr(self.state, "downcast_bfloat", False) and (self.state.distributedType != DistributedType.XLA) ): raise ValueError("Can only use `downcast_bf16` when using `mixed_precision='bf16'` and on a TPU") if gradient_accumulation_plugin is not None: if gradient_accumulation_steps != 1: raise ValueError( "You can only pass one of `gradient_accumulation_steps` and `gradient_accumulation_plugin`. Please only pass in the created `GradientAccumulationPlugin` object." 
) else: gradient_accumulation_steps = int( parse_choice_from_env("ACCELERATE_GRADIENT_ACCUMULATION_STEPS", gradient_accumulation_steps) ) gradient_accumulation_plugin = GradientAccumulationPlugin(num_steps=gradient_accumulation_steps) self.gradient_state = GradientState( gradient_accumulation_plugin=gradient_accumulation_plugin, ) self.device_placement = device_placement if dataloader_config is None: dataloader_config = DataLoaderConfiguration() self.dataloader_config = dataloader_config # Deal with deprecated args # TODO: Remove in v1.0.0 deprecated_dl_args = {} if dispatch_batches is not _dispatch_batches: deprecated_dl_args["dispatch_batches"] = dispatch_batches self.dataloader_config.dispatch_batches = dispatch_batches if split_batches is not _split_batches: deprecated_dl_args["split_batches"] = split_batches self.dataloader_config.split_batches = split_batches if even_batches is not _even_batches: deprecated_dl_args["even_batches"] = even_batches self.dataloader_config.even_batches = even_batches if use_seedable_sampler is not _use_seedable_sampler: deprecated_dl_args["use_seedable_sampler"] = use_seedable_sampler self.dataloader_config.use_seedable_sampler = use_seedable_sampler if len(deprecated_dl_args) > 0: values = ", ".join([f"{k}={v}" for k, v in deprecated_dl_args.items()]) warnings.warn( f"Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: {deprecated_dl_args.keys()}. " "Please pass an `accelerate.DataLoaderConfiguration` instead: \n" f"dataloader_config = DataLoaderConfiguration({values})", FutureWarning, ) self.step_scheduler_with_optimizer = step_scheduler_with_optimizer # Mixed precision attributes self.scaler = None self.native_amp = False if ( self.state.mixed_precision == "fp16" and self.device.type != "cpu" and self.distributed_type not in (DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM) ): self.native_amp = True if self.device.type not in ("xpu", "cuda", "mps", "npu", "xla") or is_torch_xla_available( check_is_tpu=True ): raise ValueError(f"fp16 mixed precision requires a GPU (not {self.device.type!r}).") kwargs = self.scaler_handler.to_kwargs() if self.scaler_handler is not None else {} if self.distributed_type == DistributedType.FSDP: from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler self.scaler = ShardedGradScaler(**kwargs) elif is_torch_xla_available(check_is_gpu=True): self.scaler = xamp.GradScaler(**kwargs) elif is_npu_available(): self.scaler = torch.npu.amp.GradScaler(**kwargs) else: self.scaler = torch.cuda.amp.GradScaler(**kwargs) elif self.state.mixed_precision == "bf16" and self.distributed_type not in ( DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM, ): if self.device.type in ["cpu", "xpu"]: self.native_amp = True else: self.native_amp = is_bf16_available(True) if mixed_precision == "bf16" and not self.native_amp and not is_torch_xla_available(): raise ValueError("bf16 mixed precision requires PyTorch >= 1.10 and a supported device.") # Start of internal step tracking self.step = 0 # Internal references to the training objects self._optimizers = [] self._models = [] self._schedulers = [] self._dataloaders = [] self._custom_objects = [] # Hooks self._load_model_state_pre_hook = OrderedDict() self._save_model_state_pre_hook = OrderedDict() # RNG Types self.rng_types = rng_types if self.rng_types is None: self.rng_types = ["generator"] # Set a flag tensor for early stopping and other breakpoints self.flag_tensor = None check_os_kernel() @property def 
use_distributed(self): """ Whether the Accelerator is configured for distributed training """ return self.state.use_distributed @property def distributed_type(self): return self.state.distributed_type @property def num_processes(self): return self.state.num_processes @property def process_index(self): return self.state.process_index @property def local_process_index(self): return self.state.local_process_index @property def device(self): return self.state.device @property def split_batches(self): return self.dataloader_config.split_batches @property def dispatch_batches(self): return self.dataloader_config.dispatch_batches @property def even_batches(self): return self.dataloader_config.even_batches @even_batches.setter def even_batches(self, value: bool): self.dataloader_config.even_batches = value @property def use_seedable_sampler(self): return self.dataloader_config.use_seedable_sampler @property def project_dir(self): return self.project_configuration.project_dir @property def logging_dir(self): return self.project_configuration.logging_dir @property def save_iteration(self): return self.project_configuration.iteration @property def is_main_process(self): """True for one process only.""" return self.state.is_main_process @property def is_local_main_process(self): """True for one process per server.""" return self.state.is_local_main_process @property def use_fp16(self): warnings.warn( "The `use_fp16` property is deprecated and will be removed in version 1.0 of Accelerate use " "`Accelerator.mixed_precision == 'fp16'` instead.", FutureWarning, ) return self.mixed_precision != "no" @property def is_last_process(self): return self.process_index == self.num_processes - 1 @property def mixed_precision(self): return self.state.mixed_precision @contextmanager def split_between_processes(self, inputs: list | tuple | dict | torch.Tensor, apply_padding: bool = False): """ Splits `input` between `self.num_processes` quickly and can be then used on that process. Useful when doing distributed inference, such as with different prompts. Note that when using a `dict`, all keys need to have the same number of elements. Args: inputs (`list`, `tuple`, `torch.Tensor`, or `dict` of `list`/`tuple`/`torch.Tensor`): The input to split between processes. apply_padding (`bool`, `optional`, defaults to `False`): Whether to apply padding by repeating the last element of the input so that all processes have the same number of elements. Useful when trying to perform actions such as `Accelerator.gather()` on the outputs or passing in less inputs than there are processes. If so, just remember to drop the padded elements afterwards. Example: ```python # Assume there are two processes from accelerate import Accelerator accelerator = Accelerator() with accelerator.split_between_processes(["A", "B", "C"]) as inputs: print(inputs) # Process 0 ["A", "B"] # Process 1 ["C"] with accelerator.split_between_processes(["A", "B", "C"], apply_padding=True) as inputs: print(inputs) # Process 0 ["A", "B"] # Process 1 ["C", "C"] ``` """ with PartialState().split_between_processes(inputs, apply_padding=apply_padding) as inputs: yield inputs def on_main_process(self, function: Callable[..., Any] = None): """ A decorator that will run the decorated function on the main process only. Can also be called using the `PartialState` class. Args: function (`Callable`): The function to decorate. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> @accelerator.on_main_process ... 
def print_something(): ... print("This will be printed by process 0 only.") >>> print_something() "This will be printed by process 0 only" ``` """ # For times when the `Accelerator` object itself utilizes this decorator. if function is None: if "Accelerator." in self.__qualname__: function = self else: raise ValueError( "The `on_main_process` decorator must be called with a function on an instantiated `Accelerator` object." ) def _inner(*args, **kwargs): return PartialState().on_main_process(function)(*args, **kwargs) return _inner def on_local_main_process(self, function: Callable[..., Any] = None): """ A decorator that will run the decorated function on the local main process only. Can also be called using the `PartialState` class. Args: function (`Callable`): The function to decorate. Example: ```python # Assume we have 2 servers with 4 processes each. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_local_main_process def print_something(): print("This will be printed by process 0 only on each server.") print_something() # On server 1: "This will be printed by process 0 only" # On server 2: "This will be printed by process 0 only" ``` """ # For times when the `Accelerator` object itself utilizes this decorator. if function is None: if "Accelerator." in self.__qualname__: function = self else: raise ValueError( "The `on_local_main_process` decorator must be called with a function on an instantiated `Accelerator` object." ) def _inner(*args, **kwargs): return PartialState().on_local_main_process(function)(*args, **kwargs) return _inner def on_last_process(self, function: Callable[..., Any]): """ A decorator that will run the decorated function on the last process only. Can also be called using the `PartialState` class. Args: function (`Callable`): The function to decorate. Example: ```python # Assume we have 4 processes. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_last_process def print_something(): print(f"Printed on process {accelerator.process_index}") print_something() "Printed on process 3" ``` """ # For times when the `Accelerator` object itself utilizes this decorator. if function is None: if "Accelerator." in self.__qualname__: function = self else: raise ValueError( "The `on_last_process` decorator must be called with a function on an instantiated `Accelerator` object." ) def _inner(*args, **kwargs): return PartialState().on_last_process(function)(*args, **kwargs) return _inner def on_process(self, function: Callable[..., Any] = None, process_index: int = None): """ A decorator that will run the decorated function on a given process index only. Can also be called using the `PartialState` class. Args: function (`Callable`, `optional`): The function to decorate. process_index (`int`, `optional`): The index of the process on which to run the function. Example: ```python # Assume we have 4 processes. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_process(process_index=2) def print_something(): print(f"Printed on process {accelerator.process_index}") print_something() "Printed on process 2" ``` """ # Initial construction of the decorator. if (self is not None) and (process_index is not None) and (function is None): return partial(self.on_process, process_index=process_index) # For times when the `Accelerator` object itself utilizes this decorator. if function is None: if "Accelerator." 
in self.__qualname__: function = self else: raise ValueError( "The `on_main_process` decorator must be called with a function on an instantiated `Accelerator` object." ) def _inner(*args, **kwargs): return PartialState().on_process(function, process_index)(*args, **kwargs) return _inner def on_local_process(self, function: Callable[..., Any] = None, local_process_index: int = None): """ A decorator that will run the decorated function on a given local process index only. Can also be called using the `PartialState` class. Args: function (`Callable`, *optional*): The function to decorate. local_process_index (`int`, *optional*): The index of the local process on which to run the function. Example: ```python # Assume we have 2 servers with 4 processes each. from accelerate import Accelerator accelerator = Accelerator() @accelerator.on_local_process(local_process_index=2) def print_something(): print(f"Printed on process {accelerator.local_process_index}") print_something() # On server 1: "Printed on process 2" # On server 2: "Printed on process 2" ``` """ # Initial construction of the decorator. if (self is not None) and (local_process_index is not None) and (function is None): return partial(self.on_local_process, local_process_index=local_process_index) # For times when the `Accelerator` object itself utilizes this decorator. if function is None: if "Accelerator." in self.__qualname__: function = self else: raise ValueError( "The `on_main_process` decorator must be called with a function on an instantiated `Accelerator` object." ) def _inner(*args, **kwargs): return PartialState().on_local_process(function, local_process_index)(*args, **kwargs) return _inner @contextmanager def main_process_first(self): """ Lets the main process go first inside a with block. The other processes will enter the with block after the main process exits. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> with accelerator.main_process_first(): ... # This will be printed first by process 0 then in a seemingly ... # random order by the other processes. ... print(f"This will be printed by process {accelerator.process_index}") ``` """ with self.state.main_process_first(): yield @contextmanager def local_main_process_first(self): """ Lets the local main process go inside a with block. The other processes will enter the with block after the main process exits. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> with accelerator.local_main_process_first(): ... # This will be printed first by local process 0 then in a seemingly ... # random order by the other processes. ... print(f"This will be printed by process {accelerator.local_process_index}") ``` """ with self.state.local_main_process_first(): yield @contextmanager def no_sync(self, model): """ A context manager to disable gradient synchronizations across DDP processes by calling `torch.nn.parallel.DistributedDataParallel.no_sync`. If `model` is not in DDP, this context manager does nothing Args: model (`torch.nn.Module`): PyTorch Module that was prepared with `Accelerator.prepare` Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer) >>> input_a = next(iter(dataloader)) >>> input_b = next(iter(dataloader)) >>> with accelerator.no_sync(): ... outputs = model(input_a) ... loss = loss_func(outputs) ... accelerator.backward(loss) ... 
# No synchronization across processes, only accumulate gradients >>> outputs = model(input_b) >>> accelerator.backward(loss) >>> # Synchronization across all processes >>> optimizer.step() >>> optimizer.zero_grad() ``` """ context = contextlib.nullcontext if self.use_distributed: context = getattr(model, "no_sync", context) with context(): yield @staticmethod @contextmanager def trigger_sync_in_backward(model): """Trigger the sync of the gradients in the next backward pass of the model after multiple forward passes under `Accelerator.no_sync` (only applicable in multi-GPU scenarios). If the script is not launched in distributed mode, this context manager does nothing. Args: model (`torch.nn.Module`): The model for which to trigger the gradient synchronization. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader, model, optimizer = accelerator.prepare(dataloader, model, optimizer) >>> with accelerator.no_sync(): ... loss_a = loss_func(model(input_a)) # first forward pass ... loss_b = loss_func(model(input_b)) # second forward pass >>> accelerator.backward(loss_a) # No synchronization across processes, only accumulate gradients >>> with accelerator.trigger_sync_in_backward(model): ... accelerator.backward(loss_b) # Synchronization across all processes >>> optimizer.step() >>> optimizer.zero_grad() ``` """ if not isinstance(model, torch.nn.parallel.DistributedDataParallel): yield return old_require_backward_grad_sync = model.require_backward_grad_sync old_require_forward_param_sync = model.require_forward_param_sync # EXPERIMENTAL: This will force grad sync during `backward()`, but it is unknown if it breaks other DDP features. # https://github.com/pytorch/pytorch/blob/e1502c0cdbfd17548c612f25d5a65b1e4b86224d/torch/nn/parallel/distributed.py#L1453-L1466 model.require_backward_grad_sync = True model.require_forward_param_sync = True # https://github.com/pytorch/pytorch/blob/e1502c0cdbfd17548c612f25d5a65b1e4b86224d/torch/csrc/distributed/c10d/reducer.cpp#L1371-L1402 model.reducer.prepare_for_backward([]) try: yield finally: model.require_backward_grad_sync = old_require_backward_grad_sync model.require_forward_param_sync = old_require_forward_param_sync def _do_sync(self, force: bool = False): "Sets the right `sync_gradients` context and either resets or increases `self.step`" if self.gradient_state.sync_with_dataloader and self.gradient_state.end_of_dataloader: self.step = 0 self.gradient_state._set_sync_gradients(True) else: self.step += 1 self.gradient_state._set_sync_gradients(force or ((self.step % self.gradient_state.num_steps) == 0)) @property def sync_gradients(self): return self.gradient_state.sync_gradients @sync_gradients.setter def sync_gradients(self, sync_gradients): self.gradient_state.sync_gradients = sync_gradients @property def gradient_accumulation_steps(self): return self.gradient_state.num_steps @gradient_accumulation_steps.setter def gradient_accumulation_steps(self, gradient_accumulation_steps): self.gradient_state.plugin_kwargs.update({"num_steps": gradient_accumulation_steps}) @contextmanager def accumulate(self, *models): """ A context manager that will lightly wrap around and perform gradient accumulation automatically Args: *models (list of `torch.nn.Module`): PyTorch Modules that were prepared with `Accelerator.prepare`. 
Models passed to `accumulate()` will skip gradient syncing during the backward pass in distributed training. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps=1) >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> for input, output in dataloader: ... with accelerator.accumulate(model): ... outputs = model(input) ... loss = loss_func(outputs) ... loss.backward() ... optimizer.step() ... scheduler.step() ... optimizer.zero_grad() ``` """ # sync_each_batch=True will guarantee below that self.sync_gradients=True, therefore # resulting in the nullcontext always being selected. self._do_sync(force=self.gradient_state.plugin_kwargs.get("sync_each_batch", False)) with contextlib.ExitStack() as cm_stack: for m in models: cm_stack.enter_context(contextlib.nullcontext() if self.sync_gradients else self.no_sync(m)) yield @contextmanager def join_uneven_inputs(self, joinables, even_batches=None): """ A context manager that facilitates distributed training or evaluation on uneven inputs, which acts as a wrapper around `torch.distributed.algorithms.join`. This is useful when the total batch size does not evenly divide the length of the dataset. Args: joinables (`list[torch.distributed.algorithms.Joinable]`): A list of models or optimizers that subclass `torch.distributed.algorithms.Joinable`. Most commonly, a PyTorch Module that was prepared with `Accelerator.prepare` for DistributedDataParallel training. even_batches (`bool`, *optional*): If set, this will override the value of `even_batches` set in the `Accelerator`. If it is not provided, the default `Accelerator` value will be used. <Tip warning={true}> `join_uneven_inputs` is only supported for Distributed Data Parallel training on multiple GPUs. For any other configuration, this method will have no effect. </Tip> <Tip warning={true}> Overriding `even_batches` will not affect iterable-style data loaders. </Tip> Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(even_batches=True) >>> ddp_model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader) >>> with accelerator.join_uneven_inputs([ddp_model], even_batches=False): ... for input, output in dataloader: ... outputs = model(input) ... loss = loss_func(outputs) ... loss.backward() ... optimizer.step() ...
optimizer.zero_grad() ``` """ if self.distributed_type in (DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU): dl_even_batches_values = [] if even_batches is not None: iterable_dl_seen = False # override value in batch sampler for map-style datasets for dl_idx, dl in enumerate(self._dataloaders): if isinstance(dl, DataLoaderDispatcher): iterable_dl_seen = True continue dl_even_batches_values.append((dl_idx, dl.batch_sampler.even_batches)) dl.batch_sampler.even_batches = even_batches if iterable_dl_seen: warnings.warn( "Overridding even_batches is only supported for map-style datasets, yet some dataloaders given were iterable" ) else: even_batches = self.even_batches enable_join = False if even_batches else True try: with Join(joinables, enable=enable_join, throw_on_early_termination=False): yield finally: # reset any batch samplers that have been modified for dl_idx, even_batches_value in dl_even_batches_values: self._dataloaders[dl_idx].batch_sampler.even_batches = even_batches_value else: # Even when disabled, Join expects models to subclass Joinable, so skip entirely for single process runs if self.distributed_type != DistributedType.NO: warnings.warn( "Joining uneven inputs is only supported for multi-GPU training, as a result `join_uneven_inputs` will have no effect." ) with contextlib.nullcontext(joinables): yield def print(self, *args, **kwargs): """ Drop in replacement of `print()` to only print once per server. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> accelerator.print("Hello world!") ``` """ self.state.print(*args, **kwargs) def _prepare_one(self, obj, first_pass=False, device_placement=None): # First pass of preparation: DataLoader, model, optimizer if first_pass: if isinstance(obj, torch.utils.data.DataLoader): return self.prepare_data_loader(obj, device_placement=device_placement) elif isinstance(obj, torch.nn.Module): return self.prepare_model(obj, device_placement=device_placement) elif isinstance(obj, torch.optim.Optimizer): optimizer = self.prepare_optimizer(obj, device_placement=device_placement) return optimizer # Second pass of preparation: LR scheduler (which need the full list of optimizers) elif isinstance(obj, LRScheduler): scheduler = self.prepare_scheduler(obj) return scheduler # Return the unprocessed object if previous criteria was not met return obj def prepare(self, *args, device_placement=None): """ Prepare all objects passed in `args` for distributed training and mixed precision, then return them in the same order. Args: *args (list of objects): Any of the following type of objects: - `torch.utils.data.DataLoader`: PyTorch Dataloader - `torch.nn.Module`: PyTorch Module - `torch.optim.Optimizer`: PyTorch Optimizer - `torch.optim.lr_scheduler.LRScheduler`: PyTorch LR Scheduler device_placement (`list[bool]`, *optional*): Used to customize whether automatic device placement should be performed for each object passed. Needs to be a list of the same length as `args`. Not compatible with DeepSpeed or FSDP. 
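Objects are prepared in two passes: dataloaders, models and optimizers first, then schedulers, so that each scheduler can be matched to its already-prepared optimizer.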
<Tip> You don't need to prepare a model if you only use it for inference without any kind of mixed precision </Tip> Examples: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume a model, optimizer, data_loader and scheduler are defined >>> model, optimizer, data_loader, scheduler = accelerator.prepare(model, optimizer, data_loader, scheduler) ``` ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume a model, optimizer, data_loader and scheduler are defined >>> device_placement = [True, True, False, False] >>> # Will place the first to items passed in automatically to the right device but not the last two. >>> model, optimizer, data_loader, scheduler = accelerator.prepare( ... model, optimizer, data_loader, scheduler, device_placement=device_placement ... ) ``` """ if device_placement is None: device_placement = [None for _ in args] elif self.distributed_type in (DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM): raise ValueError("You can't customize device placements with DeepSpeed or Megatron-LM.") elif len(device_placement) != len(args): raise ValueError( f"`device_placement` should be a list with {len(args)} elements (the number of objects passed)." ) for obj in args: # TODO: Look at enabling native TP training directly with a proper config if ( isinstance(obj, torch.nn.Module) and self.verify_device_map(obj) and self.distributed_type != DistributedType.NO and os.environ.get("ACCELERATE_BYPASS_DEVICE_MAP", "false") != "true" ): raise ValueError( "You can't train a model that has been loaded with `device_map='auto'` in any distributed mode." " Please rerun your script specifying `--num_processes=1` or by launching with `python {{myscript.py}}`." ) if self.distributed_type == DistributedType.DEEPSPEED: model_count = 0 for obj in args: if isinstance(obj, torch.nn.Module): model_count += 1 if model_count > 1: raise AssertionError( "You can't use same `Accelerator()` instance with multiple models when using DeepSpeed" ) # On TPUs, putting the model on the XLA device will create new parameters, so the corresponding optimizer will # have parameters disconnected from the model (so no training :-( ). # If the model and optimizer have parameters on different devices we raise an error. if self.distributed_type == DistributedType.XLA: model_device, optimizer_device = self._get_devices() if model_device is not None and optimizer_device is not None and model_device != optimizer_device: raise ValueError( "The model and the optimizer parameters are not on the same device, which probably means you " "created an optimizer around your model **before** putting on the device. Make sure the line " "model.to(device) is before the optimizer creation in your script or remove it entirely and use " "the flag default value for `device_placement` in your `Accelerator` to let it handle that " "part for you." ) # If we're dealing with device placement, this deals with that by... tpu_should_fix_optimizer = self.device_placement and self.distributed_type == DistributedType.XLA if tpu_should_fix_optimizer or (self.mixed_precision == "fp8" and self.fp8_recipe_handler.backend == "TE"): # 1. 
grabbing old model parameters old_named_params = self._get_named_parameters(*args) if self.distributed_type in [DistributedType.MULTI_CPU, DistributedType.MULTI_XPU, DistributedType.NO]: if self.device.type == "cpu" and self.state.use_ipex: args = self._prepare_ipex(*args) elif self.device.type == "xpu" and is_xpu_available(): args = self._prepare_ipex(*args) if self.distributed_type == DistributedType.DEEPSPEED: result = self._prepare_deepspeed(*args) elif self.distributed_type == DistributedType.MEGATRON_LM: result = self._prepare_megatron_lm(*args) else: if self.mixed_precision == "fp8" and self.fp8_recipe_handler.backend == "MSAMP": args = self._prepare_msamp(*args) # MS-AMP will handle the device placement device_placement = [False for _ in args] result = tuple( self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement) ) result = tuple(self._prepare_one(obj, device_placement=d) for obj, d in zip(result, device_placement)) if tpu_should_fix_optimizer or (self.mixed_precision == "fp8" and self.fp8_recipe_handler.backend == "TE"): # 2. grabbing new model parameters new_named_params = self._get_named_parameters(*result) # 3. building a map from the first to the second mapping = {p: new_named_params[n] for n, p in old_named_params.items()} # 4. using that map to update the parameters of the optimizer for obj in result: if isinstance(obj, torch.optim.Optimizer): obj._switch_parameters(mapping) for item in result: if any( item in container for container in (self._dataloaders, self._models, self._optimizers, self._schedulers) ): item._is_accelerate_prepared = True return result if len(result) > 1 else result[0] def prepare_model(self, model: torch.nn.Module, device_placement: bool = None, evaluation_mode: bool = False): """ Prepares a PyTorch model for training in any distributed setup. It is recommended to use [`Accelerator.prepare`] instead. Args: model (`torch.nn.Module`): A PyTorch model to prepare. You don't need to prepare a model if it is used only for inference without any kind of mixed precision device_placement (`bool`, *optional*): Whether or not to place the model on the proper device. Will default to `self.device_placement`. evaluation_mode (`bool`, *optional*, defaults to `False`): Whether or not to set the model for evaluation only, by just applying mixed precision and `torch.compile` (if configured in the `Accelerator` object). Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume a model is defined >>> model = accelerator.prepare_model(model) ``` """ if device_placement is None: device_placement = self.device_placement and self.distributed_type != DistributedType.FSDP self._models.append(model) # TODO: Look at enabling native TP training directly with a proper config if ( self.verify_device_map(model) and self.distributed_type != DistributedType.NO and os.environ.get("ACCELERATE_BYPASS_DEVICE_MAP", "false") != "true" ): raise ValueError( "You can't train a model that has been loaded with `device_map='auto'` in any distributed mode." " Please rerun your script specifying `--num_processes=1` or by launching with `python {{myscript.py}}`." 
) if self.native_amp: model._original_forward = model.forward model_forward_func = model.forward.__func__ if hasattr(model.forward, "__func__") else model.forward autocast_context = get_mixed_precision_context_manager(self.native_amp, self.autocast_handler) new_forward = autocast_context(model_forward_func) if hasattr(model.forward, "__func__"): model.forward = MethodType(new_forward, model) model.forward = MethodType(convert_outputs_to_fp32(model.forward.__func__), model) else: model.forward = convert_outputs_to_fp32(new_forward) elif self.mixed_precision == "fp8" and self.fp8_recipe_handler.backend == "TE": if not has_transformer_engine_layers(model): with torch.no_grad(): convert_model(model) model._converted_to_transformer_engine = True model._original_forward = model.forward kwargs = self.fp8_recipe_handler.to_kwargs() if self.fp8_recipe_handler is not None else {} if "fp8_format" in kwargs: kwargs["fp8_format"] = getattr(te_recipe.Format, kwargs["fp8_format"]) fp8_recipe = te_recipe.DelayedScaling(**kwargs) model.forward = fp8_autocast(enabled=True, fp8_recipe=fp8_recipe)(model.forward) if (getattr(model, "is_loaded_in_8bit", False) or getattr(model, "is_loaded_in_4bit", False)) and getattr( model, "hf_device_map", False ): model_devices = set(model.hf_device_map.values()) if len(model_devices) > 1 and self.distributed_type != DistributedType.NO: raise ValueError( "You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode." " In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism." " Therefore you should not specify that you are under any distributed regime in your accelerate config." ) current_device = list(model_devices)[0] current_device_index = current_device.index if isinstance(current_device, torch.device) else current_device if torch.device(current_device_index) != self.device: # if on the first device (GPU 0) we don't care if (self.device.index is not None) or (current_device_index != 0): raise ValueError( "You can't train a model that has been loaded in 8-bit precision on a different device than the one " "you're training on. Make sure you loaded the model on the correct device using for example `device_map={'':torch.cuda.current_device() or device_map={'':torch.xpu.current_device()}" ) if "cpu" in model_devices or "disk" in model_devices: raise ValueError( "You can't train a model that has been loaded in 8-bit precision with CPU or disk offload." 
) elif device_placement and not self.verify_device_map(model): model = model.to(self.device) if not evaluation_mode: if self.distributed_type in ( DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU, ): if any(p.requires_grad for p in model.parameters()): kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {} # TODO: Look at enabling native TP training directly with a proper config if os.environ.get("ACCELERATE_BYPASS_DEVICE_MAP", "false") != "true": device_ids, output_device = [self.local_process_index], self.local_process_index else: device_ids, output_device = None, None model = torch.nn.parallel.DistributedDataParallel( model, device_ids=device_ids, output_device=output_device, **kwargs ) elif self.distributed_type == DistributedType.FSDP: from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP # Check if the model is already a FSDP model due to `Manual Wrapping` and if so, # don't wrap it again # In case the model is already compiled using PyTorch 2.0 and the wrapped model in it # is a FSDP model, don't wrap it again is_type_fsdp = isinstance(model, FSDP) or ( is_compiled_module(model) and isinstance(model._orig_mod, FSDP) ) if not is_type_fsdp: self.state.fsdp_plugin.set_auto_wrap_policy(model) fsdp_plugin = self.state.fsdp_plugin kwargs = { "sharding_strategy": fsdp_plugin.sharding_strategy, "cpu_offload": fsdp_plugin.cpu_offload, "auto_wrap_policy": fsdp_plugin.auto_wrap_policy, "mixed_precision": fsdp_plugin.mixed_precision_policy, "sync_module_states": fsdp_plugin.sync_module_states, "backward_prefetch": fsdp_plugin.backward_prefetch, "forward_prefetch": fsdp_plugin.forward_prefetch, "use_orig_params": fsdp_plugin.use_orig_params, "param_init_fn": fsdp_plugin.param_init_fn, "ignored_modules": fsdp_plugin.ignored_modules, "limit_all_gathers": fsdp_plugin.limit_all_gathers, "device_id": self.device, } model = FSDP(model, **kwargs) if fsdp_plugin.activation_checkpointing: from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import ( CheckpointImpl, apply_activation_checkpointing, checkpoint_wrapper, ) apply_activation_checkpointing( model, checkpoint_wrapper_fn=functools.partial( checkpoint_wrapper, checkpoint_impl=CheckpointImpl.NO_REENTRANT, ), auto_wrap_policy=fsdp_plugin.auto_wrap_policy, ) # if the previous and current models are same, delete the previous one if len(self._models) > 1 and (self._models[-2] is self._models[-1]): del self._models[-2] self._models[-1] = model elif self.distributed_type == DistributedType.MULTI_CPU: kwargs = self.ddp_handler.to_kwargs() if self.ddp_handler is not None else {} model = torch.nn.parallel.DistributedDataParallel(model, **kwargs) elif self.distributed_type == DistributedType.XLA and self.state.fork_launched: model = xmp.MpModelWrapper(model).to(self.device) # torch.compile should be called last and only if the model isn't already compiled. 
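        # At this point `model` may already be wrapped by DDP/FSDP above, so the wrapper itself is what gets
        # compiled; the `is_compiled_module` check avoids compiling a model a second time.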
if self.state.dynamo_plugin.backend != DynamoBackend.NO and not is_compiled_module(model): if not is_torch_version(">=", "2.0"): raise ValueError("Using `torch.compile` requires PyTorch 2.0 or higher.") model = torch.compile(model, **self.state.dynamo_plugin.to_kwargs()) return model def _prepare_deepspeed(self, *args): import deepspeed deepspeed_plugin = self.state.deepspeed_plugin is_dataloader_present = any(isinstance(obj, torch.utils.data.DataLoader) for obj in args) result = [ self._prepare_one(obj, first_pass=True) if isinstance(obj, torch.utils.data.DataLoader) else obj for obj in args ] if deepspeed_plugin.is_auto("train_micro_batch_size_per_gpu"): if is_dataloader_present: batch_sizes = [obj.batch_size for obj in args if hasattr(obj, "batch_size")] if any(bs is None for bs in batch_sizes): raise ValueError( "At least one of the dataloaders passed to `accelerate.prepare()` has `None` as batch size. " "Please set an integer value in `train_micro_batch_size_per_gpu` in the deepspeed config file " "or assign integer value to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`." ) if self.split_batches: batch_sizes = [batch_size // self.num_processes for batch_size in batch_sizes] batch_size_per_device = min(batch_sizes) if deepspeed_plugin.is_train_batch_min else max(batch_sizes) if len(batch_sizes) > 1: logger.info( "Since you passed both train and evaluation dataloader, `is_train_batch_min` (here " f"{deepspeed_plugin.is_train_batch_min} will decide the `train_batch_size` ({batch_size_per_device})." ) else: raise ValueError( "When using DeepSpeed, `accelerate.prepare()` requires you to pass at least one of training or evaluation dataloaders " "with `batch_size` attribute returning an integer value " "or alternatively set an integer value in `train_micro_batch_size_per_gpu` in the deepspeed config file " "or assign integer value to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`." ) else: batch_size_per_device = deepspeed_plugin.get_value("train_micro_batch_size_per_gpu") # handle `gradient_accumulation_steps` when the value is `auto` deepspeed_plugin.fill_match( "gradient_accumulation_steps", must_match=False, gradient_accumulation_steps=self.gradient_accumulation_steps, ) config_kwargs = { "train_micro_batch_size_per_gpu": batch_size_per_device, "train_batch_size": batch_size_per_device * deepspeed_plugin.get_value("gradient_accumulation_steps") * self.num_processes, "gradient_clipping": 1.0, "zero_optimization.stage3_gather_16bit_weights_on_model_save": False, } model = None optimizer = None scheduler = None for obj in result: if isinstance(obj, torch.nn.Module): model = obj elif isinstance(obj, (torch.optim.Optimizer, DummyOptim)): optimizer = obj elif (isinstance(obj, (LRScheduler, DummyScheduler))) or ( type(obj).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES ): scheduler = obj if optimizer is not None: if "optimizer" in deepspeed_plugin.deepspeed_config and not isinstance(optimizer, (DummyOptim)): raise ValueError( "You cannot specify an optimizer in the config file and in the code at the same time. " "Please remove the optimizer from the config file or " "create `accelerate.utils.DummyOptim` in the code." ) elif "optimizer" not in deepspeed_plugin.deepspeed_config and isinstance(optimizer, (DummyOptim)): raise ValueError( "You cannot create a `DummyOptim` without specifying an optimizer in the config file." 
) if isinstance(optimizer, (torch.optim.Optimizer)): deepspeed_plugin.deepspeed_config["zero_allow_untested_optimizer"] = True if scheduler is not None: if "scheduler" in deepspeed_plugin.deepspeed_config and not isinstance(scheduler, (DummyScheduler)): raise ValueError( "You cannot specify a scheduler in the config file and in the code at the same time. " "Please remove the scheduler from the config file or " "create `accelerate.utils.DummyScheduler` in the code." ) elif ( "scheduler" not in deepspeed_plugin.deepspeed_config and isinstance(scheduler, (DummyScheduler)) and scheduler.lr_scheduler_callable is None ): raise ValueError( "Either specify a scheduler in the config file or " "pass in the `lr_scheduler_callable` parameter when using `accelerate.utils.DummyScheduler`." ) if optimizer is not None and scheduler is not None: if isinstance(optimizer, (DummyOptim)) and not isinstance(scheduler, (DummyScheduler)): raise ValueError( "You can only specify `accelerate.utils.DummyScheduler` in the code when using " "`accelerate.utils.DummyOptim`." ) if model is not None: # deal with config keys that use `auto` value and rely on model's hidden_size hidden_size_based_keys = [ "zero_optimization.reduce_bucket_size", "zero_optimization.stage3_prefetch_bucket_size", "zero_optimization.stage3_param_persistence_threshold", ] hidden_size_auto_keys = [x for x in hidden_size_based_keys if deepspeed_plugin.is_auto(x)] if len(hidden_size_auto_keys) > 0: reasoning = ( "therefore it's not possible to automatically fill out the following `auto` entries " + f"in the DeepSpeed config file: {hidden_size_auto_keys}. You can fix that by replacing " + "`auto` values for these keys with an integer value of your choice." ) if not hasattr(model, "config"): raise ValueError("Can't find `model.config` entry, " + reasoning) if hasattr(model.config, "hidden_size"): hidden_size = model.config.hidden_size elif hasattr(model.config, "hidden_sizes"): # if there are many hidden sizes pick the largest one hidden_size = max(model.config.hidden_sizes) else: raise ValueError( "Can find neither `model.config.hidden_size` nor `model.config.hidden_sizes`, " + reasoning ) config_kwargs.update( { "zero_optimization.reduce_bucket_size": hidden_size * hidden_size, "zero_optimization.stage3_prefetch_bucket_size": 0.9 * hidden_size * hidden_size, "zero_optimization.stage3_param_persistence_threshold": 10 * hidden_size, } ) if isinstance(optimizer, (DummyOptim)): config_kwargs.update( {"optimizer.params.lr": optimizer.lr, "optimizer.params.weight_decay": optimizer.weight_decay} ) if isinstance(scheduler, (DummyScheduler)) and scheduler.lr_scheduler_callable is None: max_lr = ( getattr(scheduler.optimizer, "lr", None) if getattr(scheduler.optimizer, "defaults", None) is None else scheduler.optimizer.defaults["lr"] ) config_kwargs.update( { "scheduler.params.warmup_min_lr": 0, "scheduler.params.warmup_max_lr": max_lr, "scheduler.params.warmup_num_steps": scheduler.warmup_num_steps, } ) if scheduler.total_num_steps is not None: config_kwargs["scheduler.params.total_num_steps"] = ( math.ceil(scheduler.total_num_steps / self.num_processes) if not self.split_batches else scheduler.total_num_steps ) deepspeed_plugin.deepspeed_config_process(must_match=False, **config_kwargs) self.deepspeed_config = deepspeed_plugin.deepspeed_config kwargs = dict(model=model, config_params=self.deepspeed_config) if optimizer is not None: if isinstance(optimizer, (DummyOptim)): kwargs["model_parameters"] = optimizer.params if isinstance(scheduler, 
(DummyScheduler)) and scheduler.lr_scheduler_callable is not None: kwargs["lr_scheduler"] = scheduler.lr_scheduler_callable else: if self.deepspeed_config["zero_optimization"].get("offload_optimizer", {}).get( "device", "none" ) != "none" and self.deepspeed_config.get("zero_force_ds_cpu_optimizer", True): from deepspeed.ops.adam import DeepSpeedCPUAdam defaults = {k: v for k, v in optimizer.defaults.items() if k in ["lr", "weight_decay"]} optimizer = DeepSpeedCPUAdam(optimizer.param_groups, **defaults) kwargs["optimizer"] = optimizer if scheduler is not None: if type(scheduler).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES: kwargs["lr_scheduler"] = scheduler engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs) if optimizer is not None: optimizer = DeepSpeedOptimizerWrapper(optimizer) if scheduler is not None: if lr_scheduler is None: scheduler = AcceleratedScheduler( scheduler, optimizer, step_with_optimizer=self.step_scheduler_with_optimizer, split_batches=self.split_batches, ) else: scheduler = DeepSpeedSchedulerWrapper(lr_scheduler, optimizer) for i in range(len(result)): if isinstance(result[i], torch.nn.Module): result[i] = engine elif isinstance(result[i], (torch.optim.Optimizer, DummyOptim)): result[i] = optimizer elif (isinstance(result[i], (LRScheduler, DummyScheduler))) or ( type(result[i]).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES ): result[i] = scheduler # pointing for deepspeed_engine_wrapped.backward() self.deepspeed_engine_wrapped = DeepSpeedEngineWrapper(engine) self._models.append(engine) if optimizer is not None: self._optimizers.append(optimizer) if scheduler is not None: self._schedulers.append(scheduler) if len(self._models) > 1: raise AssertionError( "You can't use same `Accelerator()` instance with multiple models when using DeepSpeed" ) return tuple(result) def _prepare_megatron_lm(self, *args): megatron_lm_plugin = self.state.megatron_lm_plugin if not megatron_lm_plugin.megatron_dataset_flag: batch_sizes = [obj.batch_size for obj in args if hasattr(obj, "batch_size")] if len(batch_sizes) == 0: raise ValueError( "You must specify a training or evaluation dataloader in `accelerate.prepare()` when using Megatron-LM." ) micro_batch_size = min(batch_sizes) if megatron_lm_plugin.is_train_batch_min else max(batch_sizes) if len(batch_sizes) > 1: logger.info( "Since you passed both train and evaluation dataloader, `is_train_batch_min` (here " f"{megatron_lm_plugin.is_train_batch_min} will decide the `train_batch_size` ({micro_batch_size})." 
) else: for obj in args: if isinstance(obj, MegatronLMDummyDataLoader): micro_batch_size = obj.dataset_args["micro_batch_size"] break dp_degree = self.num_processes // (megatron_lm_plugin.tp_degree * megatron_lm_plugin.pp_degree) megatron_lm_plugin.set_training_args(micro_batch_size, dp_degree) model = None optimizer = None scheduler = None is_dummy_scheduler = False batch_data = None for obj in args: if isinstance(obj, torch.utils.data.DataLoader) and batch_data is None: batch_data = next(iter(obj)) if isinstance(obj, torch.nn.Module): model = obj elif isinstance(obj, (torch.optim.Optimizer)): optimizer = obj elif isinstance(obj, (LRScheduler, MegatronLMDummyScheduler)): scheduler = obj if model is not None: megatron_lm_plugin.set_network_size_args(model, batch_data) if optimizer is not None: megatron_lm_plugin.set_optimizer_type(optimizer) if scheduler is not None: is_dummy_scheduler = isinstance(scheduler, MegatronLMDummyScheduler) if not is_dummy_scheduler: raise ValueError( "You can't use a custom scheduler with Megatron-LM. Please use the `accelerate.utils.MegatronLMDummyScheduler` instead." ) megatron_lm_plugin.set_scheduler_args(scheduler) # initialize megatron-lm megatron_lm_initialize(self, args_defaults=megatron_lm_plugin.megatron_lm_default_args) counter = 0 result = [] for obj in args: if isinstance(obj, torch.utils.data.DataLoader): result.append(megatron_lm_prepare_data_loader(self, obj)) counter += 1 elif isinstance(obj, MegatronLMDummyDataLoader): if counter == 0: obj.set_megatron_data_args() dataloaders = megatron_lm_prepare_data_loader(self, obj) result.append(dataloaders[counter]) counter += 1 else: result.append(obj) if model is not None: model = megatron_lm_prepare_model(self) if optimizer is not None: optimizer = megatron_lm_prepare_optimizer(self, model) if scheduler is not None: scheduler = megatron_lm_prepare_scheduler(self, optimizer, scheduler) if model is not None: model = MegatronEngine(self, model, optimizer, scheduler) if optimizer is not None: optimizer = MegatronLMOptimizerWrapper(optimizer) if scheduler is not None: scheduler = MegatronLMSchedulerWrapper(scheduler, optimizer) for i in range(len(result)): if isinstance(result[i], torch.nn.Module): result[i] = model elif isinstance(result[i], torch.optim.Optimizer): result[i] = optimizer elif isinstance(result[i], MegatronLMDummyScheduler): result[i] = scheduler if model is not None: self._models.append(model) if optimizer is not None: self._optimizers.append(optimizer) if scheduler is not None: self._schedulers.append(scheduler) if len(self._models) > 1: raise AssertionError( "You can't use same `Accelerator()` instance with multiple models when using Megatron-LM" ) return tuple(result) def _prepare_ipex(self, *args): if not is_ipex_available(): raise ImportError( "IPEX is not installed or IPEX's version does not match current PyTorch version. Please refer" " to https://github.com/intel/intel-extension-for-pytorch." 
) else: import intel_extension_for_pytorch as ipex model = None optimizer = None result = [obj for obj in args] for obj in result: if isinstance(obj, torch.nn.Module): model = obj model.train() elif isinstance(obj, (torch.optim.Optimizer)): optimizer = obj if optimizer is not None and model is not None: dtype = torch.bfloat16 if self.state.mixed_precision == "bf16" else None if self.device.type == "xpu" and is_xpu_available(): model = model.to(self.device) model, optimizer = torch.xpu.optimize( model, optimizer=optimizer, dtype=dtype, inplace=True, level="O1" ) else: model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=dtype, inplace=True, level="O1") for i in range(len(result)): if isinstance(result[i], torch.nn.Module): result[i] = model elif isinstance(result[i], (torch.optim.Optimizer)): result[i] = optimizer return tuple(result) def _prepare_msamp(self, *args): if not is_msamp_available(): raise ImportError( "MS-AMP was not found on your system. Please ensure that MS-AMP is available " " or choose `'te'` as the backend for FP8 mixed precision training." ) else: import msamp model, optimizer = None, None num_models, num_optimizers = 0, 0 result = [obj for obj in args] for obj in result: if isinstance(obj, torch.nn.Module): model = obj num_models += 1 elif isinstance(obj, (torch.optim.Optimizer)): optimizer = obj num_optimizers += 1 if optimizer is None or model is None: raise ValueError( "You must pass a model and an optimizer together to `accelerate.prepare()` when using MS-AMP." ) elif num_models > 1 or num_optimizers > 1: raise ValueError( f"You can't use multiple models ({num_models}) or optimizers {num_optimizers} with MS-AMP." ) else: model, optimizer = msamp.initialize(model, optimizer, opt_level=self.fp8_recipe_handler.opt_level) for i in range(len(result)): if isinstance(result[i], torch.nn.Module): result[i] = model elif isinstance(result[i], (torch.optim.Optimizer)): result[i] = optimizer return tuple(result) def prepare_data_loader( self, data_loader: torch.utils.data.DataLoader, device_placement=None, slice_fn_for_dispatch=None ): """ Prepares a PyTorch DataLoader for training in any distributed setup. It is recommended to use [`Accelerator.prepare`] instead. Args: data_loader (`torch.utils.data.DataLoader`): A vanilla PyTorch DataLoader to prepare device_placement (`bool`, *optional*): Whether or not to place the batches on the proper device in the prepared dataloader. Will default to `self.device_placement`. slice_fn_for_dispatch (`Callable`, *optional*`): If passed, this function will be used to slice tensors across `num_processes`. Will default to [`~utils.slice_tensors`]. This argument is used only when `dispatch_batches` is set to `True` and will be ignored otherwise. Example: ```python >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> data_loader = torch.utils.data.DataLoader(...) 
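>>> # prepare_data_loader shards iteration across processes and, with device_placement=True, moves each batch to accelerator.device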
>>> data_loader = accelerator.prepare_data_loader(data_loader, device_placement=True) ``` """ # Ensure we can't double wrap a DataLoader due to `find_batch_size` if getattr(data_loader, "_is_accelerate_prepared", False): if data_loader not in self._dataloaders: self._dataloaders.append(data_loader) return data_loader if device_placement is None: device_placement = self.device_placement if self.distributed_type != DistributedType.XLA else False prepared_data_loader = prepare_data_loader( data_loader, self.device, num_processes=self.num_processes, process_index=self.process_index, split_batches=self.split_batches, put_on_device=device_placement, rng_types=self.rng_types.copy(), dispatch_batches=self.dispatch_batches, even_batches=self.even_batches, slice_fn_for_dispatch=slice_fn_for_dispatch, use_seedable_sampler=self.use_seedable_sampler, ) self._dataloaders.append(prepared_data_loader) return prepared_data_loader def prepare_optimizer(self, optimizer: torch.optim.Optimizer, device_placement=None): """ Prepares a PyTorch Optimizer for training in any distributed setup. It is recommended to use [`Accelerator.prepare`] instead. Args: optimizer (`torch.optim.Optimizer`): A vanilla PyTorch optimizer to prepare device_placement (`bool`, *optional*): Whether or not to place the optimizer on the proper device. Will default to `self.device_placement`. Example: ```python >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> optimizer = torch.optim.Adam(...) >>> optimizer = accelerator.prepare_optimizer(optimizer, device_placement=True) ``` """ # Ensure we can't double wrap an optimizer due to `find_batch_size` if getattr(optimizer, "_is_accelerate_prepared", False): if optimizer not in self._optimizers: self._optimizers.append(optimizer) return optimizer if device_placement is None: device_placement = self.device_placement optimizer = AcceleratedOptimizer(optimizer, device_placement=device_placement, scaler=self.scaler) self._optimizers.append(optimizer) return optimizer def prepare_scheduler(self, scheduler: LRScheduler): """ Prepares a PyTorch Scheduler for training in any distributed setup. It is recommended to use [`Accelerator.prepare`] instead. Args: scheduler (`torch.optim.lr_scheduler.LRScheduler`): A vanilla PyTorch scheduler to prepare Example: ```python >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> optimizer = torch.optim.Adam(...) >>> scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...) >>> scheduler = accelerator.prepare_scheduler(scheduler) ``` """ # Ensure we can't double wrap a scheduler due to `find_batch_size` if getattr(scheduler, "_is_accelerate_prepared", False): if scheduler not in self._schedulers: self._schedulers.append(scheduler) return scheduler # We try to find the optimizer associated with `scheduler`, the default is the full list. optimizer = self._optimizers for opt in self._optimizers: if getattr(scheduler, "optimizer", None) == opt.optimizer: optimizer = opt break scheduler = AcceleratedScheduler( scheduler, optimizer, step_with_optimizer=self.step_scheduler_with_optimizer, split_batches=self.split_batches, ) self._schedulers.append(scheduler) return scheduler def backward(self, loss, **kwargs): """ Scales the gradients in accordance to the `GradientAccumulationPlugin` and calls the correct `backward()` based on the configuration. Should be used in lieu of `loss.backward()`. 
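When gradient accumulation is enabled, the loss is divided by `gradient_accumulation_steps` before the backward pass (except under DeepSpeed, which applies this scaling itself), and the AMP grad scaler is applied automatically when fp16 mixed precision is active.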
Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps=2) >>> outputs = model(inputs) >>> loss = loss_fn(outputs, labels) >>> accelerator.backward(loss) ``` """ if self.distributed_type != DistributedType.DEEPSPEED: # deepspeed handles loss scaling by gradient_accumulation_steps in its `backward` loss = loss / self.gradient_accumulation_steps if self.distributed_type == DistributedType.DEEPSPEED: self.deepspeed_engine_wrapped.backward(loss, **kwargs) elif self.distributed_type == DistributedType.MEGATRON_LM: return elif self.scaler is not None: self.scaler.scale(loss).backward(**kwargs) else: loss.backward(**kwargs) def set_trigger(self): """ Sets the internal trigger tensor to 1 on the current process. A later check with `Accelerator.check_trigger` should follow, which will check across all processes. Note: Does not require `wait_for_everyone()` Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume later in the training script >>> # `should_do_breakpoint` is a custom function to monitor when to break, >>> # e.g. when the loss is NaN >>> if should_do_breakpoint(loss): ... accelerator.set_trigger() >>> # Assume later in the training script >>> if accelerator.check_trigger(): ... break ``` """ self.flag_tensor = torch.tensor(1, device=self.device) def check_trigger(self): """ Checks if the internal trigger tensor has been set to 1 in any of the processes. If so, will return `True` and reset the trigger tensor to 0. Note: Does not require `wait_for_everyone()` Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume later in the training script >>> # `should_do_breakpoint` is a custom function to monitor when to break, >>> # e.g. when the loss is NaN >>> if should_do_breakpoint(loss): ... accelerator.set_trigger() >>> # Assume later in the training script >>> if accelerator.check_trigger(): ... break ``` """ # Now that we are outside `__init__`, we can initialize it if it is `None` on device if self.flag_tensor is None: self.flag_tensor = torch.tensor(0, device=self.device) flag_tensor = self.reduce(self.flag_tensor) if flag_tensor.item() >= 1: self.flag_tensor = torch.tensor(0, device=self.device) return True return False def unscale_gradients(self, optimizer=None): """ Unscale the gradients in mixed precision training with AMP. This is a no-op in all other settings. Should typically be called through [`Accelerator.clip_grad_norm_`] or [`Accelerator.clip_grad_value_`]. Args: optimizer (`torch.optim.Optimizer` or `list[torch.optim.Optimizer]`, *optional*): The optimizer(s) for which to unscale gradients. If not set, will unscale gradients on all optimizers that were passed to [`~Accelerator.prepare`]. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer = accelerator.prepare(model, optimizer) >>> outputs = model(inputs) >>> loss = loss_fn(outputs, labels) >>> accelerator.backward(loss) >>> accelerator.unscale_gradients(optimizer=optimizer) ``` """ if self.native_amp and self.mixed_precision == "fp16": if optimizer is None: # TODO: this unscales all optimizers when we should only unscale the one that owns the parameters.
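        # Fall back to unscaling every optimizer prepared by this Accelerator when none is passed explicitly.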
optimizer = self._optimizers elif not isinstance(optimizer, (tuple, list)): optimizer = [optimizer] for opt in optimizer: while isinstance(opt, AcceleratedOptimizer): opt = opt.optimizer self.scaler.unscale_(opt) def clip_grad_norm_(self, parameters, max_norm, norm_type=2): """ Should be used in place of `torch.nn.utils.clip_grad_norm_`. Returns: `torch.Tensor`: Total norm of the parameter gradients (viewed as a single vector). Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps=2) >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> for input, target in dataloader: ... optimizer.zero_grad() ... output = model(input) ... loss = loss_func(output, target) ... accelerator.backward(loss) ... if accelerator.sync_gradients: ... accelerator.clip_grad_norm_(model.parameters(), max_grad_norm) ... optimizer.step() ``` """ if self.distributed_type == DistributedType.FSDP: self.unscale_gradients() parameters = [p for p in parameters] for model in self._models: if parameters == [p for p in model.parameters()]: return model.clip_grad_norm_(max_norm, norm_type) elif self.distributed_type == DistributedType.DEEPSPEED: # `accelerator.backward(loss)` is doing that automatically. Therefore, its implementation is not needed # We cannot return the gradient norm because DeepSpeed does it. return None elif self.distributed_type == DistributedType.XLA: # Reduce gradients first for XLA for acc_opt in self._optimizers: if not acc_opt.gradient_state.is_xla_gradients_synced: opt = acc_opt while isinstance(opt, AcceleratedOptimizer): opt = opt.optimizer gradients = xm._fetch_gradients(opt) # Use xm.all_reduce to perform an in-place all-reduce. Recusrsive all-reduce each tensor # one by one in self.reduce is non-inplace. xm.all_reduce("sum", gradients, scale=1.0 / self.num_processes) # Set is_xla_gradients_synced to True to avoid all-reduce twice in the AcceleratedOptimizer step. acc_opt.gradient_state.is_xla_gradients_synced = True self.unscale_gradients() return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type) def clip_grad_value_(self, parameters, clip_value): """ Should be used in place of `torch.nn.utils.clip_grad_value_`. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(gradient_accumulation_steps=2) >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> for input, target in dataloader: ... optimizer.zero_grad() ... output = model(input) ... loss = loss_func(output, target) ... accelerator.backward(loss) ... if accelerator.sync_gradients: ... accelerator.clip_grad_value_(model.parameters(), clip_value) ... optimizer.step() ``` """ if self.distributed_type in [DistributedType.DEEPSPEED, DistributedType.FSDP]: raise Exception("DeepSpeed and FSDP do not support `clip_grad_value_`. Use `clip_grad_norm_` instead.") self.unscale_gradients() torch.nn.utils.clip_grad_value_(parameters, clip_value) def gather(self, tensor): """ Gather the values in *tensor* across all processes and concatenate them on the first dimension. Useful to regroup the predictions from all processes when doing evaluation. Note: This gather happens in all processes. Args: tensor (`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`): The tensors to gather across all processes. Returns: `torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`: The gathered tensor(s). 
Note that the first dimension of the result is *num_processes* multiplied by the first dimension of the input tensors. Example: ```python >>> # Assuming four processes >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> process_tensor = torch.tensor([accelerator.process_index]) >>> gathered_tensor = accelerator.gather(process_tensor) >>> gathered_tensor tensor([0, 1, 2, 3]) ``` """ return gather(tensor) def gather_for_metrics(self, input_data): """ Gathers `input_data` and potentially drops duplicates in the last batch if on a distributed system. Should be used for gathering the inputs and targets for metric calculation. Args: input (`torch.Tensor`, `object`, a nested tuple/list/dictionary of `torch.Tensor`, or a nested tuple/list/dictionary of `object`): The tensors or objects for calculating metrics across all processes Example: ```python >>> # Assuming two processes, with a batch size of 5 on a dataset with 9 samples >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader = torch.utils.data.DataLoader(range(9), batch_size=5) >>> dataloader = accelerator.prepare(dataloader) >>> batch = next(iter(dataloader)) >>> gathered_items = accelerator.gather_for_metrics(batch) >>> len(gathered_items) 9 ``` """ try: recursively_apply(lambda x: x, input_data, error_on_other_type=True) all_tensors = True except TypeError: all_tensors = False if not all_tensors: data = gather_object(input_data) else: data = self.gather(input_data) try: if self.gradient_state.end_of_dataloader: # at the end of a dataloader, `gather_for_metrics` regresses to # `gather` unless the dataset has a remainder so log. if self.gradient_state.remainder == -1: logger.info( "The used dataset had no length, returning gathered tensors. You should drop the remainder yourself." ) return data elif self.gradient_state.remainder > 0: # Last batch needs to be truncated on distributed systems as it contains additional samples def _adjust_samples(tensor): return tensor[: self.gradient_state.remainder] return recursively_apply(_adjust_samples, data) else: # remainder is 0 # no remainder even though at end of dataloader, so nothing to do. return data else: # Not at the end of the dataloader, no need to adjust the tensors return data except Exception: # Dataset had no length or raised an error return data def reduce(self, tensor, reduction="sum", scale=1.0): """ Reduce the values in *tensor* across all processes based on *reduction*. Note: All processes get the reduced value. Args: tensor (`torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`): The tensors to reduce across all processes. reduction (`str`, *optional*, defaults to "sum"): A reduction type, can be one of 'sum', 'mean', or 'none'. If 'none', will not perform any operation. scale (`float`, *optional*, defaults to 1.0): A default scaling value to be applied after the reduce, only valied on XLA. Returns: `torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`: The reduced tensor(s). 
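In the example below, with two processes, process 0 contributes `tensor([1, 2])` and process 1 contributes `tensor([3, 4])`, so the element-wise sum received by every process is `tensor([4, 6])`.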
Example: ```python >>> # Assuming two processes >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> process_tensor = torch.arange(accelerator.num_processes) + 1 + (2 * accelerator.process_index) >>> process_tensor = process_tensor.to(accelerator.device) >>> reduced_tensor = accelerator.reduce(process_tensor, reduction="sum") >>> reduced_tensor tensor([4, 6]) ``` """ return reduce(tensor, reduction, scale) def pad_across_processes(self, tensor, dim=0, pad_index=0, pad_first=False): """ Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they can safely be gathered. Args: tensor (nested list/tuple/dictionary of `torch.Tensor`): The data to gather. dim (`int`, *optional*, defaults to 0): The dimension on which to pad. pad_index (`int`, *optional*, defaults to 0): The value with which to pad. pad_first (`bool`, *optional*, defaults to `False`): Whether to pad at the beginning or the end. Returns: `torch.Tensor`, or a nested tuple/list/dictionary of `torch.Tensor`: The padded tensor(s). Example: ```python >>> # Assuming two processes, with the first processes having a tensor of size 1 and the second of size 2 >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> process_tensor = torch.arange(accelerator.process_index + 1).to(accelerator.device) >>> padded_tensor = accelerator.pad_across_processes(process_tensor) >>> padded_tensor.shape torch.Size([2]) ``` """ return pad_across_processes(tensor, dim=dim, pad_index=pad_index, pad_first=pad_first) def unwrap_model(self, model, keep_fp32_wrapper: bool = True): """ Unwraps the `model` from the additional layer possible added by [`~Accelerator.prepare`]. Useful before saving the model. Args: model (`torch.nn.Module`): The model to unwrap. keep_fp32_wrapper (`bool`, *optional*, defaults to `True`): Whether to not remove the mixed precision hook if it was added. Returns: `torch.nn.Module`: The unwrapped model. Example: ```python >>> # Assuming two GPU processes >>> from torch.nn.parallel import DistributedDataParallel >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model = accelerator.prepare(MyModel()) >>> print(model.__class__.__name__) DistributedDataParallel >>> model = accelerator.unwrap_model(model) >>> print(model.__class__.__name__) MyModel ``` """ return extract_model_from_parallel(model, keep_fp32_wrapper) def wait_for_everyone(self): """ Will stop the execution of the current process until every other process has reached that point (so this does nothing when the script is only run in one process). Useful to do before saving a model. Example: ```python >>> # Assuming two GPU processes >>> import time >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> if accelerator.is_main_process: ... time.sleep(2) >>> else: ... print("I'm waiting for the main process to finish its sleep...") >>> accelerator.wait_for_everyone() >>> # Should print on every process at the same time >>> print("Everyone is here") ``` """ wait_for_everyone() @on_main_process def init_trackers(self, project_name: str, config: dict | None = None, init_kwargs: dict | None = {}): """ Initializes a run for all trackers stored in `self.log_with`, potentially with starting configurations Args: project_name (`str`): The name of the project. All trackers will save their data based on this config (`dict`, *optional*): Optional starting configuration to be logged. 
init_kwargs (`dict`, *optional*): A nested dictionary of kwargs to be passed to a specific tracker's `__init__` function. Should be formatted like so: ```python {"wandb": {"tags": ["tag_a", "tag_b"]}} ``` Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(log_with="tensorboard") >>> accelerator.init_trackers( ... project_name="my_project", ... config={"learning_rate": 0.001, "batch_size": 32}, ... init_kwargs={"tensorboard": {"flush_secs": 60}}, ... ) ``` """ for tracker in self.log_with: if issubclass(type(tracker), GeneralTracker): # Custom trackers are already initialized self.trackers.append(tracker) else: tracker_init = LOGGER_TYPE_TO_CLASS[str(tracker)] if tracker_init.requires_logging_directory: # We can skip this check since it was done in `__init__` self.trackers.append( tracker_init(project_name, self.logging_dir, **init_kwargs.get(str(tracker), {})) ) else: self.trackers.append(tracker_init(project_name, **init_kwargs.get(str(tracker), {}))) if config is not None: for tracker in self.trackers: tracker.store_init_configuration(config) def get_tracker(self, name: str, unwrap: bool = False): """ Returns a `tracker` from `self.trackers` based on `name` on the main process only. Args: name (`str`): The name of a tracker, corresponding to the `.name` property. unwrap (`bool`): Whether to return the internal tracking mechanism or to return the wrapped tracker instead (recommended). Returns: `GeneralTracker`: The tracker corresponding to `name` if it exists. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(log_with="tensorboard") >>> accelerator.init_trackers("my_project") >>> tensorboard_tracker = accelerator.get_tracker("tensorboard") ``` """ if len(self.trackers) > 0: for tracker in self.trackers: if tracker.name == name: return tracker.tracker if unwrap else tracker raise ValueError(f"{name} is not an available tracker stored inside the `Accelerator`.") # Handle tracker only made on main process return GeneralTracker(_blank=True) @on_main_process def log(self, values: dict, step: int | None = None, log_kwargs: dict | None = {}): """ Logs `values` to all stored trackers in `self.trackers` on the main process only. Args: values (`dict`): Values should be a dictionary-like object containing only types `int`, `float`, or `str`. step (`int`, *optional*): The run step. If included, the log will be affiliated with this step. log_kwargs (`dict`, *optional*): A nested dictionary of kwargs to be passed to a specific tracker's `log` function. Should be formatted like so: ```python {"wandb": {"tags": ["tag_a", "tag_b"]}} ``` Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(log_with="tensorboard") >>> accelerator.init_trackers("my_project") >>> accelerator.log({"loss": 0.5, "accuracy": 0.9}) ``` """ for tracker in self.trackers: tracker.log(values, step=step, **log_kwargs.get(tracker.name, {})) @on_main_process def end_training(self): """ Runs any special end training behaviors, such as stopping trackers on the main process only. Should always be called at the end of your script if using experiment tracking. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(log_with="tensorboard") >>> accelerator.init_trackers("my_project") >>> # Do training >>> accelerator.end_training() ``` """ for tracker in self.trackers: tracker.finish() def save(self, obj, f, safe_serialization=False): """ Save the object passed to disk once per machine. 
Use in place of `torch.save`. Args: obj (`object`): The object to save. f (`str` or `os.PathLike`): Where to save the content of `obj`. safe_serialization (`bool`, *optional*, defaults to `False`): Whether to save `obj` using `safetensors` Note: If `save_on_each_node` was passed in as a `ProjectConfiguration`, will save the object once per node, rather than only once on the main node. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> arr = [0, 1, 2, 3] >>> accelerator.save(arr, "array.pkl") ``` """ save( obj, f, save_on_each_node=self.project_configuration.save_on_each_node, safe_serialization=safe_serialization, ) def save_model( self, model: torch.nn.Module, save_directory: Union[str, os.PathLike], max_shard_size: Union[int, str] = "10GB", safe_serialization: bool = True, ): """ Save a model so that it can be re-loaded using load_checkpoint_in_model Arguments: model: (`torch.nn.Module`): Model to be saved. The model can be wrapped or unwraped. save_directory (`str` or `os.PathLike`): Directory to which to save. Will be created if it doesn't exist. max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). <Tip warning={true}> If a single weight of the model is bigger than `max_shard_size`, it will be in its own checkpoint shard which will be bigger than `max_shard_size`. </Tip> safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model = ... >>> accelerator.save_model(model, save_directory) ``` """ if os.path.isfile(save_directory): logger.error(f"Provided path ({save_directory}) should be a directory, not a file") return os.makedirs(save_directory, exist_ok=True) # get the state_dict of the model if any( [ module._hf_hook.offload for module in model.modules() if hasattr(module, "_hf_hook") and isinstance(module._hf_hook, AlignDevicesHook) ] ): state_dict = get_state_dict_offloaded_model(model) else: if any(param.device == torch.device("meta") for param in model.parameters()): raise RuntimeError("You can't save the model since some parameters are on the meta device.") state_dict = self.get_state_dict(model) if safe_serialization: state_dict = clean_state_dict_for_safetensors(state_dict) weights_name = SAFE_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME # Shard the model if it is too big. shards, index = shard_checkpoint(state_dict, max_shard_size=max_shard_size, weights_name=weights_name) # Clean the folder from a previous save for filename in os.listdir(save_directory): full_filename = os.path.join(save_directory, filename) # If we have a shard file that is not going to be replaced, we delete it, but only from the main process # in distributed settings to avoid race conditions. weights_no_suffix = weights_name.replace(".bin", "") # make sure that file to be deleted matches format of sharded file, e.g. 
pytorch_model-00001-of-00005
            filename_no_suffix = filename.replace(".bin", "")
            reg = re.compile(r"(.*?)-\d{5}-of-\d{5}")

            if (
                filename.startswith(weights_no_suffix)
                and os.path.isfile(full_filename)
                and filename not in shards.keys()
                and reg.fullmatch(filename_no_suffix) is not None
                and PartialState().is_main_process
            ):
                os.remove(full_filename)

        # Save the model
        for shard_file, shard in shards.items():
            self.save(shard, os.path.join(save_directory, shard_file), safe_serialization=safe_serialization)

        if index is None:
            path_to_weights = os.path.join(save_directory, WEIGHTS_NAME)
            logger.info(f"Model weights saved in {path_to_weights}")
        else:
            save_index_file = SAFE_WEIGHTS_INDEX_NAME if safe_serialization else WEIGHTS_INDEX_NAME
            save_index_file = os.path.join(save_directory, save_index_file)
            # Save the index as well
            with open(save_index_file, "w", encoding="utf-8") as f:
                content = json.dumps(index, indent=2, sort_keys=True) + "\n"
                f.write(content)
            logger.info(
                f"The model is bigger than the maximum size per checkpoint ({max_shard_size}) and is going to be "
                f"split in {len(shards)} checkpoint shards. You can find where each parameter has been saved in the "
                f"index located at {save_index_file}."
            )

    def register_save_state_pre_hook(self, hook: Callable[..., None]) -> hooks.RemovableHandle:
        """
        Registers a pre hook to be run before `save_checkpoint` is called in [`Accelerator.save_state`].

        Args:
            hook (`Callable`):
                A function to be called in [`Accelerator.save_state`] before `save_checkpoint`.

        The hook should have the following signature:

        `hook(models: list[torch.nn.Module], weights: list[dict[str, torch.Tensor]], output_dir: str) -> None`

        The `models` argument is the list of models as saved in the accelerator state under `accelerator._models`,
        the `weights` argument is the list of state dicts of the `models`, and the `output_dir` argument is the
        `output_dir` argument passed to [`Accelerator.save_state`].

        <Tip>

        Should only be used in conjunction with [`Accelerator.register_load_state_pre_hook`]. Can be useful to save
        configurations in addition to model weights. Can also be used to overwrite model saving with a customized
        method. In this case, make sure to remove already saved weights from the `weights` list.

        </Tip>

        Returns:
            `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling
            `handle.remove()`
        """
        handle = hooks.RemovableHandle(self._save_model_state_pre_hook)
        self._save_model_state_pre_hook[handle.id] = hook
        return handle

    def save_state(self, output_dir: str = None, safe_serialization: bool = True, **save_model_func_kwargs):
        """
        Saves the current states of the model, optimizer, scaler, RNG generators, and registered objects to a folder.

        If a `ProjectConfiguration` was passed to the `Accelerator` object with `automatic_checkpoint_naming` enabled
        then checkpoints will be saved to `self.project_dir/checkpoints`. If the number of current saves is greater
        than `total_limit` then the oldest save is deleted. Each checkpoint is saved in a separate folder named
        `checkpoint_<iteration>`. Otherwise they are just saved to `output_dir`.

        <Tip>

        Should only be used when wanting to save a checkpoint during training and restoring the state in the same
        environment.

        </Tip>

        Args:
            output_dir (`str` or `os.PathLike`):
                The name of the folder to save all relevant weights and states.
            safe_serialization (`bool`, *optional*, defaults to `True`):
                Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
save_model_func_kwargs (`dict`, *optional*): Additional keyword arguments for saving model which can be passed to the underlying save function, such as optional arguments for DeepSpeed's `save_checkpoint` function. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer, lr_scheduler = ... >>> model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler) >>> accelerator.save_state(output_dir="my_checkpoint") ``` """ if self.project_configuration.automatic_checkpoint_naming: output_dir = os.path.join(self.project_dir, "checkpoints") os.makedirs(output_dir, exist_ok=True) if self.project_configuration.automatic_checkpoint_naming: folders = [os.path.join(output_dir, folder) for folder in os.listdir(output_dir)] if ( self.project_configuration.total_limit is not None and (len(folders) + 1 > self.project_configuration.total_limit) and self.is_main_process ): def _inner(folder): return list(map(int, re.findall(r"[\/]?([0-9]+)(?=[^\/]*$)", folder)))[0] folders.sort(key=_inner) logger.warning( f"Deleting {len(folders) + 1 - self.project_configuration.total_limit} checkpoints to make room for new checkpoint." ) for folder in folders[: len(folders) + 1 - self.project_configuration.total_limit]: shutil.rmtree(folder) output_dir = os.path.join(output_dir, f"checkpoint_{self.save_iteration}") if os.path.exists(output_dir): raise ValueError( f"Checkpoint directory {output_dir} ({self.save_iteration}) already exists. Please manually override `self.save_iteration` with what iteration to start with." ) self.wait_for_everyone() os.makedirs(output_dir, exist_ok=True) logger.info(f"Saving current state to {output_dir}") if self.distributed_type == DistributedType.XLA: # Finish running the previous step before checkpointing xm.mark_step() # Save the models taking care of FSDP and DeepSpeed nuances weights = [] for i, model in enumerate(self._models): if self.distributed_type == DistributedType.FSDP: logger.info("Saving FSDP model") save_fsdp_model(self.state.fsdp_plugin, self, model, output_dir, i) logger.info(f"FSDP Model saved to output dir {output_dir}") elif self.distributed_type == DistributedType.DEEPSPEED: logger.info("Saving DeepSpeed Model and Optimizer") ckpt_id = f"{MODEL_NAME}" if i == 0 else f"{MODEL_NAME}_{i}" model.save_checkpoint(output_dir, ckpt_id, **save_model_func_kwargs) logger.info(f"DeepSpeed Model and Optimizer saved to output dir {os.path.join(output_dir, ckpt_id)}") elif self.distributed_type == DistributedType.MEGATRON_LM: logger.info("Saving Megatron-LM Model, Optimizer and Scheduler") model.save_checkpoint(output_dir) logger.info(f"Megatron-LM Model , Optimizer and Scheduler saved to output dir {output_dir}") else: weights.append(self.get_state_dict(model, unwrap=False)) # Save the optimizers taking care of FSDP and DeepSpeed nuances optimizers = [] if self.distributed_type == DistributedType.FSDP: for i, opt in enumerate(self._optimizers): logger.info("Saving FSDP Optimizer") save_fsdp_optimizer(self.state.fsdp_plugin, self, opt, self._models[i], output_dir, i) logger.info(f"FSDP Optimizer saved to output dir {output_dir}") elif self.distributed_type not in [DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM]: optimizers = self._optimizers # Save the lr schedulers taking care of DeepSpeed nuances schedulers = [] if self.distributed_type == DistributedType.DEEPSPEED: for i, scheduler in enumerate(self._schedulers): if isinstance(scheduler, DeepSpeedSchedulerWrapper): continue 
schedulers.append(scheduler) elif self.distributed_type not in [DistributedType.MEGATRON_LM]: schedulers = self._schedulers # Save the samplers of the dataloaders dataloaders = self._dataloaders # Call model loading hooks that might have been registered with # accelerator.register_model_state_hook for hook in self._save_model_state_pre_hook.values(): hook(self._models, weights, output_dir) save_location = save_accelerator_state( output_dir, weights, optimizers, schedulers, dataloaders, self.state.process_index, self.scaler, save_on_each_node=self.project_configuration.save_on_each_node, safe_serialization=safe_serialization, ) for i, obj in enumerate(self._custom_objects): save_custom_state(obj, output_dir, i, save_on_each_node=self.project_configuration.save_on_each_node) self.project_configuration.iteration += 1 return save_location def register_load_state_pre_hook(self, hook: Callable[..., None]) -> hooks.RemovableHandle: """ Registers a pre hook to be run before [`load_checkpoint`] is called in [`Accelerator.load_state`]. Args: hook (`Callable`): A function to be called in [`Accelerator.load_state`] before `load_checkpoint`. The hook should have the following signature: `hook(models: list[torch.nn.Module], input_dir: str) -> None` The `models` argument are the models as saved in the accelerator state under `accelerator._models`, and the `input_dir` argument is the `input_dir` argument passed to [`Accelerator.load_state`]. <Tip> Should only be used in conjunction with [`Accelerator.register_save_state_pre_hook`]. Can be useful to load configurations in addition to model weights. Can also be used to overwrite model loading with a customized method. In this case, make sure to remove already loaded models from the models list. </Tip> Returns: `torch.utils.hooks.RemovableHandle`: a handle that can be used to remove the added hook by calling `handle.remove()` """ handle = hooks.RemovableHandle(self._load_model_state_pre_hook) self._load_model_state_pre_hook[handle.id] = hook return handle def load_state(self, input_dir: str = None, **load_model_func_kwargs): """ Loads the current states of the model, optimizer, scaler, RNG generators, and registered objects. <Tip> Should only be used in conjunction with [`Accelerator.save_state`]. If a file is not registered for checkpointing, it will not be loaded if stored in the directory. </Tip> Args: input_dir (`str` or `os.PathLike`): The name of the folder all relevant weights and states were saved in. Can be `None` if `automatic_checkpoint_naming` is used, and will pick up from the latest checkpoint. load_model_func_kwargs (`dict`, *optional*): Additional keyword arguments for loading model which can be passed to the underlying load function, such as optional arguments for DeepSpeed's `load_checkpoint` function or a `map_location` to load the model and optimizer on. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> model, optimizer, lr_scheduler = ... 
>>> model, optimizer, lr_scheduler = accelerator.prepare(model, optimizer, lr_scheduler) >>> accelerator.load_state("my_checkpoint") ``` """ if input_dir is not None: # Check if folder exists input_dir = os.path.expanduser(input_dir) if not os.path.isdir(input_dir): raise ValueError(f"Tried to find {input_dir} but folder does not exist") elif self.project_configuration.automatic_checkpoint_naming: # Pick up from automatic checkpoint naming input_dir = os.path.join(self.project_dir, "checkpoints") folders = [os.path.join(input_dir, folder) for folder in os.listdir(input_dir)] def _inner(folder): return list(map(int, re.findall(r"[\/]?([0-9]+)(?=[^\/]*$)", folder)))[0] folders.sort(key=_inner) input_dir = folders[-1] else: raise ValueError("No input_dir provided and automatic checkpoint naming is disabled.") logger.info(f"Loading states from {input_dir}") # Load the models taking care of FSDP and DeepSpeed nuances models = [] for i, model in enumerate(self._models): if self.distributed_type == DistributedType.FSDP: logger.info("Loading FSDP model") load_fsdp_model(self.state.fsdp_plugin, self, model, input_dir, i) logger.info(f"FSDP Model loaded from input dir {input_dir}") elif self.distributed_type == DistributedType.DEEPSPEED: logger.info("Loading DeepSpeed Model and Optimizer") ckpt_id = f"{MODEL_NAME}" if i == 0 else f"{MODEL_NAME}_{i}" model.load_checkpoint(input_dir, ckpt_id, **load_model_func_kwargs) logger.info(f"DeepSpeed Model and Optimizer loaded from input dir {os.path.join(input_dir, ckpt_id)}") elif self.distributed_type == DistributedType.MEGATRON_LM: logger.info("Loading Megatron-LM Model, Optimizer and Scheduler") model.load_checkpoint(input_dir) logger.info(f"Megatron-LM Model , Optimizer and Scheduler loaded from input dir {input_dir}") else: models.append(model) # Load the optimizers taking care of FSDP and DeepSpeed nuances optimizers = [] if self.distributed_type == DistributedType.FSDP: for i, opt in enumerate(self._optimizers): logger.info("Loading FSDP Optimizer") load_fsdp_optimizer(self.state.fsdp_plugin, self, opt, self._models[i], input_dir, i) logger.info(f"FSDP Optimizer loaded from input dir {input_dir}") elif self.distributed_type not in [DistributedType.DEEPSPEED, DistributedType.MEGATRON_LM]: optimizers = self._optimizers # Load the lr schedulers taking care of DeepSpeed nuances schedulers = [] if self.distributed_type == DistributedType.DEEPSPEED: for i, scheduler in enumerate(self._schedulers): if isinstance(scheduler, DeepSpeedSchedulerWrapper): continue schedulers.append(scheduler) elif self.distributed_type not in [DistributedType.MEGATRON_LM]: schedulers = self._schedulers dataloaders = self._dataloaders # Call model loading hooks that might have been registered with # accelerator.register_model_state_hook for hook in self._load_model_state_pre_hook.values(): hook(models, input_dir) map_location = load_model_func_kwargs.pop("map_location", None) if map_location is None: if self.num_processes > 1 and self.distributed_type in ( DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, ): map_location = "on_device" else: map_location = "cpu" load_accelerator_state( input_dir, models, optimizers, schedulers, dataloaders, self.state.process_index, self.scaler, map_location, **load_model_func_kwargs, ) custom_checkpoints = [ f for f in os.listdir(input_dir) if re.search(r"^custom_checkpoint_\d+\.pkl$", f) is not None ] if len(custom_checkpoints) != len(self._custom_objects): err = "Number of custom checkpoints in folder {input_dir} does not match the number of 
registered objects:"
            err += f"\n\tFound checkpoints: {len(custom_checkpoints)}"
            err += f"\n\tRegistered objects: {len(self._custom_objects)}\n"
            err += "Please make sure to only load checkpoints from folders that were created with the same set of registered objects, "
            err += "or avoid using `custom_checkpoint` in the filename for files in that same directory and load them in manually."
            raise RuntimeError(err)
        else:
            logger.info(f"Loading in {len(custom_checkpoints)} custom states")
            for index, obj in enumerate(self._custom_objects):
                load_custom_state(obj, input_dir, index)

    def free_memory(self):
        """
        Will release all references to the internal objects stored and call the garbage collector. You should call
        this method between two trainings with different models/optimizers. Also will reset `Accelerator.step` to 0.

        Example:

        ```python
        >>> from accelerate import Accelerator

        >>> accelerator = Accelerator()
        >>> model, optimizer, scheduler = ...
        >>> model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)
        >>> accelerator.free_memory()
        >>> del model, optimizer, scheduler
        ```
        """
        self._schedulers = []
        self._optimizers = []
        self._models = []
        self._dataloaders = []
        self.deepspeed_engine_wrapped = None
        self.step = 0
        release_memory()

    def clear(self):
        """
        Alias for [`Accelerator.free_memory`], releases all references to the internal objects stored and calls the
        garbage collector. You should call this method between two trainings with different models/optimizers.

        Example:

        ```python
        >>> from accelerate import Accelerator

        >>> accelerator = Accelerator()
        >>> model, optimizer, scheduler = ...
        >>> model, optimizer, scheduler = accelerator.prepare(model, optimizer, scheduler)
        >>> accelerator.clear()
        >>> del model, optimizer, scheduler
        ```
        """
        self.free_memory()

    def _get_named_parameters(self, *args):
        named_parameters = {}
        for obj in args:
            if isinstance(obj, torch.nn.Module):
                obj = extract_model_from_parallel(obj)
                named_parameters.update({n: p for n, p in obj.named_parameters()})
        return named_parameters

    def _get_devices(self, *args):
        model_device = None
        optimizer_device = None
        for obj in args:
            # Loop through model parameters and stop at the first one to get its device.
            if isinstance(obj, torch.nn.Module):
                for param in obj.parameters():
                    model_device = param.device
                    break
            # Loop through optimizer parameter groups and stop at the first one to get its device.
            if isinstance(obj, torch.optim.Optimizer):
                for param_group in obj.param_groups:
                    if len(param_group["params"]) > 0:
                        optimizer_device = param_group["params"][0].device
                        break
        return (model_device, optimizer_device)

    def get_state_dict(self, model, unwrap=True):
        """
        Returns the state dictionary of a model sent through [`Accelerator.prepare`] potentially without full
        precision.

        Args:
            model (`torch.nn.Module`):
                A PyTorch model sent through [`Accelerator.prepare`]
            unwrap (`bool`, *optional*, defaults to `True`):
                Whether to return the original underlying state_dict of `model` or to return the wrapped state_dict

        Returns:
            `dict`: The state dictionary of the model potentially without full precision.
Example: ```python >>> import torch >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> net = torch.nn.Linear(2, 2) >>> net = accelerator.prepare(net) >>> state_dict = accelerator.get_state_dict(net) ``` """ if self.distributed_type == DistributedType.DEEPSPEED: if self.deepspeed_config["zero_optimization"]["stage"] == 3: if model.zero_gather_16bit_weights_on_model_save(): state_dict = model._zero3_consolidated_16bit_state_dict() else: raise ValueError( "Cannot get 16bit model weights because `stage3_gather_16bit_weights_on_model_save` in DeepSpeed config is False. " "To save the model weights in 16bit, set `stage3_gather_16bit_weights_on_model_save` to True in DeepSpeed config file or " "set `zero3_save_16bit_model` to True when using `accelerate config`. " "To save the full checkpoint, run `model.save_checkpoint(save_dir)` and use `zero_to_fp32.py` to recover weights." ) else: from deepspeed.checkpoint.utils import clone_tensors_for_torch_save state_dict = clone_tensors_for_torch_save(self.unwrap_model(model).state_dict()) elif self.distributed_type == DistributedType.FSDP: from torch.distributed.fsdp import FullStateDictConfig, StateDictType from torch.distributed.fsdp import FullyShardedDataParallel as FSDP full_state_dict_config = FullStateDictConfig(offload_to_cpu=True, rank0_only=True) with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, full_state_dict_config): state_dict = model.state_dict() else: if unwrap: model = self.unwrap_model(model) state_dict = model.state_dict() return state_dict def register_for_checkpointing(self, *objects): """ Makes note of `objects` and will save or load them in during `save_state` or `load_state`. These should be utilized when the state is being loaded or saved in the same script. It is not designed to be used in different scripts. <Tip> Every `object` must have a `load_state_dict` and `state_dict` function to be stored. </Tip> Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> # Assume `CustomObject` has a `state_dict` and `load_state_dict` function. >>> obj = CustomObject() >>> accelerator.register_for_checkpointing(obj) >>> accelerator.save_state("checkpoint.pt") ``` """ invalid_objects = [] for obj in objects: if not hasattr(obj, "state_dict") or not hasattr(obj, "load_state_dict"): invalid_objects.append(obj) if len(invalid_objects) > 0: err = "All `objects` must include a `state_dict` and `load_state_dict` function to be stored. The following inputs are invalid:" for index, obj in enumerate(invalid_objects): err += f"\n\t- Item at index {index}, `{get_pretty_name(obj)}`" raise ValueError(err) self._custom_objects.extend(objects) @contextmanager def autocast(self, cache_enabled: bool = False, autocast_handler: AutocastKwargs = None): """ Will apply automatic mixed-precision inside the block inside this context manager, if it is enabled. Nothing different will happen otherwise. A different `autocast_handler` can be passed in to override the one set in the `Accelerator` object. This is useful in blocks under `autocast` where you want to revert to fp32. Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator(mixed_precision="fp16") >>> with accelerator.autocast(): ... train() ``` """ if cache_enabled: warnings.warn( "Passing `cache_enabled=True` to `accelerator.autocast` is deprecated and will be removed in v0.23.0. 
" "Please use the `AutocastKwargs` class instead and pass it to the `Accelerator` as a `kwarg_handler`.", FutureWarning, ) if self.autocast_handler is not None: self.autocast_handler.cache_enabled = True else: self.autocast_handler = AutocastKwargs(cache_enabled=True) if autocast_handler is None: autocast_handler = self.autocast_handler autocast_context = get_mixed_precision_context_manager(self.native_amp, autocast_handler) autocast_context.__enter__() # TODO: should the `yield` be in a try/finally block? yield autocast_context.__exit__(*sys.exc_info()) @property def optimizer_step_was_skipped(self): """ Whether or not the optimizer update was skipped (because of gradient overflow in mixed precision), in which case the learning rate should not be changed. """ for optimizer in self._optimizers: if optimizer.step_was_skipped: return True return False def skip_first_batches(self, dataloader, num_batches: int = 0): """ Creates a new `torch.utils.data.DataLoader` that will efficiently skip the first `num_batches`. Args: dataloader (`torch.utils.data.DataLoader`): The data loader in which to skip batches. num_batches (`int`, *optional*, defaults to 0): The number of batches to skip Example: ```python >>> from accelerate import Accelerator >>> accelerator = Accelerator() >>> dataloader, model, optimizer, scheduler = accelerator.prepare(dataloader, model, optimizer, scheduler) >>> skipped_dataloader = accelerator.skip_first_batches(dataloader, num_batches=2) >>> # for the first epoch only >>> for input, target in skipped_dataloader: ... optimizer.zero_grad() ... output = model(input) ... loss = loss_func(output, target) ... accelerator.backward(loss) ... optimizer.step() >>> # subsequent epochs >>> for input, target in dataloader: ... optimizer.zero_grad() ... ... ``` """ return skip_first_batches(dataloader, num_batches=num_batches) def __deepcopy__(self, memo): logger.info("Deep copying the `Accelerator` object, note that this will point to the same original object.") return self def verify_device_map(self, model: torch.nn.Module) -> bool: """ Verifies that `model` has not been prepared with big model inference with a device-map resembling `auto`. """ # Checks if any of the child modules has the attribute `hf_device_map` and this map has more than one entry. for m in model.modules(): if hasattr(m, "hf_device_map") and len(m.hf_device_map) > 1: return True return False
accelerate/src/accelerate/accelerator.py/0
{ "file_path": "accelerate/src/accelerate/accelerator.py", "repo_id": "accelerate", "token_count": 62812 }
4
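The checkpointing APIs documented in the file above (`register_for_checkpointing`, `save_state`, `load_state`) are meant to be used together. Below is a minimal, illustrative sketch of a save/load round trip; the linear model, SGD optimizer, and the `"my_checkpoint"` folder name are placeholders chosen for the example, not part of the library.

```python
# Illustrative sketch only: a checkpoint round trip with the methods above.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(2, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

# Any extra object exposing `state_dict`/`load_state_dict` can ride along in the checkpoint.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
accelerator.register_for_checkpointing(scheduler)

# Saves the model, optimizer, scaler, RNG states and registered objects to a folder.
accelerator.save_state(output_dir="my_checkpoint")

# Later, in the same environment, restore everything from that folder.
accelerator.load_state("my_checkpoint")
```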
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import warnings import torch from .state import AcceleratorState, GradientState from .utils import DistributedType, honor_type, is_torch_xla_available if is_torch_xla_available(): import torch_xla.core.xla_model as xm def move_to_device(state, device): if isinstance(state, (list, tuple)): return honor_type(state, (move_to_device(t, device) for t in state)) elif isinstance(state, dict): return type(state)({k: move_to_device(v, device) for k, v in state.items()}) elif isinstance(state, torch.Tensor): return state.to(device) return state class AcceleratedOptimizer(torch.optim.Optimizer): """ Internal wrapper around a torch optimizer. Conditionally will perform `step` and `zero_grad` if gradients should be synchronized when performing gradient accumulation. Args: optimizer (`torch.optim.optimizer.Optimizer`): The optimizer to wrap. device_placement (`bool`, *optional*, defaults to `True`): Whether or not the optimizer should handle device placement. If so, it will place the state dictionary of `optimizer` on the right device. scaler (`torch.cuda.amp.grad_scaler.GradScaler`, *optional*): The scaler to use in the step function if training with mixed precision. 
""" def __init__(self, optimizer, device_placement=True, scaler=None): self.optimizer = optimizer self.scaler = scaler self.accelerator_state = AcceleratorState() self.gradient_state = GradientState() self.device_placement = device_placement self._is_overflow = False if self.scaler is not None: self._accelerate_step_called = False self._optimizer_original_step_method = self.optimizer.step self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step) # Handle device placement if device_placement: state_dict = self.optimizer.state_dict() if self.accelerator_state.distributed_type == DistributedType.XLA: xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device) else: state_dict = move_to_device(state_dict, self.accelerator_state.device) self.optimizer.load_state_dict(state_dict) @property def state(self): return self.optimizer.state @state.setter def state(self, state): self.optimizer.state = state @property def param_groups(self): return self.optimizer.param_groups @param_groups.setter def param_groups(self, param_groups): self.optimizer.param_groups = param_groups @property def defaults(self): return self.optimizer.defaults @defaults.setter def defaults(self, defaults): self.optimizer.defaults = defaults def add_param_group(self, param_group): self.optimizer.add_param_group(param_group) def load_state_dict(self, state_dict): if self.accelerator_state.distributed_type == DistributedType.XLA and self.device_placement: xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device) self.optimizer.load_state_dict(state_dict) def state_dict(self): return self.optimizer.state_dict() def zero_grad(self, set_to_none=None): if self.gradient_state.sync_gradients: accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters if accept_arg: if set_to_none is None: set_to_none = True self.optimizer.zero_grad(set_to_none=set_to_none) else: if set_to_none is not None: raise ValueError("`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.") self.optimizer.zero_grad() def step(self, closure=None): if ( not self.gradient_state.is_xla_gradients_synced and self.accelerator_state.distributed_type == DistributedType.XLA ): gradients = xm._fetch_gradients(self.optimizer) xm.all_reduce("sum", gradients, scale=1.0 / xm.xrt_world_size()) self.gradient_state.is_xla_gradients_synced = True if self.gradient_state.sync_gradients: if self.scaler is not None: self.optimizer.step = self._optimizer_patched_step_method self.scaler.step(self.optimizer, closure) self.scaler.update() if not self._accelerate_step_called: # If the optimizer step was skipped, gradient overflow was detected. 
self._is_overflow = True else: self._is_overflow = False # Reset the step method to the original one self.optimizer.step = self._optimizer_original_step_method # Reset the indicator self._accelerate_step_called = False else: self.optimizer.step(closure) if self.accelerator_state.distributed_type == DistributedType.XLA: self.gradient_state.is_xla_gradients_synced = False def _switch_parameters(self, parameters_map): for param_group in self.optimizer.param_groups: param_group["params"] = [parameters_map.get(p, p) for p in param_group["params"]] @property def is_overflow(self): """Whether or not the optimizer step was done, or skipped because of gradient overflow.""" warnings.warn( "The `is_overflow` property is deprecated and will be removed in version 1.0 of Accelerate use " "`optimizer.step_was_skipped` instead.", FutureWarning, ) return self._is_overflow @property def step_was_skipped(self): """Whether or not the optimizer step was skipped.""" return self._is_overflow def __getstate__(self): _ignored_keys = [ "_accelerate_step_called", "_optimizer_original_step_method", "_optimizer_patched_step_method", ] return {k: v for k, v in self.__dict__.items() if k not in _ignored_keys} def __setstate__(self, state): self.__dict__.update(state) if self.scaler is not None: self._accelerate_step_called = False self._optimizer_original_step_method = self.optimizer.step self._optimizer_patched_step_method = patch_optimizer_step(self, self.optimizer.step) def patch_optimizer_step(accelerated_optimizer: AcceleratedOptimizer, method): def patched_step(*args, **kwargs): accelerated_optimizer._accelerate_step_called = True return method(*args, **kwargs) return patched_step
accelerate/src/accelerate/optimizer.py/0
{ "file_path": "accelerate/src/accelerate/optimizer.py", "repo_id": "accelerate", "token_count": 3103 }
5
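Because the wrapped optimizer above records whether the `GradScaler` skipped an update, a typical fp16 training loop can gate its learning-rate schedule on that flag. The sketch below is illustrative only: it assumes a CUDA device is available (fp16 mixed precision is not supported on CPU), and the tiny model and random batch are placeholders.

```python
# Illustrative sketch: reacting to a skipped optimizer step under fp16.
import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # assumes a GPU is available

model = torch.nn.Linear(2, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)

batch = torch.randn(8, 2, device=accelerator.device)
loss = model(batch).sum()
accelerator.backward(loss)  # scales the loss when a GradScaler is in use
optimizer.step()            # may be skipped internally on gradient overflow

# `step_was_skipped` is set by the patched step method shown above; the LR
# schedule should not advance when the update was skipped.
if not optimizer.step_was_skipped:
    scheduler.step()
optimizer.zero_grad()
```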
#!/usr/bin/env python # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import io import math import time from copy import deepcopy from pathlib import Path import numpy as np import torch from torch.utils.data import DataLoader, Dataset from accelerate import Accelerator from accelerate.data_loader import SeedableRandomSampler, prepare_data_loader from accelerate.state import AcceleratorState from accelerate.test_utils import RegressionDataset, are_the_same_tensors from accelerate.utils import ( DataLoaderConfiguration, DistributedType, gather, is_bf16_available, is_ipex_available, is_npu_available, is_xpu_available, set_seed, synchronize_rng_states, ) # TODO: remove RegressionModel4XPU once ccl support empty buffer in broadcasting. if is_xpu_available(): from accelerate.test_utils import RegressionModel4XPU as RegressionModel else: from accelerate.test_utils import RegressionModel def generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler=False): "Creates a dataloader that can also use the `SeedableRandomSampler`" if use_seedable_sampler: # The SeedableRandomSampler is needed during distributed setups # for full reproducability across processes with the `DataLoader` sampler = SeedableRandomSampler( generator=generator, data_source=train_set, num_samples=len(train_set), ) return DataLoader(train_set, batch_size=batch_size, sampler=sampler) else: return DataLoader(train_set, batch_size=batch_size, shuffle=True, generator=generator) def print_main(state): print(f"Printing from the main process {state.process_index}") def print_local_main(state): print(f"Printing from the local main process {state.local_process_index}") def print_last(state): print(f"Printing from the last process {state.process_index}") def print_on(state, process_idx): print(f"Printing from process {process_idx}: {state.process_index}") def process_execution_check(): accelerator = Accelerator() num_processes = accelerator.num_processes # Test main_process_first context manager path = Path("check_main_process_first.txt") with accelerator.main_process_first(): if accelerator.is_main_process: time.sleep(0.1) # ensure main process takes longest with open(path, "a+") as f: f.write("Currently in the main process\n") else: with open(path, "a+") as f: f.write("Now on another process\n") accelerator.wait_for_everyone() if accelerator.is_main_process: with open(path) as f: text = "".join(f.readlines()) try: assert text.startswith("Currently in the main process\n"), "Main process was not first" if num_processes > 1: assert text.endswith("Now on another process\n"), "Main process was not first" assert ( text.count("Now on another process\n") == accelerator.num_processes - 1 ), f"Only wrote to file {text.count('Now on another process') + 1} times, not {accelerator.num_processes}" except AssertionError: path.unlink() raise if accelerator.is_main_process and path.exists(): path.unlink() accelerator.wait_for_everyone() # Test the decorators f = 
io.StringIO() with contextlib.redirect_stdout(f): accelerator.on_main_process(print_main)(accelerator.state) result = f.getvalue().rstrip() if accelerator.is_main_process: assert result == "Printing from the main process 0", f"{result} != Printing from the main process 0" else: assert f.getvalue().rstrip() == "", f'{result} != ""' f.truncate(0) f.seek(0) with contextlib.redirect_stdout(f): accelerator.on_local_main_process(print_local_main)(accelerator.state) if accelerator.is_local_main_process: assert f.getvalue().rstrip() == "Printing from the local main process 0" else: assert f.getvalue().rstrip() == "" f.truncate(0) f.seek(0) with contextlib.redirect_stdout(f): accelerator.on_last_process(print_last)(accelerator.state) if accelerator.is_last_process: assert f.getvalue().rstrip() == f"Printing from the last process {accelerator.state.num_processes - 1}" else: assert f.getvalue().rstrip() == "" f.truncate(0) f.seek(0) for process_idx in range(num_processes): with contextlib.redirect_stdout(f): accelerator.on_process(print_on, process_index=process_idx)(accelerator.state, process_idx) if accelerator.process_index == process_idx: assert f.getvalue().rstrip() == f"Printing from process {process_idx}: {accelerator.process_index}" else: assert f.getvalue().rstrip() == "" f.truncate(0) f.seek(0) def init_state_check(): # Test we can instantiate this twice in a row. state = AcceleratorState() if state.local_process_index == 0: print("Testing, testing. 1, 2, 3.") print(state) def rng_sync_check(): state = AcceleratorState() synchronize_rng_states(["torch"]) assert are_the_same_tensors(torch.get_rng_state()), "RNG states improperly synchronized on CPU." if state.distributed_type == DistributedType.MULTI_GPU: synchronize_rng_states(["cuda"]) assert are_the_same_tensors(torch.cuda.get_rng_state()), "RNG states improperly synchronized on GPU." elif state.distributed_type == DistributedType.MULTI_XPU: synchronize_rng_states(["xpu"]) assert are_the_same_tensors(torch.xpu.get_rng_state()), "RNG states improperly synchronized on XPU." generator = torch.Generator() synchronize_rng_states(["generator"], generator=generator) assert are_the_same_tensors(generator.get_state()), "RNG states improperly synchronized in generator." if state.local_process_index == 0: print("All rng are properly synched.") def dl_preparation_check(): state = AcceleratorState() length = 32 * state.num_processes dl = DataLoader(range(length), batch_size=8) dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index, put_on_device=True) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result) print(state.process_index, result, type(dl)) assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result." dl = DataLoader(range(length), batch_size=8) dl = prepare_data_loader( dl, state.device, state.num_processes, state.process_index, put_on_device=True, split_batches=True, ) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result) assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result." 
if state.process_index == 0: print("Non-shuffled dataloader passing.") dl = DataLoader(range(length), batch_size=8, shuffle=True) dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index, put_on_device=True) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result).tolist() result.sort() assert result == list(range(length)), "Wrong shuffled dataloader result." dl = DataLoader(range(length), batch_size=8, shuffle=True) dl = prepare_data_loader( dl, state.device, state.num_processes, state.process_index, put_on_device=True, split_batches=True, ) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result).tolist() result.sort() assert result == list(range(length)), "Wrong shuffled dataloader result." if state.local_process_index == 0: print("Shuffled dataloader passing.") def central_dl_preparation_check(): state = AcceleratorState() length = 32 * state.num_processes dl = DataLoader(range(length), batch_size=8) dl = prepare_data_loader( dl, state.device, state.num_processes, state.process_index, put_on_device=True, dispatch_batches=True ) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result) assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result." dl = DataLoader(range(length), batch_size=8) dl = prepare_data_loader( dl, state.device, state.num_processes, state.process_index, put_on_device=True, split_batches=True, dispatch_batches=True, ) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result) assert torch.equal(result.cpu(), torch.arange(0, length).long()), "Wrong non-shuffled dataloader result." if state.process_index == 0: print("Non-shuffled central dataloader passing.") dl = DataLoader(range(length), batch_size=8, shuffle=True) dl = prepare_data_loader( dl, state.device, state.num_processes, state.process_index, put_on_device=True, dispatch_batches=True ) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result).tolist() result.sort() assert result == list(range(length)), "Wrong shuffled dataloader result." dl = DataLoader(range(length), batch_size=8, shuffle=True) dl = prepare_data_loader( dl, state.device, state.num_processes, state.process_index, put_on_device=True, split_batches=True, dispatch_batches=True, ) result = [] for batch in dl: result.append(gather(batch)) result = torch.cat(result).tolist() result.sort() assert result == list(range(length)), "Wrong shuffled dataloader result." 
if state.local_process_index == 0: print("Shuffled central dataloader passing.") def custom_sampler_check(): state = AcceleratorState() class CustomDataset(Dataset): def __init__(self, data): self.data = data def __len__(self): return len(self.data) def __getitem__(self, index): return self.data[index] class CustomBatchSampler: def __init__(self, dataset_length: int, batch_size: int, shuffle: bool = True): self.batch_size = batch_size self.data_index = np.arange(dataset_length) self.shuffle = shuffle def __iter__(self): num_batches = len(self) if self.shuffle: index = np.random.permutation(self.data_index) else: index = self.data_index output = np.array_split(index, num_batches) yield from output def __len__(self): return math.ceil(len(self.data_index) / self.batch_size) dataset = CustomDataset(range(32 * state.num_processes)) sampler = CustomBatchSampler(len(dataset), batch_size=8) dl = DataLoader(dataset, batch_sampler=sampler) dl = prepare_data_loader(dl, state.device, state.num_processes, state.process_index) # We need just ensure that `dl.batch_sampler` (or `dl.batch_sampler.batch_sampler` is indeed the old batch sampler if hasattr(dl.batch_sampler, "batch_sampler"): assert isinstance( dl.batch_sampler.batch_sampler, CustomBatchSampler ), "Custom sampler was changed after calling `prepare_data_loader`" else: assert isinstance( dl.batch_sampler, CustomBatchSampler ), "Custom sampler was changed after calling `prepare_data_loader`" def check_seedable_sampler(): # Set seed set_seed(42) train_set = RegressionDataset(length=10, seed=42) train_dl = DataLoader(train_set, batch_size=2, shuffle=True) config = DataLoaderConfiguration(use_seedable_sampler=True) accelerator = Accelerator(dataloader_config=config) train_dl = accelerator.prepare(train_dl) original_items = [] for _ in range(3): for batch in train_dl: original_items.append(batch["x"]) original_items = torch.cat(original_items) # Set seed again and the epoch set_seed(42) train_dl.set_epoch(0) new_items = [] for _ in range(3): for batch in train_dl: new_items.append(batch["x"]) new_items = torch.cat(new_items) assert torch.allclose(original_items, new_items), "Did not obtain the same items with the same seed and epoch." def mock_training(length, batch_size, generator, use_seedable_sampler=False): set_seed(42) generator.manual_seed(42) train_set = RegressionDataset(length=length, seed=42) train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) for epoch in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) loss.backward() optimizer.step() return train_set, model def training_check(use_seedable_sampler=False): state = AcceleratorState() generator = torch.Generator() batch_size = 8 length = batch_size * 4 * state.num_processes train_set, old_model = mock_training(length, batch_size * state.num_processes, generator, use_seedable_sampler) assert are_the_same_tensors(old_model.a), "Did not obtain the same model on both processes." assert are_the_same_tensors(old_model.b), "Did not obtain the same model on both processes." 
accelerator = Accelerator() train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer) set_seed(42) generator.manual_seed(42) for _ in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) accelerator.backward(loss) optimizer.step() model = accelerator.unwrap_model(model).cpu() assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training." assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training." accelerator.print("Training yielded the same results on one CPU or distributed setup with no batch split.") dataloader_config = DataLoaderConfiguration(split_batches=True, use_seedable_sampler=use_seedable_sampler) accelerator = Accelerator(dataloader_config=dataloader_config) train_dl = generate_baseline_dataloader( train_set, generator, batch_size * state.num_processes, use_seedable_sampler ) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer) set_seed(42) generator.manual_seed(42) for _ in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) accelerator.backward(loss) optimizer.step() model = accelerator.unwrap_model(model).cpu() assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training." assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training." accelerator.print("Training yielded the same results on one CPU or distributes setup with batch split.") if torch.cuda.is_available() or is_npu_available(): # Mostly a test that FP16 doesn't crash as the operation inside the model is not converted to FP16 print("FP16 training check.") AcceleratorState._reset_state() dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler) accelerator = Accelerator(mixed_precision="fp16", dataloader_config=dataloader_config) train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer) set_seed(42) generator.manual_seed(42) for _ in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) accelerator.backward(loss) optimizer.step() model = accelerator.unwrap_model(model).cpu() assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training." assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training." if torch.cuda.is_available(): # Mostly a test that model.forward will have autocast when running unwrap_model(model, keep_fp32_wrapper=True) print("Keep fp32 wrapper check.") AcceleratorState._reset_state() accelerator = Accelerator(mixed_precision="fp16") model = torch.nn.Linear(2, 4) model = accelerator.prepare(model) model_with_fp32_wrapper = accelerator.unwrap_model(model, keep_fp32_wrapper=True) # Run forward with fp16 as input. 
# When the model is with mixed precision wrapper, no error will be raised. input_tensor = torch.Tensor([1, 2]).to(dtype=torch.float16, device=accelerator.device) output = model_with_fp32_wrapper(input_tensor) # BF16 support is only for CPU + TPU, and some GPU if is_bf16_available(): # Mostly a test that BF16 doesn't crash as the operation inside the model is not converted to BF16 print("BF16 training check.") AcceleratorState._reset_state() dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler) accelerator = Accelerator(mixed_precision="bf16", dataloader_config=dataloader_config) train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer) set_seed(42) generator.manual_seed(42) for _ in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) accelerator.backward(loss) optimizer.step() model = accelerator.unwrap_model(model).cpu() assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training." assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training." # IPEX support is only for CPU if is_ipex_available(): print("ipex BF16 training check.") AcceleratorState._reset_state() dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler) accelerator = Accelerator(mixed_precision="bf16", cpu=True, dataloader_config=dataloader_config) train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer) set_seed(42) generator.manual_seed(42) for _ in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) accelerator.backward(loss) optimizer.step() model = accelerator.unwrap_model(model).cpu() assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on CPU or distributed training." assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on CPU or distributed training." # XPU support is only for XPU if is_xpu_available(): print("xpu BF16 training check.") AcceleratorState._reset_state() dataloader_config = DataLoaderConfiguration(use_seedable_sampler=use_seedable_sampler) accelerator = Accelerator(mixed_precision="bf16", cpu=False, dataloader_config=dataloader_config) train_dl = generate_baseline_dataloader(train_set, generator, batch_size, use_seedable_sampler) model = RegressionModel() optimizer = torch.optim.SGD(model.parameters(), lr=0.1) train_dl, model, optimizer = accelerator.prepare(train_dl, model, optimizer) set_seed(42) generator.manual_seed(42) for _ in range(3): for batch in train_dl: model.zero_grad() output = model(batch["x"]) loss = torch.nn.functional.mse_loss(output, batch["y"]) accelerator.backward(loss) optimizer.step() model = accelerator.unwrap_model(model).cpu() assert torch.allclose(old_model.a, model.a), "Did not obtain the same model on XPU or distributed training." assert torch.allclose(old_model.b, model.b), "Did not obtain the same model on XPU or distributed training." 
def test_split_between_processes_list(): state = AcceleratorState() data = list(range(0, 2 * state.num_processes)) with state.split_between_processes(data) as results: assert ( len(results) == 2 ), f"Each process did not have two items. Process index: {state.process_index}; Length: {len(results)}" data = list(range(0, (3 * state.num_processes) - 1)) with state.split_between_processes(data, apply_padding=True) as results: if state.is_last_process: # Test that the last process gets the extra item(s) num_samples_per_device = math.ceil(len(data) / state.num_processes) assert ( len(results) == num_samples_per_device ), f"Last process did not get the extra item(s). Process index: {state.process_index}; Length: {len(results)}" state.wait_for_everyone() def test_split_between_processes_nested_dict(): state = AcceleratorState() a = [1, 2, 3, 4, 5, 6, 7, 8] b = ["a", "b", "c", "d", "e", "f", "g", "h"] c = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]) if state.num_processes in (1, 2, 4): data = {"a": a, "b": b, "c": c} data_copy = deepcopy(data) with state.split_between_processes(data) as results: if state.process_index == 0: assert results["a"] == data_copy["a"][: 8 // state.num_processes] elif state.num_processes == 2: assert results["a"] == data_copy["a"][4:] elif state.process_index == 3: # We return a list each time assert results["a"] == data_copy["a"][-2:], f'Expected: {data_copy["a"][-2]}, Actual: {results["a"]}' if state.process_index == 0: assert results["b"] == data_copy["b"][: 8 // state.num_processes] elif state.num_processes == 2: assert results["b"] == data_copy["b"][4:] elif state.process_index == 3: assert results["b"] == data_copy["b"][-2:] if state.process_index == 0: assert torch.allclose( results["c"], data_copy["c"][: 8 // state.num_processes] ), f"Did not obtain expected values on process 0, expected `{data['c'][:8 // state.num_processes]}`, received: {results['c']}" elif state.num_processes == 2: assert torch.allclose( results["c"], data_copy["c"][4:] ), f"Did not obtain expected values on process 2, expected `{data['c'][4:]}`, received: {results['c']}" elif state.process_index == 3: assert torch.allclose( results["c"], data_copy["c"][-2:] ), f"Did not obtain expected values on process 4, expected `{data['c'][-2:]}`, received: {results['c']}" state.wait_for_everyone() def test_split_between_processes_tensor(): state = AcceleratorState() if state.num_processes > 1: data = torch.tensor([[0, 1, 2, 3], [4, 5, 6, 7]]).to(state.device) with state.split_between_processes(data) as results: if state.process_index == 0: assert torch.allclose(results, torch.tensor([0, 1, 2, 3]).to(state.device)) else: assert torch.allclose(results, torch.tensor([4, 5, 6, 7]).to(state.device)) state.wait_for_everyone() def test_trigger(): accelerator = Accelerator() # should start with being false assert accelerator.check_trigger() is False # set a breakpoint on the main process if accelerator.is_main_process: accelerator.set_trigger() # check it's been activated across all processes # calls `all_reduce` and triggers a sync assert accelerator.check_trigger() is True # check it's been reset after the sync assert accelerator.check_trigger() is False def main(): accelerator = Accelerator() state = accelerator.state if state.local_process_index == 0: print("**Initialization**") init_state_check() state.wait_for_everyone() if state.distributed_type == DistributedType.MULTI_GPU: num_processes_per_node = torch.cuda.device_count() else: num_processes_per_node = state.num_processes # We only run this test on 
non-multinode if num_processes_per_node == state.num_processes: if state.process_index == 0: print("\n**Test process execution**") process_execution_check() if state.process_index == 0: print("\n**Test split between processes as a list**") test_split_between_processes_list() if state.process_index == 0: print("\n**Test split between processes as a dict**") test_split_between_processes_nested_dict() if state.process_index == 0: print("\n**Test split between processes as a tensor**") test_split_between_processes_tensor() if state.local_process_index == 0: print("\n**Test random number generator synchronization**") rng_sync_check() if state.local_process_index == 0: print("\n**DataLoader integration test**") dl_preparation_check() if state.distributed_type != DistributedType.XLA: central_dl_preparation_check() custom_sampler_check() check_seedable_sampler() # Trainings are not exactly the same in DeepSpeed and CPU mode if state.distributed_type == DistributedType.DEEPSPEED: return if state.local_process_index == 0: print("\n**Training integration test**") training_check(use_seedable_sampler=False) training_check(use_seedable_sampler=True) if state.local_process_index == 0: print("\n**Breakpoint trigger test**") test_trigger() if __name__ == "__main__": main()
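# ---------------------------------------------------------------------------
# Illustrative sketch (never called by `main` above): how the trigger API verified in
# `test_trigger` is typically used so that every process stops when any single rank
# hits a problem. All arguments are placeholders supplied by the caller.
def example_early_stop_on_nan(accelerator, model, optimizer, dataloader):
    for batch in dataloader:
        optimizer.zero_grad()
        output = model(batch["x"])
        loss = torch.nn.functional.mse_loss(output, batch["y"])
        if torch.isnan(loss):
            # Flag the problem on whichever rank observes it.
            accelerator.set_trigger()
        else:
            accelerator.backward(loss)
            optimizer.step()
        # `check_trigger` all-reduces the flag, so every rank agrees to break together.
        if accelerator.check_trigger():
            break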
accelerate/src/accelerate/test_utils/scripts/test_script.py/0
{ "file_path": "accelerate/src/accelerate/test_utils/scripts/test_script.py", "repo_id": "accelerate", "token_count": 11562 }
6
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib import gc import importlib import inspect import json import logging import os import re import shutil import tempfile import warnings from collections import OrderedDict, defaultdict from typing import Dict, List, Optional, Tuple, Union import packaging import torch import torch.nn as nn from ..state import AcceleratorState from .constants import SAFE_WEIGHTS_NAME, WEIGHTS_NAME from .dataclasses import AutocastKwargs, CustomDtype, DistributedType from .imports import is_mps_available, is_npu_available, is_peft_available, is_torch_xla_available, is_xpu_available from .offload import load_offloaded_weight, offload_weight, save_offload_index from .tqdm import is_tqdm_available, tqdm from .versions import compare_versions if is_npu_available(check_device=False): import torch_npu # noqa: F401 from safetensors import safe_open from safetensors.torch import load_file as safe_load_file WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json" logger = logging.getLogger(__name__) def is_peft_model(model): from .other import extract_model_from_parallel if is_peft_available(): from peft import PeftModel return is_peft_available() and isinstance(extract_model_from_parallel(model), PeftModel) def check_device_same(first_device, second_device): """ Utility method to check if two `torch` devices are similar. When dealing with CUDA devices, torch throws `False` for `torch.device("cuda") == torch.device("cuda:0")` whereas they should be the same Args: first_device (`torch.device`): First device to check second_device (`torch.device`): Second device to check """ if first_device.type != second_device.type: return False if first_device.type == "cuda" and first_device.index is None: # In case the first_device is a cuda device and have # the index attribute set to `None`, default it to `0` first_device = torch.device("cuda", index=0) if second_device.type == "cuda" and second_device.index is None: # In case the second_device is a cuda device and have # the index attribute set to `None`, default it to `0` second_device = torch.device("cuda", index=0) return first_device == second_device def convert_file_size_to_int(size: Union[int, str]): """ Converts a size expressed as a string with digits an unit (like `"5MB"`) to an integer (in bytes). Args: size (`int` or `str`): The size to convert. Will be directly returned if an `int`. Example: ```py >>> convert_file_size_to_int("1MiB") 1048576 ``` """ mem_size = -1 err_msg = ( f"`size` {size} is not in a valid format. Use an integer for bytes, or a string with an unit (like '5.0GB')." 
) try: if isinstance(size, int): mem_size = size elif size.upper().endswith("GIB"): mem_size = int(float(size[:-3]) * (2**30)) elif size.upper().endswith("MIB"): mem_size = int(float(size[:-3]) * (2**20)) elif size.upper().endswith("KIB"): mem_size = int(float(size[:-3]) * (2**10)) elif size.upper().endswith("GB"): int_size = int(float(size[:-2]) * (10**9)) mem_size = int_size // 8 if size.endswith("b") else int_size elif size.upper().endswith("MB"): int_size = int(float(size[:-2]) * (10**6)) mem_size = int_size // 8 if size.endswith("b") else int_size elif size.upper().endswith("KB"): int_size = int(float(size[:-2]) * (10**3)) mem_size = int_size // 8 if size.endswith("b") else int_size except ValueError: raise ValueError(err_msg) if mem_size < 0: raise ValueError(err_msg) return mem_size def dtype_byte_size(dtype: torch.dtype): """ Returns the size (in bytes) occupied by one parameter of type `dtype`. Example: ```py >>> dtype_byte_size(torch.float32) 4 ``` """ if dtype == torch.bool: return 1 / 8 elif dtype == CustomDtype.INT2: return 1 / 4 elif dtype == CustomDtype.INT4: return 1 / 2 elif dtype == CustomDtype.FP8: return 1 bit_search = re.search(r"[^\d](\d+)$", str(dtype)) if bit_search is None: raise ValueError(f"`dtype` is not a valid dtype: {dtype}.") bit_size = int(bit_search.groups()[0]) return bit_size // 8 def id_tensor_storage(tensor: torch.Tensor) -> Tuple[torch.device, int, int]: """ Unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. For example, "meta" tensors all share the same storage, and thus their identifier will all be equal. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id. """ _SIZE = { torch.int64: 8, torch.float32: 4, torch.int32: 4, torch.bfloat16: 2, torch.float16: 2, torch.int16: 2, torch.uint8: 1, torch.int8: 1, torch.bool: 1, torch.float64: 8, } try: storage_ptr = tensor.untyped_storage().data_ptr() storage_size = tensor.untyped_storage().nbytes() except Exception: # Fallback for torch==1.10 try: storage_ptr = tensor.storage().data_ptr() storage_size = tensor.storage().size() * _SIZE[tensor.dtype] except NotImplementedError: # Fallback for meta storage storage_ptr = 0 # On torch >=2.0 this is the tensor size storage_size = tensor.nelement() * _SIZE[tensor.dtype] return tensor.device, storage_ptr, storage_size def shard_checkpoint( state_dict: Dict[str, torch.Tensor], max_shard_size: Union[int, str] = "10GB", weights_name: str = WEIGHTS_NAME ): """ Splits a model state dictionary into sub-checkpoints so that the final size of each sub-checkpoint does not exceed a given size. The sub-checkpoints are determined by iterating through the `state_dict` in the order of its keys, so there is no optimization made to make each sub-checkpoint as close as possible to the maximum size passed. For example, if the limit is 10GB and we have weights of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB]. <Tip warning={true}> If one of the model's weights is bigger than `max_shard_size`, it will end up in its own sub-checkpoint which will have a size greater than `max_shard_size`. </Tip> Args: state_dict (`Dict[str, torch.Tensor]`): The state dictionary of a model to save. max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`): The maximum size of each sub-checkpoint.
If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). weights_name (`str`, *optional*, defaults to `"pytorch_model.bin"`): The name of the model save file. """ max_shard_size = convert_file_size_to_int(max_shard_size) sharded_state_dicts = [{}] last_block_size = 0 total_size = 0 storage_id_to_block = {} for key, weight in state_dict.items(): # when bnb serialization is used the weights in the state dict can be strings # check: https://github.com/huggingface/transformers/pull/24416 for more details if isinstance(weight, str): continue else: storage_id = id_tensor_storage(weight) # If a `weight` shares the same underlying storage as another tensor, we put `weight` in the same `block` if storage_id in storage_id_to_block: block_id = storage_id_to_block[storage_id] sharded_state_dicts[block_id][key] = weight continue weight_size = weight.numel() * dtype_byte_size(weight.dtype) # If this weight is going to tip up over the maximal size, we split. if last_block_size + weight_size > max_shard_size: sharded_state_dicts.append({}) last_block_size = 0 sharded_state_dicts[-1][key] = weight last_block_size += weight_size total_size += weight_size storage_id_to_block[storage_id] = len(sharded_state_dicts) - 1 # If we only have one shard, we return it if len(sharded_state_dicts) == 1: return {weights_name: sharded_state_dicts[0]}, None # Otherwise, let's build the index weight_map = {} shards = {} for idx, shard in enumerate(sharded_state_dicts): shard_file = weights_name.replace(".bin", f"-{idx + 1:05d}-of-{len(sharded_state_dicts):05d}.bin") shard_file = shard_file.replace( ".safetensors", f"-{idx + 1:05d}-of-{len(sharded_state_dicts):05d}.safetensors" ) shards[shard_file] = shard for key in shard.keys(): weight_map[key] = shard_file # Add the metadata metadata = {"total_size": total_size} index = {"metadata": metadata, "weight_map": weight_map} return shards, index def set_module_tensor_to_device( module: nn.Module, tensor_name: str, device: Union[int, str, torch.device], value: Optional[torch.Tensor] = None, dtype: Optional[Union[str, torch.dtype]] = None, fp16_statistics: Optional[torch.HalfTensor] = None, tied_params_map: Optional[Dict[int, Dict[torch.device, torch.Tensor]]] = None, ): """ A helper function to set a given tensor (parameter of buffer) of a module on a specific device (note that doing `param.to(device)` creates a new tensor not linked to the parameter, which is why we need this function). Args: module (`torch.nn.Module`): The module in which the tensor we want to move lives. tensor_name (`str`): The full name of the parameter/buffer. device (`int`, `str` or `torch.device`): The device on which to set the tensor. value (`torch.Tensor`, *optional*): The value of the tensor (useful when going from the meta device to any other device). dtype (`torch.dtype`, *optional*): If passed along the value of the parameter will be cast to this `dtype`. Otherwise, `value` will be cast to the dtype of the existing parameter in the model. fp16_statistics (`torch.HalfTensor`, *optional*): The list of fp16 statistics to set on the module, used for 8 bit model serialization. tied_params_map (Dict[int, Dict[torch.device, torch.Tensor]], *optional*, defaults to `None`): A map of current data pointers to dictionaries of devices to already dispatched tied weights. For a given execution device, this parameter is useful to reuse the first available pointer of a shared weight on the device for all others, instead of duplicating memory. """ # Recurse if needed if "." 
in tensor_name: splits = tensor_name.split(".") for split in splits[:-1]: new_module = getattr(module, split) if new_module is None: raise ValueError(f"{module} has no attribute {split}.") module = new_module tensor_name = splits[-1] if tensor_name not in module._parameters and tensor_name not in module._buffers: raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.") is_buffer = tensor_name in module._buffers old_value = getattr(module, tensor_name) # Treat the case where old_value (or a custom `value`, typically offloaded to RAM/disk) belongs to a tied group, and one of the weight # in the tied group has already been dispatched to the device, by avoiding reallocating memory on the device and just copying the pointer. if ( value is not None and tied_params_map is not None and value.data_ptr() in tied_params_map and device in tied_params_map[value.data_ptr()] ): module._parameters[tensor_name] = tied_params_map[value.data_ptr()][device] return elif ( tied_params_map is not None and old_value.data_ptr() in tied_params_map and device in tied_params_map[old_value.data_ptr()] ): module._parameters[tensor_name] = tied_params_map[old_value.data_ptr()][device] return if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None: raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.") if value is not None: if old_value.shape != value.shape: raise ValueError( f'Trying to set a tensor of shape {value.shape} in "{tensor_name}" (which has shape {old_value.shape}), this look incorrect.' ) if dtype is None: # For compatibility with PyTorch load_state_dict which converts state dict dtype to existing dtype in model value = value.to(old_value.dtype) elif not str(value.dtype).startswith(("torch.uint", "torch.int", "torch.bool")): value = value.to(dtype) param = module._parameters[tensor_name] if tensor_name in module._parameters else None param_cls = type(param) device_quantization = None with torch.no_grad(): # leave it on cpu first before moving them to cuda # # fix the case where the device is meta, we don't want to put it on cpu because there is no data =0 if ( param is not None and param.device.type != "cuda" and torch.device(device).type == "cuda" and param_cls.__name__ in ["Int8Params", "FP4Params", "Params4bit"] ): device_quantization = device device = "cpu" # `torch.Tensor.to(<int num>)` is not supported by `torch_npu` (see this [issue](https://github.com/Ascend/pytorch/issues/16)). 
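        # Illustration: an integer index such as 0 is rewritten below to the explicit string
        # "npu:0" / "xpu:0" before any `.to(device)` call is made on those backends.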
if is_npu_available() and isinstance(device, int): device = f"npu:{device}" if is_xpu_available() and isinstance(device, int): device = f"xpu:{device}" if value is None: new_value = old_value.to(device) if dtype is not None and device in ["meta", torch.device("meta")]: if not str(old_value.dtype).startswith(("torch.uint", "torch.int", "torch.bool")): new_value = new_value.to(dtype) if not is_buffer: module._parameters[tensor_name] = param_cls(new_value, requires_grad=old_value.requires_grad) elif isinstance(value, torch.Tensor): new_value = value.to(device) else: new_value = torch.tensor(value, device=device) if device_quantization is not None: device = device_quantization if is_buffer: module._buffers[tensor_name] = new_value elif value is not None or not check_device_same(torch.device(device), module._parameters[tensor_name].device): param_cls = type(module._parameters[tensor_name]) kwargs = module._parameters[tensor_name].__dict__ if param_cls.__name__ in ["Int8Params", "FP4Params"]: if param_cls.__name__ == "Int8Params" and new_value.dtype == torch.float32: # downcast to fp16 if any - needed for 8bit serialization new_value = new_value.to(torch.float16) # quantize module that are going to stay on the cpu so that we offload quantized weights if device == "cpu" and param_cls.__name__ == "Int8Params": new_value = param_cls(new_value, requires_grad=old_value.requires_grad, **kwargs).to(0).to("cpu") new_value.CB = new_value.CB.to("cpu") new_value.SCB = new_value.SCB.to("cpu") else: new_value = param_cls(new_value, requires_grad=old_value.requires_grad, **kwargs).to(device) elif param_cls.__name__ in ["QTensor", "QBitsTensor"]: new_value = torch.nn.Parameter(new_value, requires_grad=old_value.requires_grad).to(device) else: new_value = param_cls(new_value, requires_grad=old_value.requires_grad).to(device) module._parameters[tensor_name] = new_value if fp16_statistics is not None: module._parameters[tensor_name].SCB = fp16_statistics.to(device) del fp16_statistics # as we put the weight to meta, it doesn't have SCB attr anymore. make sure that it is not a meta weight if ( module.__class__.__name__ == "Linear8bitLt" and getattr(module.weight, "SCB", None) is None and str(module.weight.device) != "meta" ): # quantize only if necessary device_index = torch.device(device).index if torch.device(device).type == "cuda" else None if not getattr(module.weight, "SCB", None) and device_index is not None: if module.bias is not None and module.bias.device.type != "meta": # if a bias exists, we need to wait until the bias is set on the correct device module = module.cuda(device_index) elif module.bias is None: # if no bias exists, we can quantize right away module = module.cuda(device_index) elif module.__class__.__name__ == "Linear4bit" and getattr(module.weight, "quant_state", None) is None: # quantize only if necessary device_index = torch.device(device).index if torch.device(device).type == "cuda" else None if not getattr(module.weight, "quant_state", None) and device_index is not None: module.weight = module.weight.cuda(device_index) # clean pre and post foward hook if is_npu_available(): torch.npu.empty_cache() elif is_xpu_available(): torch.xpu.empty_cache() else: torch.cuda.empty_cache() # When handling tied weights, we update tied_params_map to keep track of the tied weights that have already been allocated on the device in # order to avoid duplicating memory, see above. 
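    # Illustration (hypothetical values): `tied_params_map` maps a weight's `data_ptr()` to the
    # copies already dispatched per device, e.g. {139842031123456: {torch.device("cuda:0"): <tensor>}},
    # so a tied companion can reuse that tensor instead of allocating a duplicate.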
if ( tied_params_map is not None and old_value.data_ptr() in tied_params_map and device not in tied_params_map[old_value.data_ptr()] ): tied_params_map[old_value.data_ptr()][device] = new_value elif ( value is not None and tied_params_map is not None and value.data_ptr() in tied_params_map and device not in tied_params_map[value.data_ptr()] ): tied_params_map[value.data_ptr()][device] = new_value def named_module_tensors( module: nn.Module, include_buffers: bool = True, recurse: bool = False, remove_non_persistent: bool = False ): """ A helper function that gathers all the tensors (parameters + buffers) of a given module. If `include_buffers=True` it's the same as doing `module.named_parameters(recurse=recurse) + module.named_buffers(recurse=recurse)`. Args: module (`torch.nn.Module`): The module we want the tensors on. include_buffer (`bool`, *optional*, defaults to `True`): Whether or not to include the buffers in the result. recurse (`bool`, *optional`, defaults to `False`): Whether or not to go look in every submodule or just return the direct parameters and buffers. remove_non_persistent (`bool`, *optional*, defaults to `False`): Whether or not to remove the non persistent buffer from the buffers. Useful only when include_buffers = True """ yield from module.named_parameters(recurse=recurse) if include_buffers: non_persistent_buffers = set() if remove_non_persistent: non_persistent_buffers = get_non_persistent_buffers(module, recurse=recurse) for named_buffer in module.named_buffers(recurse=recurse): name, _ = named_buffer if name not in non_persistent_buffers: yield named_buffer def get_non_persistent_buffers(module: nn.Module, recurse: bool = False): """ Gather all non persistent buffers of a given modules into a set Args: module (`nn.Module`): The module we want the non persistent buffers on. recurse (`bool`, *optional*, defaults to `False`): Whether or not to go look in every submodule or just return the direct non persistent buffers. """ non_persistent_buffers_set = module._non_persistent_buffers_set if recurse: for _, m in module.named_modules(): non_persistent_buffers_set |= m._non_persistent_buffers_set return non_persistent_buffers_set class FindTiedParametersResult(list): """ This is a subclass of a list to handle backward compatibility for Transformers. Do not rely on the fact this is not a list or on the `values` method as in the future this will be removed. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def values(self): # TODO: at the next Transformers release (4.28.0) issue a deprecation warning here. return sum([x[1:] for x in self], []) def check_tied_parameters_in_config(model: nn.Module): """ Check if there is any indication in the given model that some weights should be tied. 
Args: model (`torch.nn.Module`): The model to inspect Returns: bool: True if the model needs to have tied weights """ # based on model.tie_weights() method has_tied_word_embedding = False has_tied_encoder_decoder = False has_tied_module = False if "PreTrainedModel" in [c.__name__ for c in inspect.getmro(model.__class__)]: has_tied_word_embedding = ( hasattr(model, "config") and getattr(model.config, "tie_word_embeddings", False) and model.get_output_embeddings() ) has_tied_encoder_decoder = ( hasattr(model, "config") and getattr(model.config, "is_encoder_decoder", False) and getattr(model.config, "tie_encoder_decoder", False) ) has_tied_module = any(hasattr(module, "_tie_weights") for module in model.modules()) return any([has_tied_word_embedding, has_tied_encoder_decoder, has_tied_module]) def _get_param_device(param, device_map): if param in device_map: return device_map[param] parent_param = ".".join(param.split(".")[:-1]) if parent_param == param: raise ValueError(f"The `device_map` does not contain the module {param}.") else: return _get_param_device(parent_param, device_map) def check_tied_parameters_on_same_device(tied_params, device_map): """ Check if tied parameters are on the same device Args: tied_params (`List[List[str]]`): A list of lists of parameter names being all tied together. device_map (`Dict[str, Union[int, str, torch.device]]`): A map that specifies where each submodule should go. """ for tie_param in tied_params: tie_param_devices = {} for param in tie_param: tie_param_devices[param] = _get_param_device(param, device_map) if len(set(tie_param_devices.values())) > 1: logger.warn( f"Tied parameters are on different devices: {tie_param_devices}. " "Please modify your custom device map or set `device_map='auto'`. " ) def find_tied_parameters(model: nn.Module, **kwargs): """ Find the tied parameters in a given model. <Tip warning={true}> The signature accepts keyword arguments, but they are for the recursive part of this function and you should ignore them. </Tip> Args: model (`torch.nn.Module`): The model to inspect. Returns: List[List[str]]: A list of lists of parameter names being all tied together. Example: ```py >>> from collections import OrderedDict >>> import torch.nn as nn >>> model = nn.Sequential(OrderedDict([("linear1", nn.Linear(4, 4)), ("linear2", nn.Linear(4, 4))])) >>> model.linear2.weight = model.linear1.weight >>> find_tied_parameters(model) [['linear1.weight', 'linear2.weight']] ``` """ # Initialize result and named_parameters before recursing. named_parameters = kwargs.get("named_parameters", None) prefix = kwargs.get("prefix", "") result = kwargs.get("result", {}) if named_parameters is None: named_parameters = {n: p for n, p in model.named_parameters()} else: # A tied parameter will not be in the full `named_parameters` seen above but will be in the `named_parameters` # of the submodule it belongs to. So while recursing we track the names that are not in the initial # `named_parameters`. for name, parameter in model.named_parameters(): full_name = name if prefix == "" else f"{prefix}.{name}" if full_name not in named_parameters: # When we find one, it has to be one of the existing parameters. for new_name, new_param in named_parameters.items(): if new_param is parameter: if new_name not in result: result[new_name] = [] result[new_name].append(full_name) # Once we have treated direct parameters, we move to the child modules. 
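    # The recursion below shares the top-level `named_parameters` and accumulates into the same
    # `result` dict, so a weight tied inside a submodule is reported under its canonical name,
    # e.g. {"linear1.weight": ["linear2.weight"]} for the docstring example above.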
for name, child in model.named_children(): child_name = name if prefix == "" else f"{prefix}.{name}" find_tied_parameters(child, named_parameters=named_parameters, prefix=child_name, result=result) return FindTiedParametersResult([sorted([weight] + list(set(tied))) for weight, tied in result.items()]) def retie_parameters(model, tied_params): """ Reties tied parameters in a given model if the link was broken (for instance when adding hooks). Args: model (`torch.nn.Module`): The model in which to retie parameters. tied_params (`List[List[str]]`): A mapping parameter name to tied parameter name as obtained by `find_tied_parameters`. """ for tied_group in tied_params: param_to_tie = None # two loops : the first one to set param_to_tie , the second one to change the values of tied_group for param_name in tied_group: module = model splits = param_name.split(".") for split in splits[:-1]: module = getattr(module, split) param = getattr(module, splits[-1]) if param_to_tie is None and param.device != torch.device("meta"): param_to_tie = param break if param_to_tie is not None: for param_name in tied_group: module = model splits = param_name.split(".") for split in splits[:-1]: module = getattr(module, split) setattr(module, splits[-1], param_to_tie) def _get_proper_dtype(dtype: Union[str, torch.device]) -> torch.dtype: """ Just does torch.dtype(dtype) if necessary. """ if isinstance(dtype, str): # We accept "torch.float16" or just "float16" dtype = dtype.replace("torch.", "") dtype = getattr(torch, dtype) return dtype def compute_module_sizes( model: nn.Module, dtype: Optional[Union[str, torch.device]] = None, special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None, buffers_only: bool = False, ): """ Compute the size of each submodule of a given model. """ if dtype is not None: dtype = _get_proper_dtype(dtype) dtype_size = dtype_byte_size(dtype) if special_dtypes is not None: special_dtypes = {key: _get_proper_dtype(dtyp) for key, dtyp in special_dtypes.items()} special_dtypes_size = {key: dtype_byte_size(dtyp) for key, dtyp in special_dtypes.items()} module_sizes = defaultdict(int) module_list = [] if not buffers_only: module_list = named_module_tensors(model, recurse=True) else: module_list = model.named_buffers(recurse=True) for name, tensor in module_list: if special_dtypes is not None and name in special_dtypes: size = tensor.numel() * special_dtypes_size[name] elif dtype is None: size = tensor.numel() * dtype_byte_size(tensor.dtype) elif str(tensor.dtype).startswith(("torch.uint", "torch.int", "torch.bool")): # According to the code in set_module_tensor_to_device, these types won't be converted # so use their original size here size = tensor.numel() * dtype_byte_size(tensor.dtype) else: size = tensor.numel() * min(dtype_size, dtype_byte_size(tensor.dtype)) name_parts = name.split(".") for idx in range(len(name_parts) + 1): module_sizes[".".join(name_parts[:idx])] += size return module_sizes def compute_module_total_buffer_size( model: nn.Module, dtype: Optional[Union[str, torch.device]] = None, special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None, ): """ Compute the total size of buffers in each submodule of a given model. 
""" module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes, buffers_only=True) return module_sizes.get("", 0) def get_max_layer_size( modules: List[Tuple[str, torch.nn.Module]], module_sizes: Dict[str, int], no_split_module_classes: List[str] ): """ Utility function that will scan a list of named modules and return the maximum size used by one full layer. The definition of a layer being: - a module with no direct children (just parameters and buffers) - a module whose class name is in the list `no_split_module_classes` Args: modules (`List[Tuple[str, torch.nn.Module]]`): The list of named modules where we want to determine the maximum layer size. module_sizes (`Dict[str, int]`): A dictionary mapping each layer name to its size (as generated by `compute_module_sizes`). no_split_module_classes (`List[str]`): A list of class names for layers we don't want to be split. Returns: `Tuple[int, List[str]]`: The maximum size of a layer with the list of layer names realizing that maximum size. """ max_size = 0 layer_names = [] modules_to_treat = modules.copy() while len(modules_to_treat) > 0: module_name, module = modules_to_treat.pop(0) modules_children = list(module.named_children()) if isinstance(module, torch.nn.Module) else [] if len(modules_children) == 0 or module.__class__.__name__ in no_split_module_classes: # No splitting this one so we compare to the max_size size = module_sizes[module_name] if size > max_size: max_size = size layer_names = [module_name] elif size == max_size: layer_names.append(module_name) else: modules_to_treat = [(f"{module_name}.{n}", v) for n, v in modules_children] + modules_to_treat return max_size, layer_names def get_max_memory(max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None): """ Get the maximum memory available if nothing is passed, converts string to int otherwise. """ import psutil if max_memory is None: if not (torch.cuda.is_available() or is_npu_available() or is_xpu_available()): max_memory = {} else: # Make sure CUDA is initialized on each GPU to have the right memory info. if is_npu_available(): for i in range(torch.npu.device_count()): _ = torch.tensor(0, device=torch.device("npu", i)) max_memory = {i: torch.npu.mem_get_info(i)[0] for i in range(torch.npu.device_count())} elif is_xpu_available(): for i in range(torch.xpu.device_count()): _ = torch.tensor(0, device=torch.device("xpu", i)) max_memory = {i: torch.xpu.max_memory_allocated(i) for i in range(torch.xpu.device_count())} else: for i in range(torch.cuda.device_count()): _ = torch.tensor([0], device=i) max_memory = {i: torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())} # allocate everything in the mps device as the RAM is shared if is_mps_available(): max_memory["mps"] = psutil.virtual_memory().available else: max_memory["cpu"] = psutil.virtual_memory().available return max_memory for key in max_memory: if isinstance(max_memory[key], str): max_memory[key] = convert_file_size_to_int(max_memory[key]) # Need to sort the device by type to make sure that we allocate the gpu first. # As gpu/npu/xpu are represented by int, we need to sort them first. 
gpu_devices = [k for k in max_memory.keys() if isinstance(k, int)] gpu_devices.sort() # check if gpu/npu/xpu devices are available and if not, throw a warning if is_npu_available(): num_devices = torch.npu.device_count() elif is_xpu_available(): num_devices = torch.xpu.device_count() else: num_devices = torch.cuda.device_count() for device in gpu_devices: if device >= num_devices or device < 0: logger.warning(f"Device {device} is not available, available devices are {list(range(num_devices))}") # Add the other devices in the preset order if they are available all_devices = gpu_devices + [k for k in ["mps", "cpu", "disk"] if k in max_memory.keys()] # Raise an error if a device is not recognized for k in max_memory.keys(): if k not in all_devices: raise ValueError( f"Device {k} is not recognized, available devices are integers(for GPU/XPU), 'mps', 'cpu' and 'disk'" ) max_memory = {k: max_memory[k] for k in all_devices} return max_memory def clean_device_map(device_map: Dict[str, Union[int, str, torch.device]], module_name: str = ""): """ Cleans a device_map by grouping all submodules that go on the same device together. """ # Get the value of the current module and if there is only one split across several keys, regroup it. prefix = "" if module_name == "" else f"{module_name}." values = [v for k, v in device_map.items() if k.startswith(prefix)] if len(set(values)) == 1 and len(values) > 1: for k in [k for k in device_map if k.startswith(prefix)]: del device_map[k] device_map[module_name] = values[0] # Recurse over the children children_modules = [k for k in device_map.keys() if k.startswith(prefix) and len(k) > len(module_name)] idx = len(module_name.split(".")) + 1 if len(module_name) > 0 else 1 children_modules = set(".".join(k.split(".")[:idx]) for k in children_modules) for child in children_modules: clean_device_map(device_map, module_name=child) return device_map def load_offloaded_weights(model, index, offload_folder): """ Loads the weights from the offload folder into the model. Args: model (`torch.nn.Module`): The model to load the weights into. index (`dict`): A dictionary containing the parameter name and its metadata for each parameter that was offloaded from the model. offload_folder (`str`): The folder where the offloaded weights are stored. """ if index is None or len(index) == 0: # Nothing to do return for param_name, metadata in index.items(): if "SCB" in param_name: continue fp16_statistics = None if "weight" in param_name and param_name.replace("weight", "SCB") in index.keys(): weight_name = param_name.replace("weight", "SCB") fp16_statistics = load_offloaded_weight( os.path.join(offload_folder, f"{weight_name}.dat"), index[weight_name] ) tensor_file = os.path.join(offload_folder, f"{param_name}.dat") weight = load_offloaded_weight(tensor_file, metadata) set_module_tensor_to_device(model, param_name, "cpu", value=weight, fp16_statistics=fp16_statistics) def get_balanced_memory( model: nn.Module, max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None, no_split_module_classes: Optional[List[str]] = None, dtype: Optional[Union[str, torch.dtype]] = None, special_dtypes: Optional[Dict[str, Union[str, torch.device]]] = None, low_zero: bool = False, ): """ Compute a `max_memory` dictionary for [`infer_auto_device_map`] that will balance the use of each available GPU. <Tip> All computation is done analyzing sizes and dtypes of the model parameters. 
As a result, the model can be on the meta device (as it would if initialized within the `init_empty_weights` context manager). </Tip> Args: model (`torch.nn.Module`): The model to analyze. max_memory (`Dict`, *optional*): A dictionary device identifier to maximum memory. Will default to the maximum memory available if unset. Example: `max_memory={0: "1GB"}`. no_split_module_classes (`List[str]`, *optional*): A list of layer class names that should never be split across device (for instance any layer that has a residual connection). dtype (`str` or `torch.dtype`, *optional*): If provided, the weights will be converted to that type when loaded. special_dtypes (`Dict[str, Union[str, torch.device]]`, *optional*): If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights). low_zero (`bool`, *optional*): Minimizes the number of weights on GPU 0, which is convenient when it's used for other operations (like the Transformers generate function). """ # Get default / clean up max_memory user_not_set_max_memory = max_memory is None max_memory = get_max_memory(max_memory) if is_npu_available(): num_devices = len([d for d in max_memory if torch.device(d).type == "npu" and max_memory[d] > 0]) elif is_xpu_available(): num_devices = len( [ d for d in max_memory if ( d != "cpu" and (torch.device(d).type == "xpu" or torch.xpu.get_device_properties(d).dev_type == "gpu") ) and max_memory[d] > 0 ] ) else: num_devices = len([d for d in max_memory if torch.device(d).type == "cuda" and max_memory[d] > 0]) if num_devices == 0: return max_memory if num_devices == 1: # We cannot do low_zero on just one GPU, but we will still reserve some memory for the buffer low_zero = False # If user just asked us to handle memory usage, we should avoid OOM if user_not_set_max_memory: for key in max_memory.keys(): if isinstance(key, int): max_memory[key] *= 0.9 # 90% is a good compromise logger.info( f"We will use 90% of the memory on device {key} for storing the model, and 10% for the buffer to avoid OOM. " "You can set `max_memory` in to a higher value to use more memory (at your own risk)." ) break # only one device module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes) per_gpu = module_sizes[""] // (num_devices - 1 if low_zero else num_devices) # We can't just set the memory to model_size // num_devices as it will end being too small: each GPU will get # slightly less layers and some layers will end up offload at the end. So this function computes a buffer size to # add which is the biggest of: # - the size of no split block (if applicable) # - the mean of the layer sizes if no_split_module_classes is None: no_split_module_classes = [] elif not isinstance(no_split_module_classes, (list, tuple)): no_split_module_classes = [no_split_module_classes] # Identify the size of the no_split_block modules if len(no_split_module_classes) > 0: no_split_children = {} for name, size in module_sizes.items(): if name == "": continue submodule = model for submodule_name in name.split("."): submodule = getattr(submodule, submodule_name) class_name = submodule.__class__.__name__ if class_name in no_split_module_classes and class_name not in no_split_children: no_split_children[class_name] = size if set(no_split_children.keys()) == set(no_split_module_classes): break buffer = max(no_split_children.values()) if len(no_split_children) > 0 else 0 else: buffer = 0 # Compute mean of final modules. 
In the first dict of module sizes, leaves are the parameters leaves = [n for n in module_sizes if len([p for p in module_sizes if n == "" or p.startswith(n + ".")]) == 0] module_sizes = {n: v for n, v in module_sizes.items() if n not in leaves} # Once removed, leaves are the final modules. leaves = [n for n in module_sizes if len([p for p in module_sizes if n == "" or p.startswith(n + ".")]) == 0] mean_leaves = int(sum([module_sizes[n] for n in leaves]) / max(len(leaves), 1)) buffer = int(1.25 * max(buffer, mean_leaves)) per_gpu += buffer # Sorted list of GPUs id (we may have some gpu ids not included in the our max_memory list - let's ignore them) gpus_idx_list = list( sorted( device_id for device_id, device_mem in max_memory.items() if isinstance(device_id, int) and device_mem > 0 ) ) # The last device is left with max_memory just in case the buffer is not enough. for idx in gpus_idx_list[:-1]: max_memory[idx] = min(max_memory[0] if low_zero and idx == 0 else per_gpu, max_memory[idx]) if low_zero: min_zero = max(0, module_sizes[""] - sum([max_memory[i] for i in range(1, num_devices)])) max_memory[0] = min(min_zero, max_memory[0]) return max_memory def calculate_maximum_sizes(model: torch.nn.Module): "Computes the total size of the model and its largest layer" sizes = compute_module_sizes(model) # `transformers` models store this information for us no_split_modules = getattr(model, "_no_split_modules", None) if no_split_modules is None: no_split_modules = [] modules_to_treat = ( list(model.named_parameters(recurse=False)) + list(model.named_children()) + list(model.named_buffers(recurse=False)) ) largest_layer = get_max_layer_size(modules_to_treat, sizes, no_split_modules) total_size = sizes[""] return total_size, largest_layer def infer_auto_device_map( model: nn.Module, max_memory: Optional[Dict[Union[int, str], Union[int, str]]] = None, no_split_module_classes: Optional[List[str]] = None, dtype: Optional[Union[str, torch.dtype]] = None, special_dtypes: Optional[Dict[str, Union[str, torch.dtype]]] = None, verbose: bool = False, clean_result: bool = True, offload_buffers: bool = False, ): """ Compute a device map for a given model giving priority to GPUs, then offload on CPU and finally offload to disk, such that: - we don't exceed the memory available of any of the GPU. - if offload to the CPU is needed, there is always room left on GPU 0 to put back the layer offloaded on CPU that has the largest size. - if offload to the CPU is needed,we don't exceed the RAM available on the CPU. - if offload to the disk is needed, there is always room left on the CPU to put back the layer offloaded on disk that has the largest size. <Tip> All computation is done analyzing sizes and dtypes of the model parameters. As a result, the model can be on the meta device (as it would if initialized within the `init_empty_weights` context manager). </Tip> Args: model (`torch.nn.Module`): The model to analyze. max_memory (`Dict`, *optional*): A dictionary device identifier to maximum memory. Will default to the maximum memory available if unset. Example: `max_memory={0: "1GB"}`. no_split_module_classes (`List[str]`, *optional*): A list of layer class names that should never be split across device (for instance any layer that has a residual connection). dtype (`str` or `torch.dtype`, *optional*): If provided, the weights will be converted to that type when loaded. 
special_dtypes (`Dict[str, Union[str, torch.device]]`, *optional*): If provided, special dtypes to consider for some specific weights (will override dtype used as default for all weights). verbose (`bool`, *optional*, defaults to `False`): Whether or not to provide debugging statements as the function builds the device_map. clean_result (`bool`, *optional*, defaults to `True`): Clean the resulting device_map by grouping all submodules that go on the same device together. offload_buffers (`bool`, *optional*, defaults to `False`): In the layers that are offloaded on the CPU or the hard drive, whether or not to offload the buffers as well as the parameters. """ # Get default / clean up max_memory max_memory = get_max_memory(max_memory) if no_split_module_classes is None: no_split_module_classes = [] elif not isinstance(no_split_module_classes, (list, tuple)): no_split_module_classes = [no_split_module_classes] devices = list(max_memory.keys()) if "disk" not in devices: devices.append("disk") gpus = [device for device in devices if device not in ["cpu", "disk"]] # Devices that need to keep space for a potential offloaded layer. if "mps" in gpus: main_devices = ["mps"] elif len(gpus) > 0: main_devices = [gpus[0], "cpu"] else: main_devices = ["cpu"] module_sizes = compute_module_sizes(model, dtype=dtype, special_dtypes=special_dtypes) tied_parameters = find_tied_parameters(model) if check_tied_parameters_in_config(model) and len(tied_parameters) == 0: logger.warn( "The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function." ) device_map = OrderedDict() current_device = 0 current_memory_used = 0 device_memory_used = {} device_buffer_sizes = {} # Direct submodules and parameters modules_to_treat = ( list(model.named_parameters(recurse=False)) + list(model.named_children()) + list(model.named_buffers(recurse=False)) ) # Initialize maximum largest layer, to know which space to keep in memory max_layer_size, max_layer_names = get_max_layer_size(modules_to_treat, module_sizes, no_split_module_classes) # Ready ? This is going to be a bit messy. while len(modules_to_treat) > 0: name, module = modules_to_treat.pop(0) if verbose: print(f"\nTreating module {name}.") # Max size in the remaining layers may have changed since we took one, so we maybe update it. max_layer_names = [n for n in max_layer_names if n != name and not n.startswith(name + ".")] if len(max_layer_names) == 0: max_layer_size, max_layer_names = get_max_layer_size( [(n, m) for n, m in modules_to_treat if isinstance(m, torch.nn.Module)], module_sizes, no_split_module_classes, ) # Assess size needed module_size = module_sizes[name] # We keep relevant tied parameters only: one of the tied parameters in the group is inside the current module # and the other is not. # Note: If we are currently processing the name `compute.weight`, an other parameter named e.g. `compute.weight_submodule.parameter` # needs to be considered outside the current module, hence the check with additional dots. tied_param_goups = [ tied_group for tied_group in tied_parameters if any(name + "." in k + "." for k in tied_group) and not all(name + "." in k + "." for k in tied_group) ] if verbose and len(tied_param_goups) > 0: print(f" Found the relevant tied param groups {tied_param_goups}") # Then we keep track of all the parameters that are tied to the current module, but not in the current module tied_params = sum( [[p for p in tied_group if name + "." 
not in p + "."] for tied_group in tied_param_goups], [] ) if verbose and len(tied_params) > 0: print(f" So those parameters need to be taken into account {tied_params}") device = devices[current_device] current_max_size = max_memory[device] if device != "disk" else None current_memory_reserved = 0 # Reduce max size available by the largest layer. if devices[current_device] in main_devices: current_max_size = current_max_size - max_layer_size current_memory_reserved = max_layer_size # Case 1 -> We're too big! if current_max_size is not None and current_memory_used + module_size > current_max_size: # Split or not split? modules_children = ( [] if isinstance(module, nn.Parameter) or isinstance(module, torch.Tensor) else list(module.named_children()) ) if verbose: print( f"Not enough space on {devices[current_device]} to put {name} (space available " f"{current_max_size - current_memory_used}, module size {module_size})." ) if len(modules_children) == 0 or module.__class__.__name__ in no_split_module_classes: # -> no split, we go to the next device if verbose: print("This module cannot be split, going to the next device.") device_memory_used[device] = current_memory_used + current_memory_reserved current_device += 1 modules_to_treat = [(name, module)] + modules_to_treat current_memory_used = 0 else: # -> split, we replace the module studied by its children + parameters if verbose: print(f"Splitting {name}.") modules_children = list(module.named_parameters(recurse=False)) + modules_children modules_to_treat = [(f"{name}.{n}", v) for n, v in modules_children] + modules_to_treat # Update the max layer size. max_layer_size, max_layer_names = get_max_layer_size( [(n, m) for n, m in modules_to_treat if isinstance(m, torch.nn.Module)], module_sizes, no_split_module_classes, ) # Case 2, it fits! We're not entirely out of the wood though, because we may have some tied parameters. elif len(tied_params) > 0: # First locate all tied modules tied_module_names = [] tied_modules = [] for tied_param in tied_params: tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n in tied_param][0] tied_module_names.append(modules_to_treat[tied_module_index][0]) tied_modules.append(modules_to_treat[tied_module_index][1]) if verbose: print( f" It looks like {name} is going to fit on {devices[current_device]} but we have tied " f"parameters to account for.\n - Names {tied_params}\n - Module names {tied_module_names}" ) # Let's see if it all fits first module_size_with_ties = module_size for tied_param, tied_module_name in zip(tied_params, tied_module_names): module_size_with_ties += module_sizes[tied_module_name] - module_sizes[tied_param] if current_max_size is None or current_memory_used + module_size_with_ties <= current_max_size: # We really really fit! if verbose: print(f"Putting {name} and {tied_module_names} on {devices[current_device]}.") current_memory_used += module_size_with_ties device_map[name] = devices[current_device] for tied_module_name in tied_module_names: if tied_module_name in [m[0] for m in modules_to_treat]: # The module may have been removed by a previous iteration of this loop. 
tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n == tied_module_name][ 0 ] modules_to_treat.pop(tied_module_index) device_map[tied_module_name] = devices[current_device] if not offload_buffers and isinstance(module, nn.Module): current_buffer_size = compute_module_total_buffer_size( module, dtype=dtype, special_dtypes=special_dtypes ) device_buffer_sizes[device] = device_buffer_sizes.get(device, 0) + current_buffer_size else: # We don't fit with the tied modules. Next question is: can we split one of the tied modules to make it # smaller or do we need to go on the next device? if verbose: print( f"Not enough space on {devices[current_device]} to put {name} and {tied_module_names} (space " f"available {current_max_size - current_memory_used}, needed size {module_size_with_ties})." ) split_happened = False for tied_module_name, tied_module in zip(tied_module_names, tied_modules): tied_module_children = list(tied_module.named_children()) if len(tied_module_children) == 0 or tied_module.__class__.__name__ in no_split_module_classes: # can't break this one. continue if verbose: print(f"Splitting {tied_module_name}.") tied_module_children = list(tied_module.named_parameters(recurse=False)) + tied_module_children tied_module_children = [(f"{tied_module_name}.{n}", v) for n, v in tied_module_children] tied_module_index = [i for i, (n, _) in enumerate(modules_to_treat) if n == tied_module_name][0] modules_to_treat = ( [(name, module)] + modules_to_treat[:tied_module_index] + tied_module_children + modules_to_treat[tied_module_index + 1 :] ) # Update the max layer size. max_layer_size, max_layer_names = get_max_layer_size( [(n, m) for n, m in modules_to_treat if isinstance(m, torch.nn.Module)], module_sizes, no_split_module_classes, ) split_happened = True break if not split_happened: # If the tied module is not split, we go to the next device if verbose: print("None of the tied module can be split, going to the next device.") device_memory_used[device] = current_memory_used + current_memory_reserved current_device += 1 modules_to_treat = [(name, module)] + modules_to_treat current_memory_used = 0 else: if verbose: if current_max_size is None: print(f"Putting {name} (size={module_size}) on {devices[current_device]}.") else: print( f"Putting {name} (size={module_size}) on {devices[current_device]} " f"(available={current_max_size - current_memory_used})." ) current_memory_used += module_size device_memory_used[device] = current_memory_used + current_memory_reserved device_map[name] = devices[current_device] if not offload_buffers and isinstance(module, nn.Module): current_buffer_size = compute_module_total_buffer_size( module, dtype=dtype, special_dtypes=special_dtypes ) device_buffer_sizes[device] = device_buffer_sizes.get(device, 0) + current_buffer_size if clean_result: device_map = clean_device_map(device_map) non_gpu_buffer_size = device_buffer_sizes.get("cpu", 0) + device_buffer_sizes.get("disk", 0) if non_gpu_buffer_size > 0 and not offload_buffers: is_buffer_fit_any_gpu = False for gpu_device, gpu_max_memory in max_memory.items(): if gpu_device == "cpu" or gpu_device == "disk": continue if not is_buffer_fit_any_gpu: gpu_memory_used = device_memory_used.get(gpu_device, 0) if gpu_max_memory >= non_gpu_buffer_size + gpu_memory_used: is_buffer_fit_any_gpu = True if len(gpus) > 0 and not is_buffer_fit_any_gpu: warnings.warn( f"Current model requires {non_gpu_buffer_size} bytes of buffer for offloaded layers, which seems does " f"not fit any GPU's remaining memory. 
If you are experiencing a OOM later, please consider using " f"offload_buffers=True." ) return device_map def check_device_map(model: nn.Module, device_map: Dict[str, Union[int, str, torch.device]]): """ Checks a device map covers everything in a given model. Args: model (`torch.nn.Module`): The model to check the device map against. device_map (`Dict[str, Union[int, str, torch.device]]`): The device map to check. """ all_model_tensors = [name for name, _ in model.state_dict().items()] for module_name in device_map.keys(): if module_name == "": all_model_tensors.clear() break else: all_model_tensors = [ name for name in all_model_tensors if not name == module_name and not name.startswith(module_name + ".") ] if len(all_model_tensors) > 0: non_covered_params = ", ".join(all_model_tensors) raise ValueError( f"The device_map provided does not give any device for the following parameters: {non_covered_params}" ) def load_state_dict(checkpoint_file, device_map=None): """ Load a checkpoint from a given file. If the checkpoint is in the safetensors format and a device map is passed, the weights can be fast-loaded directly on the GPU. Args: checkpoint_file (`str`): The path to the checkpoint to load. device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. """ if checkpoint_file.endswith(".safetensors"): with safe_open(checkpoint_file, framework="pt") as f: metadata = f.metadata() weight_names = f.keys() if metadata is None: logger.warn( f"The safetensors archive passed at {checkpoint_file} does not contain metadata. " "Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata." ) metadata = {"format": "pt"} if metadata.get("format") not in ["pt", "tf", "flax"]: raise OSError( f"The safetensors archive passed at {checkpoint_file} does not contain the valid metadata. Make sure " "you save your model with the `save_pretrained` method." 
) elif metadata["format"] != "pt": raise ValueError(f"The checkpoint passed was saved with {metadata['format']}, we need a the pt format.") if device_map is None: return safe_load_file(checkpoint_file) else: # if we only have one device we can load everything directly if len(set(device_map.values())) == 1: return safe_load_file(checkpoint_file, device=list(device_map.values())[0]) devices = list(set(device_map.values()) - {"disk"}) # cpu device should always exist as fallback option if "cpu" not in devices: devices.append("cpu") # For each device, get the weights that go there device_weights = {device: [] for device in devices} for module_name, device in device_map.items(): if device in devices: device_weights[device].extend( [k for k in weight_names if k == module_name or k.startswith(module_name + ".")] ) # all weights that haven't defined a device should be loaded on CPU device_weights["cpu"].extend([k for k in weight_names if k not in sum(device_weights.values(), [])]) tensors = {} if is_tqdm_available(): progress_bar = tqdm( main_process_only=False, total=sum([len(device_weights[device]) for device in devices]), unit="w", smoothing=0, leave=False, ) else: progress_bar = None for device in devices: target_device = device if is_xpu_available(): current_safetensors_version = packaging.version.parse(importlib.metadata.version("safetensors")) if compare_versions(current_safetensors_version, "<", "0.4.2"): raise ModuleNotFoundError( f"You need at least safetensors 0.4.2 for Intel GPU, while you have {current_safetensors_version}" ) if isinstance(device, int): target_device = f"xpu:{device}" with safe_open(checkpoint_file, framework="pt", device=target_device) as f: for key in device_weights[device]: if progress_bar is not None: progress_bar.set_postfix(dev=device, refresh=False) progress_bar.set_description(key) tensors[key] = f.get_tensor(key) if progress_bar is not None: progress_bar.update() if progress_bar is not None: progress_bar.close() return tensors else: return torch.load(checkpoint_file, map_location=torch.device("cpu")) def get_state_dict_offloaded_model(model: nn.Module): """ Returns the state dictionary for an offloaded model via iterative onloading Args: model (`torch.nn.Module`): The offloaded model we want to save """ from ..hooks import AlignDevicesHook state_dict = {} placeholders = set() for name, module in model.named_modules(): if name == "": continue if hasattr(module, "_hf_hook") and isinstance(module._hf_hook, AlignDevicesHook) and module._hf_hook.offload: original_device = module._hf_hook.execution_device # assign hook execution device to cpu module._hf_hook.execution_device = "cpu" # onload meta tensors to execution device try: module._hf_hook.pre_forward(module) except MemoryError: raise MemoryError("Offloaded module must fit in CPU memory to call save_model!") from None module_state_dict = module.state_dict() # offload meta tensors from cpu module._hf_hook.post_forward(module, torch.tensor([])) # re-assign hook to original execution device module._hf_hook.execution_device = original_device else: module_state_dict = module.state_dict() for key in module_state_dict: # ignore placeholder parameters that are still on the meta device if module_state_dict[key].device == torch.device("meta"): placeholders.add(name + f".{key}") continue params = module_state_dict[key] state_dict[name + f".{key}"] = params for key in placeholders.copy(): if key in state_dict: placeholders.remove(key) if placeholders: logger.warning(f"The following tensors were not saved because they were 
still on meta device: {placeholders}") return state_dict def load_checkpoint_in_model( model: nn.Module, checkpoint: Union[str, os.PathLike], device_map: Optional[Dict[str, Union[int, str, torch.device]]] = None, offload_folder: Optional[Union[str, os.PathLike]] = None, dtype: Optional[Union[str, torch.dtype]] = None, offload_state_dict: bool = False, offload_buffers: bool = False, keep_in_fp32_modules: List[str] = None, offload_8bit_bnb: bool = False, ): """ Loads a (potentially sharded) checkpoint inside a model, potentially sending weights to a given device as they are loaded. <Tip warning={true}> Once loaded across devices, you still need to call [`dispatch_model`] on your model to make it able to run. To group the checkpoint loading and dispatch in one single call, use [`load_checkpoint_and_dispatch`]. </Tip> Args: model (`torch.nn.Module`): The model in which we want to load a checkpoint. checkpoint (`str` or `os.PathLike`): The folder checkpoint to load. It can be: - a path to a file containing a whole model state dict - a path to a `.json` file containing the index to a sharded checkpoint - a path to a folder containing a unique `.index.json` file and the shards of a checkpoint. - a path to a folder containing a unique pytorch_model.bin or a model.safetensors file. device_map (`Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the same device. offload_folder (`str` or `os.PathLike`, *optional*): If the `device_map` contains any value `"disk"`, the folder where we will offload weights. dtype (`str` or `torch.dtype`, *optional*): If provided, the weights will be converted to that type when loaded. offload_state_dict (`bool`, *optional*, defaults to `False`): If `True`, will temporarily offload the CPU state dict on the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard does not fit. offload_buffers (`bool`, *optional*, defaults to `False`): Whether or not to include the buffers in the weights offloaded to disk. keep_in_fp32_modules(`List[str]`, *optional*): A list of the modules that we keep in `torch.float32` dtype. offload_8bit_bnb (`bool`, *optional*): Whether or not to enable offload of 8-bit modules on cpu/disk. """ if offload_8bit_bnb: from .bnb import quantize_and_offload_8bit tied_params = find_tied_parameters(model) if check_tied_parameters_in_config(model) and len(tied_params) == 0: logger.warn( "The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function." ) if device_map is not None: check_tied_parameters_on_same_device(tied_params, device_map) if offload_folder is None and device_map is not None and "disk" in device_map.values(): raise ValueError( "At least one of the model submodule will be offloaded to disk, please pass along an `offload_folder`." 
) elif offload_folder is not None and device_map is not None and "disk" in device_map.values(): os.makedirs(offload_folder, exist_ok=True) if isinstance(dtype, str): # We accept "torch.float16" or just "float16" dtype = dtype.replace("torch.", "") dtype = getattr(torch, dtype) checkpoint_files = None index_filename = None if os.path.isfile(checkpoint): if str(checkpoint).endswith(".json"): index_filename = checkpoint else: checkpoint_files = [checkpoint] elif os.path.isdir(checkpoint): # check if the whole state dict is present potential_state_bin = [f for f in os.listdir(checkpoint) if f == WEIGHTS_NAME] potential_state_safetensor = [f for f in os.listdir(checkpoint) if f == SAFE_WEIGHTS_NAME] if len(potential_state_bin) == 1: checkpoint_files = [os.path.join(checkpoint, potential_state_bin[0])] elif len(potential_state_safetensor) == 1: checkpoint_files = [os.path.join(checkpoint, potential_state_safetensor[0])] else: # otherwise check for sharded checkpoints potential_index = [f for f in os.listdir(checkpoint) if f.endswith(".index.json")] if len(potential_index) == 0: raise ValueError( f"{checkpoint} is not a folder containing a `.index.json` file or a {WEIGHTS_NAME} or a {SAFE_WEIGHTS_NAME} file" ) elif len(potential_index) == 1: index_filename = os.path.join(checkpoint, potential_index[0]) else: raise ValueError( f"{checkpoint} containing more than one `.index.json` file, delete the irrelevant ones." ) else: raise ValueError( "`checkpoint` should be the path to a file containing a whole state dict, or the index of a sharded " f"checkpoint, or a folder containing a sharded checkpoint or the whole state dict, but got {checkpoint}." ) if index_filename is not None: checkpoint_folder = os.path.split(index_filename)[0] with open(index_filename) as f: index = json.loads(f.read()) if "weight_map" in index: index = index["weight_map"] checkpoint_files = sorted(list(set(index.values()))) checkpoint_files = [os.path.join(checkpoint_folder, f) for f in checkpoint_files] # Logic for missing/unexepected keys goes here. offload_index = {} if offload_state_dict: state_dict_folder = tempfile.mkdtemp() state_dict_index = {} buffer_names = [name for name, _ in model.named_buffers()] for checkpoint_file in checkpoint_files: checkpoint = load_state_dict(checkpoint_file, device_map=device_map) if device_map is None: model.load_state_dict(checkpoint, strict=False) else: for param_name, param in checkpoint.items(): # skip SCB parameter (for 8-bit serialization) if "SCB" in param_name: continue module_name = param_name while len(module_name) > 0 and module_name not in device_map: module_name = ".".join(module_name.split(".")[:-1]) if module_name == "" and "" not in device_map: # TODO: group all errors and raise at the end. raise ValueError(f"{param_name} doesn't have any device set.") param_device = device_map[module_name] new_dtype = dtype if dtype is not None and torch.is_floating_point(param): if keep_in_fp32_modules is not None and dtype == torch.float16: proceed = False for key in keep_in_fp32_modules: if ((key in param_name) and (key + "." 
in param_name)) or key == param_name: proceed = True break if proceed: new_dtype = torch.float32 if "weight" in param_name and param_name.replace("weight", "SCB") in checkpoint.keys(): if param.dtype == torch.int8: fp16_statistics = checkpoint[param_name.replace("weight", "SCB")] else: fp16_statistics = None if param_device == "disk": if offload_buffers or param_name not in buffer_names: if new_dtype is None: new_dtype = param.dtype if offload_8bit_bnb: quantize_and_offload_8bit( model, param, param_name, new_dtype, offload_folder, offload_index, fp16_statistics ) continue else: set_module_tensor_to_device(model, param_name, "meta", dtype=new_dtype) offload_weight(param, param_name, offload_folder, index=offload_index) elif param_device == "cpu" and offload_state_dict: if new_dtype is None: new_dtype = param.dtype if offload_8bit_bnb: quantize_and_offload_8bit( model, param, param_name, new_dtype, state_dict_folder, state_dict_index, fp16_statistics ) else: set_module_tensor_to_device(model, param_name, "meta", dtype=new_dtype) offload_weight(param, param_name, state_dict_folder, index=state_dict_index) else: set_module_tensor_to_device( model, param_name, param_device, value=param, dtype=new_dtype, fp16_statistics=fp16_statistics, ) # Force Python to clean up. del checkpoint gc.collect() save_offload_index(offload_index, offload_folder) # Load back offloaded state dict on CPU if offload_state_dict: load_offloaded_weights(model, state_dict_index, state_dict_folder) shutil.rmtree(state_dict_folder) retie_parameters(model, tied_params) def get_mixed_precision_context_manager(native_amp: bool = False, autocast_kwargs: AutocastKwargs = None): """ Return a context manager for autocasting mixed precision Args: native_amp (`bool`, *optional*, defaults to False): Whether mixed precision is actually enabled. cache_enabled (`bool`, *optional*, defaults to True): Whether the weight cache inside autocast should be enabled. """ state = AcceleratorState() if autocast_kwargs is None: autocast_kwargs = {} else: autocast_kwargs = autocast_kwargs.to_kwargs() if native_amp: device_type = ( "cuda" if (state.distributed_type == DistributedType.XLA and is_torch_xla_available(check_is_gpu=True)) else state.device.type ) if state.mixed_precision == "fp16": return torch.autocast(device_type=device_type, dtype=torch.float16, **autocast_kwargs) elif state.mixed_precision == "bf16" and state.distributed_type in [ DistributedType.NO, DistributedType.MULTI_CPU, DistributedType.MULTI_GPU, DistributedType.MULTI_NPU, DistributedType.MULTI_XPU, DistributedType.FSDP, DistributedType.XLA, ]: return torch.autocast(device_type=device_type, dtype=torch.bfloat16, **autocast_kwargs) else: return torch.autocast(device_type=device_type, **autocast_kwargs) else: return contextlib.nullcontext()
accelerate/src/accelerate/utils/modeling.py/0
{ "file_path": "accelerate/src/accelerate/utils/modeling.py", "repo_id": "accelerate", "token_count": 33922 }
7
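The `load_checkpoint_in_model` helper defined above is easiest to follow from the caller's side. Below is a minimal usage sketch; the toy model, checkpoint path, device map, and offload folder are illustrative assumptions rather than part of the file above, and as the docstring warns you would still call `dispatch_model` (or use `load_checkpoint_and_dispatch`) before running the model.

```python
# Hypothetical sketch: load a checkpoint into a model according to a device_map.
import torch.nn as nn
from accelerate.utils.modeling import load_checkpoint_in_model

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # toy model
# Module-name -> device; "disk" entries require an offload_folder (see the check above).
device_map = {"0": "cpu", "1": "cpu", "2": "disk"}

load_checkpoint_in_model(
    model,
    checkpoint="my_checkpoint/",   # assumed folder holding *.safetensors or an index.json
    device_map=device_map,
    offload_folder="offload/",     # mandatory here because "disk" appears in device_map
    dtype="float16",               # strings like "float16" are converted to torch dtypes
)
```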
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest from pathlib import Path from unittest.mock import patch import torch from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError from accelerate.commands.estimate import estimate_command, estimate_command_parser, gather_data from accelerate.commands.launch import _validate_launch_command, launch_command_parser from accelerate.test_utils import execute_subprocess_async from accelerate.test_utils.testing import ( DEFAULT_LAUNCH_COMMAND, get_launch_command, path_in_accelerate_package, require_multi_device, require_timm, require_transformers, run_command, ) from accelerate.utils import patch_environment from accelerate.utils.launch import prepare_simple_launcher_cmd_env class AccelerateLauncherTester(unittest.TestCase): """ Test case for verifying the `accelerate launch` CLI operates correctly. If a `default_config.yaml` file is located in the cache it will temporarily move it for the duration of the tests. """ test_file_path = path_in_accelerate_package("test_utils", "scripts", "test_cli.py") notebook_launcher_path = path_in_accelerate_package("test_utils", "scripts", "test_notebook.py") config_folder = Path.home() / ".cache/huggingface/accelerate" config_file = "default_config.yaml" config_path = config_folder / config_file changed_path = config_folder / "_default_config.yaml" test_config_path = Path("tests/test_configs") @classmethod def setUpClass(cls): if cls.config_path.is_file(): cls.config_path.rename(cls.changed_path) @classmethod def tearDownClass(cls): if cls.changed_path.is_file(): cls.changed_path.rename(cls.config_path) def test_no_config(self): if torch.cuda.is_available() and (torch.cuda.device_count() > 1): cmd = get_launch_command(multi_gpu=True) else: cmd = DEFAULT_LAUNCH_COMMAND cmd.append(self.test_file_path) execute_subprocess_async(cmd, env=os.environ.copy()) def test_config_compatibility(self): for config in sorted(self.test_config_path.glob("**/*.yaml")): if "invalid" in str(config) or "mpi" in str(config): continue with self.subTest(config_file=config): cmd = get_launch_command(config_file=config) + [self.test_file_path] execute_subprocess_async(cmd) def test_invalid_keys(self): config_path = self.test_config_path / "invalid_keys.yaml" with self.assertRaises( RuntimeError, msg="The config file at 'invalid_keys.yaml' had unknown keys ('another_invalid_key', 'invalid_key')", ): cmd = get_launch_command(config_file=config_path) + [self.test_file_path] execute_subprocess_async(cmd) def test_accelerate_test(self): execute_subprocess_async(["accelerate", "test"]) @require_multi_device def test_notebook_launcher(self): """ This test checks a variety of situations and scenarios with the `notebook_launcher` """ cmd = ["python", self.notebook_launcher_path] with patch_environment(omp_num_threads=1, accelerate_num_processes=2): run_command(cmd) def test_mpi_multicpu_config_cmd(self): """ Parses a launch command with a test file and the 
0_28_0_mpi.yaml config. Tests getting the command and environment vars and verifies the mpirun command arg values. """ mpi_config_path = str(self.test_config_path / "0_28_0_mpi.yaml") test_file_arg = "--cpu" with patch("sys.argv", ["accelerate", str(self.test_file_path), test_file_arg]): parser = launch_command_parser() args = parser.parse_args() args.config_file = mpi_config_path args, _, _ = _validate_launch_command(args) # Mock out the check for mpirun version to simulate Intel MPI with patch("accelerate.utils.launch.which", return_value=True): with patch("accelerate.utils.launch.subprocess.check_output", return_value=b"Intel MPI"): cmd, _ = prepare_simple_launcher_cmd_env(args) # Verify the mpirun command args expected_mpirun_cmd = ["mpirun", "-f", "/home/user/hostfile", "-ppn", "4", "-n", "16"] self.assertGreater(len(cmd), len(expected_mpirun_cmd)) generated_mpirun_cmd = cmd[0 : len(expected_mpirun_cmd)] self.assertEqual(expected_mpirun_cmd, generated_mpirun_cmd) # Verify the python script and args in the mpirun command python_script_cmd = cmd[len(expected_mpirun_cmd) :] self.assertEqual(len(python_script_cmd), 3) self.assertEqual(python_script_cmd[1], str(self.test_file_path)) self.assertEqual(python_script_cmd[2], test_file_arg) class LaunchArgTester(unittest.TestCase): """ Test cases revolving around the CLI wrappers """ parser = launch_command_parser() def test_hyphen(self): # Try a little from each cluster args = ["--config-file", "test.yaml", "test.py"] result = self.parser.parse_args(args) assert result.config_file == "test.yaml" assert result.multi_gpu is False args = ["--multi-gpu", "--num-processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.num_processes == 4 # And use a mix args = ["--multi-gpu", "--use-deepspeed", "--use-fsdp", "--num_processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.use_deepspeed is True assert result.use_fsdp is True assert result.num_processes == 4 def test_underscore(self): # Try a little from each cluster args = ["--config_file", "test.yaml", "test.py"] result = self.parser.parse_args(args) assert result.config_file == "test.yaml" args = ["--multi_gpu", "--num_processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.num_processes == 4 # And use a mix args = ["--multi_gpu", "--use_deepspeed", "--use_fsdp", "--num-processes", "4", "test.py"] result = self.parser.parse_args(args) assert result.multi_gpu is True assert result.use_deepspeed is True assert result.use_fsdp is True assert result.num_processes == 4 def test_duplicate_entities(self): help_return = self.parser.format_help() args = self.parser.parse_args(["test.py"]) for arg in args.__dict__: if "_" in arg: bad_arg = f'--{arg.replace("_", "-")}' # Need an exception for `num-processes` since it's in the docstring if bad_arg == "--num-processes": assert help_return.count(bad_arg) == 1, f"Found {bad_arg} in `accelerate launch -h`" else: assert bad_arg not in help_return, f"Found {bad_arg} in `accelerate launch -h`" class TpuConfigTester(unittest.TestCase): """ Test case for verifying the `accelerate tpu-config` CLI passes the right `gcloud` command. 
""" tpu_name = "test-tpu" tpu_zone = "us-central1-a" command = "ls" cmd = ["accelerate", "tpu-config"] base_output = "cd /usr/share" command_file = "tests/test_samples/test_command_file.sh" gcloud = "Running gcloud compute tpus tpu-vm ssh" def test_base(self): output = run_command( self.cmd + ["--command", self.command, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug"], return_stdout=True, ) assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output def test_base_backward_compatibility(self): output = run_command( self.cmd + [ "--config_file", "tests/test_configs/0_12_0.yaml", "--command", self.command, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug", ], return_stdout=True, ) assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output def test_with_config_file(self): output = run_command( self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--debug"], return_stdout=True ) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_with_config_file_and_command(self): output = run_command( self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--command", self.command, "--debug"], return_stdout=True, ) assert f"{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls --worker all" in output def test_with_config_file_and_multiple_command(self): output = run_command( self.cmd + [ "--config_file", "tests/test_configs/latest.yaml", "--command", self.command, "--command", 'echo "Hello World"', "--debug", ], return_stdout=True, ) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; ls; echo "Hello World" --worker all' in output ) def test_with_config_file_and_command_file(self): output = run_command( self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--command_file", self.command_file, "--debug"], return_stdout=True, ) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_with_config_file_and_command_file_backward_compatibility(self): output = run_command( self.cmd + [ "--config_file", "tests/test_configs/0_12_0.yaml", "--command_file", self.command_file, "--tpu_zone", self.tpu_zone, "--tpu_name", self.tpu_name, "--debug", ], return_stdout=True, ) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_accelerate_install(self): output = run_command( self.cmd + ["--config_file", "tests/test_configs/latest.yaml", "--install_accelerate", "--debug"], return_stdout=True, ) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate -U; echo "hello world"; echo "this is a second command" --worker all' in output ) def test_accelerate_install_version(self): output = run_command( self.cmd + [ "--config_file", "tests/test_configs/latest.yaml", "--install_accelerate", "--accelerate_version", "12.0.0", "--debug", ], return_stdout=True, ) assert ( f'{self.gcloud} test-tpu --zone us-central1-a --command {self.base_output}; pip install accelerate==12.0.0; echo "hello world"; echo "this is a second command" --worker all' in output ) class ModelEstimatorTester(unittest.TestCase): """ Test case for checking 
the output of `accelerate estimate-memory` is correct. - Uses `estimate_command` when trying to catch raised errors - Uses `gather_data` when just verifying the calculations are correct """ parser = estimate_command_parser() def test_invalid_model_name(self): with self.assertRaises( RepositoryNotFoundError, msg="Repo for model `somebrokenname` does not exist on the Hub" ): args = self.parser.parse_args(["somebrokenname"]) estimate_command(args) @require_timm def test_invalid_model_name_timm(self): with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `timm` but"): args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "timm"]) estimate_command(args) @require_transformers def test_invalid_model_name_transformers(self): with self.assertRaises(RuntimeError, msg="Tried to load `muellerzr/dummy` with `transformers` but"): args = self.parser.parse_args(["muellerzr/dummy", "--library_name", "transformers"]) estimate_command(args) def test_no_metadata(self): with self.assertRaises( ValueError, msg="Model `muellerzr/dummy` does not have any library metadata on the Hub" ): args = self.parser.parse_args(["muellerzr/dummy"]) estimate_command(args) def test_gated(self): with self.assertRaises(GatedRepoError, msg="Repo for model `meta-llama/Llama-2-7b-hf` is gated"): args = self.parser.parse_args(["meta-llama/Llama-2-7b-hf"]) with patch_environment(hf_hub_disable_implicit_token="1"): estimate_command(args) @require_transformers def test_remote_code(self): # Also tests that custom `Auto` classes work args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model"]) with self.assertRaises(ValueError, msg="--trust_remote_code"): gather_data(args) # Verify it works with the flag args = self.parser.parse_args(["hf-internal-testing/test_dynamic_model", "--trust_remote_code"]) gather_data(args) @require_transformers def test_explicit_dtypes(self): args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32", "float16"]) output = gather_data(args) # The largest layer and total size of the model in bytes largest_layer, total_size = 89075712, 433249280 # Check that full precision -> int4 is calculating correctly assert len(output) == 2, f"Output was missing a precision, expected 2 but received {len(output)}" for i, factor in enumerate([1, 2]): precision = 32 // factor precision_str = f"float{precision}" largest_layer_estimate = largest_layer / factor total_size_estimate = total_size / factor total_training_size_estimate = total_size_estimate * 4 assert precision_str == output[i][0], f"Output is missing precision `{precision_str}`" assert ( largest_layer_estimate == output[i][1] ), f"Calculation for largest layer size in `{precision_str}` is incorrect." assert ( total_size_estimate == output[i][2] ), f"Calculation for total size in `{precision_str}` is incorrect." assert ( total_training_size_estimate == output[i][3] ), f"Calculation for total training size in `{precision_str}` is incorrect." 
@require_transformers def test_transformers_model(self): args = self.parser.parse_args(["bert-base-cased", "--dtypes", "float32"]) output = gather_data(args) # The largest layer and total size of the model in bytes largest_layer, total_size = 89075712, 433249280 assert ( largest_layer == output[0][1] ), f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}" assert ( total_size == output[0][2] ), f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}" @require_transformers def test_no_split_modules(self): # idefics-80b-instruct has ["IdeficsDecoderLayer", "IdeficsGatedCrossAttentionLayer"] args = self.parser.parse_args(["HuggingFaceM4/idefics-80b-instruct", "--dtypes", "float32"]) output = gather_data(args) # without factoring in `no_split` modules, the largest layer is 721420288 bytes assert output[0][1] != 721420288, "Largest layer calculation incorrect, did not factor in `no_split` modules." # the real answer is 3240165632 bytes assert output[0][1] == 3240165632 @require_timm def test_timm_model(self): args = self.parser.parse_args(["timm/resnet50.a1_in1k", "--library_name", "timm"]) output = gather_data(args) # The largest layer and total size of the model in bytes largest_layer, total_size = 9437184, 102441032 assert ( largest_layer == output[0][1] ), f"Calculation for largest layer size in `fp32` is incorrect, expected {largest_layer} but received {output[0][1]}" assert ( total_size == output[0][2] ), f"Calculation for total size in `fp32` is incorrect, expected {total_size} but received {output[0][2]}"
accelerate/tests/test_cli.py/0
{ "file_path": "accelerate/tests/test_cli.py", "repo_id": "accelerate", "token_count": 8019 }
8
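The byte counts asserted in `ModelEstimatorTester.test_explicit_dtypes` follow a simple rule: halving the precision halves both the largest-layer and total sizes, and the training estimate is four times the total. Below is a small self-contained sketch of that arithmetic, using the exact numbers hard-coded in the test (everything else is illustrative).

```python
# Reproduce the size arithmetic asserted above for bert-base-cased.
largest_layer_fp32 = 89_075_712   # bytes at float32 (from the test)
total_size_fp32 = 433_249_280     # bytes at float32 (from the test)

for precision, factor in (("float32", 1), ("float16", 2)):
    largest = largest_layer_fp32 / factor
    total = total_size_fp32 / factor
    training = total * 4          # the x4 training estimate the test checks against
    print(f"{precision}: largest={largest:,.0f}  total={total:,.0f}  training={training:,.0f}")
```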
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import unittest import torch from accelerate import Accelerator from accelerate.big_modeling import dispatch_model from accelerate.test_utils import ( DEFAULT_LAUNCH_COMMAND, assert_exception, device_count, execute_subprocess_async, get_launch_command, path_in_accelerate_package, require_huggingface_suite, require_multi_device, require_multi_gpu, require_non_torch_xla, require_pippy, ) from accelerate.utils import patch_environment class MultiDeviceTester(unittest.TestCase): test_file_path = path_in_accelerate_package("test_utils", "scripts", "test_script.py") data_loop_file_path = path_in_accelerate_package("test_utils", "scripts", "test_distributed_data_loop.py") operation_file_path = path_in_accelerate_package("test_utils", "scripts", "test_ops.py") pippy_file_path = path_in_accelerate_package("test_utils", "scripts", "external_deps", "test_pippy.py") @require_multi_device def test_multi_device(self): print(f"Found {device_count} devices.") cmd = DEFAULT_LAUNCH_COMMAND + [self.test_file_path] with patch_environment(omp_num_threads=1): execute_subprocess_async(cmd) @require_multi_device def test_multi_device_ops(self): print(f"Found {device_count} devices.") cmd = DEFAULT_LAUNCH_COMMAND + [self.operation_file_path] with patch_environment(omp_num_threads=1): execute_subprocess_async(cmd) @require_multi_device def test_pad_across_processes(self): print(f"Found {device_count} devices.") cmd = DEFAULT_LAUNCH_COMMAND + [inspect.getfile(self.__class__)] with patch_environment(omp_num_threads=1): execute_subprocess_async(cmd) @require_non_torch_xla @require_multi_gpu def test_distributed_data_loop(self): """ This TestCase checks the behaviour that occurs during distributed training or evaluation, when the batch size does not evenly divide the dataset size. """ print(f"Found {device_count} devices, using 2 devices only") cmd = get_launch_command(num_processes=2) + [self.data_loop_file_path] with patch_environment(omp_num_threads=1, cuda_visible_devices="0,1"): execute_subprocess_async(cmd) @require_multi_gpu @require_pippy @require_huggingface_suite def test_pippy(self): """ Checks the integration with the pippy framework """ print(f"Found {device_count} devices") cmd = get_launch_command(multi_gpu=True, num_processes=device_count) + [self.pippy_file_path] with patch_environment(omp_num_threads=1): execute_subprocess_async(cmd) if __name__ == "__main__": accelerator = Accelerator() shape = (accelerator.state.process_index + 2, 10) tensor = torch.randint(0, 10, shape).to(accelerator.device) error_msg = "" tensor1 = accelerator.pad_across_processes(tensor) if tensor1.shape[0] != accelerator.state.num_processes + 1: error_msg += f"Found shape {tensor1.shape} but should have {accelerator.state.num_processes + 1} at dim 0." if not torch.equal(tensor1[: accelerator.state.process_index + 2], tensor): error_msg += "Tensors have different values." 
if not torch.all(tensor1[accelerator.state.process_index + 2 :] == 0): error_msg += "Padding was not done with the right value (0)." tensor2 = accelerator.pad_across_processes(tensor, pad_first=True) if tensor2.shape[0] != accelerator.state.num_processes + 1: error_msg += f"Found shape {tensor2.shape} but should have {accelerator.state.num_processes + 1} at dim 0." index = accelerator.state.num_processes - accelerator.state.process_index - 1 if not torch.equal(tensor2[index:], tensor): error_msg += "Tensors have different values." if not torch.all(tensor2[:index] == 0): error_msg += "Padding was not done with the right value (0)." # Raise error at the end to make sure we don't stop at the first failure. if len(error_msg) > 0: raise ValueError(error_msg) # Check device_map accelerator.print("Test `device_map` cannot be prepared.") class ModelForTest(torch.nn.Module): def __init__(self): super().__init__() self.linear1 = torch.nn.Linear(3, 4) self.batchnorm = torch.nn.BatchNorm1d(4) self.linear2 = torch.nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) device_map = {"linear1": 0, "batchnorm": "cpu", "linear2": 1} model = ModelForTest() dispatch_model(model, device_map=device_map) with assert_exception(ValueError, "You can't train a model that has been loaded with"): model = accelerator.prepare_model(model)
accelerate/tests/test_multigpu.py/0
{ "file_path": "accelerate/tests/test_multigpu.py", "repo_id": "accelerate", "token_count": 2091 }
9
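The `__main__` block above exercises `pad_across_processes` by giving each rank a tensor with a different first dimension. Its expected behaviour can be illustrated without a distributed launch; the snippet below is only a single-process illustration of the shapes and padding value the test asserts, not a replacement for the real collective.

```python
# Single-process illustration of the padding behaviour checked above (2 simulated ranks).
import torch
import torch.nn.functional as F

num_processes = 2
per_rank = [torch.randint(0, 10, (rank + 2, 10)) for rank in range(num_processes)]

max_rows = max(t.shape[0] for t in per_rank)  # == num_processes + 1
for rank, tensor in enumerate(per_rank):
    padded = F.pad(tensor, (0, 0, 0, max_rows - tensor.shape[0]))  # pad rows at the end
    assert padded.shape[0] == num_processes + 1
    assert torch.equal(padded[: rank + 2], tensor)   # original values preserved
    assert torch.all(padded[rank + 2 :] == 0)        # padding value is 0
```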
# Model arguments model_name_or_path: BramVanroy/gpt2-cpt-dutch model_revision: main torch_dtype: bfloat16 # Data training arguments chat_template: "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}" dataset_mixer: BramVanroy/ultrachat_200k_dutch: 1.0 dataset_splits: - train_sft - test_sft preprocessing_num_workers: 12 # SFT trainer config bf16: true do_eval: true evaluation_strategy: epoch gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: False hub_model_id: gpt2-sft-dutch hub_strategy: every_save learning_rate: 2.0e-05 log_level: info logging_steps: 5 logging_strategy: steps lr_scheduler_type: cosine max_seq_length: 1024 max_steps: -1 num_train_epochs: 1 output_dir: data/gpt2-sft-dutch overwrite_output_dir: true per_device_eval_batch_size: 8 per_device_train_batch_size: 8 push_to_hub: true remove_unused_columns: true report_to: - wandb save_strategy: "steps" save_steps: 100 save_total_limit: 1 seed: 42 warmup_ratio: 0.1
alignment-handbook/recipes/gpt2-nl/sft/config_full.yaml/0
{ "file_path": "alignment-handbook/recipes/gpt2-nl/sft/config_full.yaml", "repo_id": "alignment-handbook", "token_count": 553 }
10
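A quick sanity check for a recipe like the one above is the effective global batch size it implies. The sketch below loads such a YAML and multiplies the relevant keys; the file path and the GPU count are assumptions for illustration only.

```python
# Sketch: derive the effective batch size implied by an SFT recipe (path and GPU count assumed).
import yaml

num_gpus = 8  # assumption: depends on your cluster
with open("recipes/gpt2-nl/sft/config_full.yaml") as f:
    cfg = yaml.safe_load(f)

effective = cfg["per_device_train_batch_size"] * cfg["gradient_accumulation_steps"] * num_gpus
print(f"{cfg['per_device_train_batch_size']} per device x "
      f"{cfg['gradient_accumulation_steps']} accumulation x {num_gpus} GPUs "
      f"= effective batch size {effective}")
```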
# Model arguments model_name_or_path: google/gemma-7b model_revision: main tokenizer_name_or_path: philschmid/gemma-tokenizer-chatml # Custom tokenizer with <|im_start|> and <|im_end|> tokens torch_dtype: bfloat16 use_flash_attention_2: true # Data training arguments dataset_mixer: HuggingFaceH4/deita-10k-v0-sft: 1.0 dataset_splits: - train_sft - test_sft preprocessing_num_workers: 12 # SFT trainer config bf16: true dataset_kwargs: add_special_tokens: false # We already wrap <bos> and <eos> in the chat template append_concat_token: false # No need to add <eos> across samples do_eval: true evaluation_strategy: epoch gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false hub_model_id: zephyr-7b-gemma-sft hub_strategy: every_save learning_rate: 2.0e-05 log_level: info logging_steps: 5 logging_strategy: steps lr_scheduler_type: cosine max_seq_length: 2048 max_steps: -1 num_train_epochs: 3 output_dir: data/zephyr-7b-gemma-sft overwrite_output_dir: true per_device_eval_batch_size: 4 per_device_train_batch_size: 4 push_to_hub: true remove_unused_columns: true report_to: - tensorboard - wandb save_strategy: "no" seed: 42 warmup_ratio: 0.1
alignment-handbook/recipes/zephyr-7b-gemma/sft/config_full.yaml/0
{ "file_path": "alignment-handbook/recipes/zephyr-7b-gemma/sft/config_full.yaml", "repo_id": "alignment-handbook", "token_count": 480 }
11
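Another number worth knowing for a recipe like this one is the approximate token throughput per optimizer step, since `max_seq_length`, the per-device batch size, and gradient accumulation all multiply together. The arithmetic below uses the values from the YAML above; the GPU count is an assumption, and real batches are usually shorter than `max_seq_length`, so treat it as an upper bound.

```python
# Rough tokens-per-optimizer-step estimate for the recipe above (GPU count assumed).
max_seq_length = 2048
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
num_gpus = 8  # assumption

tokens_per_step = (
    max_seq_length * per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(f"<= {tokens_per_step:,} tokens per optimizer step")  # 262,144 with these values
```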
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest from alignment import DataArguments, H4ArgumentParser, ModelArguments, SFTConfig class H4ArgumentParserTest(unittest.TestCase): def setUp(self): self.parser = H4ArgumentParser((ModelArguments, DataArguments, SFTConfig)) self.yaml_file_path = "tests/fixtures/config_sft_full.yaml" def test_load_yaml(self): model_args, data_args, training_args = self.parser.parse_yaml_file(os.path.abspath(self.yaml_file_path)) self.assertEqual(model_args.model_name_or_path, "mistralai/Mistral-7B-v0.1") def test_load_yaml_and_args(self): command_line_args = [ "--model_name_or_path=test", "--use_peft=true", "--lora_r=16", "--lora_dropout=0.5", ] model_args, data_args, training_args = self.parser.parse_yaml_and_args( os.path.abspath(self.yaml_file_path), command_line_args ) self.assertEqual(model_args.model_name_or_path, "test") self.assertEqual(model_args.use_peft, True) self.assertEqual(model_args.lora_r, 16) self.assertEqual(model_args.lora_dropout, 0.5)
alignment-handbook/tests/test_configs.py/0
{ "file_path": "alignment-handbook/tests/test_configs.py", "repo_id": "alignment-handbook", "token_count": 697 }
12
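The tests above pin down the precedence rule of `H4ArgumentParser`: values parsed from the YAML recipe act as defaults, and explicit command-line flags override them. Here is a short sketch of the same call pattern outside a test; the YAML path and override values are illustrative.

```python
# Sketch: YAML recipe + command-line overrides, mirroring test_load_yaml_and_args above.
from alignment import DataArguments, H4ArgumentParser, ModelArguments, SFTConfig

parser = H4ArgumentParser((ModelArguments, DataArguments, SFTConfig))
model_args, data_args, training_args = parser.parse_yaml_and_args(
    "tests/fixtures/config_sft_full.yaml",        # recipe provides the defaults
    [
        "--model_name_or_path=my-org/my-model",   # CLI flags win over the YAML values
        "--lora_r=16",
    ],
)
print(model_args.model_name_or_path, model_args.lora_r)
```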
# candle [![discord server](https://dcbadge.vercel.app/api/server/hugging-face-879548962464493619)](https://discord.gg/hugging-face-879548962464493619) [![Latest version](https://img.shields.io/crates/v/candle-core.svg)](https://crates.io/crates/candle-core) [![Documentation](https://docs.rs/candle-core/badge.svg)](https://docs.rs/candle-core) ![License](https://img.shields.io/crates/l/candle-core.svg) Candle is a minimalist ML framework for Rust with a focus on performance (including GPU support) and ease of use. Try our online demos: [whisper](https://huggingface.co/spaces/lmz/candle-whisper), [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2), [T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm), [yolo](https://huggingface.co/spaces/lmz/candle-yolo), [Segment Anything](https://huggingface.co/spaces/radames/candle-segment-anything-wasm). ## Get started Make sure that you have [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) correctly installed as described in [**Installation**](https://huggingface.github.io/candle/guide/installation.html). Let's see how to run a simple matrix multiplication. Write the following to your `myapp/src/main.rs` file: ```rust use candle_core::{Device, Tensor}; fn main() -> Result<(), Box<dyn std::error::Error>> { let device = Device::Cpu; let a = Tensor::randn(0f32, 1., (2, 3), &device)?; let b = Tensor::randn(0f32, 1., (3, 4), &device)?; let c = a.matmul(&b)?; println!("{c}"); Ok(()) } ``` `cargo run` should display a tensor of shape `Tensor[[2, 4], f32]`. Having installed `candle` with Cuda support, simply define the `device` to be on GPU: ```diff - let device = Device::Cpu; + let device = Device::new_cuda(0)?; ``` For more advanced examples, please have a look at the following section. ## Check out our examples These online demos run entirely in your browser: - [yolo](https://huggingface.co/spaces/lmz/candle-yolo): pose estimation and object recognition. - [whisper](https://huggingface.co/spaces/lmz/candle-whisper): speech recognition. - [LLaMA2](https://huggingface.co/spaces/lmz/candle-llama2): text generation. - [T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm): text generation. - [Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm): text generation. - [Segment Anything Model](https://huggingface.co/spaces/radames/candle-segment-anything-wasm): Image segmentation. - [BLIP](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning): image captioning. We also provide a some command line based examples using state of the art models: - [LLaMA and LLaMA-v2](./candle-examples/examples/llama/): general LLM, includes the SOLAR-10.7B variant. - [Falcon](./candle-examples/examples/falcon/): general LLM. - [Gemma](./candle-examples/examples/gemma/): 2b and 7b general LLMs from Google Deepmind. - [Phi-1, Phi-1.5, and Phi-2](./candle-examples/examples/phi/): 1.3b and 2.7b general LLMs with performance on par with LLaMA-v2 7b. - [StableLM-3B-4E1T](./candle-examples/examples/stable-lm/): a 3b general LLM pre-trained on 1T tokens of English and code datasets. Also supports StableLM-2, a 1.6b LLM trained on 2T tokens, as well as the code variants. - [Mamba](./candle-examples/examples/mamba/): an inference only implementation of the Mamba state space model. - [Mistral7b-v0.1](./candle-examples/examples/mistral/): a 7b general LLM with better performance than all publicly available 13b models as of 2023-09-28. 
- [Mixtral8x7b-v0.1](./candle-examples/examples/mixtral/): a sparse mixture of experts 8x7b general LLM with better performance than a Llama 2 70B model with much faster inference. - [StarCoder](./candle-examples/examples/bigcode/) and [StarCoder2](./candle-examples/examples/starcoder2/): LLM specialized to code generation. - [Qwen1.5](./candle-examples/examples/qwen/): Bilingual (English/Chinese) LLMs. - [RWKV v5 and v6](./candle-examples/examples/rwkv/): An RNN with transformer level LLM performance. - [Replit-code-v1.5](./candle-examples/examples/replit-code/): a 3.3b LLM specialized for code completion. - [Yi-6B / Yi-34B](./candle-examples/examples/yi/): two bilingual (English/Chinese) general LLMs with 6b and 34b parameters. - [Quantized LLaMA](./candle-examples/examples/quantized/): quantized version of the LLaMA model using the same quantization techniques as [llama.cpp](https://github.com/ggerganov/llama.cpp). <img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/quantized/assets/aoc.gif" width="600"> - [Stable Diffusion](./candle-examples/examples/stable-diffusion/): text to image generative model, support for the 1.5, 2.1, SDXL 1.0 and Turbo versions. <img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg" width="200"> - [Wuerstchen](./candle-examples/examples/wuerstchen/): another text to image generative model. <img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/wuerstchen/assets/cat.jpg" width="200"> - [yolo-v3](./candle-examples/examples/yolo-v3/) and [yolo-v8](./candle-examples/examples/yolo-v8/): object detection and pose estimation models. <img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/yolo-v8/assets/bike.od.jpg" width="200"><img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/yolo-v8/assets/bike.pose.jpg" width="200"> - [segment-anything](./candle-examples/examples/segment-anything/): image segmentation model with prompt. <img src="https://github.com/huggingface/candle/raw/main/candle-examples/examples/segment-anything/assets/sam_merged.jpg" width="200"> - [SegFormer](./candle-examples/examples/segformer/): transformer based semantic segmantation model. - [Whisper](./candle-examples/examples/whisper/): speech recognition model. - [EnCodec](./candle-examples/examples/encodec/): high-quality audio compression model using residual vector quantization. - [MetaVoice](./candle-examples/examples/metavoice/): foundational model for text-to-speech. - [T5](./candle-examples/examples/t5), [Bert](./candle-examples/examples/bert/), [JinaBert](./candle-examples/examples/jina-bert/) : useful for sentence embeddings. - [DINOv2](./candle-examples/examples/dinov2/): computer vision model trained using self-supervision (can be used for imagenet classification, depth evaluation, segmentation). - [VGG](./candle-examples/examples/vgg/), [RepVGG](./candle-examples/examples/repvgg): computer vision models. - [BLIP](./candle-examples/examples/blip/): image to text model, can be used to generate captions for an image. - [TrOCR](./candle-examples/examples/trocr/): a transformer OCR model, with dedicated submodels for hand-writing and printed recognition. - [Marian-MT](./candle-examples/examples/marian-mt/): neural machine translation model, generates the translated text from the input text. 
Run them using commands like: ``` cargo run --example quantized --release ``` In order to use **CUDA** add `--features cuda` to the example command line. If you have cuDNN installed, use `--features cudnn` for even more speedups. There are also some wasm examples for whisper and [llama2.c](https://github.com/karpathy/llama2.c). You can either build them with `trunk` or try them online: [whisper](https://huggingface.co/spaces/lmz/candle-whisper), [llama2](https://huggingface.co/spaces/lmz/candle-llama2), [T5](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm), [Phi-1.5, and Phi-2](https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm), [Segment Anything Model](https://huggingface.co/spaces/radames/candle-segment-anything-wasm). For LLaMA2, run the following command to retrieve the weight files and start a test server: ```bash cd candle-wasm-examples/llama2-c wget https://huggingface.co/spaces/lmz/candle-llama2/resolve/main/model.bin wget https://huggingface.co/spaces/lmz/candle-llama2/resolve/main/tokenizer.json trunk serve --release --port 8081 ``` And then head over to [http://localhost:8081/](http://localhost:8081/). <!--- ANCHOR: useful_libraries ---> ## Useful External Resources - [`candle-tutorial`](https://github.com/ToluClassics/candle-tutorial): A very detailed tutorial showing how to convert a PyTorch model to Candle. - [`candle-lora`](https://github.com/EricLBuehler/candle-lora): Efficient and ergonomic LoRA implementation for Candle. `candle-lora` has out-of-the-box LoRA support for many models from Candle, which can be found [here](https://github.com/EricLBuehler/candle-lora/tree/master/candle-lora-transformers/examples). - [`optimisers`](https://github.com/KGrewal1/optimisers): A collection of optimisers including SGD with momentum, AdaGrad, AdaDelta, AdaMax, NAdam, RAdam, and RMSprop. - [`candle-vllm`](https://github.com/EricLBuehler/candle-vllm): Efficient platform for inference and serving local LLMs including an OpenAI compatible API server. - [`candle-ext`](https://github.com/mokeyish/candle-ext): An extension library to Candle that provides PyTorch functions not currently available in Candle. - [`kalosm`](https://github.com/floneum/floneum/tree/master/interfaces/kalosm): A multi-modal meta-framework in Rust for interfacing with local pre-trained models with support for controlled generation, custom samplers, in-memory vector databases, audio transcription, and more. - [`candle-sampling`](https://github.com/EricLBuehler/candle-sampling): Sampling techniques for Candle. - [`gpt-from-scratch-rs`](https://github.com/jeroenvlek/gpt-from-scratch-rs): A port of Andrej Karpathy's _Let's build GPT_ tutorial on YouTube showcasing the Candle API on a toy problem. - [`candle-einops`](https://github.com/tomsanbear/candle-einops): A pure rust implementation of the python [einops](https://github.com/arogozhnikov/einops) library. If you have an addition to this list, please submit a pull request. <!--- ANCHOR_END: useful_libraries ---> <!--- ANCHOR: features ---> ## Features - Simple syntax, looks and feels like PyTorch. - Model training. - Embed user-defined ops/kernels, such as [flash-attention v2](https://github.com/huggingface/candle/blob/89ba005962495f2bfbda286e185e9c3c7f5300a3/candle-flash-attn/src/lib.rs#L152). - Backends. - Optimized CPU backend with optional MKL support for x86 and Accelerate for macs. - CUDA backend for efficiently running on GPUs, multiple GPU distribution via NCCL. - WASM support, run your models in a browser. - Included models. 
- Language Models. - LLaMA v1 and v2 with variants such as SOLAR-10.7B. - Falcon. - StarCoder, StarCoder2. - Phi 1, 1.5, and 2. - Mamba, Minimal Mamba - Gemma 2b and 7b. - Mistral 7b v0.1. - Mixtral 8x7b v0.1. - StableLM-3B-4E1T, StableLM-2-1.6B, Stable-Code-3B. - Replit-code-v1.5-3B. - Bert. - Yi-6B and Yi-34B. - Qwen1.5. - RWKV v5 and v6. - Quantized LLMs. - Llama 7b, 13b, 70b, as well as the chat and code variants. - Mistral 7b, and 7b instruct. - Mixtral 8x7b. - Zephyr 7b a and b (Mistral-7b based). - OpenChat 3.5 (Mistral-7b based). - Text to text. - T5 and its variants: FlanT5, UL2, MADLAD400 (translation), CoEdit (Grammar correction). - Marian MT (Machine Translation). - Text to image. - Stable Diffusion v1.5, v2.1, XL v1.0. - Wurstchen v2. - Image to text. - BLIP. - TrOCR. - Audio. - Whisper, multi-lingual speech-to-text. - EnCodec, audio compression model. - MetaVoice-1B, text-to-speech model. - Computer Vision Models. - DINOv2, ConvMixer, EfficientNet, ResNet, ViT, VGG, RepVGG, ConvNeXT, ConvNeXTv2, MobileOne, EfficientVit (MSRA). - yolo-v3, yolo-v8. - Segment-Anything Model (SAM). - SegFormer. - File formats: load models from safetensors, npz, ggml, or PyTorch files. - Serverless (on CPU), small and fast deployments. - Quantization support using the llama.cpp quantized types. <!--- ANCHOR_END: features ---> ## How to use <!--- ANCHOR: cheatsheet ---> Cheatsheet: | | Using PyTorch | Using Candle | |------------|------------------------------------------|------------------------------------------------------------------| | Creation | `torch.Tensor([[1, 2], [3, 4]])` | `Tensor::new(&[[1f32, 2.], [3., 4.]], &Device::Cpu)?` | | Creation | `torch.zeros((2, 2))` | `Tensor::zeros((2, 2), DType::F32, &Device::Cpu)?` | | Indexing | `tensor[:, :4]` | `tensor.i((.., ..4))?` | | Operations | `tensor.view((2, 2))` | `tensor.reshape((2, 2))?` | | Operations | `a.matmul(b)` | `a.matmul(&b)?` | | Arithmetic | `a + b` | `&a + &b` | | Device | `tensor.to(device="cuda")` | `tensor.to_device(&Device::new_cuda(0)?)?` | | Dtype | `tensor.to(dtype=torch.float16)` | `tensor.to_dtype(&DType::F16)?` | | Saving | `torch.save({"A": A}, "model.bin")` | `candle::safetensors::save(&HashMap::from([("A", A)]), "model.safetensors")?` | | Loading | `weights = torch.load("model.bin")` | `candle::safetensors::load("model.safetensors", &device)` | <!--- ANCHOR_END: cheatsheet ---> ## Structure - [candle-core](./candle-core): Core ops, devices, and `Tensor` struct definition - [candle-nn](./candle-nn/): Tools to build real models - [candle-examples](./candle-examples/): Examples of using the library in realistic settings - [candle-kernels](./candle-kernels/): CUDA custom kernels - [candle-datasets](./candle-datasets/): Datasets and data loaders. - [candle-transformers](./candle-transformers): transformers-related utilities. - [candle-flash-attn](./candle-flash-attn): Flash attention v2 layer. - [candle-onnx](./candle-onnx/): ONNX model evaluation. ## FAQ ### Why should I use Candle? Candle's core goal is to *make serverless inference possible*. Full machine learning frameworks like PyTorch are very large, which makes creating instances on a cluster slow. Candle allows deployment of lightweight binaries. Secondly, Candle lets you *remove Python* from production workloads. Python overhead can seriously hurt performance, and the [GIL](https://www.backblaze.com/blog/the-python-gil-past-present-and-future/) is a notorious source of headaches. Finally, Rust is cool! 
A lot of the HF ecosystem already has Rust crates, like [safetensors](https://github.com/huggingface/safetensors) and [tokenizers](https://github.com/huggingface/tokenizers). ### Other ML frameworks - [dfdx](https://github.com/coreylowman/dfdx) is a formidable crate, with shapes being included in types. This prevents a lot of headaches by getting the compiler to complain about shape mismatches right off the bat. However, we found that some features still require nightly, and writing code can be a bit daunting for non rust experts. We're leveraging and contributing to other core crates for the runtime so hopefully both crates can benefit from each other. - [burn](https://github.com/burn-rs/burn) is a general crate that can leverage multiple backends so you can choose the best engine for your workload. - [tch-rs](https://github.com/LaurentMazare/tch-rs.git) Bindings to the torch library in Rust. Extremely versatile, but they bring in the entire torch library into the runtime. The main contributor of `tch-rs` is also involved in the development of `candle`. ### Common Errors #### Missing symbols when compiling with the mkl feature. If you get some missing symbols when compiling binaries/tests using the mkl or accelerate features, e.g. for mkl you get: ``` = note: /usr/bin/ld: (....o): in function `blas::sgemm': .../blas-0.22.0/src/lib.rs:1944: undefined reference to `sgemm_' collect2: error: ld returned 1 exit status = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified = note: use the `-l` flag to specify native libraries to link = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo ``` or for accelerate: ``` Undefined symbols for architecture arm64: "_dgemm_", referenced from: candle_core::accelerate::dgemm::h1b71a038552bcabe in libcandle_core... "_sgemm_", referenced from: candle_core::accelerate::sgemm::h2cf21c592cba3c47 in libcandle_core... ld: symbol(s) not found for architecture arm64 ``` This is likely due to a missing linker flag that was needed to enable the mkl library. You can try adding the following for mkl at the top of your binary: ```rust extern crate intel_mkl_src; ``` or for accelerate: ```rust extern crate accelerate_src; ``` #### Cannot run the LLaMA examples: access to source requires login credentials ``` Error: request error: https://huggingface.co/meta-llama/Llama-2-7b-hf/resolve/main/tokenizer.json: status code 401 ``` This is likely because you're not permissioned for the LLaMA-v2 model. To fix this, you have to register on the huggingface-hub, accept the [LLaMA-v2 model conditions](https://huggingface.co/meta-llama/Llama-2-7b-hf), and set up your authentication token. See issue [#350](https://github.com/huggingface/candle/issues/350) for more details. #### Missing cute/cutlass headers when compiling flash-attn ``` In file included from kernels/flash_fwd_launch_template.h:11:0, from kernels/flash_fwd_hdim224_fp16_sm80.cu:5: kernels/flash_fwd_kernel.h:8:10: fatal error: cute/algorithm/copy.hpp: No such file or directory #include <cute/algorithm/copy.hpp> ^~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. Error: nvcc error while compiling: ``` [cutlass](https://github.com/NVIDIA/cutlass) is provided as a git submodule so you may want to run the following command to check it in properly. 
```bash git submodule update --init ``` #### Compiling with flash-attention fails ``` /usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with โ€˜...โ€™: ``` This is a bug in gcc-11 triggered by the Cuda compiler. To fix this, install a different, supported gcc version - for example gcc-10, and specify the path to the compiler in the CANDLE_NVCC_CCBIN environment variable. ``` env CANDLE_NVCC_CCBIN=/usr/lib/gcc/x86_64-linux-gnu/10 cargo ... ``` #### Linking error on windows when running rustdoc or mdbook tests ``` Couldn't compile the test. ---- .\candle-book\src\inference\hub.md - Using_the_hub::Using_in_a_real_model_ (line 50) stdout ---- error: linking with `link.exe` failed: exit code: 1181 //very long chain of linking = note: LINK : fatal error LNK1181: cannot open input file 'windows.0.48.5.lib' ``` Make sure you link all native libraries that might be located outside a project target, e.g., to run mdbook tests, you should run: ``` mdbook test candle-book -L .\target\debug\deps\ ` -L native=$env:USERPROFILE\.cargo\registry\src\index.crates.io-6f17d22bba15001f\windows_x86_64_msvc-0.42.2\lib ` -L native=$env:USERPROFILE\.cargo\registry\src\index.crates.io-6f17d22bba15001f\windows_x86_64_msvc-0.48.5\lib ``` #### Extremely slow model load time with WSL This may be caused by the models being loaded from `/mnt/c`, more details on [stackoverflow](https://stackoverflow.com/questions/68972448/why-is-wsl-extremely-slow-when-compared-with-native-windows-npm-yarn-processing). #### Tracking down errors You can set `RUST_BACKTRACE=1` to be provided with backtraces when a candle error is generated.
candle/README.md/0
{ "file_path": "candle/README.md", "repo_id": "candle", "token_count": 7636 }
13
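The cheatsheet in the README above maps PyTorch idioms to their Candle equivalents. For readers who want to verify the left-hand column, here it is as a runnable PyTorch snippet; the GPU and file I/O lines are commented out so it runs anywhere, and it is purely illustrative.

```python
# The PyTorch column of the cheatsheet above, as runnable code.
import torch

a = torch.Tensor([[1, 2], [3, 4]])        # creation from nested lists
z = torch.zeros((2, 2))                   # zero-filled tensor
sliced = a[:, :4]                         # indexing (clamped to the 2 columns available)
reshaped = a.view((2, 2))                 # reshaping
b = torch.Tensor([[5, 6], [7, 8]])
prod = a.matmul(b)                        # matrix multiplication
summed = a + b                            # arithmetic
half = a.to(dtype=torch.float16)          # dtype change
# cuda = a.to(device="cuda")              # device change (requires a GPU)
# torch.save({"A": a}, "model.bin")       # saving
# weights = torch.load("model.bin")       # loading
print(prod, summed, half, sliced.shape, reshaped.shape, z.shape)
```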
# Pytorch cheatsheet

{{#include ../../../README.md:cheatsheet}}
candle/candle-book/src/guide/cheatsheet.md/0
{ "file_path": "candle/candle-book/src/guide/cheatsheet.md", "repo_id": "candle", "token_count": 26 }
14
//! Implement conversion traits for tensors use crate::{DType, Device, Error, Tensor, WithDType}; use half::{bf16, f16, slice::HalfFloatSliceExt}; use std::convert::TryFrom; impl<T: WithDType> TryFrom<&Tensor> for Vec<T> { type Error = Error; fn try_from(tensor: &Tensor) -> Result<Self, Self::Error> { tensor.to_vec1::<T>() } } impl<T: WithDType> TryFrom<&Tensor> for Vec<Vec<T>> { type Error = Error; fn try_from(tensor: &Tensor) -> Result<Self, Self::Error> { tensor.to_vec2::<T>() } } impl<T: WithDType> TryFrom<&Tensor> for Vec<Vec<Vec<T>>> { type Error = Error; fn try_from(tensor: &Tensor) -> Result<Self, Self::Error> { tensor.to_vec3::<T>() } } impl<T: WithDType> TryFrom<Tensor> for Vec<T> { type Error = Error; fn try_from(tensor: Tensor) -> Result<Self, Self::Error> { Vec::<T>::try_from(&tensor) } } impl<T: WithDType> TryFrom<Tensor> for Vec<Vec<T>> { type Error = Error; fn try_from(tensor: Tensor) -> Result<Self, Self::Error> { Vec::<Vec<T>>::try_from(&tensor) } } impl<T: WithDType> TryFrom<Tensor> for Vec<Vec<Vec<T>>> { type Error = Error; fn try_from(tensor: Tensor) -> Result<Self, Self::Error> { Vec::<Vec<Vec<T>>>::try_from(&tensor) } } impl<T: WithDType> TryFrom<&[T]> for Tensor { type Error = Error; fn try_from(v: &[T]) -> Result<Self, Self::Error> { Tensor::from_slice(v, v.len(), &Device::Cpu) } } impl<T: WithDType> TryFrom<Vec<T>> for Tensor { type Error = Error; fn try_from(v: Vec<T>) -> Result<Self, Self::Error> { let len = v.len(); Tensor::from_vec(v, len, &Device::Cpu) } } macro_rules! from_tensor { ($typ:ident) => { impl TryFrom<&Tensor> for $typ { type Error = Error; fn try_from(tensor: &Tensor) -> Result<Self, Self::Error> { tensor.to_scalar::<$typ>() } } impl TryFrom<Tensor> for $typ { type Error = Error; fn try_from(tensor: Tensor) -> Result<Self, Self::Error> { $typ::try_from(&tensor) } } impl TryFrom<$typ> for Tensor { type Error = Error; fn try_from(v: $typ) -> Result<Self, Self::Error> { Tensor::new(v, &Device::Cpu) } } }; } from_tensor!(f64); from_tensor!(f32); from_tensor!(f16); from_tensor!(bf16); from_tensor!(i64); from_tensor!(u32); from_tensor!(u8); impl Tensor { pub fn write_bytes<W: std::io::Write>(&self, f: &mut W) -> crate::Result<()> { use byteorder::{LittleEndian, WriteBytesExt}; let vs = self.flatten_all()?; match self.dtype() { DType::BF16 => { let vs = vs.to_vec1::<bf16>()?; for &v in vs.reinterpret_cast() { f.write_u16::<LittleEndian>(v)? } } DType::F16 => { let vs = vs.to_vec1::<f16>()?; for &v in vs.reinterpret_cast() { f.write_u16::<LittleEndian>(v)? } } DType::F32 => { // TODO: Avoid using a buffer when data is already on the CPU. for v in vs.to_vec1::<f32>()? { f.write_f32::<LittleEndian>(v)? } } DType::F64 => { for v in vs.to_vec1::<f64>()? { f.write_f64::<LittleEndian>(v)? } } DType::U32 => { for v in vs.to_vec1::<u32>()? { f.write_u32::<LittleEndian>(v)? } } DType::I64 => { for v in vs.to_vec1::<i64>()? { f.write_i64::<LittleEndian>(v)? } } DType::U8 => { let vs = vs.to_vec1::<u8>()?; f.write_all(&vs)?; } } Ok(()) } }
candle/candle-core/src/convert.rs/0
{ "file_path": "candle/candle-core/src/convert.rs", "repo_id": "candle", "token_count": 2242 }
15
use crate::{Error, Tensor}; use std::ops::{ Bound, Range, RangeBounds, RangeFrom, RangeFull, RangeInclusive, RangeTo, RangeToInclusive, }; impl Tensor { /// Intended to be use by the trait `.i()` /// /// ``` /// # use candle_core::{Tensor, DType, Device, IndexOp}; /// let a = Tensor::zeros((2, 3), DType::F32, &Device::Cpu)?; /// /// let c = a.i(0..1)?; /// assert_eq!(c.shape().dims(), &[1, 3]); /// /// let c = a.i(0)?; /// assert_eq!(c.shape().dims(), &[3]); /// /// let c = a.i((.., ..2) )?; /// assert_eq!(c.shape().dims(), &[2, 2]); /// /// let c = a.i((.., ..=2))?; /// assert_eq!(c.shape().dims(), &[2, 3]); /// /// # Ok::<(), candle_core::Error>(()) /// ``` fn index(&self, indexers: &[TensorIndexer]) -> Result<Self, Error> { let mut x = self.clone(); let dims = self.shape().dims(); let mut current_dim = 0; for (i, indexer) in indexers.iter().enumerate() { x = match indexer { TensorIndexer::Select(n) => x.narrow(current_dim, *n, 1)?.squeeze(current_dim)?, TensorIndexer::Narrow(left_bound, right_bound) => { let start = match left_bound { Bound::Included(n) => *n, Bound::Excluded(n) => *n + 1, Bound::Unbounded => 0, }; let stop = match right_bound { Bound::Included(n) => *n + 1, Bound::Excluded(n) => *n, Bound::Unbounded => dims[i], }; let out = x.narrow(current_dim, start, stop.saturating_sub(start))?; current_dim += 1; out } TensorIndexer::IndexSelect(indexes) => { if indexes.rank() != 1 { crate::bail!("multi-dimensional tensor indexing is not supported") } let out = x.index_select(&indexes.to_device(x.device())?, current_dim)?; current_dim += 1; out } TensorIndexer::Err(e) => crate::bail!("indexing error {e:?}"), }; } Ok(x) } } #[derive(Debug)] /// Generic structure used to index a slice of the tensor pub enum TensorIndexer { /// This selects the elements for which an index has some specific value. Select(usize), /// This is a regular slice, purely indexing a chunk of the tensor Narrow(Bound<usize>, Bound<usize>), /// Indexing via a 1d tensor IndexSelect(Tensor), Err(Error), } impl From<usize> for TensorIndexer { fn from(index: usize) -> Self { TensorIndexer::Select(index) } } impl From<&[u32]> for TensorIndexer { fn from(index: &[u32]) -> Self { match Tensor::new(index, &crate::Device::Cpu) { Ok(tensor) => TensorIndexer::IndexSelect(tensor), Err(e) => TensorIndexer::Err(e), } } } impl From<Vec<u32>> for TensorIndexer { fn from(index: Vec<u32>) -> Self { let len = index.len(); match Tensor::from_vec(index, len, &crate::Device::Cpu) { Ok(tensor) => TensorIndexer::IndexSelect(tensor), Err(e) => TensorIndexer::Err(e), } } } impl From<&Tensor> for TensorIndexer { fn from(tensor: &Tensor) -> Self { TensorIndexer::IndexSelect(tensor.clone()) } } trait RB: RangeBounds<usize> {} impl RB for Range<usize> {} impl RB for RangeFrom<usize> {} impl RB for RangeFull {} impl RB for RangeInclusive<usize> {} impl RB for RangeTo<usize> {} impl RB for RangeToInclusive<usize> {} impl<T: RB> From<T> for TensorIndexer { fn from(range: T) -> Self { use std::ops::Bound::*; let start = match range.start_bound() { Included(idx) => Included(*idx), Excluded(idx) => Excluded(*idx), Unbounded => Unbounded, }; let end = match range.end_bound() { Included(idx) => Included(*idx), Excluded(idx) => Excluded(*idx), Unbounded => Unbounded, }; TensorIndexer::Narrow(start, end) } } /// Trait used to implement multiple signatures for ease of use of the slicing /// of a tensor pub trait IndexOp<T> { /// Returns a slicing iterator which are the chunks of data necessary to /// reconstruct the desired tensor. 
fn i(&self, index: T) -> Result<Tensor, Error>; } impl<T> IndexOp<T> for Tensor where T: Into<TensorIndexer>, { fn i(&self, index: T) -> Result<Tensor, Error> { self.index(&[index.into()]) } } macro_rules! index_op_tuple { ($($t:ident),+) => { #[allow(non_snake_case)] impl<$($t),*> IndexOp<($($t,)*)> for Tensor where $($t: Into<TensorIndexer>,)* { fn i(&self, ($($t,)*): ($($t,)*)) -> Result<Tensor, Error> { self.index(&[$($t.into(),)*]) } } }; } index_op_tuple!(A); index_op_tuple!(A, B); index_op_tuple!(A, B, C); index_op_tuple!(A, B, C, D); index_op_tuple!(A, B, C, D, E); index_op_tuple!(A, B, C, D, E, F); index_op_tuple!(A, B, C, D, E, F, G);
candle/candle-core/src/indexer.rs/0
{ "file_path": "candle/candle-core/src/indexer.rs", "repo_id": "candle", "token_count": 2652 }
16
use crate::{CpuStorage, Device, Result, Shape, Storage, Tensor}; use k_quants::*; use std::borrow::Cow; #[cfg(target_feature = "avx")] pub mod avx; mod dummy_cuda; mod dummy_metal; pub mod ggml_file; pub mod gguf_file; pub mod k_quants; #[cfg(feature = "metal")] pub mod metal; #[cfg(not(feature = "metal"))] mod metal { pub use super::dummy_metal::*; } #[cfg(feature = "cuda")] pub mod cuda; #[cfg(not(feature = "cuda"))] mod cuda { pub use super::dummy_cuda::*; } #[cfg(target_feature = "neon")] pub mod neon; #[cfg(target_feature = "simd128")] pub mod simd128; pub mod utils; use half::f16; pub use k_quants::GgmlType; pub struct QTensor { storage: QStorage, shape: Shape, } impl Device { fn qzeros(&self, elem_count: usize, dtype: GgmlDType) -> Result<QStorage> { match self { Device::Cpu => { let storage = dtype.cpu_zeros(elem_count); Ok(QStorage::Cpu(storage)) } Device::Metal(metal) => { let storage = metal::QMetalStorage::zeros(metal, elem_count, dtype)?; Ok(QStorage::Metal(storage)) } Device::Cuda(cuda) => { let storage = cuda::QCudaStorage::zeros(cuda, elem_count, dtype)?; Ok(QStorage::Cuda(storage)) } } } } pub enum QStorage { Cpu(Box<dyn QuantizedType>), Metal(metal::QMetalStorage), Cuda(cuda::QCudaStorage), } impl QStorage { fn block_size(&self) -> usize { match self { QStorage::Cpu(storage) => storage.block_size(), QStorage::Metal(storage) => storage.dtype().block_size(), QStorage::Cuda(storage) => storage.dtype().block_size(), } } fn dtype(&self) -> GgmlDType { match self { QStorage::Cpu(storage) => storage.dtype(), QStorage::Metal(storage) => storage.dtype(), QStorage::Cuda(storage) => storage.dtype(), } } fn device(&self) -> Device { match self { QStorage::Cpu(_storage) => Device::Cpu, QStorage::Metal(storage) => Device::Metal(storage.device().clone()), QStorage::Cuda(storage) => Device::Cuda(storage.device().clone()), } } fn size_in_bytes(&self) -> usize { match self { QStorage::Cpu(storage) => storage.storage_size_in_bytes(), QStorage::Metal(storage) => storage.storage_size_in_bytes(), QStorage::Cuda(storage) => storage.storage_size_in_bytes(), } } fn quantize(&mut self, src: &Storage) -> Result<()> { match (self, src) { (QStorage::Cpu(storage), Storage::Cpu(src)) => { storage.from_float(src.as_slice::<f32>()?)?; } (QStorage::Metal(storage), Storage::Metal(src)) => storage.quantize(src)?, (QStorage::Cuda(storage), Storage::Cuda(src)) => storage.quantize(src)?, _ => crate::bail!("Invalid dequantize storage locations do not match"), } Ok(()) } fn dequantize(&self, elem_count: usize) -> Result<Storage> { match self { QStorage::Cpu(storage) => Ok(Storage::Cpu(storage.dequantize(elem_count)?)), QStorage::Metal(storage) => Ok(Storage::Metal(storage.dequantize(elem_count)?)), QStorage::Cuda(storage) => Ok(Storage::Cuda(storage.dequantize(elem_count)?)), } } fn data(&self) -> Result<Cow<[u8]>> { match self { QStorage::Cpu(storage) => { let data_ptr = storage.as_ptr(); let size_in_bytes = storage.storage_size_in_bytes(); let data = unsafe { std::slice::from_raw_parts(data_ptr, size_in_bytes) }; Ok(Cow::from(data)) } QStorage::Metal(_) | QStorage::Cuda(_) => { crate::bail!("not implemented"); } } } } #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] pub enum GgmlDType { F32, F16, Q4_0, Q4_1, Q5_0, Q5_1, Q8_0, Q8_1, Q2K, Q3K, Q4K, Q5K, Q6K, Q8K, } impl GgmlDType { pub(crate) fn from_u32(u: u32) -> Result<Self> { let dtype = match u { 0 => Self::F32, 1 => Self::F16, 2 => Self::Q4_0, 3 => Self::Q4_1, 6 => Self::Q5_0, 7 => Self::Q5_1, 8 => Self::Q8_0, 9 => Self::Q8_1, 10 => Self::Q2K, 11 => 
Self::Q3K, 12 => Self::Q4K, 13 => Self::Q5K, 14 => Self::Q6K, 15 => Self::Q8K, _ => crate::bail!("unknown dtype for tensor {u}"), }; Ok(dtype) } pub(crate) fn to_u32(self) -> u32 { match self { Self::F32 => 0, Self::F16 => 1, Self::Q4_0 => 2, Self::Q4_1 => 3, Self::Q5_0 => 6, Self::Q5_1 => 7, Self::Q8_0 => 8, Self::Q8_1 => 9, Self::Q2K => 10, Self::Q3K => 11, Self::Q4K => 12, Self::Q5K => 13, Self::Q6K => 14, Self::Q8K => 15, } } /// The block dtype pub fn cpu_zeros(&self, elem_count: usize) -> Box<dyn QuantizedType> { match self { Self::F32 => Box::new(vec![f32::zeros(); elem_count]), Self::F16 => Box::new(vec![f16::zeros(); elem_count]), Self::Q4_0 => Box::new(vec![BlockQ4_0::zeros(); elem_count / BlockQ4_0::BLCK_SIZE]), Self::Q4_1 => Box::new(vec![BlockQ4_1::zeros(); elem_count / BlockQ4_1::BLCK_SIZE]), Self::Q5_0 => Box::new(vec![BlockQ5_0::zeros(); elem_count / BlockQ5_0::BLCK_SIZE]), Self::Q5_1 => Box::new(vec![BlockQ5_1::zeros(); elem_count / BlockQ5_1::BLCK_SIZE]), Self::Q8_0 => Box::new(vec![BlockQ8_0::zeros(); elem_count / BlockQ8_0::BLCK_SIZE]), Self::Q8_1 => Box::new(vec![BlockQ8_1::zeros(); elem_count / BlockQ8_1::BLCK_SIZE]), Self::Q2K => Box::new(vec![BlockQ2K::zeros(); elem_count / BlockQ2K::BLCK_SIZE]), Self::Q3K => Box::new(vec![BlockQ3K::zeros(); elem_count / BlockQ3K::BLCK_SIZE]), Self::Q4K => Box::new(vec![BlockQ4K::zeros(); elem_count / BlockQ4K::BLCK_SIZE]), Self::Q5K => Box::new(vec![BlockQ5K::zeros(); elem_count / BlockQ5K::BLCK_SIZE]), Self::Q6K => Box::new(vec![BlockQ6K::zeros(); elem_count / BlockQ6K::BLCK_SIZE]), Self::Q8K => Box::new(vec![BlockQ8K::zeros(); elem_count / BlockQ8K::BLCK_SIZE]), } } /// The type size for blocks in bytes. pub fn type_size(&self) -> usize { use k_quants::*; match self { Self::F32 => 4, Self::F16 => 2, Self::Q4_0 => std::mem::size_of::<BlockQ4_0>(), Self::Q4_1 => std::mem::size_of::<BlockQ4_1>(), Self::Q5_0 => std::mem::size_of::<BlockQ5_0>(), Self::Q5_1 => std::mem::size_of::<BlockQ5_1>(), // https://github.com/ggerganov/llama.cpp/blob/468ea24fb4633a0d681f7ac84089566c1c6190cb/ggml.c#L932 Self::Q8_0 => std::mem::size_of::<BlockQ8_0>(), Self::Q8_1 => std::mem::size_of::<BlockQ8_1>(), Self::Q2K => std::mem::size_of::<BlockQ2K>(), Self::Q3K => std::mem::size_of::<BlockQ3K>(), Self::Q4K => std::mem::size_of::<BlockQ4K>(), Self::Q5K => std::mem::size_of::<BlockQ5K>(), Self::Q6K => std::mem::size_of::<BlockQ6K>(), Self::Q8K => std::mem::size_of::<BlockQ8K>(), } } /// The block size, i.e. the number of elements stored in each block. pub fn block_size(&self) -> usize { match self { Self::F32 => 1, Self::F16 => 1, Self::Q4_0 => k_quants::QK4_0, Self::Q4_1 => k_quants::QK4_1, Self::Q5_0 => k_quants::QK5_0, Self::Q5_1 => k_quants::QK5_1, Self::Q8_0 => k_quants::QK8_0, Self::Q8_1 => k_quants::QK8_1, Self::Q2K | Self::Q3K | Self::Q4K | Self::Q5K | Self::Q6K | Self::Q8K => k_quants::QK_K, } } } // A version of GgmlType without `vec_dot` so that it can be dyn boxed. 
pub trait QuantizedType: Send + Sync { fn dtype(&self) -> GgmlDType; fn matmul_t(&self, mkn: (usize, usize, usize), lhs: &[f32], dst: &mut [f32]) -> Result<()>; fn dequantize(&self, elem_count: usize) -> Result<CpuStorage>; fn storage_size_in_bytes(&self) -> usize; fn as_ptr(&self) -> *const u8; fn block_size(&self) -> usize; #[allow(clippy::wrong_self_convention)] fn from_float(&mut self, xs: &[f32]) -> Result<()>; fn size(&self) -> usize; } impl<T: k_quants::GgmlType + Send + Sync> QuantizedType for Vec<T> { fn matmul_t(&self, mkn: (usize, usize, usize), lhs: &[f32], dst: &mut [f32]) -> Result<()> { k_quants::matmul(mkn, lhs, self.as_slice(), dst) } fn size(&self) -> usize { self.len() * core::mem::size_of::<T>() } fn from_float(&mut self, xs: &[f32]) -> Result<()> { T::from_float(xs, self) } fn dtype(&self) -> GgmlDType { T::DTYPE } fn block_size(&self) -> usize { T::BLCK_SIZE } fn dequantize(&self, elem_count: usize) -> Result<CpuStorage> { let mut ys = vec![0.0f32; elem_count]; T::to_float(self.as_slice(), &mut ys)?; Ok(CpuStorage::F32(ys)) } fn storage_size_in_bytes(&self) -> usize { self.len() * std::mem::size_of::<T>() } fn as_ptr(&self) -> *const u8 { self.as_ptr() as *const u8 } } impl std::fmt::Debug for QTensor { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { write!(f, "QTensor[{:?}; {:?}]", self.shape, self.dtype()) } } fn check_shape(shape: &Shape, block_size: usize) -> Result<()> { let dims = shape.dims(); if dims.is_empty() { crate::bail!("scalar tensor cannot be quantized {shape:?}") } if dims[dims.len() - 1] % block_size != 0 { crate::bail!( "quantized tensor must have their last dim divisible by block size {shape:?} {}", block_size ) } Ok(()) } impl QTensor { pub fn new<S: Into<Shape>>(storage: QStorage, shape: S) -> Result<Self> { let shape = shape.into(); check_shape(&shape, storage.block_size())?; Ok(Self { storage, shape }) } pub fn quantize(src: &Tensor, dtype: GgmlDType) -> Result<Self> { let shape = src.shape(); let block_size = dtype.block_size(); check_shape(shape, block_size)?; let src = src.to_dtype(crate::DType::F32)?.flatten_all()?; let elem_count = shape.elem_count(); if elem_count % block_size != 0 { crate::bail!( "tensor size ({shape:?}) is not divisible by block size {}", block_size ) } let mut storage = src.device().qzeros(elem_count, dtype)?; storage.quantize(&src.storage())?; Ok(Self { storage, shape: shape.clone(), }) } pub fn dtype(&self) -> GgmlDType { self.storage.dtype() } pub fn device(&self) -> Device { self.storage.device() } pub fn rank(&self) -> usize { self.shape.rank() } pub fn shape(&self) -> &Shape { &self.shape } pub fn dequantize(&self, device: &Device) -> Result<Tensor> { let storage = self.storage.dequantize(self.shape.elem_count())?; let none = crate::op::BackpropOp::none(); let is_variable = false; crate::tensor::from_storage(storage, self.shape.clone(), none, is_variable) .to_device(device) } pub fn storage_size_in_bytes(&self) -> usize { self.storage.size_in_bytes() } pub fn data(&self) -> Result<Cow<'_, [u8]>> { self.storage.data() } } #[derive(Clone, Debug)] pub enum QMatMul { QTensor(std::sync::Arc<QTensor>), Tensor(Tensor), } thread_local! 
{ static DEQUANTIZE_ALL: bool = { match std::env::var("CANDLE_DEQUANTIZE_ALL") { Ok(s) => { !s.is_empty() && s != "0" }, Err(_) => false, } } } impl QMatMul { pub fn from_arc(qtensor: std::sync::Arc<QTensor>) -> Result<Self> { let dequantize = match qtensor.dtype() { GgmlDType::F32 | GgmlDType::F16 => true, _ => DEQUANTIZE_ALL.with(|b| *b), }; let t = if dequantize { let tensor = qtensor.dequantize(&qtensor.device())?; Self::Tensor(tensor) } else { Self::QTensor(qtensor) }; Ok(t) } pub fn from_qtensor(qtensor: QTensor) -> Result<Self> { Self::from_arc(std::sync::Arc::new(qtensor)) } } impl crate::CustomOp1 for QTensor { fn name(&self) -> &'static str { "qmatmul" } fn cpu_fwd( &self, storage: &crate::CpuStorage, layout: &crate::Layout, ) -> Result<(crate::CpuStorage, Shape)> { if !layout.is_contiguous() { crate::bail!("input tensor is not contiguous {layout:?}") } let src_shape = layout.shape(); // self is transposed so n is first then k. let (n, k) = self.shape.dims2()?; if src_shape.rank() < 2 { crate::bail!("input tensor has only one dimension {layout:?}") } let mut dst_shape = src_shape.dims().to_vec(); let last_k = dst_shape.pop().unwrap(); if last_k != k { crate::bail!("input tensor {layout:?} incompatible with {:?}", self.shape) } dst_shape.push(n); let dst_shape = Shape::from(dst_shape); #[allow(clippy::infallible_destructuring_match)] let self_storage = match &self.storage { QStorage::Cpu(storage) => storage, QStorage::Metal(_) | QStorage::Cuda(_) => crate::bail!("Invalid storage"), }; let slice = storage.as_slice::<f32>()?; let slice = &slice[layout.start_offset()..layout.start_offset() + src_shape.elem_count()]; let mut dst_storage = vec![0f32; dst_shape.elem_count()]; self_storage.matmul_t((dst_shape.elem_count() / n, k, n), slice, &mut dst_storage)?; Ok((crate::CpuStorage::F32(dst_storage), dst_shape)) } fn metal_fwd( &self, storage: &crate::MetalStorage, layout: &crate::Layout, ) -> Result<(crate::MetalStorage, Shape)> { let self_storage = match &self.storage { QStorage::Metal(metal) => metal, _ => unreachable!("Cannot call metal matmul on non metal QTensor"), }; self_storage.fwd(&self.shape, storage, layout) } fn cuda_fwd( &self, storage: &crate::CudaStorage, layout: &crate::Layout, ) -> Result<(crate::CudaStorage, Shape)> { let self_storage = match &self.storage { QStorage::Cuda(cuda) => cuda, _ => unreachable!("Cannot call cuda matmul on non cuda QTensor"), }; self_storage.fwd(&self.shape, storage, layout) } } impl crate::Module for QMatMul { fn forward(&self, xs: &Tensor) -> Result<Tensor> { match self { Self::QTensor(t) => xs.apply_op1_no_bwd(t.as_ref()), Self::Tensor(w) => { let w = match *xs.dims() { [b1, b2, _, _] => w.broadcast_left((b1, b2))?.t()?, [bsize, _, _] => w.broadcast_left(bsize)?.t()?, _ => w.t()?, }; xs.matmul(&w) } } } }
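// A minimal usage sketch of the types defined above (shapes chosen for illustration): an f32
// weight matrix is quantized to Q4_0, wrapped in a `QMatMul`, and applied to activations
// through the `Module` implementation. Only APIs from this file are relied upon.
#[cfg(test)]
mod qmatmul_example {
    use super::{GgmlDType, QMatMul, QTensor};
    use crate::{Device, Module, Tensor};

    #[test]
    fn quantized_matmul() -> crate::Result<()> {
        let dev = Device::Cpu;
        // The last dimension must be divisible by the Q4_0 block size (32).
        let weight = Tensor::randn(0f32, 1f32, (8, 64), &dev)?;
        let qweight = QTensor::quantize(&weight, GgmlDType::Q4_0)?;
        let qmatmul = QMatMul::from_qtensor(qweight)?;
        // `QMatMul` multiplies by the transposed weight: (4, 64) x (8, 64)^T -> (4, 8).
        let xs = Tensor::randn(0f32, 1f32, (4, 64), &dev)?;
        let ys = qmatmul.forward(&xs)?;
        assert_eq!(ys.dims(), &[4, 8]);
        Ok(())
    }
}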
candle/candle-core/src/quantized/mod.rs/0
{ "file_path": "candle/candle-core/src/quantized/mod.rs", "repo_id": "candle", "token_count": 8178 }
17
# candle-datasets
candle/candle-datasets/README.md/0
{ "file_path": "candle/candle-datasets/README.md", "repo_id": "candle", "token_count": 7 }
18
# candle-blip

The [blip-image-captioning](https://huggingface.co/Salesforce/blip-image-captioning-base) model can
generate captions for an input image.

## Running an example

```bash
cargo run --example blip --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg
```

```
Running on CPU, to run on GPU, build this example with `--features cuda`
loaded image Tensor[dims 3, 384, 384; f32]
model built
several cyclists are riding down a road with cars behind them%
```

![Leading group, Giro d'Italia 2021](../yolo-v8/assets/bike.jpg)
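To run on a GPU instead, the same command can be built with the CUDA feature enabled (this
assumes a CUDA toolchain and a compatible GPU are available):

```bash
cargo run --example blip --release --features cuda -- --image candle-examples/examples/yolo-v8/assets/bike.jpg
```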
candle/candle-examples/examples/blip/README.md/0
{ "file_path": "candle/candle-examples/examples/blip/README.md", "repo_id": "candle", "token_count": 190 }
19
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::{Parser, ValueEnum}; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::efficientvit; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { M0, M1, M2, M3, M4, M5, } impl Which { fn model_filename(&self) -> String { let name = match self { Self::M0 => "m0", Self::M1 => "m1", Self::M2 => "m2", Self::M3 => "m3", Self::M4 => "m4", Self::M5 => "m5", }; format!("timm/efficientvit_{}.r224_in1k", name) } fn config(&self) -> efficientvit::Config { match self { Self::M0 => efficientvit::Config::m0(), Self::M1 => efficientvit::Config::m1(), Self::M2 => efficientvit::Config::m2(), Self::M3 => efficientvit::Config::m3(), Self::M4 => efficientvit::Config::m4(), Self::M5 => efficientvit::Config::m5(), } } } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(value_enum, long, default_value_t=Which::M0)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let model_name = args.which.model_filename(); let api = hf_hub::api::sync::Api::new()?; let api = api.model(model_name); api.get("model.safetensors")? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let model = efficientvit::efficientvit(&args.which.config(), 1000, vb)?; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
candle/candle-examples/examples/efficientvit/main.rs/0
{ "file_path": "candle/candle-examples/examples/efficientvit/main.rs", "repo_id": "candle", "token_count": 1271 }
20
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::{Parser, ValueEnum}; mod model; use model::{Config, Model}; use candle::{DType, Device, Module, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, temp, top_p); Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let eos_token = match self.tokenizer.get_token("<|endoftext|>") { Some(token) => token, None => anyhow::bail!("cannot find the </s> token"), }; let start_gen = std::time::Instant::now(); for _ in 0..sample_len { let input = Tensor::new(tokens.as_slice(), &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input)?; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Parser, ValueEnum, Clone, Copy, PartialEq, Eq, Debug)] enum Which { Mamba130m, Mamba370m, Mamba790m, Mamba1_4b, Mamba2_8b, Mamba2_8bSlimPj, } impl std::fmt::Display for Which { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "{:?}", self) } } impl Which { fn model_id(&self) -> &'static str { match self { Self::Mamba130m => "state-spaces/mamba-130m", Self::Mamba370m => "state-spaces/mamba-370m", Self::Mamba790m => "state-spaces/mamba-790m", Self::Mamba1_4b => "state-spaces/mamba-1.4b", Self::Mamba2_8b => "state-spaces/mamba-2.8b", Self::Mamba2_8bSlimPj => "state-spaces/mamba-2.8b-slimpj'", } } fn revision(&self) -> &'static str { match self { Self::Mamba130m | Self::Mamba370m | Self::Mamba790m | Self::Mamba1_4b | Self::Mamba2_8bSlimPj => "refs/pr/1", Self::Mamba2_8b => "refs/pr/4", } } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. 
#[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 5000)] sample_len: usize, #[arg(long, default_value = "mamba130m")] which: Which, #[arg(long)] model_id: Option<String>, #[arg(long)] revision: Option<String>, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] weight_files: Option<String>, #[arg(long)] config_file: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let repo = api.repo(Repo::with_revision( args.model_id .unwrap_or_else(|| args.which.model_id().to_string()), RepoType::Model, args.revision .unwrap_or_else(|| args.which.revision().to_string()), )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => api .model("EleutherAI/gpt-neox-20b".to_string()) .get("tokenizer.json")?, }; let config_filename = match args.config_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("config.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => { vec![repo.get("model.safetensors")?] } }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let config: Config = serde_json::from_slice(&std::fs::read(config_filename)?)?; let device = candle_examples::device(args.cpu)?; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, DType::F32, &device)? }; let model = Model::new(&config, vb.pp("backbone"))?; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, args.temperature, args.top_p, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/mamba-minimal/main.rs/0
{ "file_path": "candle/candle-examples/examples/mamba-minimal/main.rs", "repo_id": "candle", "token_count": 4087 }
21
#![allow(dead_code)] // https://huggingface.co/facebook/musicgen-small/tree/main // https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/models/musicgen/modeling_musicgen.py // TODO: Add an offline mode. // TODO: Add a KV cache. #[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; mod musicgen_model; use musicgen_model::{GenConfig, MusicgenForConditionalGeneration}; use anyhow::{Error as E, Result}; use candle::{DType, Tensor}; use candle_nn::VarBuilder; use clap::Parser; use hf_hub::{api::sync::Api, Repo, RepoType}; const DTYPE: DType = DType::F32; #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// The model weight file, in safetensor format. #[arg(long)] model: Option<String>, /// The tokenizer config. #[arg(long)] tokenizer: Option<String>, #[arg( long, default_value = "90s rock song with loud guitars and heavy drums" )] prompt: String, } fn main() -> Result<()> { use tokenizers::Tokenizer; let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let tokenizer = match args.tokenizer { Some(tokenizer) => std::path::PathBuf::from(tokenizer), None => Api::new()? .model("facebook/musicgen-small".to_string()) .get("tokenizer.json")?, }; let mut tokenizer = Tokenizer::from_file(tokenizer).map_err(E::msg)?; let tokenizer = tokenizer .with_padding(None) .with_truncation(None) .map_err(E::msg)?; let model = match args.model { Some(model) => std::path::PathBuf::from(model), None => Api::new()? .repo(Repo::with_revision( "facebook/musicgen-small".to_string(), RepoType::Model, "refs/pr/13".to_string(), )) .get("model.safetensors")?, }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model], DTYPE, &device)? }; let config = GenConfig::small(); let mut model = MusicgenForConditionalGeneration::load(vb, config)?; let tokens = tokenizer .encode(args.prompt.as_str(), true) .map_err(E::msg)? .get_ids() .to_vec(); println!("tokens: {tokens:?}"); let tokens = Tensor::new(tokens.as_slice(), &device)?.unsqueeze(0)?; println!("{tokens:?}"); let embeds = model.text_encoder.forward(&tokens)?; println!("{embeds}"); Ok(()) }
candle/candle-examples/examples/musicgen/main.rs/0
{ "file_path": "candle/candle-examples/examples/musicgen/main.rs", "repo_id": "candle", "token_count": 1151 }
22
#![allow(unused)] //! Wrappers around the Python API of Gymnasium (the new version of OpenAI gym) use candle::{Device, Result, Tensor}; use pyo3::prelude::*; use pyo3::types::PyDict; /// The return value for a step. #[derive(Debug)] pub struct Step<A> { pub state: Tensor, pub action: A, pub reward: f64, pub terminated: bool, pub truncated: bool, } impl<A: Copy> Step<A> { /// Returns a copy of this step changing the observation tensor. pub fn copy_with_obs(&self, state: &Tensor) -> Step<A> { Step { state: state.clone(), action: self.action, reward: self.reward, terminated: self.terminated, truncated: self.truncated, } } } /// An OpenAI Gym session. pub struct GymEnv { env: PyObject, action_space: usize, observation_space: Vec<usize>, } fn w(res: PyErr) -> candle::Error { candle::Error::wrap(res) } impl GymEnv { /// Creates a new session of the specified OpenAI Gym environment. pub fn new(name: &str) -> Result<GymEnv> { Python::with_gil(|py| { let gym = py.import("gymnasium")?; let make = gym.getattr("make")?; let env = make.call1((name,))?; let action_space = env.getattr("action_space")?; let action_space = if let Ok(val) = action_space.getattr("n") { val.extract()? } else { let action_space: Vec<usize> = action_space.getattr("shape")?.extract()?; action_space[0] }; let observation_space = env.getattr("observation_space")?; let observation_space = observation_space.getattr("shape")?.extract()?; Ok(GymEnv { env: env.into(), action_space, observation_space, }) }) .map_err(w) } /// Resets the environment, returning the observation tensor. pub fn reset(&self, seed: u64) -> Result<Tensor> { let state: Vec<f32> = Python::with_gil(|py| { let kwargs = PyDict::new(py); kwargs.set_item("seed", seed)?; let state = self.env.call_method(py, "reset", (), Some(kwargs))?; state.as_ref(py).get_item(0)?.extract() }) .map_err(w)?; Tensor::new(state, &Device::Cpu) } /// Applies an environment step using the specified action. pub fn step<A: pyo3::IntoPy<pyo3::Py<pyo3::PyAny>> + Clone>( &self, action: A, ) -> Result<Step<A>> { let (state, reward, terminated, truncated) = Python::with_gil(|py| { let step = self.env.call_method(py, "step", (action.clone(),), None)?; let step = step.as_ref(py); let state: Vec<f32> = step.get_item(0)?.extract()?; let reward: f64 = step.get_item(1)?.extract()?; let terminated: bool = step.get_item(2)?.extract()?; let truncated: bool = step.get_item(3)?.extract()?; Ok((state, reward, terminated, truncated)) }) .map_err(w)?; let state = Tensor::new(state, &Device::Cpu)?; Ok(Step { state, action, reward, terminated, truncated, }) } /// Returns the number of allowed actions for this environment. pub fn action_space(&self) -> usize { self.action_space } /// Returns the shape of the observation tensors. pub fn observation_space(&self) -> &[usize] { &self.observation_space } }
candle/candle-examples/examples/reinforcement-learning/gym_env.rs/0
{ "file_path": "candle/candle-examples/examples/reinforcement-learning/gym_env.rs", "repo_id": "candle", "token_count": 1716 }
23
# candle-segment-anything: Segment-Anything Model This example is based on Meta AI [Segment-Anything Model](https://github.com/facebookresearch/segment-anything). This model provides a robust and fast image segmentation pipeline that can be tweaked via some prompting (requesting some points to be in the target mask, requesting some points to be part of the background so _not_ in the target mask, specifying some bounding box). The default backbone can be replaced by the smaller and faster TinyViT model based on [MobileSAM](https://github.com/ChaoningZhang/MobileSAM). ## Running some example. ```bash cargo run --example segment-anything --release -- \ --image candle-examples/examples/yolo-v8/assets/bike.jpg --use-tiny --point 0.6,0.6 --point 0.6,0.55 ``` Running this command generates a `sam_merged.jpg` file containing the original image with a blue overlay of the selected mask. The red dots represent the prompt specified by `--point 0.6,0.6 --point 0.6,0.55`, this prompt is assumed to be part of the target mask. The values used for `--point` should be a comma delimited pair of float values. They are proportional to the image dimension, i.e. use 0.5 for the image center. Original image: ![Leading group, Giro d'Italia 2021](../yolo-v8/assets/bike.jpg) Segment results by prompting with a single point `--point 0.6,0.55`: ![Leading group, Giro d'Italia 2021](./assets/single_pt_prompt.jpg) Segment results by prompting with multiple points `--point 0.6,0.6 --point 0.6,0.55`: ![Leading group, Giro d'Italia 2021](./assets/two_pt_prompt.jpg) ### Command-line flags - `--use-tiny`: use the TinyViT based MobileSAM backbone rather than the default one. - `--point`: specifies the location of the target points. - `--threshold`: sets the threshold value to be part of the mask, a negative value results in a larger mask and can be specified via `--threshold=-1.2`.
candle/candle-examples/examples/segment-anything/README.md/0
{ "file_path": "candle/candle-examples/examples/segment-anything/README.md", "repo_id": "candle", "token_count": 573 }
24
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Error as E; use clap::{Parser, ValueEnum}; use candle::{DType, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::models::{trocr, vit}; use tokenizers::Tokenizer; mod image_processor; #[derive(Clone, Debug, Copy, ValueEnum)] enum Which { #[value(name = "base")] BaseHandwritten, #[value(name = "large")] LargeHandwritten, BasePrinted, LargePrinted, } impl Which { fn repo_and_branch_name(&self) -> (&str, &str) { match self { Self::BaseHandwritten => ("microsoft/trocr-base-handwritten", "refs/pr/3"), Self::LargeHandwritten => ("microsoft/trocr-large-handwritten", "refs/pr/6"), Self::BasePrinted => ("microsoft/trocr-base-printed", "refs/pr/7"), Self::LargePrinted => ("microsoft/trocr-large-printed", "main"), } } } #[derive(Debug, Clone, serde::Deserialize)] struct Config { encoder: vit::Config, decoder: trocr::TrOCRConfig, } #[derive(Parser, Debug)] struct Args { #[arg(long)] model: Option<String>, /// Choose the variant of the model to run. #[arg(long, default_value = "base")] which: Which, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// The image file to be processed. #[arg(long)] image: String, /// Tokenization config. #[arg(long)] tokenizer: Option<String>, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let api = hf_hub::api::sync::Api::new()?; let mut tokenizer_dec = { let tokenizer_file = match args.tokenizer { None => api .model(String::from("ToluClassics/candle-trocr-tokenizer")) .get("tokenizer.json")?, Some(tokenizer) => std::path::PathBuf::from(tokenizer), }; let tokenizer = Tokenizer::from_file(&tokenizer_file).map_err(E::msg)?; TokenOutputStream::new(tokenizer) }; let device = candle_examples::device(args.cpu)?; let vb = { let model = match args.model { Some(model) => std::path::PathBuf::from(model), None => { let (repo, branch) = args.which.repo_and_branch_name(); api.repo(hf_hub::Repo::with_revision( repo.to_string(), hf_hub::RepoType::Model, branch.to_string(), )) .get("model.safetensors")? } }; println!("model: {:?}", model); unsafe { VarBuilder::from_mmaped_safetensors(&[model], DType::F32, &device)? } }; let (encoder_config, decoder_config) = { let (repo, branch) = args.which.repo_and_branch_name(); let config_filename = api .repo(hf_hub::Repo::with_revision( repo.to_string(), hf_hub::RepoType::Model, branch.to_string(), )) .get("config.json")?; let config: Config = serde_json::from_reader(std::fs::File::open(config_filename)?)?; (config.encoder, config.decoder) }; let mut model = trocr::TrOCRModel::new(&encoder_config, &decoder_config, vb)?; let processor_config = image_processor::ProcessorConfig::default(); let processor = image_processor::ViTImageProcessor::new(&processor_config); let image = vec![args.image.as_str()]; let image = processor.preprocess(image)?; let encoder_xs = model.encoder().forward(&image)?; let mut logits_processor = candle_transformers::generation::LogitsProcessor::new(1337, None, None); let mut token_ids: Vec<u32> = vec![decoder_config.decoder_start_token_id]; for index in 0..1000 { let context_size = if index >= 1 { 1 } else { token_ids.len() }; let start_pos = token_ids.len().saturating_sub(context_size); let input_ids = Tensor::new(&token_ids[start_pos..], &device)?.unsqueeze(0)?; let logits = model.decode(&input_ids, &encoder_xs, start_pos)?; let logits = logits.squeeze(0)?; let logits = logits.get(logits.dim(0)? 
- 1)?; let token = logits_processor.sample(&logits)?; token_ids.push(token); if let Some(t) = tokenizer_dec.next_token(token)? { use std::io::Write; print!("{t}"); std::io::stdout().flush()?; } if token == decoder_config.eos_token_id { break; } } if let Some(rest) = tokenizer_dec.decode_rest().map_err(E::msg)? { print!("{rest}"); } println!(); Ok(()) }
candle/candle-examples/examples/trocr/main.rs/0
{ "file_path": "candle/candle-examples/examples/trocr/main.rs", "repo_id": "candle", "token_count": 2160 }
25
pub const NAMES: [&str; 80] = [ "person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", ];
candle/candle-examples/src/coco_classes.rs/0
{ "file_path": "candle/candle-examples/src/coco_classes.rs", "repo_id": "candle", "token_count": 648 }
26
// Pytorch also has an implementation of Philox RNG: https://github.com/pytorch/pytorch/blob/8ca3c881db3e3510fcb7725389f6a0633c9b992c/torch/csrc/jit/tensorexpr/cuda_random.h #pragma once // Philox CUDA. namespace flash { struct ull2 { unsigned long long x; unsigned long long y; }; inline __device__ uint2 mulhilo32(const unsigned int a, const unsigned int b) { uint2 *res; unsigned long long tmp; asm ("mul.wide.u32 %0, %1, %2;\n\t" : "=l"(tmp) : "r"(a), "r"(b)); res = (uint2*)(&tmp); return *res; } inline __device__ uint4 philox_single_round(const uint4 ctr, const uint2 key) { constexpr unsigned long kPhiloxSA = 0xD2511F53; constexpr unsigned long kPhiloxSB = 0xCD9E8D57; uint2 res0 = mulhilo32(kPhiloxSA, ctr.x); uint2 res1 = mulhilo32(kPhiloxSB, ctr.z); uint4 ret = {res1.y ^ ctr.y ^ key.x, res1.x, res0.y ^ ctr.w ^ key.y, res0.x}; return ret; } inline __device__ uint4 philox(unsigned long long seed, unsigned long long subsequence, unsigned long long offset) { constexpr unsigned long kPhilox10A = 0x9E3779B9; constexpr unsigned long kPhilox10B = 0xBB67AE85; uint2 key = reinterpret_cast<uint2&>(seed); uint4 counter; ull2 *tmp = reinterpret_cast<ull2*>(&counter); tmp->x = offset; tmp->y = subsequence; #pragma unroll for (int i = 0; i < 6; i++) { counter = philox_single_round(counter, key); key.x += (kPhilox10A); key.y += (kPhilox10B); } uint4 output = philox_single_round(counter, key); return output; } } // namespace flash namespace { class Philox { public: __device__ inline Philox(unsigned long long seed, unsigned long long subsequence, unsigned long long offset) : STATE(0) , seed_(seed) , offset_(offset) , key(reinterpret_cast<const uint2&>(seed)) { //key.x = (unsigned int)seed; //key.y = (unsigned int)(seed >> 32); //counter = make_uint4(0, 0, 0, 0); //counter.z = (unsigned int)(subsequence); //counter.w = (unsigned int)(subsequence >> 32); //STATE = 0; //incr_n(offset / 4); // key = reinterpret_cast<const uint2&>(seed); ull2 * tmp = reinterpret_cast<ull2*>(&counter); tmp->x = offset / 4; tmp->y = subsequence; // if ((threadIdx.x == 0) && (blockIdx.x == 0) && (blockIdx.y == 0)) { // printf("Philox counter: %d, %d, %d, %d\n", counter.x, counter.y, counter.z, counter.w); // } } __device__ inline uint4 operator()() { // // if (STATE == 0) { // uint4 counter_ = counter; // uint2 key_ = key; // // 7-round philox // #pragma unroll // for (int i = 0; i < 6; i++) { // counter_ = flash::philox_single_round(counter_, key_); // key_.x += (kPhilox10A); // key_.y += (kPhilox10B); // } // // output = philox_single_round(counter_, key_); // uint4 output = flash::philox_single_round(counter_, key_); // // if ((threadIdx.x == 0) && (blockIdx.x == 0) && (blockIdx.y == 0)) { // // printf("Philox counter: %u, %u, %u, %u\n", counter.x, counter.y, counter.z, counter.w); // // printf("Philox output: %u, %u, %u, %u\n", output.x, output.y, output.z, output.w); // // } // incr(); // // } // // return a float4 directly // // unsigned long ret; // // switch(STATE) { // // case 0: ret = output.x; break; // // case 1: ret = output.y; break; // // case 2: ret = output.z; break; // // case 3: ret = output.w; break; // //} // // STATE = (STATE + 1) % 4; // return output; return flash::philox(seed_, offset_, offset_); } private: unsigned long long offset_, seed_; struct ull2 { uint64_t x; uint64_t y; }; uint4 counter; // uint4 output; const uint2 key; unsigned int STATE; __device__ inline void incr_n(unsigned long long n) { unsigned int nlo = (unsigned int)(n); unsigned int nhi = (unsigned int)(n >> 32); counter.x += nlo; if 
(counter.x < nlo) nhi++; counter.y += nhi; if (nhi <= counter.y) return; if (++counter.z) return; ++counter.w; } __device__ uint4 incr128 (uint4 ctr) { uint4 res; asm ("add.cc.u32 %0, %4, %8;\n\t" "addc.cc.u32 %1, %5, %9;\n\t" "addc.cc.u32 %2, %6, %10;\n\t" "addc.u32 %3, %7, %11;\n\t" : "=r"(res.x), "=r"(res.y), "=r"(res.z), "=r"(res.w) : "r"(ctr.x), "r"(ctr.y), "r"(ctr.z), "r"(ctr.w), "n"(1), "n"(0), "n"(0), "n"(0)); return res; } __device__ inline void incr() { // if ((threadIdx.x == 0) && (blockIdx.x == 0) && (blockIdx.y == 0)) { // printf("Counter before: %u, %u, %u, %u\n", counter.x, counter.y, counter.z, counter.w); // } counter = incr128(counter); // if ((threadIdx.x == 0) && (blockIdx.x == 0) && (blockIdx.y == 0)) { // printf("Counter after: %u, %u, %u, %u\n", counter.x, counter.y, counter.z, counter.w); // } } static const unsigned long kPhilox10A = 0x9E3779B9; static const unsigned long kPhilox10B = 0xBB67AE85; // static const unsigned long kPhiloxSA = 0xD2511F53; // static const unsigned long kPhiloxSB = 0xCD9E8D57; }; } // namespace
candle/candle-flash-attn/kernels/philox.cuh/0
{ "file_path": "candle/candle-flash-attn/kernels/philox.cuh", "repo_id": "candle", "token_count": 2511 }
27
#include "compatibility.cuh" #include<stdint.h> #include<cmath> // TODO: This is often used to check that the data is contiguous so that // kernels can be easily mapped. However this only returns true for row // major, if all the inputs are column major, we could apply the fast path // too (but we wouldn't if some of them are row major and some column major). __device__ bool is_contiguous( const size_t num_dims, const size_t *dims, const size_t *strides ) { size_t acc = 1; for (unsigned int d = 0; d < num_dims; d++) { unsigned int dim_idx = num_dims - 1 - d; if (acc != strides[dim_idx]) { return false; } acc *= dims[dim_idx]; } return true; } __device__ unsigned int get_strided_index( unsigned int idx, const size_t num_dims, const size_t *dims, const size_t *strides ) { unsigned int strided_i = 0; for (unsigned int d = 0; d < num_dims; d++) { unsigned int dim_idx = num_dims - 1 - d; strided_i += (idx % dims[dim_idx]) * strides[dim_idx]; idx /= dims[dim_idx]; } return strided_i; } __device__ unsigned int restrided( const unsigned int strided_i, const size_t num_dims, const size_t *dims, const size_t *strides, const size_t *new_strides ) { unsigned int idx = 0; for (int d = 0; d < num_dims; d++) { idx += (strides[d] == 0 ? 0 : (strided_i / strides[d]) % dims[d]) * new_strides[d]; } return idx; } // Sourced from https://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2 // Input must be less than or equal to 2 ^ 16 // used in reductions __device__ __forceinline__ unsigned int next_power_of_two(unsigned int v) { v--; v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v++; return v; } // Efficiently computes the sum of each chunk in "data" of size chunk_len, and // stores the sums in out[i / chunk_len] template<typename T> __device__ void chunk_sum( const size_t chunk_len, const T data, T* out ) { __shared__ T buf[1024]; // assumes that threads where i >= numel have already exited unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; unsigned int block_i = threadIdx.x; // Fall back to atomicAdd if chunk_len is small to reduce overhead if (chunk_len <= 2) { atomicAdd(out + i / chunk_len, data); return; } buf[block_i] = data; unsigned int chunk_i = i % chunk_len; unsigned int chunk_start = max((int)(block_i - chunk_i), 0); unsigned int chunk_end = min((unsigned int)(block_i + chunk_len - chunk_i), blockDim.x); chunk_i = block_i - chunk_start; size_t max_chunk_len = min(chunk_end - chunk_start, blockDim.x); size_t incr = next_power_of_two(max_chunk_len) >> 1; __syncthreads(); // Uses sequential addressing as discussed in // https://developer.download.nvidia.com/assets/cuda/files/reduction.pdf for (; incr > 0; incr >>= 1) { unsigned int block_i_2 = block_i + incr; if (block_i_2 < chunk_end && chunk_i < incr) { // This is sound because __syncthreads and the conditions above // ensure that no data races occur buf[block_i] += buf[block_i_2]; } __syncthreads(); } if (block_i == chunk_start) { atomicAdd(out + i / chunk_len, buf[block_i]); } } __device__ __forceinline__ bool isnang(float a) { return isnan(a); } __device__ __forceinline__ bool isnang(double a) { return isnan(a); } __device__ __forceinline__ float recipg(float a) { return 1.0 / a; } __device__ __forceinline__ double recipg(double a) { return 1.0 / a; } __device__ __forceinline__ float cosg(float a) { return cosf(a); } __device__ __forceinline__ double cosg(double a) { return cos(a); } __device__ __forceinline__ float sing(float a) { return sinf(a); } __device__ __forceinline__ double sing(double a) { return sin(a); } 
__device__ __forceinline__ float sqrtg(float a) { return sqrtf(a); } __device__ __forceinline__ double sqrtg(double a) { return sqrt(a); } __device__ __forceinline__ float powg(float a, float b) { return powf(a, b); } __device__ __forceinline__ double powg(double a, double b) { return pow(a, b); } __device__ __forceinline__ float tanhg(float a) { return tanhf(a); } __device__ __forceinline__ double tanhg(double a) { return tanh(a); } __device__ __forceinline__ float erfg(float a) { return erff(a); } __device__ __forceinline__ double erfg(double a) { return erf(a); } __device__ __forceinline__ float ceilg(float a) { return ceilf(a); } __device__ __forceinline__ double ceilg(double a) { return ceil(a); } __device__ __forceinline__ float floorg(float a) { return floorf(a); } __device__ __forceinline__ double floorg(double a) { return floor(a); } __device__ __forceinline__ float roundg(float a) { return roundf(a); } __device__ __forceinline__ double roundg(double a) { return round(a); } __device__ __forceinline__ float normcdfg(float a) { return normcdff(a); } __device__ __forceinline__ double normcdfg(double a) { return normcdf(a); } __device__ __forceinline__ float maxg(float a, float b) { return fmaxf(a, b); } __device__ __forceinline__ double maxg(double a, double b) { return fmax(a, b); } __device__ __forceinline__ float ming(float a, float b) { return fminf(a, b); } __device__ __forceinline__ double ming(double a, double b) { return fmin(a, b); } __device__ __forceinline__ float logg(float a) { return logf(a); } __device__ __forceinline__ double logg(double a) { return log(a); } __device__ __forceinline__ float expg(float a) { return expf(a); } __device__ __forceinline__ double expg(double a) { return exp(a); } __device__ __forceinline__ float absg(float a) { return fabsf(a); } __device__ __forceinline__ double absg(double a) { return fabs(a); } __device__ __forceinline__ float copysigng(float a, float b) { return copysignf(a, b); } __device__ __forceinline__ double copysigng(double a, double b) { return copysign(a, b); } __device__ __forceinline__ int64_t ming(int64_t a, int64_t b) { return min(a, b); } __device__ __forceinline__ int64_t maxg(int64_t a, int64_t b) { return max(a, b); } __device__ __forceinline__ uint32_t ming(uint32_t a, uint32_t b) { return min(a, b); } __device__ __forceinline__ uint32_t maxg(uint32_t a, uint32_t b) { return max(a, b); } __device__ __forceinline__ uint8_t ming(uint8_t a, uint8_t b) { return min(a, b); } __device__ __forceinline__ uint8_t maxg(uint8_t a, uint8_t b) { return max(a, b); } #if __CUDA_ARCH__ >= 530 __device__ __forceinline__ __half powg(__half a, __half b) { return __float2half(powf(__half2float(a), __half2float(b))); } __device__ __forceinline__ bool isnang(__half a) { return __hisnan(a); } __device__ __forceinline__ __half sqrtg(__half a) { return hsqrt(a); } __device__ __forceinline__ __half cosg(__half a) { return hcos(a); } __device__ __forceinline__ __half sing(__half a) { return hsin(a); } __device__ __forceinline__ __half recipg(__half a) { __half one = 1.0; return one / a; } __device__ __forceinline__ __half maxg(__half a, __half b) { return __hmax_nan(a, b); } __device__ __forceinline__ __half tanhg(__half a) { return __float2half(tanhf(__half2float(a))); } __device__ __forceinline__ __half erfg(__half a) { return __float2half(erff(__half2float(a))); } __device__ __forceinline__ __half ceilg(__half a) { return __float2half(ceilf(__half2float(a))); } __device__ __forceinline__ __half floorg(__half a) { return 
__float2half(floorf(__half2float(a))); } __device__ __forceinline__ __half roundg(__half a) { return __float2half(roundf(__half2float(a))); } __device__ __forceinline__ __half normcdfg(__half a) { return __float2half(normcdff(__half2float(a))); } __device__ __forceinline__ __half ming(__half a, __half b) { return __hmin_nan(a, b); } __device__ __forceinline__ __half logg(__half a) { return hlog(a); } __device__ __forceinline__ __half expg(__half a) { return hexp(a); } __device__ __forceinline__ __half absg(__half a) { return __habs(a); } __device__ __forceinline__ __half copysigng(__half a, __half b) { return __float2half(copysignf(__half2float(a), __half2float(b))); } #endif #if __CUDA_ARCH__ >= 800 __device__ __forceinline__ __nv_bfloat16 powg(__nv_bfloat16 a, __nv_bfloat16 b) { return __float2bfloat16(powf(__bfloat162float(a), __bfloat162float(b))); } __device__ __forceinline__ bool isnang(__nv_bfloat16 a) { return __hisnan(a); } __device__ __forceinline__ __nv_bfloat16 sqrtg(__nv_bfloat16 a) { return hsqrt(a); } __device__ __forceinline__ __nv_bfloat16 cosg(__nv_bfloat16 a) { return hcos(a); } __device__ __forceinline__ __nv_bfloat16 sing(__nv_bfloat16 a) { return hsin(a); } __device__ __forceinline__ __nv_bfloat16 recipg(__nv_bfloat16 a) { __nv_bfloat16 one = 1.0; return one / a; } __device__ __forceinline__ __nv_bfloat16 maxg(__nv_bfloat16 a, __nv_bfloat16 b) { return __hmax_nan(a, b); } __device__ __forceinline__ __nv_bfloat16 tanhg(__nv_bfloat16 a) { return __float2bfloat16(tanhf(__bfloat162float(a))); } __device__ __forceinline__ __nv_bfloat16 erfg(__nv_bfloat16 a) { return __float2bfloat16(erff(__bfloat162float(a))); } __device__ __forceinline__ __nv_bfloat16 ceilg(__nv_bfloat16 a) { return __float2bfloat16(ceilf(__bfloat162float(a))); } __device__ __forceinline__ __nv_bfloat16 floorg(__nv_bfloat16 a) { return __float2bfloat16(floorf(__bfloat162float(a))); } __device__ __forceinline__ __nv_bfloat16 roundg(__nv_bfloat16 a) { return __float2bfloat16(roundf(__bfloat162float(a))); } __device__ __forceinline__ __nv_bfloat16 normcdfg(__nv_bfloat16 a) { return __float2bfloat16(normcdff(__bfloat162float(a))); } __device__ __forceinline__ __nv_bfloat16 ming(__nv_bfloat16 a, __nv_bfloat16 b) { return __hmin_nan(a, b); } __device__ __forceinline__ __nv_bfloat16 logg(__nv_bfloat16 a) { return hlog(a); } __device__ __forceinline__ __nv_bfloat16 expg(__nv_bfloat16 a) { return hexp(a); } __device__ __forceinline__ __nv_bfloat16 absg(__nv_bfloat16 a) { return __habs(a); } __device__ __forceinline__ __nv_bfloat16 copysigng(__nv_bfloat16 a, __nv_bfloat16 b) { return __float2bfloat16(copysignf(__bfloat162float(a), __bfloat162float(b))); } #endif
candle/candle-kernels/src/cuda_utils.cuh/0
{ "file_path": "candle/candle-kernels/src/cuda_utils.cuh", "repo_id": "candle", "token_count": 3936 }
28
//! Batch Normalization. //! //! This layer applies Batch Normalization over a mini-batch of inputs as described in [`Batch //! Normalization`]. The input is expected to have at least three dimensions. //! //! Note that this implementation is for inference only, there is no possibility to track the //! running stats. //! //! [`Batch Normalization`]: https://arxiv.org/abs/1502.03167 use candle::{DType, Result, Tensor, Var}; #[derive(Debug, Clone, Copy, PartialEq)] pub struct BatchNormConfig { pub eps: f64, pub remove_mean: bool, /// The meaning of affine here is different from LayerNorm: when false there is no learnable /// parameter at all, 1 used for gamma and 0 for beta. pub affine: bool, /// Controls exponential moving average of running stats. Defaults to 0.1 /// /// `running_stat * (1.0 - momentum) + stat * momentum`. pub momentum: f64, } impl Default for BatchNormConfig { fn default() -> Self { Self { eps: 1e-5, remove_mean: true, affine: true, momentum: 0.1, } } } impl From<f64> for BatchNormConfig { fn from(eps: f64) -> Self { Self { eps, ..Default::default() } } } #[derive(Clone, Debug)] pub struct BatchNorm { running_mean: Var, running_var: Var, weight_and_bias: Option<(Tensor, Tensor)>, remove_mean: bool, eps: f64, momentum: f64, } impl BatchNorm { fn check_validity(&self, num_features: usize) -> Result<()> { if self.eps < 0. { candle::bail!("batch-norm eps cannot be negative {}", self.eps) } if !(0.0..=1.0).contains(&self.momentum) { candle::bail!( "batch-norm momentum must be between 0 and 1, is {}", self.momentum ) } if self.running_mean.dims() != [num_features] { candle::bail!( "batch-norm running mean has unexpected shape {:?} should have shape [{num_features}]", self.running_mean.shape(), ) } if self.running_var.dims() != [num_features] { candle::bail!( "batch-norm running variance has unexpected shape {:?} should have shape [{num_features}]", self.running_var.shape(), ) } if let Some((ref weight, ref bias)) = self.weight_and_bias.as_ref() { if weight.dims() != [num_features] { candle::bail!( "batch-norm weight has unexpected shape {:?} should have shape [{num_features}]", weight.shape(), ) } if bias.dims() != [num_features] { candle::bail!( "batch-norm weight has unexpected shape {:?} should have shape [{num_features}]", bias.shape(), ) } } Ok(()) } pub fn new( num_features: usize, running_mean: Tensor, running_var: Tensor, weight: Tensor, bias: Tensor, eps: f64, ) -> Result<Self> { let out = Self { running_mean: Var::from_tensor(&running_mean)?, running_var: Var::from_tensor(&running_var)?, weight_and_bias: Some((weight, bias)), remove_mean: true, eps, momentum: 0.1, }; out.check_validity(num_features)?; Ok(out) } pub fn new_no_bias( num_features: usize, running_mean: Tensor, running_var: Tensor, eps: f64, ) -> Result<Self> { let out = Self { running_mean: Var::from_tensor(&running_mean)?, running_var: Var::from_tensor(&running_var)?, weight_and_bias: None, remove_mean: true, eps, momentum: 0.1, }; out.check_validity(num_features)?; Ok(out) } pub fn new_with_momentum( num_features: usize, running_mean: Tensor, running_var: Tensor, weight: Tensor, bias: Tensor, eps: f64, momentum: f64, ) -> Result<Self> { let out = Self { running_mean: Var::from_tensor(&running_mean)?, running_var: Var::from_tensor(&running_var)?, weight_and_bias: Some((weight, bias)), remove_mean: true, eps, momentum, }; out.check_validity(num_features)?; Ok(out) } pub fn new_no_bias_with_momentum( num_features: usize, running_mean: Tensor, running_var: Tensor, eps: f64, momentum: f64, ) -> Result<Self> 
{ let out = Self { running_mean: Var::from_tensor(&running_mean)?, running_var: Var::from_tensor(&running_var)?, weight_and_bias: None, remove_mean: true, eps, momentum, }; out.check_validity(num_features)?; Ok(out) } pub fn running_mean(&self) -> &Tensor { self.running_mean.as_tensor() } pub fn running_var(&self) -> &Tensor { self.running_var.as_tensor() } pub fn eps(&self) -> f64 { self.eps } pub fn weight_and_bias(&self) -> Option<(&Tensor, &Tensor)> { self.weight_and_bias.as_ref().map(|v| (&v.0, &v.1)) } pub fn momentum(&self) -> f64 { self.momentum } pub fn forward_train(&self, x: &Tensor) -> Result<Tensor> { let num_features = self.running_mean.as_tensor().dim(0)?; let x_dtype = x.dtype(); let internal_dtype = match x_dtype { DType::F16 | DType::BF16 => DType::F32, d => d, }; if x.rank() < 2 { candle::bail!( "batch-norm input tensor must have at least two dimensions ({:?})", x.shape() ) } if x.dim(1)? != num_features { candle::bail!( "batch-norm input doesn't have the expected number of features ({:?} <> {})", x.shape(), num_features ) } let x = x.to_dtype(internal_dtype)?; let x = x.transpose(0, 1)?; let x_dims_post_transpose = x.dims(); // Flatten all the dimensions exception the channel one as this performs a Spatial Batch // Normalization. let x = x.flatten_from(1)?.contiguous()?; let x = if self.remove_mean { // The mean is taken over dim 1 as this is the batch dim after the transpose(0, 1) above. let mean_x = x.mean_keepdim(1)?; let updated_running_mean = ((self.running_mean.as_tensor() * (1.0 - self.momentum))? + (mean_x.flatten_all()? * self.momentum)?)?; self.running_mean.set(&updated_running_mean)?; x.broadcast_sub(&mean_x)? } else { x }; // The mean is taken over dim 1 as this is the batch dim after the transpose(0, 1) above. let norm_x = x.sqr()?.mean_keepdim(1)?; let updated_running_var = { let batch_size = x.dim(1)? as f64; let running_var_weight = 1.0 - self.momentum; let norm_x_weight = self.momentum * batch_size / (batch_size - 1.0); ((self.running_var.as_tensor() * running_var_weight)? + (&norm_x.flatten_all()? * norm_x_weight)?)? }; self.running_var.set(&updated_running_var)?; let x = x .broadcast_div(&(norm_x + self.eps)?.sqrt()?)? .to_dtype(x_dtype)?; let x = match &self.weight_and_bias { None => x, Some((weight, bias)) => { let weight = weight.reshape(((), 1))?; let bias = bias.reshape(((), 1))?; x.broadcast_mul(&weight)?.broadcast_add(&bias)? } }; x.reshape(x_dims_post_transpose)?.transpose(0, 1) } fn forward_eval(&self, x: &Tensor) -> Result<Tensor> { let target_shape: Vec<usize> = x .dims() .iter() .enumerate() .map(|(idx, v)| if idx == 1 { *v } else { 1 }) .collect(); let target_shape = target_shape.as_slice(); let x = x .broadcast_sub( &self .running_mean .as_detached_tensor() .reshape(target_shape)?, )? .broadcast_div( &(self .running_var .as_detached_tensor() .reshape(target_shape)? + self.eps)? .sqrt()?, )?; match &self.weight_and_bias { None => Ok(x), Some((weight, bias)) => { let weight = weight.reshape(target_shape)?; let bias = bias.reshape(target_shape)?; x.broadcast_mul(&weight)?.broadcast_add(&bias) } } } } impl crate::ModuleT for BatchNorm { fn forward_t(&self, x: &Tensor, train: bool) -> Result<Tensor> { if train { self.forward_train(x) } else { self.forward_eval(x) } } } pub fn batch_norm<C: Into<BatchNormConfig>>( num_features: usize, config: C, vb: crate::VarBuilder, ) -> Result<BatchNorm> { use crate::Init; let config = config.into(); if config.eps < 0. 
{ candle::bail!("batch-norm eps cannot be negative {}", config.eps) } let running_mean = vb.get_with_hints(num_features, "running_mean", Init::Const(0.))?; let running_var = vb.get_with_hints(num_features, "running_var", Init::Const(1.))?; let weight_and_bias = if config.affine { let weight = vb.get_with_hints(num_features, "weight", Init::Const(1.))?; let bias = vb.get_with_hints(num_features, "bias", Init::Const(0.))?; Some((weight, bias)) } else { None }; Ok(BatchNorm { running_mean: Var::from_tensor(&running_mean)?, running_var: Var::from_tensor(&running_var)?, weight_and_bias, remove_mean: config.remove_mean, eps: config.eps, momentum: config.momentum, }) }
candle/candle-nn/src/batch_norm.rs/0
{ "file_path": "candle/candle-nn/src/batch_norm.rs", "repo_id": "candle", "token_count": 5325 }
29
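The batch_norm.rs entry above defines an inference-oriented `BatchNorm` (plus a `forward_train` path that updates the running statistics) and a `batch_norm` builder. The following is a minimal usage sketch, not code from the repository: it assumes the usual `candle_nn` re-exports (`batch_norm`, `BatchNormConfig`, `VarMap`, `VarBuilder`, `ModuleT`) and purely illustrative shapes.

use candle::{DType, Device, Result, Tensor};
use candle_nn::{batch_norm, BatchNormConfig, ModuleT, VarBuilder, VarMap};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    // Fresh variables: running_mean starts at 0, running_var at 1, weight at 1, bias at 0.
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);

    // 4 channels with the default eps (1e-5), momentum (0.1) and learnable affine parameters.
    let bn = batch_norm(4, BatchNormConfig::default(), vb)?;

    // Layout is (batch, channels, ...): the channel dimension must match num_features.
    let xs = Tensor::randn(0f32, 1f32, (8, 4, 16), &dev)?;

    // train == true normalizes with batch statistics and updates the running stats,
    // train == false uses the stored running mean/variance (the inference-only path).
    let y_train = bn.forward_t(&xs, true)?;
    let y_eval = bn.forward_t(&xs, false)?;
    println!("{:?} {:?}", y_train.shape(), y_eval.shape());
    Ok(())
}

Since the layer tracks state through `Var`s, calling `forward_t(.., true)` mutates the running statistics even though the method takes `&self`.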
use candle::{DType, Device, Result, Shape, Tensor, Var}; use std::collections::HashMap; use std::sync::{Arc, Mutex}; /// A `VarMap` is a store that holds named variables. Variables can be retrieved from the stores /// and new variables can be added by providing some initialization config in case they are /// missing. /// `VarMap` structures can be serialized in the safetensors format. #[derive(Clone)] pub struct VarMap { data: Arc<Mutex<HashMap<String, Var>>>, } impl VarMap { /// Create a new empty `VarMap`. #[allow(clippy::new_without_default)] pub fn new() -> Self { let data = Arc::new(Mutex::new(HashMap::new())); Self { data } } /// Retrieve all the variables currently stored in the map. pub fn all_vars(&self) -> Vec<Var> { let tensor_data = self.data.lock().unwrap(); #[allow(clippy::map_clone)] tensor_data.values().map(|c| c.clone()).collect::<Vec<_>>() } /// Save the map in the safetensors format. pub fn save<P: AsRef<std::path::Path>>(&self, path: P) -> Result<()> { let tensor_data = self.data.lock().unwrap(); let data = tensor_data.iter().map(|(k, v)| (k, v.as_tensor())); safetensors::tensor::serialize_to_file(data, &None, path.as_ref())?; Ok(()) } /// Load some values from a safetensors file and modify the existing variables to have these /// values. /// /// Note that values for variables that are currently not in the map are not kept. pub fn load<P: AsRef<std::path::Path>>(&mut self, path: P) -> Result<()> { let path = path.as_ref(); let data = unsafe { candle::safetensors::MmapedSafetensors::new(path)? }; let mut tensor_data = self.data.lock().unwrap(); for (name, var) in tensor_data.iter_mut() { let data = data.load(name, var.device())?; if let Err(err) = var.set(&data) { candle::bail!("error setting {name} using data from {path:?}: {err}",) } } Ok(()) } /// Set a named variable to some value. pub fn set_one<K: AsRef<str>, V: AsRef<Tensor>>(&mut self, name: K, value: V) -> Result<()> { let tensor_data = self.data.lock().unwrap(); let name = name.as_ref(); match tensor_data.get(name) { None => candle::bail!("cannot find {name} in VarMap"), Some(var) => { if let Err(err) = var.set(value.as_ref()) { candle::bail!("error setting {name}: {err}",) } } } Ok(()) } /// Set some named variables to some values. /// /// If an error is returned, some of the variables might have already been set to their new /// values. pub fn set<I: Iterator<Item = (K, V)>, K: AsRef<str>, V: AsRef<Tensor>>( &mut self, iter: I, ) -> Result<()> { let tensor_data = self.data.lock().unwrap(); for (name, value) in iter { let name = name.as_ref(); match tensor_data.get(name) { None => candle::bail!("cannot find {name} in VarMap"), Some(var) => { if let Err(err) = var.set(value.as_ref()) { candle::bail!("error setting {name}: {err}",) } } } } Ok(()) } /// Retrieve or add a new variable. pub fn get<S: Into<Shape>>( &self, shape: S, path: &str, init: crate::Init, dtype: DType, device: &Device, ) -> Result<Tensor> { let shape = shape.into(); let mut tensor_data = self.data.lock().unwrap(); if let Some(tensor) = tensor_data.get(path) { let tensor_shape = tensor.shape(); if &shape != tensor_shape { candle::bail!("shape mismatch on {path}: {shape:?} <> {tensor_shape:?}") } return Ok(tensor.as_tensor().clone()); } let var = init.var(shape, dtype, device)?; let tensor = var.as_tensor().clone(); tensor_data.insert(path.to_string(), var); Ok(tensor) } pub fn data(&self) -> &Mutex<HashMap<String, Var>> { &self.data } }
candle/candle-nn/src/var_map.rs/0
{ "file_path": "candle/candle-nn/src/var_map.rs", "repo_id": "candle", "token_count": 1973 }
30
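`VarMap` above is the mutable store that backs `VarBuilder::from_varmap`: variables are created on first access, can be handed to an optimizer via `all_vars`, and round-trip through safetensors with `save`/`load`. The sketch below assumes those `candle_nn` re-exports plus `Init` and `Linear`; the file name is an arbitrary placeholder.

use candle::{DType, Device, Result, Tensor};
use candle_nn::{Init, Linear, Module, VarBuilder, VarMap};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let mut varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);

    // First access creates the variable with the given init, later accesses reuse it.
    let w = vb.get_with_hints((2, 4), "linear.weight", Init::Const(0.5))?;
    let model = Linear::new(w, None);

    let xs = Tensor::ones((3, 4), DType::F32, &dev)?;
    println!("{}", model.forward(&xs)?);

    // Everything registered so far, e.g. to pass to an optimizer; then checkpoint and restore.
    println!("{} vars", varmap.all_vars().len());
    varmap.save("vars.safetensors")?;
    varmap.load("vars.safetensors")?;
    Ok(())
}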
from candle import Tensor, QTensor, DType from typing import ( Dict, Tuple, Any, Optional, Union, Iterator, Set, overload, Mapping, TypeVar, List, ) from collections import OrderedDict, namedtuple TensorLike = Union[Tensor, QTensor] T = TypeVar("T", bound="Module") class _IncompatibleKeys(namedtuple("IncompatibleKeys", ["missing_keys", "unexpected_keys"])): def __repr__(self): if not self.missing_keys and not self.unexpected_keys: return "<All keys matched successfully>" return super().__repr__() __str__ = __repr__ # see: https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/module.py class Module: """ Pytorch like Module. Base class for all neural network modules. Your models should also subclass this class. """ _modules: Dict[str, Optional["Module"]] _buffers: Dict[str, Optional[TensorLike]] _non_persistent_buffers_set: Set[str] _quantizable_buffers: Set[str] _version: int = 1 def __init__(self, *args, **kwargs) -> None: """ Initializes internal Module state """ super().__setattr__("_modules", OrderedDict()) super().__setattr__("_buffers", OrderedDict()) super().__setattr__("_non_persistent_buffers_set", set()) super().__setattr__("_quantizable_buffers", set()) def __call__(self, *input): """ Call self as a function. """ return self.forward(*input) def forward(self, *input): """ Defines the computation performed at every call. Should be overridden by all subclasses. """ pass def children(self) -> Iterator["Module"]: r"""Returns an iterator over immediate children modules. Yields: Module: a child module """ for name, module in self.named_children(): yield module def named_children(self) -> Iterator[Tuple[str, "Module"]]: r"""Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields: (str, Module): Tuple containing a name and child module Example:: >>> for name, module in model.named_children(): >>> if name in ['conv4', 'conv5']: >>> print(module) """ memo = set() for name, module in self._modules.items(): if module is not None and module not in memo: memo.add(module) yield name, module def add_module(self, name: str, module: Optional["Module"]) -> None: r"""Adds a child module to the current module. The module can be accessed as an attribute using the given name. Args: name (str): name of the child module. The child module can be accessed from this module using the given name module (Module): child module to be added to the module. """ if not isinstance(module, Module) and module is not None: raise TypeError(f"{str(module)} is not a Module subclass") elif not isinstance(name, str): raise TypeError(f"module name should be a string. Got {name}") elif hasattr(self, name) and name not in self._modules: raise KeyError(f"attribute '{name}' already exists") elif "." in name: raise KeyError(f'module name can\'t contain ".", got: {name}') elif name == "": raise KeyError('module name can\'t be empty string ""') self._modules[name] = module def register_module(self, name: str, module: Optional["Module"]) -> None: r"""Alias for :func:`add_module`.""" self.add_module(name, module) def modules(self) -> Iterator["Module"]: r"""Returns an iterator over all modules in the network.""" for _, module in self.named_modules(): yield module def named_modules( self, memo: Optional[Set["Module"]] = None, prefix: str = "", remove_duplicate: bool = True, ): r"""Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. 
Args: memo: a memo to store the set of modules already added to the result prefix: a prefix that will be added to the name of the module remove_duplicate: whether to remove the duplicated module instances in the result or not Yields: (str, Module): Tuple of name and module Note: Duplicate modules are returned only once. In the following example, ``l`` will be returned only once. """ if memo is None: memo = set() if self not in memo: if remove_duplicate: memo.add(self) yield prefix, self for name, module in self._modules.items(): if module is None: continue submodule_prefix = prefix + ("." if prefix else "") + name for m in module.named_modules(memo, submodule_prefix, remove_duplicate): yield m def buffers(self, recurse: bool = True) -> Iterator[TensorLike]: """ Returns an iterator over module buffers. """ for name, buf in self.named_buffers(recurse=recurse): yield buf def named_buffers( self, prefix: str = "", recurse: bool = True, remove_duplicate: bool = True ) -> Iterator[Tuple[str, TensorLike]]: r"""Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Args: prefix (str): prefix to prepend to all buffer names. recurse (bool, optional): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True. remove_duplicate (bool, optional): whether to remove the duplicated buffers in the result. Defaults to True. Yields: (str, Tensor): Tuple containing the name and buffer Example:: >>> for name, buf in self.named_buffers(): >>> if name in ['running_var']: >>> print(buf.size()) """ gen = self._named_members( lambda module: module._buffers.items(), prefix=prefix, recurse=recurse, remove_duplicate=remove_duplicate, ) yield from gen # The user can pass an optional arbitrary mappable object to `state_dict`, in which case `state_dict` returns # back that same object. But if they pass nothing, an `OrderedDict` is created and returned. T_destination = TypeVar("T_destination", bound=Dict[str, Any]) @overload def state_dict(self, *, destination: T_destination, prefix: str = ..., keep_vars: bool = ...) -> T_destination: ... @overload def state_dict(self, *, prefix: str = ..., keep_vars: bool = ...) -> Dict[str, Any]: ... def state_dict(self, *args, destination=None, prefix="", keep_vars=False): r"""Returns a dictionary containing references to the whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to ``None`` are not included. .. note:: The returned object is a shallow copy. It contains references to the module's parameters and buffers. .. warning:: Currently ``state_dict()`` also accepts positional arguments for ``destination``, ``prefix`` and ``keep_vars`` in order. However, this is being deprecated and keyword arguments will be enforced in future releases. .. warning:: Please avoid the use of argument ``destination`` as it is not designed for end-users. Args: destination (dict, optional): If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an ``OrderedDict`` will be created and returned. Default: ``None``. prefix (str, optional): a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ``''``. keep_vars (bool, optional): by default the :class:`~candle.Tensor` s returned in the state dict are detached from autograd. 
If it's set to ``True``, detaching will not be performed. Default: ``False``. Returns: dict: a dictionary containing a whole state of the module Example:: >>> # xdoctest: +SKIP("undefined vars") >>> module.state_dict().keys() ['bias', 'weight'] """ # TODO: Remove `args` and the parsing logic when BC allows. if len(args) > 0: if destination is None: destination = args[0] if len(args) > 1 and prefix == "": prefix = args[1] if len(args) > 2 and keep_vars is False: keep_vars = args[2] if destination is None: destination = OrderedDict() destination._metadata = OrderedDict() local_metadata = dict(version=self._version) if hasattr(destination, "_metadata"): destination._metadata[prefix[:-1]] = local_metadata self._save_to_state_dict(destination, prefix, keep_vars) for name, module in self._modules.items(): if module is not None: module.state_dict( destination=destination, prefix=prefix + name + ".", keep_vars=keep_vars, ) return destination def _save_to_state_dict(self, destination, prefix, keep_vars): r"""Saves module state to `destination` dictionary, containing a state of the module, but not its descendants. This is called on every submodule in :meth:`~candle.nn.Module.state_dict`. In rare cases, subclasses can achieve class-specific behavior by overriding this method with custom logic. Args: destination (dict): a dict where state will be stored prefix (str): the prefix for parameters and buffers used in this module """ for name, buf in self._buffers.items(): if buf is not None and name not in self._non_persistent_buffers_set: if isinstance(buf, Tensor): destination[prefix + name] = buf if keep_vars else buf.detach() else: destination[prefix + name] = buf def load_state_dict(self, state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False): r"""Copies parameters and buffers from :attr:`state_dict` into this module and its descendants. If :attr:`strict` is ``True``, then the keys of :attr:`state_dict` must exactly match the keys returned by this module's :meth:`~candle.nn.Module.state_dict` function. .. warning:: If :attr:`assign` is ``True`` the optimizer must be created after the call to :attr:`load_state_dict`. Args: state_dict (dict): a dict containing parameters and persistent buffers. strict (bool, optional): whether to strictly enforce that the keys in :attr:`state_dict` match the keys returned by this module's :meth:`~candle.nn.Module.state_dict` function. Default: ``True`` assign (bool, optional): whether to assign items in the state dictionary to their corresponding keys in the module instead of copying them inplace into the module's current parameters and buffers. When ``False``, the properties of the tensors in the current module are preserved while when ``True``, the properties of the Tensors in the state dict are preserved. Default: ``False`` Returns: ``NamedTuple`` with ``missing_keys`` and ``unexpected_keys`` fields: * **missing_keys** is a list of str containing the missing keys * **unexpected_keys** is a list of str containing the unexpected keys Note: If a parameter or buffer is registered as ``None`` and its corresponding key exists in :attr:`state_dict`, :meth:`load_state_dict` will raise a ``RuntimeError``. 
""" if not isinstance(state_dict, Mapping): raise TypeError(f"Expected state_dict to be dict-like, got {type(state_dict)}.") missing_keys: List[str] = [] unexpected_keys: List[str] = [] error_msgs: List[str] = [] # copy state_dict so _load_from_state_dict can modify it metadata = getattr(state_dict, "_metadata", None) state_dict = OrderedDict(state_dict) if metadata is not None: # mypy isn't aware that "_metadata" exists in state_dict state_dict._metadata = metadata # type: ignore[attr-defined] def load(module, local_state_dict, prefix=""): local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {}) if assign: local_metadata["assign_to_params_buffers"] = assign module._load_from_state_dict( local_state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs, ) for name, child in module._modules.items(): if child is not None: child_prefix = prefix + name + "." child_state_dict = {k: v for k, v in local_state_dict.items() if k.startswith(child_prefix)} load(child, child_state_dict, child_prefix) load(self, state_dict) del load if strict: if len(unexpected_keys) > 0: error_msgs.insert( 0, "Unexpected key(s) in state_dict: {}. ".format(", ".join(f'"{k}"' for k in unexpected_keys)), ) if len(missing_keys) > 0: error_msgs.insert( 0, "Missing key(s) in state_dict: {}. ".format(", ".join(f'"{k}"' for k in missing_keys)), ) if len(error_msgs) > 0: raise RuntimeError( "Error(s) in loading state_dict for {}:\n\t{}".format(self.__class__.__name__, "\n\t".join(error_msgs)) ) return _IncompatibleKeys(missing_keys, unexpected_keys) def _load_from_state_dict( self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs, ): r"""Copies parameters and buffers from :attr:`state_dict` into only this module, but not its descendants. This is called on every submodule in :meth:`~candle.nn.Module.load_state_dict`. Metadata saved for this module in input :attr:`state_dict` is provided as :attr:`local_metadata`. For state dicts without metadata, :attr:`local_metadata` is empty. Subclasses can achieve class-specific backward compatible loading using the version number at `local_metadata.get("version", None)`. Additionally, :attr:`local_metadata` can also contain the key `assign_to_params_buffers` that indicates whether keys should be assigned their corresponding tensor in the state_dict. .. note:: :attr:`state_dict` is not the same object as the input :attr:`state_dict` to :meth:`~candle.nn.Module.load_state_dict`. So it can be modified. Args: state_dict (dict): a dict containing parameters and persistent buffers. prefix (str): the prefix for parameters and buffers used in this module local_metadata (dict): a dict containing the metadata for this module. 
See strict (bool): whether to strictly enforce that the keys in :attr:`state_dict` with :attr:`prefix` match the names of parameters and buffers in this module missing_keys (list of str): if ``strict=True``, add missing keys to this list unexpected_keys (list of str): if ``strict=True``, add unexpected keys to this list error_msgs (list of str): error messages should be added to this list, and will be reported together in :meth:`~candle.nn.Module.load_state_dict` """ persistent_buffers = {k: v for k, v in self._buffers.items() if k not in self._non_persistent_buffers_set} local_name_params = persistent_buffers.items() local_state = {k: v for k, v in local_name_params if v is not None} for name, param in local_state.items(): key = prefix + name if key in state_dict: input_param = state_dict[key] if not isinstance(input_param, (Tensor, QTensor)): error_msgs.append( f'While copying the parameter named "{key}", ' "expected Tensor-like object from checkpoint but " f"received {type(input_param)}" ) continue if input_param.shape != param.shape: # local shape should match the one in checkpoint error_msgs.append( "size mismatch for {}: copying a param with shape {} from checkpoint, " "the shape in current model is {}.".format(key, input_param.shape, param.shape) ) continue try: # Shape checks are already done above -> Just assign tensor setattr(self, name, input_param) except Exception as ex: error_msgs.append( f'While copying the parameter named "{key}", ' f"whose dimensions in the model are {param.shape} and " f"whose dimensions in the checkpoint are {input_param.shape}, " f"an exception occurred : {ex.args}." ) elif strict: missing_keys.append(key) if strict: for key in state_dict.keys(): if key.startswith(prefix): input_name = key[len(prefix) :] input_name = input_name.split(".", 1)[0] # get the name of param/buffer/child if input_name not in self._modules and input_name not in local_state: unexpected_keys.append(key) def _named_members(self, get_members_fn, prefix="", recurse=True, remove_duplicate: bool = True): r"""Helper method for yielding various names + members of modules.""" memo = set() modules = self.named_modules(prefix=prefix, remove_duplicate=remove_duplicate) if recurse else [(prefix, self)] for module_prefix, module in modules: members = get_members_fn(module) for k, v in members: if v is None or v in memo: continue if remove_duplicate: memo.add(v) name = module_prefix + ("." if module_prefix else "") + k yield name, v def _get_name(self): return self.__class__.__name__ def _apply(self, fn): for module in self.children(): module._apply(fn) for key, buf in self._buffers.items(): if buf is not None: self._buffers[key] = fn(buf) return self def __move_tensor_to_device(self, tensor: TensorLike, device: str): if isinstance(tensor, Tensor): return tensor.to_device(device) else: raise NotImplementedError("Cannot offload QTensor to cuda, yet!") def device(self) -> str: """ Gets the device of the module, by inspecting its tensors. """ tensor = next(self.buffers()) if isinstance(tensor, Tensor): return tensor.device else: # QTensors can only be on the CPU return "cpu" def cuda(self: T) -> T: r"""Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized. .. note:: This method modifies the module in-place. 
Returns: Module: self """ def to_cuda(t: TensorLike): return self.__move_tensor_to_device(t, "cuda") return self._apply(to_cuda) def cpu(self: T) -> T: r"""Moves all model parameters and buffers to the CPU. .. note:: This method modifies the module in-place. Returns: Module: self """ def to_cpu(t: TensorLike): return self.__move_tensor_to_device(t, "cpu") return self._apply(to_cpu) def __cast_tensor(self, tensor: TensorLike, dtype: Union[DType, str]): if isinstance(tensor, Tensor): return tensor.to_dtype(dtype) else: raise TypeError("candle.Module.to only accepts Tensor dtypes, but got desired dtype={}".format(dtype)) def type(self: T, dst_type: Union[DType, str]) -> T: r"""Casts all parameters and buffers to :attr:`dst_type`. .. note:: This method modifies the module in-place. Args: dst_type (type or string): the desired type Returns: Module: self """ def cast(t: TensorLike): return self.__cast_tensor(t, dst_type) return self._apply(cast) @overload def to( self: T, device: str = ..., dtype: Optional[Union[DType, str]] = ..., ) -> T: ... @overload def to(self: T, dtype: Union[DType, str]) -> T: ... def to(self, *args, **kwargs): r"""Moves and/or casts the parameters and buffers. This can be called as .. function:: to(device=None, dtype=None) :noindex: .. function:: to(dtype) :noindex: See below for examples. .. note:: This method modifies the module in-place. Args: device (:class:`candle.device`): the desired device of the parameters and buffers in this module dtype (:class:`candle.dtype`): the desired floating point dtype of the parameters and buffers in this module Returns: Module: self """ device = None dtype = None if args: for arg in args: # Assuming arg can be a string representing a device or a dtype if isinstance(arg, str): lower_arg = str(arg).lower() if lower_arg.startswith("cuda") or lower_arg == "cpu": device = lower_arg else: dtype = arg elif isinstance(arg, DType): dtype = str(arg) else: raise TypeError("Module.to() received an invalid combination of arguments. 
Got: {}".format(args)) if kwargs: device = kwargs.get("device", device) dtype = str(kwargs.get("dtype", dtype)) if device: device = device.lower() if dtype: dtype = dtype.lower() if dtype not in ["f32", "f16", "f64"]: raise TypeError( "candle.Module.to only accepts floating point" "dtypes, but got desired dtype={}".format(dtype) ) def convert(t): if dtype: t = self.__cast_tensor(t, dtype) if device: t = self.__move_tensor_to_device(t, device) return t return self._apply(convert) def __setattr__(self, __name: str, __value: Any) -> None: if isinstance(__value, Module): self._modules[__name] = __value elif isinstance(__value, QTensor): if __name in self._quantizable_buffers: type = __value.ggml_dtype.lower() if type in ["f32", "f16"]: # It is faster to just dequantize the tensor here and use the normal tensor operations dequant = __value.dequantize() if type == "f16": dequant = dequant.to_dtype("f16") self._buffers[__name] = dequant else: self._buffers[__name] = __value else: # We expect a normal tensor here => dequantize it self._buffers[__name] = __value.dequantize() elif isinstance(__value, Tensor): self._buffers[__name] = __value else: super().__setattr__(__name, __value) def __getattr__(self, __name: str) -> Any: if "_modules" in self.__dict__: modules = self.__dict__["_modules"] if __name in modules: return modules[__name] if "_buffers" in self.__dict__: tensors = self.__dict__["_buffers"] if __name in tensors: return tensors[__name] return super().__getattribute__(__name) def __delattr__(self, name): if name in self._buffers: del self._buffers[name] elif name in self._modules: del self._modules[name] else: super().__delattr__(name)
candle/candle-pyo3/py_src/candle/nn/module.py/0
{ "file_path": "candle/candle-pyo3/py_src/candle/nn/module.py", "repo_id": "candle", "token_count": 12028 }
31
import candle print(f"mkl: {candle.utils.has_mkl()}") print(f"accelerate: {candle.utils.has_accelerate()}") print(f"num-threads: {candle.utils.get_num_threads()}") print(f"cuda: {candle.utils.cuda_is_available()}") t = candle.Tensor(42.0) print(t) print(t.shape, t.rank, t.device) print(t + t) t = candle.Tensor([3.0, 1, 4, 1, 5, 9, 2, 6]) print(t) print(t + t) t = t.reshape([2, 4]) print(t.matmul(t.t())) print(t.to_dtype(candle.u8)) print(t.to_dtype("u8")) t = candle.randn((5, 3)) print(t) print(t.dtype) t = candle.randn((16, 256)) quant_t = t.quantize("q6k") dequant_t = quant_t.dequantize() diff2 = (t - dequant_t).sqr() print(diff2.mean_all())
candle/candle-pyo3/test.py/0
{ "file_path": "candle/candle-pyo3/test.py", "repo_id": "candle", "token_count": 340 }
32
use super::with_tracing::{linear, Embedding, Linear}; use candle::{Result, Tensor}; use candle_nn::{layer_norm, LayerNorm, VarBuilder}; #[derive(Debug, Clone)] pub struct Config { pub vocab_size: usize, pub decoder_vocab_size: Option<usize>, pub max_position_embeddings: usize, pub encoder_layers: usize, pub encoder_ffn_dim: usize, pub encoder_attention_heads: usize, pub decoder_layers: usize, pub decoder_ffn_dim: usize, pub decoder_attention_heads: usize, pub use_cache: bool, pub is_encoder_decoder: bool, pub activation_function: candle_nn::Activation, pub d_model: usize, pub decoder_start_token_id: u32, pub scale_embedding: bool, pub pad_token_id: u32, pub eos_token_id: u32, pub forced_eos_token_id: u32, pub share_encoder_decoder_embeddings: bool, } impl Config { // https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-fr-en/blob/main/config.json pub fn opus_mt_tc_big_fr_en() -> Self { Self { activation_function: candle_nn::Activation::Relu, d_model: 1024, decoder_attention_heads: 16, decoder_ffn_dim: 4096, decoder_layers: 6, decoder_start_token_id: 53016, decoder_vocab_size: Some(53017), encoder_attention_heads: 16, encoder_ffn_dim: 4096, encoder_layers: 6, eos_token_id: 43311, forced_eos_token_id: 43311, is_encoder_decoder: true, max_position_embeddings: 1024, pad_token_id: 53016, scale_embedding: true, share_encoder_decoder_embeddings: true, use_cache: true, vocab_size: 53017, } } // https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json pub fn opus_mt_fr_en() -> Self { Self { activation_function: candle_nn::Activation::Swish, d_model: 512, decoder_attention_heads: 8, decoder_ffn_dim: 2048, decoder_layers: 6, decoder_start_token_id: 59513, decoder_vocab_size: Some(59514), encoder_attention_heads: 8, encoder_ffn_dim: 2048, encoder_layers: 6, eos_token_id: 0, forced_eos_token_id: 0, is_encoder_decoder: true, max_position_embeddings: 512, pad_token_id: 59513, scale_embedding: true, share_encoder_decoder_embeddings: true, use_cache: true, vocab_size: 59514, } } } #[derive(Debug, Clone)] struct SinusoidalPositionalEmbedding { emb: Embedding, } impl SinusoidalPositionalEmbedding { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dev = vb.device(); let dtype = vb.dtype(); let num_positions = cfg.max_position_embeddings; let dim = cfg.d_model; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / 10000f32.powf(i as f32 / dim as f32)) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?.to_dtype(dtype)?; let t = Tensor::arange(0u32, num_positions as u32, dev)? .to_dtype(dtype)? .reshape((num_positions, 1))?; let freqs = t.matmul(&inv_freq)?; let sin = freqs.sin()?; let cos = freqs.cos()?; let weights = Tensor::cat(&[&sin, &cos], 1)?.contiguous()?; let emb = Embedding::from_weights(weights)?; Ok(Self { emb }) } fn forward(&self, input_ids: &Tensor, past_kv_len: usize) -> Result<Tensor> { let seq_len = input_ids.dim(1)?; Tensor::arange( past_kv_len as u32, (past_kv_len + seq_len) as u32, input_ids.device(), )? 
.apply(&self.emb) } } #[derive(Debug, Clone)] struct Attention { q_proj: Linear, k_proj: Linear, v_proj: Linear, out_proj: Linear, scaling: f64, num_heads: usize, head_dim: usize, kv_cache: Option<(Tensor, Tensor)>, is_decoder: bool, } impl Attention { fn new(cfg: &Config, is_decoder: bool, vb: VarBuilder) -> Result<Self> { let num_heads = if is_decoder { cfg.decoder_attention_heads } else { cfg.encoder_attention_heads }; let embed_dim = cfg.d_model; let head_dim = embed_dim / num_heads; let scaling = (head_dim as f64).powf(-0.5); let q_proj = linear(embed_dim, embed_dim, vb.pp("q_proj"))?; let k_proj = linear(embed_dim, embed_dim, vb.pp("k_proj"))?; let v_proj = linear(embed_dim, embed_dim, vb.pp("v_proj"))?; let out_proj = linear(embed_dim, embed_dim, vb.pp("out_proj"))?; Ok(Self { q_proj, k_proj, v_proj, out_proj, scaling, num_heads, head_dim, kv_cache: None, is_decoder, }) } fn _shape(&self, tensor: &Tensor, bsz: usize) -> Result<Tensor> { tensor .reshape((bsz, (), self.num_heads, self.head_dim))? .transpose(1, 2)? .contiguous() } fn forward( &mut self, xs: &Tensor, kv_states: Option<&Tensor>, attn_mask: Option<&Tensor>, ) -> Result<Tensor> { let (b_sz, tgt_len, _) = xs.dims3()?; let query_states = (xs.apply(&self.q_proj)? * self.scaling)?; let (key_states, value_states) = match kv_states { None => { let key_states = self._shape(&xs.apply(&self.k_proj)?, b_sz)?; let value_states = self._shape(&xs.apply(&self.v_proj)?, b_sz)?; if self.is_decoder { let kv_states = match &self.kv_cache { None => (key_states, value_states), Some((p_key_states, p_value_states)) => { let key_states = Tensor::cat(&[p_key_states, &key_states], 2)?; let value_states = Tensor::cat(&[p_value_states, &value_states], 2)?; (key_states, value_states) } }; self.kv_cache = Some(kv_states.clone()); kv_states } else { (key_states, value_states) } } Some(kv_states) => { let key_states = self._shape(&kv_states.apply(&self.k_proj)?, b_sz)?; let value_states = self._shape(&kv_states.apply(&self.v_proj)?, b_sz)?; (key_states, value_states) } }; let proj_shape = (b_sz * self.num_heads, (), self.head_dim); let query_states = self._shape(&query_states, b_sz)?.reshape(proj_shape)?; let key_states = key_states.reshape(proj_shape)?; let value_states = value_states.reshape(proj_shape)?; let attn_weights = query_states.matmul(&key_states.transpose(1, 2)?)?; let attn_weights = match attn_mask { None => attn_weights, Some(attn_mask) => attn_weights.broadcast_add(attn_mask)?, }; let attn_probs = candle_nn::ops::softmax_last_dim(&attn_weights)?; let attn_output = attn_probs.matmul(&value_states)?; attn_output .reshape((b_sz, self.num_heads, tgt_len, self.head_dim))? .transpose(1, 2)? .reshape((b_sz, tgt_len, self.head_dim * self.num_heads))? 
.apply(&self.out_proj) } fn reset_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct EncoderLayer { self_attn: Attention, self_attn_layer_norm: LayerNorm, activation_fn: candle_nn::Activation, fc1: Linear, fc2: Linear, final_layer_norm: LayerNorm, } impl EncoderLayer { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let self_attn = Attention::new(cfg, true, vb.pp("self_attn"))?; let self_attn_layer_norm = layer_norm(cfg.d_model, 1e-5, vb.pp("self_attn_layer_norm"))?; let fc1 = linear(cfg.d_model, cfg.encoder_ffn_dim, vb.pp("fc1"))?; let fc2 = linear(cfg.encoder_ffn_dim, cfg.d_model, vb.pp("fc2"))?; let final_layer_norm = layer_norm(cfg.d_model, 1e-5, vb.pp("final_layer_norm"))?; Ok(Self { self_attn, self_attn_layer_norm, activation_fn: cfg.activation_function, fc1, fc2, final_layer_norm, }) } fn forward(&mut self, xs: &Tensor) -> Result<Tensor> { let residual = xs; let xs = (self.self_attn.forward(xs, None, None)? + residual)? .apply(&self.self_attn_layer_norm)?; let residual = &xs; let xs = xs .apply(&self.fc1)? .apply(&self.activation_fn)? .apply(&self.fc2)?; (xs + residual)?.apply(&self.final_layer_norm) } fn reset_kv_cache(&mut self) { self.self_attn.reset_kv_cache() } } #[derive(Debug, Clone)] struct DecoderLayer { self_attn: Attention, self_attn_layer_norm: LayerNorm, activation_fn: candle_nn::Activation, encoder_attn: Attention, encoder_attn_layer_norm: LayerNorm, fc1: Linear, fc2: Linear, final_layer_norm: LayerNorm, } impl DecoderLayer { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let self_attn = Attention::new(cfg, true, vb.pp("self_attn"))?; let self_attn_layer_norm = layer_norm(cfg.d_model, 1e-5, vb.pp("self_attn_layer_norm"))?; let encoder_attn = Attention::new(cfg, true, vb.pp("encoder_attn"))?; let encoder_attn_layer_norm = layer_norm(cfg.d_model, 1e-5, vb.pp("encoder_attn_layer_norm"))?; let fc1 = linear(cfg.d_model, cfg.decoder_ffn_dim, vb.pp("fc1"))?; let fc2 = linear(cfg.decoder_ffn_dim, cfg.d_model, vb.pp("fc2"))?; let final_layer_norm = layer_norm(cfg.d_model, 1e-5, vb.pp("final_layer_norm"))?; Ok(Self { self_attn, self_attn_layer_norm, activation_fn: cfg.activation_function, encoder_attn, encoder_attn_layer_norm, fc1, fc2, final_layer_norm, }) } fn forward( &mut self, xs: &Tensor, encoder_xs: Option<&Tensor>, attn_mask: &Tensor, ) -> Result<Tensor> { let residual = xs; let xs = (self.self_attn.forward(xs, None, Some(attn_mask))? + residual)? .apply(&self.self_attn_layer_norm)?; let xs = match encoder_xs { None => xs, Some(encoder_xs) => { let residual = &xs; let xs = self.encoder_attn.forward(&xs, Some(encoder_xs), None)?; (residual + xs)?.apply(&self.encoder_attn_layer_norm)? } }; let residual = &xs; let xs = xs .apply(&self.fc1)? .apply(&self.activation_fn)? 
.apply(&self.fc2)?; let xs = (xs + residual)?.apply(&self.final_layer_norm)?; Ok(xs) } fn reset_kv_cache(&mut self) { self.self_attn.reset_kv_cache(); self.encoder_attn.reset_kv_cache() } } #[derive(Debug, Clone)] pub struct Encoder { embed_tokens: Embedding, embed_positions: SinusoidalPositionalEmbedding, layers: Vec<EncoderLayer>, embed_scale: Option<f64>, } impl Encoder { fn new(cfg: &Config, embed_tokens: &Embedding, vb: VarBuilder) -> Result<Self> { let embed_positions = SinusoidalPositionalEmbedding::new(cfg, vb.pp("embed_positions"))?; let mut layers = Vec::with_capacity(cfg.encoder_layers); let vb_l = vb.pp("layers"); for idx in 0..cfg.encoder_layers { let layer = EncoderLayer::new(cfg, vb_l.pp(idx))?; layers.push(layer) } let embed_scale = if cfg.scale_embedding { Some((cfg.d_model as f64).sqrt()) } else { None }; Ok(Self { embed_tokens: embed_tokens.clone(), embed_positions, layers, embed_scale, }) } pub fn forward(&mut self, xs: &Tensor, past_kv_len: usize) -> Result<Tensor> { let xs = xs.apply(&self.embed_tokens)?; let xs = match self.embed_scale { None => xs, Some(scale) => (xs * scale)?, }; let embed_pos = self .embed_positions .forward(&xs, past_kv_len)? .unsqueeze(0)?; let mut xs = xs.broadcast_add(&embed_pos)?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs)? } Ok(xs) } pub fn reset_kv_cache(&mut self) { for layer in self.layers.iter_mut() { layer.reset_kv_cache() } } } #[derive(Debug, Clone)] pub struct Decoder { embed_tokens: Embedding, embed_positions: SinusoidalPositionalEmbedding, layers: Vec<DecoderLayer>, embed_scale: Option<f64>, } impl Decoder { fn new(cfg: &Config, embed_tokens: &Embedding, vb: VarBuilder) -> Result<Self> { let embed_positions = SinusoidalPositionalEmbedding::new(cfg, vb.pp("embed_positions"))?; let mut layers = Vec::with_capacity(cfg.decoder_layers); let vb_l = vb.pp("layers"); for idx in 0..cfg.decoder_layers { let layer = DecoderLayer::new(cfg, vb_l.pp(idx))?; layers.push(layer) } let embed_scale = if cfg.scale_embedding { Some((cfg.d_model as f64).sqrt()) } else { None }; Ok(Self { embed_tokens: embed_tokens.clone(), embed_positions, layers, embed_scale, }) } pub fn forward( &mut self, xs: &Tensor, encoder_xs: Option<&Tensor>, past_kv_len: usize, attn_mask: &Tensor, ) -> Result<Tensor> { let xs = xs.apply(&self.embed_tokens)?; let xs = match self.embed_scale { None => xs, Some(scale) => (xs * scale)?, }; let embed_pos = self .embed_positions .forward(&xs, past_kv_len)? 
.unsqueeze(0)?; let mut xs = xs.broadcast_add(&embed_pos)?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, encoder_xs, attn_mask)?; } Ok(xs) } pub fn reset_kv_cache(&mut self) { for layer in self.layers.iter_mut() { layer.reset_kv_cache() } } } #[derive(Debug, Clone)] struct Model { shared: Embedding, encoder: Encoder, decoder: Decoder, } impl Model { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let shared = Embedding::new(cfg.vocab_size, cfg.d_model, vb.pp("shared"))?; let encoder = Encoder::new(cfg, &shared, vb.pp("encoder"))?; let decoder = Decoder::new(cfg, &shared, vb.pp("decoder"))?; Ok(Self { shared, encoder, decoder, }) } fn reset_kv_cache(&mut self) { self.encoder.reset_kv_cache(); self.decoder.reset_kv_cache(); } } #[derive(Debug, Clone)] pub struct MTModel { model: Model, lm_head: Linear, final_logits_bias: Tensor, } impl MTModel { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let target_vocab_size = cfg.decoder_vocab_size.unwrap_or(cfg.vocab_size); let final_logits_bias = vb.get((1, target_vocab_size), "final_logits_bias")?; let model = Model::new(cfg, vb.pp("model"))?; let lm_head = Linear::from_weights(model.shared.embeddings().clone(), None); Ok(Self { model, lm_head, final_logits_bias, }) } pub fn encoder(&mut self) -> &mut Encoder { &mut self.model.encoder } pub fn decoder(&mut self) -> &mut Decoder { &mut self.model.decoder } pub fn decode( &mut self, xs: &Tensor, encoder_xs: &Tensor, past_kv_len: usize, ) -> Result<Tensor> { let seq_len = xs.dim(1)?; let mask: Vec<_> = (0..seq_len) .flat_map(|i| (0..seq_len).map(move |j| if j > i { f32::NEG_INFINITY } else { 0f32 })) .collect(); let mask = Tensor::from_vec(mask, (seq_len, seq_len), xs.device())?; self.model .decoder .forward(xs, Some(encoder_xs), past_kv_len, &mask)? .apply(&self.lm_head)? .broadcast_add(&self.final_logits_bias) } pub fn reset_kv_cache(&mut self) { self.model.reset_kv_cache(); } }
candle/candle-transformers/src/models/marian.rs/0
{ "file_path": "candle/candle-transformers/src/models/marian.rs", "repo_id": "candle", "token_count": 8917 }
33
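The Marian file above exposes the encoder, the decoder and a `decode` helper that applies the tied LM head and the final logits bias. Below is a rough greedy-translation sketch, assuming a safetensors checkpoint path, the `opus_mt_fr_en` config, and placeholder token ids; a real run would produce the source ids with the matching Marian tokenizer.

use candle::{DType, Device, IndexOp, Result, Tensor};
use candle_nn::VarBuilder;
use candle_transformers::models::marian::{Config, MTModel};

fn run(weights: &str) -> Result<()> {
    let dev = Device::Cpu;
    let cfg = Config::opus_mt_fr_en();
    let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[weights], DType::F32, &dev)? };
    let mut model = MTModel::new(&cfg, vb)?;

    // Placeholder source ids; the sequence should end with the eos token.
    let src = Tensor::new(&[[17u32, 42, 5, cfg.eos_token_id]], &dev)?;
    let encoder_out = model.encoder().forward(&src, 0)?;

    // Greedy decoding: feed one token per step, `past_kv_len` drives the KV-cache reuse.
    let mut tokens = vec![cfg.decoder_start_token_id];
    for step in 0..cfg.max_position_embeddings {
        let input = Tensor::new(&[[*tokens.last().unwrap()]], &dev)?;
        let logits = model.decode(&input, &encoder_out, step)?; // (1, 1, decoder_vocab)
        let next = logits.i((0, 0))?.argmax(0)?.to_scalar::<u32>()?;
        tokens.push(next);
        if next == cfg.eos_token_id {
            break;
        }
    }
    model.reset_kv_cache();
    println!("{tokens:?}");
    Ok(())
}

`reset_kv_cache` matters when translating several sentences with the same model instance, since both the decoder self-attention and the cross-attention cache key/value states across calls.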
use crate::quantized_nn::{layer_norm, linear, Linear}; pub use crate::quantized_var_builder::VarBuilder; use candle::{DType, Device, IndexOp, Module, Result, Tensor, D}; use candle_nn::Activation; pub use crate::models::mixformer::Config; const MAX_SEQ_LEN: usize = 4096; #[derive(Debug, Clone)] struct Embedding { wte: crate::quantized_nn::Embedding, } impl Embedding { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let wte = crate::quantized_nn::Embedding::new(cfg.vocab_size, cfg.n_embd, vb.pp("wte"))?; Ok(Self { wte }) } } impl Module for Embedding { fn forward(&self, xs: &Tensor) -> Result<Tensor> { self.wte.forward(xs) } } fn get_mask(size: usize, device: &Device) -> Result<Tensor> { let mask: Vec<_> = (0..size) .flat_map(|i| (0..size).map(move |j| u8::from(j > i))) .collect(); Tensor::from_slice(&mask, (size, size), device) } fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: f32) -> Result<Tensor> { let shape = mask.shape(); let on_true = Tensor::new(on_true, on_false.device())?.broadcast_as(shape.dims())?; let m = mask.where_cond(&on_true, on_false)?; Ok(m) } #[derive(Debug, Clone)] struct RotaryEmbedding { sin: Tensor, cos: Tensor, } impl RotaryEmbedding { fn new(dim: usize, max_seq_len: usize, dev: &Device) -> Result<Self> { let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / 10000f32.powf(i as f32 / dim as f32)) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?; let t = Tensor::arange(0u32, max_seq_len as u32, dev)? .to_dtype(DType::F32)? .reshape((max_seq_len, 1))?; let freqs = t.matmul(&inv_freq)?; Ok(Self { sin: freqs.sin()?, cos: freqs.cos()?, }) } fn apply_rotary_emb_qkv( &self, qkv: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor, Tensor)> { let (_b_size, seqlen, three, _, _headdim) = qkv.dims5()?; if three != 3 { candle::bail!("unexpected shape for qkv {:?}", qkv.shape()) } let (_rotary_seqlen, rotary_dim) = self.cos.dims2()?; let rotary_dim = rotary_dim * 2; let q_rot = qkv.i((.., .., 0, .., ..rotary_dim))?; let q_pass = qkv.i((.., .., 0, .., rotary_dim..))?; let k_rot = qkv.i((.., .., 1, .., ..rotary_dim))?; let k_pass = qkv.i((.., .., 1, .., rotary_dim..))?; let q12 = q_rot.chunk(2, D::Minus1)?; let k12 = k_rot.chunk(2, D::Minus1)?; let (q1, q2) = (&q12[0], &q12[1]); let (k1, k2) = (&k12[0], &k12[1]); let c = self.cos.narrow(0, seqlen_offset, seqlen)?.unsqueeze(1)?; let s = self.sin.narrow(0, seqlen_offset, seqlen)?.unsqueeze(1)?; let q_rot = Tensor::cat( &[ (q1.broadcast_mul(&c)? - q2.broadcast_mul(&s)?)?, (q1.broadcast_mul(&s)? + q2.broadcast_mul(&c)?)?, ], D::Minus1, )?; let k_rot = Tensor::cat( &[ (k1.broadcast_mul(&c)? - k2.broadcast_mul(&s)?)?, (k1.broadcast_mul(&s)? 
+ k2.broadcast_mul(&c)?)?, ], D::Minus1, )?; let q = Tensor::cat(&[&q_rot, &q_pass], D::Minus1)?; let k = Tensor::cat(&[&k_rot, &k_pass], D::Minus1)?; let v = qkv.i((.., .., 2))?; Ok((q, k, v)) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MLP { fc1: Linear, fc2: Linear, act: Activation, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let n_inner = cfg.n_inner.unwrap_or(4 * cfg.n_embd); let fc1 = linear(cfg.n_embd, n_inner, vb.pp("fc1"))?; let fc2 = linear(n_inner, cfg.n_embd, vb.pp("fc2"))?; Ok(Self { fc1, fc2, act: cfg.activation_function, }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.fc1)?.apply(&self.act)?.apply(&self.fc2) } } #[derive(Debug, Clone)] struct CausalLMHead { ln: candle_nn::LayerNorm, linear: Linear, } impl CausalLMHead { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln = layer_norm(cfg.n_embd, cfg.layer_norm_epsilon, vb.pp("ln"))?; let linear = linear(cfg.n_embd, cfg.vocab_size, vb.pp("linear"))?; Ok(Self { ln, linear }) } } impl Module for CausalLMHead { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.ln)? .apply(&self.linear)? .to_dtype(DType::F32) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MHA { wqkv: Linear, out_proj: Linear, rotary_emb: RotaryEmbedding, kv_cache: Option<(Tensor, Tensor)>, head_dim: usize, n_head: usize, softmax_scale: f64, span: tracing::Span, } impl MHA { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let head_dim = cfg.n_embd / cfg.n_head; let op_size = cfg.n_embd; let wqkv = linear(cfg.n_embd, 3 * op_size, vb.pp("Wqkv"))?; let out_proj = linear(op_size, cfg.n_embd, vb.pp("out_proj"))?; let rotary_emb = RotaryEmbedding::new(cfg.rotary_dim, MAX_SEQ_LEN, vb.device())?; let softmax_scale = 1f64 / (head_dim as f64).sqrt(); Ok(Self { wqkv, out_proj, head_dim, n_head: cfg.n_head, kv_cache: None, rotary_emb, softmax_scale, span: tracing::span!(tracing::Level::TRACE, "mha"), }) } fn forward(&mut self, xs: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> { let _enter = self.span.enter(); let (b_size, seq_len, _n_embd) = xs.dims3()?; let qkv = self .wqkv .forward(xs)? .reshape((b_size, seq_len, 3, (), self.head_dim))?; let seqlen_offset = match &self.kv_cache { None => 0, Some((prev_k, _)) => prev_k.dim(1)?, }; // In the python implementation, a single tensor is returned with the third axis of size 3. let (q, k, v) = self.rotary_emb.apply_rotary_emb_qkv(&qkv, seqlen_offset)?; let (k, v) = match &self.kv_cache { None => (k, v), Some((prev_k, prev_v)) => { let k = Tensor::cat(&[prev_k, &k], 1)?; let v = Tensor::cat(&[prev_v, &v], 1)?; (k, v) } }; self.kv_cache = Some((k.clone(), v.clone())); // scores = torch.einsum('bthd,bshd->bhts', q, k * softmax_scale) let q = q.transpose(1, 2)?.flatten_to(1)?; // b*h, t, d let k = k.transpose(1, 2)?.flatten_to(1)?; // b*h, s, d let v = v.transpose(1, 2)?.flatten_to(1)?; // b*h, s, d let attn_weights = (q.matmul(&k.t()?)? 
* self.softmax_scale)?; // b*h, t, s // causal_mask = torch.triu(torch.full((seqlen_q, seqlen_k), -10000.0, device=scores.device), 1) // scores = scores + causal_mask.to(dtype=scores.dtype) let attn_weights = match mask { None => attn_weights, Some(mask) => masked_fill( &attn_weights, &mask.broadcast_left(b_size * self.n_head)?, f32::NEG_INFINITY, )?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; // output = torch.einsum('bhts,bshd->bthd', attention_drop, v) // attn_weights: b*h,t,s, v: b*h,s,d let attn_output = attn_weights.matmul(&v)?; // b*h,t,d let attn_output = attn_output .reshape((b_size, (), seq_len, self.head_dim))? .transpose(1, 2)? .flatten_from(D::Minus2)?; attn_output.apply(&self.out_proj) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct ParallelBlock { ln: candle_nn::LayerNorm, mixer: MHA, mlp: MLP, span: tracing::Span, } impl ParallelBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln = layer_norm(cfg.n_embd, cfg.layer_norm_epsilon, vb.pp("ln"))?; let mixer = MHA::new(cfg, vb.pp("mixer"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; Ok(Self { ln, mixer, mlp, span: tracing::span!(tracing::Level::TRACE, "block"), }) } fn forward(&mut self, xs: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> { let _enter = self.span.enter(); let residual = xs; let xs = xs.apply(&self.ln)?; let attn_outputs = self.mixer.forward(&xs, mask)?; let feed_forward_hidden_states = self.mlp.forward(&xs)?; attn_outputs + feed_forward_hidden_states + residual } fn clear_kv_cache(&mut self) { self.mixer.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct MixFormerSequentialForCausalLM { embedding: Embedding, blocks: Vec<ParallelBlock>, head: CausalLMHead, span: tracing::Span, } impl MixFormerSequentialForCausalLM { pub fn new_v2(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_head = vb.pp("lm_head"); let vb = vb.pp("transformer"); let embedding = Embedding::new(cfg, vb.pp("embd"))?; let mut blocks = Vec::new(); for i in 0..cfg.n_layer { let block = ParallelBlock::new(cfg, vb.pp("h").pp(i))?; blocks.push(block) } let head = CausalLMHead::new(cfg, vb_head)?; Ok(Self { embedding, blocks, head, span: tracing::span!(tracing::Level::TRACE, "mixformer"), }) } pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb = vb.pp("layers"); let embedding = Embedding::new(cfg, vb.pp(0))?; let mut blocks = Vec::new(); for i in 0..cfg.n_layer { let block = ParallelBlock::new(cfg, vb.pp(i + 1))?; blocks.push(block); } let head = CausalLMHead::new(cfg, vb.pp(cfg.n_layer + 1))?; Ok(Self { embedding, blocks, head, span: tracing::span!(tracing::Level::TRACE, "mixformer"), }) } pub fn forward(&mut self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let (_b_size, seq_len) = xs.dims2()?; let mut xs = xs.apply(&self.embedding)?; let mask = if seq_len <= 1 { None } else { Some(get_mask(seq_len, xs.device())?) }; for block in self.blocks.iter_mut() { xs = block.forward(&xs, mask.as_ref())?; } xs.narrow(1, seq_len - 1, 1)?.apply(&self.head)?.squeeze(1) } pub fn clear_kv_cache(&mut self) { self.blocks.iter_mut().for_each(|b| b.clear_kv_cache()) } }
candle/candle-transformers/src/models/quantized_mixformer.rs/0
{ "file_path": "candle/candle-transformers/src/models/quantized_mixformer.rs", "repo_id": "candle", "token_count": 5892 }
34
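The quantized MixFormer (phi) model above reads its weights through the GGUF-backed `VarBuilder` re-exported at the top of the file. The sketch below is an assumption-heavy outline: it presumes a local GGUF file, that `VarBuilder::from_gguf(path, device)` and `mixformer::Config::v1_5()` exist in this version of the crate, placeholder prompt ids, and plain argmax sampling.

use candle::{Device, IndexOp, Result, Tensor};
use candle_transformers::models::mixformer::Config;
use candle_transformers::models::quantized_mixformer::{MixFormerSequentialForCausalLM, VarBuilder};

fn run(gguf_path: &str) -> Result<()> {
    let dev = Device::Cpu;
    let cfg = Config::v1_5();
    let vb = VarBuilder::from_gguf(gguf_path, &dev)?;
    // `new` expects the flat "layers.N" weight layout, `new_v2` the "transformer"-prefixed one.
    let mut model = MixFormerSequentialForCausalLM::new(&cfg, vb)?;

    // Placeholder prompt ids; a real setup would come from the phi tokenizer.
    let mut tokens: Vec<u32> = vec![15496, 11, 995];
    let mut input = Tensor::new(tokens.as_slice(), &dev)?.unsqueeze(0)?;
    for _ in 0..32 {
        // Returns the logits for the last position only, shape (1, vocab_size).
        let logits = model.forward(&input)?;
        let next = logits.i(0)?.argmax(0)?.to_scalar::<u32>()?;
        tokens.push(next);
        // After the prompt, feed a single token per step and rely on the KV cache.
        input = Tensor::new(&[next], &dev)?.unsqueeze(0)?;
    }
    model.clear_kv_cache();
    println!("{tokens:?}");
    Ok(())
}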
use candle::{DType, IndexOp, Result, Tensor}; use candle_nn::{Module, VarBuilder}; use super::image_encoder::ImageEncoderViT; use super::mask_decoder::MaskDecoder; use super::prompt_encoder::PromptEncoder; use super::tiny_vit::{tiny_vit_5m, TinyViT}; const PROMPT_EMBED_DIM: usize = 256; pub const IMAGE_SIZE: usize = 1024; const VIT_PATCH_SIZE: usize = 16; const PRED_IOU_THRESH: f32 = 0.88; const STABILITY_SCORE_OFFSET: f32 = 1.0; const STABILITY_SCORE_THRESHOLD: f32 = 0.95; const MODEL_MASK_THRESHOLD: f32 = 0.0; const CROP_NMS_THRESH: f32 = 0.7; #[derive(Debug)] enum ImageEncoder { Original(ImageEncoderViT), TinyViT(TinyViT), } impl Module for ImageEncoder { fn forward(&self, xs: &Tensor) -> Result<Tensor> { match self { Self::Original(vit) => vit.forward(xs), Self::TinyViT(vit) => vit.forward(xs), } } } #[derive(Debug)] pub struct Sam { image_encoder: ImageEncoder, prompt_encoder: PromptEncoder, mask_decoder: MaskDecoder, pixel_mean: Tensor, pixel_std: Tensor, } impl Sam { pub fn new( encoder_embed_dim: usize, encoder_depth: usize, encoder_num_heads: usize, encoder_global_attn_indexes: &[usize], vb: VarBuilder, ) -> Result<Self> { let image_embedding_size = IMAGE_SIZE / VIT_PATCH_SIZE; let image_encoder = ImageEncoderViT::new( IMAGE_SIZE, VIT_PATCH_SIZE, 3, encoder_embed_dim, encoder_depth, encoder_num_heads, PROMPT_EMBED_DIM, /* qkv_bias */ true, /* use_rel_pos */ true, /* use_abs_pos */ true, /* window_size */ 14, /* global_attn_indexes */ encoder_global_attn_indexes, vb.pp("image_encoder"), )?; let prompt_encoder = PromptEncoder::new( PROMPT_EMBED_DIM, (image_embedding_size, image_embedding_size), (IMAGE_SIZE, IMAGE_SIZE), 16, vb.pp("prompt_encoder"), )?; let mask_decoder = MaskDecoder::new( PROMPT_EMBED_DIM, /* num_multitask_outputs */ 3, /* iou_head_depth */ 3, /* iou_head_hidden_dim */ 256, vb.pp("mask_decoder"), )?; let pixel_mean = Tensor::new(&[123.675f32, 116.28, 103.53], vb.device())?.reshape((3, 1, 1))?; let pixel_std = Tensor::new(&[58.395f32, 57.12, 57.375], vb.device())?.reshape((3, 1, 1))?; Ok(Self { image_encoder: ImageEncoder::Original(image_encoder), prompt_encoder, mask_decoder, pixel_std, pixel_mean, }) } pub fn new_tiny(vb: VarBuilder) -> Result<Self> { let image_embedding_size = IMAGE_SIZE / VIT_PATCH_SIZE; let image_encoder = tiny_vit_5m(vb.pp("image_encoder"))?; let prompt_encoder = PromptEncoder::new( PROMPT_EMBED_DIM, (image_embedding_size, image_embedding_size), (IMAGE_SIZE, IMAGE_SIZE), 16, vb.pp("prompt_encoder"), )?; let mask_decoder = MaskDecoder::new( PROMPT_EMBED_DIM, /* num_multitask_outputs */ 3, /* iou_head_depth */ 3, /* iou_head_hidden_dim */ 256, vb.pp("mask_decoder"), )?; let pixel_mean = Tensor::new(&[123.675f32, 116.28, 103.53], vb.device())?.reshape((3, 1, 1))?; let pixel_std = Tensor::new(&[58.395f32, 57.12, 57.375], vb.device())?.reshape((3, 1, 1))?; Ok(Self { image_encoder: ImageEncoder::TinyViT(image_encoder), prompt_encoder, mask_decoder, pixel_std, pixel_mean, }) } pub fn embeddings(&self, img: &Tensor) -> Result<Tensor> { let img = self.preprocess(img)?.unsqueeze(0)?; self.image_encoder.forward(&img) } pub fn forward( &self, img: &Tensor, points: &[(f64, f64, bool)], multimask_output: bool, ) -> Result<(Tensor, Tensor)> { let (_c, original_h, original_w) = img.dims3()?; let img = self.preprocess(img)?.unsqueeze(0)?; let img_embeddings = self.image_encoder.forward(&img)?; let (low_res_mask, iou) = self.forward_for_embeddings( &img_embeddings, original_h, original_w, points, multimask_output, )?; let mask = low_res_mask 
.upsample_nearest2d(IMAGE_SIZE, IMAGE_SIZE)? .get(0)? .i((.., ..original_h, ..original_w))?; Ok((mask, iou)) } /// Generate the mask and IOU predictions from some image embeddings and prompt. /// /// The prompt is specified as a list of points `(x, y, b)`. `x` and `y` are the point /// coordinates (between 0 and 1) and `b` is `true` for points that should be part of the mask /// and `false` for points that should be part of the background and so excluded from the mask. pub fn forward_for_embeddings( &self, img_embeddings: &Tensor, original_h: usize, original_w: usize, points: &[(f64, f64, bool)], multimask_output: bool, ) -> Result<(Tensor, Tensor)> { let image_pe = self.prompt_encoder.get_dense_pe()?; let points = if points.is_empty() { None } else { let n_points = points.len(); let xys = points .iter() .flat_map(|(x, y, _b)| { let x = (*x as f32) * (original_w as f32); let y = (*y as f32) * (original_h as f32); [x, y] }) .collect::<Vec<_>>(); let labels = points .iter() .map(|(_x, _y, b)| if *b { 1f32 } else { 0f32 }) .collect::<Vec<_>>(); let points = Tensor::from_vec(xys, (1, n_points, 2), img_embeddings.device())?; let labels = Tensor::from_vec(labels, (1, n_points), img_embeddings.device())?; Some((points, labels)) }; let points = points.as_ref().map(|xy| (&xy.0, &xy.1)); let (sparse_prompt_embeddings, dense_prompt_embeddings) = self.prompt_encoder.forward(points, None, None)?; self.mask_decoder.forward( img_embeddings, &image_pe, &sparse_prompt_embeddings, &dense_prompt_embeddings, multimask_output, ) } pub fn unpreprocess(&self, img: &Tensor) -> Result<Tensor> { let img = img .broadcast_mul(&self.pixel_std)? .broadcast_add(&self.pixel_mean)?; img.maximum(&img.zeros_like()?)? .minimum(&(img.ones_like()? * 255.)?) } pub fn preprocess(&self, img: &Tensor) -> Result<Tensor> { let (_c, h, w) = img.dims3()?; let img = img .to_dtype(DType::F32)? .broadcast_sub(&self.pixel_mean)? .broadcast_div(&self.pixel_std)?; if h > IMAGE_SIZE || w > IMAGE_SIZE { candle::bail!("image is too large ({w}, {h}), maximum size {IMAGE_SIZE}") } let img = img.pad_with_zeros(1, 0, IMAGE_SIZE - h)?; img.pad_with_zeros(2, 0, IMAGE_SIZE - w) } fn process_crop( &self, img: &Tensor, cb: CropBox, point_grids: &[(f64, f64)], ) -> Result<Vec<crate::object_detection::Bbox<Tensor>>> { // Crop the image and calculate embeddings. let img = img.i((.., cb.y0..cb.y1, cb.x0..cb.x1))?; let img = self.preprocess(&img)?.unsqueeze(0)?; let img_embeddings = self.image_encoder.forward(&img)?; let crop_w = cb.x1 - cb.x0; let crop_h = cb.y1 - cb.y0; // Generate masks for this crop. let image_pe = self.prompt_encoder.get_dense_pe()?; let points = point_grids .iter() .map(|&(x, y)| vec![x as f32 * crop_w as f32, y as f32 * crop_h as f32]) .collect::<Vec<_>>(); let mut bboxes = Vec::new(); for points in points.chunks(64) { // Run the model on this batch. 
let points_len = points.len(); let in_points = Tensor::new(points.to_vec(), img.device())?.unsqueeze(1)?; let in_labels = Tensor::ones((points_len, 1), DType::F32, img.device())?; let (sparse_prompt_embeddings, dense_prompt_embeddings) = self.prompt_encoder .forward(Some((&in_points, &in_labels)), None, None)?; let (low_res_mask, iou_predictions) = self.mask_decoder.forward( &img_embeddings, &image_pe, &sparse_prompt_embeddings, &dense_prompt_embeddings, /* multimask_output */ true, )?; let low_res_mask = low_res_mask.flatten(0, 1)?; let iou_predictions = iou_predictions.flatten(0, 1)?.to_vec1::<f32>()?; let dev = low_res_mask.device(); for (i, iou) in iou_predictions.iter().enumerate() { // Filter by predicted IoU. if *iou < PRED_IOU_THRESH { continue; } let low_res_mask = low_res_mask.get(i)?; // Calculate stability score. let bound = Tensor::new(MODEL_MASK_THRESHOLD + STABILITY_SCORE_OFFSET, dev)? .broadcast_as(low_res_mask.shape())?; let intersections = low_res_mask .ge(&bound)? .to_dtype(DType::F32)? .sum_all()? .to_vec0::<f32>()?; let bound = Tensor::new(MODEL_MASK_THRESHOLD - STABILITY_SCORE_OFFSET, dev)? .broadcast_as(low_res_mask.shape())?; let unions = low_res_mask .ge(&bound)? .to_dtype(DType::F32)? .sum_all()? .to_vec0::<f32>()?; let stability_score = intersections / unions; if stability_score < STABILITY_SCORE_THRESHOLD { continue; } // Threshold masks and calculate boxes. let low_res_mask = low_res_mask .ge(&Tensor::new(0f32, dev)?.broadcast_as(low_res_mask.shape())?)? .to_dtype(DType::U32)?; let low_res_mask_per_x = low_res_mask.sum(0)?.to_vec1::<u32>()?; let low_res_mask_per_y = low_res_mask.sum(1)?.to_vec1::<u32>()?; let min_max_x = min_max_indexes(&low_res_mask_per_x); let min_max_y = min_max_indexes(&low_res_mask_per_y); if let Some(((x0, x1), (y0, y1))) = min_max_x.zip(min_max_y) { let bbox = crate::object_detection::Bbox { xmin: x0 as f32, ymin: y0 as f32, xmax: x1 as f32, ymax: y1 as f32, confidence: *iou, data: low_res_mask, }; bboxes.push(bbox); } // TODO: // Filter boxes that touch crop boundaries // Compress to RLE. } } let mut bboxes = vec![bboxes]; // Remove duplicates within this crop. crate::object_detection::non_maximum_suppression(&mut bboxes, CROP_NMS_THRESH); // TODO: Return to the original image frame. 
Ok(bboxes.remove(0)) } pub fn generate_masks( &self, img: &Tensor, points_per_side: usize, crop_n_layer: usize, crop_overlap_ratio: f64, crop_n_points_downscale_factor: usize, ) -> Result<Vec<crate::object_detection::Bbox<Tensor>>> { let (_c, h, w) = img.dims3()?; let point_grids = build_all_layer_point_grids( points_per_side, crop_n_layer, crop_n_points_downscale_factor, ); let crop_boxes = generate_crop_boxes((h, w), crop_n_layer, crop_overlap_ratio); let mut bboxes = Vec::new(); for crop_box in crop_boxes.into_iter() { let layer_idx = crop_box.layer_idx; let b = self.process_crop(img, crop_box, &point_grids[layer_idx])?; bboxes.extend(b) } // TODO: remove duplicates Ok(bboxes) } } // Return the first and last indexes i for which values[i] > 0 fn min_max_indexes(values: &[u32]) -> Option<(usize, usize)> { let (mut min_i, mut max_i) = (usize::MAX, usize::MIN); for (i, &s) in values.iter().enumerate() { if s == 0 { continue; } min_i = usize::min(i, min_i); max_i = usize::max(i, max_i); } if max_i < min_i { None } else { Some((min_i, max_i)) } } #[derive(Debug)] struct CropBox { x0: usize, y0: usize, x1: usize, y1: usize, layer_idx: usize, } impl CropBox { fn new(x0: usize, y0: usize, x1: usize, y1: usize, layer_idx: usize) -> Self { Self { x0, y0, x1, y1, layer_idx, } } } fn generate_crop_boxes( (im_h, im_w): (usize, usize), n_layers: usize, overlap_ratio: f64, ) -> Vec<CropBox> { fn crop_len(orig_len: usize, n_crops: usize, overlap: usize) -> usize { f64::ceil((overlap * (n_crops - 1) + orig_len) as f64 / n_crops as f64) as usize } let short_side = usize::min(im_h, im_w); let mut crop_boxes = Vec::new(); // Original image. crop_boxes.push(CropBox::new(0, 0, im_w, im_h, 0)); for layer_idx in 1..=n_layers { let n_crops_per_side = 1 << layer_idx; let overlap = (overlap_ratio * short_side as f64 * 2. / n_crops_per_side as f64) as usize; let crop_w = crop_len(im_w, n_crops_per_side, overlap); let crop_h = crop_len(im_w, n_crops_per_side, overlap); for i_x in 0..n_crops_per_side { let x0 = (crop_w - overlap) * i_x; for i_y in 0..n_crops_per_side { let y0 = (crop_h - overlap) * i_y; let x1 = usize::min(im_w, x0 + crop_w); let y1 = usize::min(im_h, y0 + crop_h); crop_boxes.push(CropBox::new(x0, y0, x1, y1, layer_idx)); } } } crop_boxes } // Generates a 2D grid of points evenly spaced in [0,1]x[0,1]. fn build_point_grid(n_per_side: usize) -> Vec<(f64, f64)> { let offset = 1f64 / (2 * n_per_side) as f64; let mut points = Vec::with_capacity(n_per_side * n_per_side); for i_x in 0..n_per_side { let x = offset + i_x as f64 / n_per_side as f64; for i_y in 0..n_per_side { let y = offset + i_y as f64 / n_per_side as f64; points.push((x, y)) } } points } fn build_all_layer_point_grids( n_per_side: usize, n_layers: usize, scale_per_layer: usize, ) -> Vec<Vec<(f64, f64)>> { let mut points_by_layer = Vec::with_capacity(n_layers + 1); for i in 0..=n_layers { let n_points = n_per_side / scale_per_layer.pow(i as u32); points_by_layer.push(build_point_grid(n_points)) } points_by_layer }
candle/candle-transformers/src/models/segment_anything/sam.rs/0
{ "file_path": "candle/candle-transformers/src/models/segment_anything/sam.rs", "repo_id": "candle", "token_count": 8444 }
35
use crate::models::with_tracing::{linear, linear_no_bias, Linear}; use candle::{DType, Device, Module, Result, Tensor, D}; use candle_nn::{Activation, LayerNorm, VarBuilder}; use serde::Deserialize; use std::sync::Arc; // https://huggingface.co/stabilityai/stablelm-3b-4e1t/blob/main/configuration_stablelm.py #[derive(Debug, Clone, PartialEq, Deserialize)] pub struct Config { pub(crate) vocab_size: usize, pub(crate) intermediate_size: usize, pub(crate) hidden_size: usize, pub(crate) num_hidden_layers: usize, pub(crate) num_attention_heads: usize, pub(crate) num_key_value_heads: usize, pub(crate) hidden_act: Activation, pub(crate) partial_rotary_factor: f64, pub(crate) rope_theta: f64, pub(crate) max_position_embeddings: usize, pub(crate) layer_norm_eps: f64, pub(crate) use_cache: bool, #[serde(default)] pub(crate) use_qkv_bias: bool, // Used in StableLM-2 #[serde(default)] pub(crate) use_flash_attn: bool, // Not in config.json } impl Config { pub fn stablelm_3b_4e1t(use_flash_attn: bool) -> Self { Self { vocab_size: 50304, intermediate_size: 6912, hidden_size: 2560, num_hidden_layers: 32, num_attention_heads: 32, num_key_value_heads: 32, hidden_act: Activation::Silu, partial_rotary_factor: 0.25, rope_theta: 10_000., max_position_embeddings: 4096, layer_norm_eps: 1e-5, use_qkv_bias: false, use_cache: true, use_flash_attn, } } pub fn head_dim(&self) -> usize { self.hidden_size / self.num_attention_heads } pub fn rotary_ndims(&self) -> usize { (self.head_dim() as f64 * self.partial_rotary_factor) as usize } pub fn num_kv_groups(&self) -> usize { self.num_attention_heads / self.num_key_value_heads } pub fn set_use_flash_attn(&mut self, use_flash_attn: bool) { self.use_flash_attn = use_flash_attn } } #[derive(Debug)] pub(crate) struct RotaryEmbedding { sin: Tensor, cos: Tensor, } fn rotate_half(xs: &Tensor) -> Result<Tensor> { let xs = xs.chunk(2, D::Minus1)?; Tensor::cat(&[&xs[1].neg()?, &xs[0]], D::Minus1) } impl RotaryEmbedding { pub(crate) fn new(dtype: DType, cfg: &Config, dev: &Device) -> Result<Self> { let dim = cfg.rotary_ndims(); let max_seq_len = cfg.max_position_embeddings; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / cfg.rope_theta.powf(i as f64 / dim as f64) as f32) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?.to_dtype(dtype)?; let t = Tensor::arange(0u32, max_seq_len as u32, dev)? .to_dtype(dtype)? .reshape((max_seq_len, 1))?; let freqs = t.matmul(&inv_freq)?; let freqs = Tensor::cat(&[&freqs, &freqs], D::Minus1)?; Ok(Self { sin: freqs.sin()?, cos: freqs.cos()?, }) } pub(crate) fn apply_rotary_emb_qkv( &self, q: &Tensor, k: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor)> { let (_b_sz, _h, seq_len, _n_embd) = q.dims4()?; let cos = self.cos.narrow(0, seqlen_offset, seq_len)?; let sin = self.sin.narrow(0, seqlen_offset, seq_len)?; let cos = cos.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim) let sin = sin.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim) let q_embed = (q.broadcast_mul(&cos)? + rotate_half(q)?.broadcast_mul(&sin))?; let k_embed = (k.broadcast_mul(&cos)? 
+ rotate_half(k)?.broadcast_mul(&sin))?; Ok((q_embed, k_embed)) } } #[derive(Debug)] #[allow(clippy::upper_case_acronyms)] struct MLP { gate_proj: Linear, up_proj: Linear, down_proj: Linear, act_fn: Activation, span: tracing::Span, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_sz = cfg.hidden_size; let intermediate_sz = cfg.intermediate_size; let gate_proj = linear_no_bias(hidden_sz, intermediate_sz, vb.pp("gate_proj"))?; let up_proj = linear_no_bias(hidden_sz, intermediate_sz, vb.pp("up_proj"))?; let down_proj = linear_no_bias(intermediate_sz, hidden_sz, vb.pp("down_proj"))?; Ok(Self { gate_proj, up_proj, down_proj, act_fn: cfg.hidden_act, span: tracing::span!(tracing::Level::TRACE, "mlp"), }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let lhs = xs.apply(&self.gate_proj)?.apply(&self.act_fn)?; let rhs = xs.apply(&self.up_proj)?; (lhs * rhs)?.apply(&self.down_proj) } } #[cfg(feature = "flash-attn")] fn flash_attn( q: &Tensor, k: &Tensor, v: &Tensor, softmax_scale: f32, causal: bool, ) -> Result<Tensor> { candle_flash_attn::flash_attn(q, k, v, softmax_scale, causal) } #[cfg(not(feature = "flash-attn"))] fn flash_attn(_: &Tensor, _: &Tensor, _: &Tensor, _: f32, _: bool) -> Result<Tensor> { unimplemented!("compile with '--features flash-attn'") } #[derive(Debug)] struct Attention { q_proj: Linear, k_proj: Linear, v_proj: Linear, o_proj: Linear, num_heads: usize, num_kv_heads: usize, num_kv_groups: usize, head_dim: usize, hidden_size: usize, rotary_emb: Arc<RotaryEmbedding>, kv_cache: Option<(Tensor, Tensor)>, use_cache: bool, rotary_ndims: usize, use_flash_attn: bool, span: tracing::Span, } impl Attention { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_sz = cfg.hidden_size; let head_dim = cfg.head_dim(); let num_heads = cfg.num_attention_heads; let num_kv_heads = cfg.num_key_value_heads; let linear_layer = if cfg.use_qkv_bias { linear } else { linear_no_bias }; let q_proj = linear_layer(hidden_sz, num_heads * head_dim, vb.pp("q_proj"))?; let k_proj = linear_layer(hidden_sz, num_kv_heads * head_dim, vb.pp("k_proj"))?; let v_proj = linear_layer(hidden_sz, num_kv_heads * head_dim, vb.pp("v_proj"))?; let o_proj = linear_no_bias(num_heads * head_dim, hidden_sz, vb.pp("o_proj"))?; Ok(Self { q_proj, k_proj, v_proj, o_proj, num_heads, num_kv_heads, num_kv_groups: cfg.num_kv_groups(), head_dim, hidden_size: hidden_sz, rotary_emb, kv_cache: None, use_cache: cfg.use_cache, rotary_ndims: cfg.rotary_ndims(), use_flash_attn: cfg.use_flash_attn, span: tracing::span!(tracing::Level::TRACE, "attn"), }) } fn repeat_kv(&self, xs: Tensor) -> Result<Tensor> { let n_rep = self.num_kv_groups; if n_rep == 1 { Ok(xs) } else { let (b_sz, num_kv_heads, seq_len, head_dim) = xs.dims4()?; xs.unsqueeze(2)? .expand((b_sz, num_kv_heads, n_rep, seq_len, head_dim))? .reshape((b_sz, num_kv_heads * n_rep, seq_len, head_dim)) } } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let _enter = self.span.enter(); let (b_sz, q_len, _) = xs.dims3()?; let query_states = self.q_proj.forward(xs)?; let key_states = self.k_proj.forward(xs)?; let value_states = self.v_proj.forward(xs)?; let query_states = query_states .reshape((b_sz, q_len, self.num_heads, self.head_dim))? .transpose(1, 2)?; let key_states = key_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? 
.transpose(1, 2)?; let value_states = value_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)?; let (rot_ndims, pass_ndims) = (self.rotary_ndims, self.head_dim - self.rotary_ndims); let query_rot = query_states.narrow(D::Minus1, 0, rot_ndims)?; let query_pass = query_states.narrow(D::Minus1, rot_ndims, pass_ndims)?; let key_rot = key_states.narrow(D::Minus1, 0, rot_ndims)?; let key_pass = key_states.narrow(D::Minus1, rot_ndims, pass_ndims)?; let (query_rot, key_rot) = self.rotary_emb .apply_rotary_emb_qkv(&query_rot, &key_rot, seqlen_offset)?; let query_states = Tensor::cat(&[query_rot, query_pass], D::Minus1)?.contiguous()?; let key_states = Tensor::cat(&[key_rot, key_pass], D::Minus1)?.contiguous()?; let (key_states, value_states) = match &self.kv_cache { None => (key_states, value_states), Some((prev_k, prev_v)) => { let key_states = Tensor::cat(&[prev_k, &key_states], 2)?; let value_states = Tensor::cat(&[prev_v, &value_states], 2)?; (key_states, value_states) } }; if self.use_cache { self.kv_cache = Some((key_states.clone(), value_states.clone())); } let key_states = self.repeat_kv(key_states)?.contiguous()?; let value_states = self.repeat_kv(value_states)?.contiguous()?; let attn_output = if self.use_flash_attn { // flash-attn expects (b_sz, seq_len, nheads, head_dim) let q = query_states.transpose(1, 2)?; let k = key_states.transpose(1, 2)?; let v = value_states.transpose(1, 2)?; let softmax_scale = 1f32 / (self.head_dim as f32).sqrt(); flash_attn(&q, &k, &v, softmax_scale, q_len > 1)?.transpose(1, 2)? } else { let scale = 1f64 / f64::sqrt(self.head_dim as f64); let attn_weights = (query_states.matmul(&key_states.transpose(2, 3)?)? * scale)?; let attn_weights = match attention_mask { None => attn_weights, Some(mask) => attn_weights.broadcast_add(mask)?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; attn_weights.matmul(&value_states)? }; attn_output .transpose(1, 2)? .reshape((b_sz, q_len, self.hidden_size))? 
.apply(&self.o_proj) } } #[derive(Debug)] struct DecoderLayer { self_attn: Attention, mlp: MLP, input_layernorm: LayerNorm, post_attention_layernorm: LayerNorm, span: tracing::Span, } impl DecoderLayer { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let self_attn = Attention::new(rotary_emb, cfg, vb.pp("self_attn"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; let input_layernorm = candle_nn::layer_norm( cfg.hidden_size, cfg.layer_norm_eps, vb.pp("input_layernorm"), )?; let post_attention_layernorm = candle_nn::layer_norm( cfg.hidden_size, cfg.layer_norm_eps, vb.pp("post_attention_layernorm"), )?; Ok(Self { self_attn, mlp, input_layernorm, post_attention_layernorm, span: tracing::span!(tracing::Level::TRACE, "layer"), }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let _enter = self.span.enter(); let residual = xs; let xs = self.input_layernorm.forward(xs)?; let xs = self.self_attn.forward(&xs, attention_mask, seqlen_offset)?; let xs = (xs + residual)?; let residual = &xs; let xs = xs.apply(&self.post_attention_layernorm)?.apply(&self.mlp)?; residual + xs } } #[derive(Debug)] pub struct Model { embed_tokens: candle_nn::Embedding, layers: Vec<DecoderLayer>, norm: LayerNorm, lm_head: Linear, device: Device, dtype: DType, span: tracing::Span, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_m = vb.pp("model"); let embed_tokens = candle_nn::embedding(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embed_tokens"))?; let rotary_emb = Arc::new(RotaryEmbedding::new(vb.dtype(), cfg, vb_m.device())?); let mut layers = Vec::with_capacity(cfg.num_hidden_layers); let vb_l = vb_m.pp("layers"); for layer_idx in 0..cfg.num_hidden_layers { let layer = DecoderLayer::new(rotary_emb.clone(), cfg, vb_l.pp(layer_idx))?; layers.push(layer) } let norm = candle_nn::layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb_m.pp("norm"))?; let lm_head = linear_no_bias(cfg.hidden_size, cfg.vocab_size, vb.pp("lm_head"))?; Ok(Self { embed_tokens, layers, norm, lm_head, device: vb.device().clone(), dtype: vb.dtype(), span: tracing::span!(tracing::Level::TRACE, "model"), }) } fn prepare_decoder_attention_mask( &self, b_size: usize, tgt_len: usize, seqlen_offset: usize, ) -> Result<Tensor> { // Sliding window mask? let mask: Vec<_> = (0..tgt_len) .flat_map(|i| (0..tgt_len).map(move |j| if i < j { f32::NEG_INFINITY } else { 0. })) .collect(); let mask = Tensor::from_slice(&mask, (tgt_len, tgt_len), &self.device)?; let mask = if seqlen_offset > 0 { let mask0 = Tensor::zeros((tgt_len, seqlen_offset), DType::F32, &self.device)?; Tensor::cat(&[&mask0, &mask], D::Minus1)? } else { mask }; mask.expand((b_size, 1, tgt_len, tgt_len + seqlen_offset))? .to_dtype(self.dtype) } pub fn forward(&mut self, input_ids: &Tensor, seqlen_offset: usize) -> Result<Tensor> { let _enter = self.span.enter(); let (b_size, seq_len) = input_ids.dims2()?; let attention_mask = if seq_len <= 1 { None } else { let mask = self.prepare_decoder_attention_mask(b_size, seq_len, seqlen_offset)?; Some(mask) }; let mut xs = self.embed_tokens.forward(input_ids)?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, attention_mask.as_ref(), seqlen_offset)? } xs.narrow(1, seq_len - 1, 1)? .apply(&self.norm)? .apply(&self.lm_head) } }
candle/candle-transformers/src/models/stable_lm.rs/0
{ "file_path": "candle/candle-transformers/src/models/stable_lm.rs", "repo_id": "candle", "token_count": 7764 }
36
use super::common::LayerNormNoWeights; use candle::{Module, Result, Tensor}; use candle_nn::VarBuilder; #[derive(Debug)] pub struct MixingResidualBlock { norm1: LayerNormNoWeights, depthwise_conv: candle_nn::Conv2d, norm2: LayerNormNoWeights, channelwise_lin1: candle_nn::Linear, channelwise_lin2: candle_nn::Linear, gammas: Vec<f32>, } impl MixingResidualBlock { pub fn new(inp: usize, embed_dim: usize, vb: VarBuilder) -> Result<Self> { let norm1 = LayerNormNoWeights::new(inp)?; let norm2 = LayerNormNoWeights::new(inp)?; let cfg = candle_nn::Conv2dConfig { groups: inp, ..Default::default() }; let depthwise_conv = candle_nn::conv2d(inp, inp, 3, cfg, vb.pp("depthwise.1"))?; let channelwise_lin1 = candle_nn::linear(inp, embed_dim, vb.pp("channelwise.0"))?; let channelwise_lin2 = candle_nn::linear(embed_dim, inp, vb.pp("channelwise.2"))?; let gammas = vb.get(6, "gammas")?.to_vec1::<f32>()?; Ok(Self { norm1, depthwise_conv, norm2, channelwise_lin1, channelwise_lin2, gammas, }) } } impl Module for MixingResidualBlock { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let mods = &self.gammas; let x_temp = xs .permute((0, 2, 3, 1))? .apply(&self.norm1)? .permute((0, 3, 1, 2))? .affine(1. + mods[0] as f64, mods[1] as f64)?; let x_temp = candle_nn::ops::replication_pad2d(&x_temp, 1)?; let xs = (xs + x_temp.apply(&self.depthwise_conv)? * mods[2] as f64)?; let x_temp = xs .permute((0, 2, 3, 1))? .apply(&self.norm2)? .permute((0, 3, 1, 2))? .affine(1. + mods[3] as f64, mods[4] as f64)?; let x_temp = x_temp .permute((0, 2, 3, 1))? .contiguous()? .apply(&self.channelwise_lin1)? .gelu()? .apply(&self.channelwise_lin2)? .permute((0, 3, 1, 2))?; xs + x_temp * mods[5] as f64 } } #[derive(Debug)] pub struct PaellaVQ { in_block_conv: candle_nn::Conv2d, out_block_conv: candle_nn::Conv2d, down_blocks: Vec<(Option<candle_nn::Conv2d>, MixingResidualBlock)>, down_blocks_conv: candle_nn::Conv2d, down_blocks_bn: candle_nn::BatchNorm, up_blocks_conv: candle_nn::Conv2d, up_blocks: Vec<(Vec<MixingResidualBlock>, Option<candle_nn::ConvTranspose2d>)>, } impl PaellaVQ { pub fn new(vb: VarBuilder) -> Result<Self> { const IN_CHANNELS: usize = 3; const OUT_CHANNELS: usize = 3; const LATENT_CHANNELS: usize = 4; const EMBED_DIM: usize = 384; const BOTTLENECK_BLOCKS: usize = 12; const C_LEVELS: [usize; 2] = [EMBED_DIM / 2, EMBED_DIM]; let in_block_conv = candle_nn::conv2d( IN_CHANNELS * 4, C_LEVELS[0], 1, Default::default(), vb.pp("in_block.1"), )?; let out_block_conv = candle_nn::conv2d( C_LEVELS[0], OUT_CHANNELS * 4, 1, Default::default(), vb.pp("out_block.0"), )?; let mut down_blocks = Vec::new(); let vb_d = vb.pp("down_blocks"); let mut d_idx = 0; for (i, &c_level) in C_LEVELS.iter().enumerate() { let conv_block = if i > 0 { let cfg = candle_nn::Conv2dConfig { padding: 1, stride: 2, ..Default::default() }; let block = candle_nn::conv2d(C_LEVELS[i - 1], c_level, 4, cfg, vb_d.pp(d_idx))?; d_idx += 1; Some(block) } else { None }; let res_block = MixingResidualBlock::new(c_level, c_level * 4, vb_d.pp(d_idx))?; d_idx += 1; down_blocks.push((conv_block, res_block)) } let vb_d = vb_d.pp(d_idx); let down_blocks_conv = candle_nn::conv2d_no_bias( C_LEVELS[1], LATENT_CHANNELS, 1, Default::default(), vb_d.pp(0), )?; let down_blocks_bn = candle_nn::batch_norm(LATENT_CHANNELS, 1e-5, vb_d.pp(1))?; let mut up_blocks = Vec::new(); let vb_u = vb.pp("up_blocks"); let mut u_idx = 0; let up_blocks_conv = candle_nn::conv2d( LATENT_CHANNELS, C_LEVELS[1], 1, Default::default(), vb_u.pp(u_idx).pp(0), )?; u_idx += 1; for (i, &c_level) in 
C_LEVELS.iter().rev().enumerate() { let mut res_blocks = Vec::new(); let n_bottleneck_blocks = if i == 0 { BOTTLENECK_BLOCKS } else { 1 }; for _j in 0..n_bottleneck_blocks { let res_block = MixingResidualBlock::new(c_level, c_level * 4, vb_u.pp(u_idx))?; u_idx += 1; res_blocks.push(res_block) } let conv_block = if i < C_LEVELS.len() - 1 { let cfg = candle_nn::ConvTranspose2dConfig { padding: 1, stride: 2, ..Default::default() }; let block = candle_nn::conv_transpose2d( c_level, C_LEVELS[C_LEVELS.len() - i - 2], 4, cfg, vb_u.pp(u_idx), )?; u_idx += 1; Some(block) } else { None }; up_blocks.push((res_blocks, conv_block)) } Ok(Self { in_block_conv, down_blocks, down_blocks_conv, down_blocks_bn, up_blocks, up_blocks_conv, out_block_conv, }) } pub fn encode(&self, xs: &Tensor) -> Result<Tensor> { let mut xs = candle_nn::ops::pixel_unshuffle(xs, 2)?.apply(&self.in_block_conv)?; for down_block in self.down_blocks.iter() { if let Some(conv) = &down_block.0 { xs = xs.apply(conv)? } xs = xs.apply(&down_block.1)? } xs.apply(&self.down_blocks_conv)? .apply_t(&self.down_blocks_bn, false) } pub fn decode(&self, xs: &Tensor) -> Result<Tensor> { // TODO: quantizer if we want to support `force_not_quantize=False`. let mut xs = xs.apply(&self.up_blocks_conv)?; for up_block in self.up_blocks.iter() { for b in up_block.0.iter() { xs = xs.apply(b)?; } if let Some(conv) = &up_block.1 { xs = xs.apply(conv)? } } xs.apply(&self.out_block_conv)? .apply(&|xs: &_| candle_nn::ops::pixel_shuffle(xs, 2)) } } impl Module for PaellaVQ { fn forward(&self, xs: &Tensor) -> Result<Tensor> { self.decode(&self.encode(xs)?) } }
candle/candle-transformers/src/models/wuerstchen/paella_vq.rs/0
{ "file_path": "candle/candle-transformers/src/models/wuerstchen/paella_vq.rs", "repo_id": "candle", "token_count": 4078 }
37
use candle_transformers::models::bert; use wasm_bindgen::prelude::*; pub use bert::{BertModel, Config, DTYPE}; pub use tokenizers::{PaddingParams, Tokenizer}; #[wasm_bindgen] extern "C" { // Use `js_namespace` here to bind `console.log(..)` instead of just // `log(..)` #[wasm_bindgen(js_namespace = console)] pub fn log(s: &str); } #[macro_export] macro_rules! console_log { // Note that this is using the `log` function imported above during // `bare_bones` ($($t:tt)*) => ($crate::log(&format_args!($($t)*).to_string())) }
candle/candle-wasm-examples/bert/src/lib.rs/0
{ "file_path": "candle/candle-wasm-examples/bert/src/lib.rs", "repo_id": "candle", "token_count": 226 }
38
use crate::console_log; use crate::worker::{ModelData, Worker, WorkerInput, WorkerOutput}; use std::str::FromStr; use wasm_bindgen::prelude::*; use wasm_bindgen_futures::JsFuture; use yew::{html, Component, Context, Html}; use yew_agent::{Bridge, Bridged}; async fn fetch_url(url: &str) -> Result<Vec<u8>, JsValue> { use web_sys::{Request, RequestCache, RequestInit, RequestMode, Response}; let window = web_sys::window().ok_or("window")?; let mut opts = RequestInit::new(); let opts = opts .method("GET") .mode(RequestMode::Cors) .cache(RequestCache::NoCache); let request = Request::new_with_str_and_init(url, opts)?; let resp_value = JsFuture::from(window.fetch_with_request(&request)).await?; // `resp_value` is a `Response` object. assert!(resp_value.is_instance_of::<Response>()); let resp: Response = resp_value.dyn_into()?; let data = JsFuture::from(resp.blob()?).await?; let blob = web_sys::Blob::from(data); let array_buffer = JsFuture::from(blob.array_buffer()).await?; let data = js_sys::Uint8Array::new(&array_buffer).to_vec(); Ok(data) } pub enum Msg { Refresh, Run, UpdateStatus(String), SetModel(ModelData), WorkerIn(WorkerInput), WorkerOut(Result<WorkerOutput, String>), } pub struct CurrentDecode { start_time: Option<f64>, } pub struct App { status: String, loaded: bool, temperature: std::rc::Rc<std::cell::RefCell<f64>>, top_p: std::rc::Rc<std::cell::RefCell<f64>>, prompt: std::rc::Rc<std::cell::RefCell<String>>, generated: String, n_tokens: usize, current_decode: Option<CurrentDecode>, worker: Box<dyn Bridge<Worker>>, } async fn model_data_load() -> Result<ModelData, JsValue> { let tokenizer = fetch_url("tokenizer.json").await?; let model = fetch_url("model.bin").await?; console_log!("{}", model.len()); Ok(ModelData { tokenizer, model }) } fn performance_now() -> Option<f64> { let window = web_sys::window()?; let performance = window.performance()?; Some(performance.now() / 1000.) 
} impl Component for App { type Message = Msg; type Properties = (); fn create(ctx: &Context<Self>) -> Self { let status = "loading weights".to_string(); let cb = { let link = ctx.link().clone(); move |e| link.send_message(Self::Message::WorkerOut(e)) }; let worker = Worker::bridge(std::rc::Rc::new(cb)); Self { status, n_tokens: 0, temperature: std::rc::Rc::new(std::cell::RefCell::new(0.)), top_p: std::rc::Rc::new(std::cell::RefCell::new(1.0)), prompt: std::rc::Rc::new(std::cell::RefCell::new("".to_string())), generated: String::new(), current_decode: None, worker, loaded: false, } } fn rendered(&mut self, ctx: &Context<Self>, first_render: bool) { if first_render { ctx.link().send_future(async { match model_data_load().await { Err(err) => { let status = format!("{err:?}"); Msg::UpdateStatus(status) } Ok(model_data) => Msg::SetModel(model_data), } }); } } fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool { match msg { Msg::SetModel(md) => { self.status = "weights loaded successfully!".to_string(); self.loaded = true; console_log!("loaded weights"); self.worker.send(WorkerInput::ModelData(md)); true } Msg::Run => { if self.current_decode.is_some() { self.status = "already generating some sample at the moment".to_string() } else { let start_time = performance_now(); self.current_decode = Some(CurrentDecode { start_time }); self.status = "generating...".to_string(); self.n_tokens = 0; self.generated.clear(); let temp = *self.temperature.borrow(); let top_p = *self.top_p.borrow(); let prompt = self.prompt.borrow().clone(); console_log!("temp: {}, top_p: {}, prompt: {}", temp, top_p, prompt); ctx.link() .send_message(Msg::WorkerIn(WorkerInput::Run(temp, top_p, prompt))) } true } Msg::WorkerOut(output) => { match output { Ok(WorkerOutput::WeightsLoaded) => self.status = "weights loaded!".to_string(), Ok(WorkerOutput::GenerationDone(Err(err))) => { self.status = format!("error in worker process: {err}"); self.current_decode = None } Ok(WorkerOutput::GenerationDone(Ok(()))) => { let dt = self.current_decode.as_ref().and_then(|current_decode| { current_decode.start_time.and_then(|start_time| { performance_now().map(|stop_time| stop_time - start_time) }) }); self.status = match dt { None => "generation succeeded!".to_string(), Some(dt) => format!( "generation succeeded in {:.2}s ({:.1} ms/token)", dt, dt * 1000.0 / (self.n_tokens as f64) ), }; self.current_decode = None } Ok(WorkerOutput::Generated(token)) => { self.n_tokens += 1; self.generated.push_str(&token) } Err(err) => { self.status = format!("error in worker {err:?}"); } } true } Msg::WorkerIn(inp) => { self.worker.send(inp); true } Msg::UpdateStatus(status) => { self.status = status; true } Msg::Refresh => true, } } fn view(&self, ctx: &Context<Self>) -> Html { use yew::TargetCast; let temperature = self.temperature.clone(); let oninput_temperature = ctx.link().callback(move |e: yew::InputEvent| { let input: web_sys::HtmlInputElement = e.target_unchecked_into(); if let Ok(temp) = f64::from_str(&input.value()) { *temperature.borrow_mut() = temp } Msg::Refresh }); let top_p = self.top_p.clone(); let oninput_top_p = ctx.link().callback(move |e: yew::InputEvent| { let input: web_sys::HtmlInputElement = e.target_unchecked_into(); if let Ok(top_p_input) = f64::from_str(&input.value()) { *top_p.borrow_mut() = top_p_input } Msg::Refresh }); let prompt = self.prompt.clone(); let oninput_prompt = ctx.link().callback(move |e: yew::InputEvent| { let input: web_sys::HtmlInputElement = e.target_unchecked_into(); *prompt.borrow_mut() = 
input.value(); Msg::Refresh }); html! { <div style="margin: 2%;"> <div><p>{"Running "} <a href="https://github.com/karpathy/llama2.c" target="_blank">{"llama2.c"}</a> {" in the browser using rust/wasm with "} <a href="https://github.com/huggingface/candle" target="_blank">{"candle!"}</a> </p> <p>{"Once the weights have loaded, click on the run button to start generating content."} </p> </div> {"temperature \u{00a0} "} <input type="range" min="0." max="1.2" step="0.1" value={self.temperature.borrow().to_string()} oninput={oninput_temperature} id="temp"/> {format!(" \u{00a0} {}", self.temperature.borrow())} <br/ > {"top_p \u{00a0} "} <input type="range" min="0." max="1.0" step="0.05" value={self.top_p.borrow().to_string()} oninput={oninput_top_p} id="top_p"/> {format!(" \u{00a0} {}", self.top_p.borrow())} <br/ > {"prompt: "}<input type="text" value={self.prompt.borrow().to_string()} oninput={oninput_prompt} id="prompt"/> <br/ > { if self.loaded{ html!(<button class="button" onclick={ctx.link().callback(move |_| Msg::Run)}> { "run" }</button>) }else{ html! { <progress id="progress-bar" aria-label="Loading weights..."></progress> } } } <br/ > <h3> {&self.status} </h3> { if self.current_decode.is_some() { html! { <progress id="progress-bar" aria-label="generatingโ€ฆ"></progress> } } else { html! {} } } <blockquote> <p> { self.generated.chars().map(|c| if c == '\r' || c == '\n' { html! { <br/> } } else { html! { {c} } }).collect::<Html>() } </p> </blockquote> </div> } } }
candle/candle-wasm-examples/llama2-c/src/app.rs/0
{ "file_path": "candle/candle-wasm-examples/llama2-c/src/app.rs", "repo_id": "candle", "token_count": 5458 }
39
## Running Whisper Examples Here, we provide two examples of how to run Whisper using a Candle-compiled WASM binary and runtimes. ### Pure Rust UI To build and test the UI made in Rust you will need [Trunk](https://trunkrs.dev/#install) From the `candle-wasm-examples/whisper` directory run: Download assets: ```bash # mel filters wget -c https://huggingface.co/spaces/lmz/candle-whisper/resolve/main/mel_filters.safetensors # Model and tokenizer tiny.en wget -c https://huggingface.co/openai/whisper-tiny.en/resolve/main/model.safetensors -P whisper-tiny.en wget -c https://huggingface.co/openai/whisper-tiny.en/raw/main/tokenizer.json -P whisper-tiny.en wget -c https://huggingface.co/openai/whisper-tiny.en/raw/main/config.json -P whisper-tiny.en # model and tokenizer tiny multilanguage wget -c https://huggingface.co/openai/whisper-tiny/resolve/main/model.safetensors -P whisper-tiny wget -c https://huggingface.co/openai/whisper-tiny/raw/main/tokenizer.json -P whisper-tiny wget -c https://huggingface.co/openai/whisper-tiny/raw/main/config.json -P whisper-tiny #quantized wget -c https://huggingface.co/lmz/candle-whisper/resolve/main/model-tiny-en-q80.gguf -P quantized wget -c https://huggingface.co/lmz/candle-whisper/raw/main/tokenizer-tiny-en.json -P quantized wget -c https://huggingface.co/lmz/candle-whisper/raw/main/config-tiny-en.json -P quantized # Audio samples wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_gb0.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_a13.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_gb1.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_hp0.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_jfk.wav -P audios wget -c https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_mm0.wav -P audios ``` Run hot reload server: ```bash trunk serve --release --public-url / --port 8080 ``` ### Vanilla JS and WebWorkers To build and test the UI made in Vanilla JS and WebWorkers, first we need to build the WASM library: ```bash sh build-lib.sh ``` This will bundle the library under `./build` and we can import it inside our WebWorker like a normal JS module: ```js import init, { Decoder } from "./build/m.js"; ``` The full example can be found under `./lib-example.html`. All needed assets are fetched from the web, so no need to download anything. Finally, you can preview the example by running a local HTTP server. For example: ```bash python -m http.server ``` Then open `http://localhost:8000/lib-example.html` in your browser.
candle/candle-wasm-examples/whisper/README.md/0
{ "file_path": "candle/candle-wasm-examples/whisper/README.md", "repo_id": "candle", "token_count": 1023 }
40
{ "moz:firefoxOptions": { "prefs": { "media.navigator.streams.fake": true, "media.navigator.permission.disabled": true }, "args": [] }, "goog:chromeOptions": { "args": [ "--use-fake-device-for-media-stream", "--use-fake-ui-for-media-stream" ] } }
candle/candle-wasm-tests/webdriver.json/0
{ "file_path": "candle/candle-wasm-tests/webdriver.json", "repo_id": "candle", "token_count": 143 }
41
# Prompt templates These are the templates used to format the conversation history for different models used in HuggingChat. Set them in your `.env.local` [like so](https://github.com/huggingface/chat-ui#chatprompttemplate). ## Llama 2 ```env <s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}} ``` ## CodeLlama ```env <s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}} ``` ## Falcon ```env System: {{preprompt}}\nUser:{{#each messages}}{{#ifUser}}{{content}}\nFalcon:{{/ifUser}}{{#ifAssistant}}{{content}}\nUser:{{/ifAssistant}}{{/each}} ``` ## Mistral ```env <s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s> {{/ifAssistant}}{{/each}} ``` ## Zephyr ```env <|system|>\n{{preprompt}}</s>\n{{#each messages}}{{#ifUser}}<|user|>\n{{content}}</s>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}</s>\n{{/ifAssistant}}{{/each}} ``` ## IDEFICS ```env {{#each messages}}{{#ifUser}}User: {{content}}{{/ifUser}}<end_of_utterance>\nAssistant: {{#ifAssistant}}{{content}}\n{{/ifAssistant}}{{/each}} ``` ## OpenChat ```env <s>{{#each messages}}{{#ifUser}}GPT4 User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Assistant: {{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}} ``` ## Mixtral ```env <s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}}</s> {{/ifAssistant}}{{/each}} ``` ## ChatML ```env {{#if @root.preprompt}}<|im_start|>system\n{{@root.preprompt}}<|im_end|>\n{{/if}}{{#each messages}}{{#ifUser}}<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n{{/ifUser}}{{#ifAssistant}}{{content}}<|im_end|>\n{{/ifAssistant}}{{/each}} ``` ## CodeLlama 70B ```env <s>{{#if @root.preprompt}}Source: system\n\n {{@root.preprompt}} <step> {{/if}}{{#each messages}}{{#ifUser}}Source: user\n\n {{content}} <step> {{/ifUser}}{{#ifAssistant}}Source: assistant\n\n {{content}} <step> {{/ifAssistant}}{{/each}}Source: assistant\nDestination: user\n\n `` ``` ## Gemma ```env {{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn>\n{{/ifAssistant}}{{/each}} ```
chat-ui/PROMPTS.md/0
{ "file_path": "chat-ui/PROMPTS.md", "repo_id": "chat-ui", "token_count": 1133 }
42
<script lang="ts"> export let title = ""; export let classNames = ""; </script> <div class="flex items-center rounded-xl bg-gray-100 p-1 text-sm dark:bg-gray-800 {classNames}"> <span class="mr-2 inline-flex items-center rounded-lg bg-gradient-to-br from-primary-300 px-2 py-1 text-xxs font-medium uppercase leading-3 text-primary-700 dark:from-primary-900 dark:text-primary-400" >New</span > {title} <div class="ml-auto shrink-0"> <slot /> </div> </div>
chat-ui/src/lib/components/AnnouncementBanner.svelte/0
{ "file_path": "chat-ui/src/lib/components/AnnouncementBanner.svelte", "repo_id": "chat-ui", "token_count": 184 }
43
<script lang="ts"> import { onMount, onDestroy } from "svelte"; let el: HTMLElement; onMount(() => { el.ownerDocument.body.appendChild(el); }); onDestroy(() => { if (el?.parentNode) { el.parentNode.removeChild(el); } }); </script> <div bind:this={el} class="contents" hidden> <slot /> </div>
chat-ui/src/lib/components/Portal.svelte/0
{ "file_path": "chat-ui/src/lib/components/Portal.svelte", "repo_id": "chat-ui", "token_count": 130 }
44
<script lang="ts"> export let classNames = ""; </script> <svg width="1em" height="1em" viewBox="0 0 15 6" class={classNames} fill="none" xmlns="http://www.w3.org/2000/svg" > <path d="M1.67236 1L7.67236 7L13.6724 1" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" /> </svg>
chat-ui/src/lib/components/icons/IconChevron.svelte/0
{ "file_path": "chat-ui/src/lib/components/icons/IconChevron.svelte", "repo_id": "chat-ui", "token_count": 156 }
45
import { MONGODB_URL, MONGODB_DB_NAME, MONGODB_DIRECT_CONNECTION } from "$env/static/private"; import { GridFSBucket, MongoClient } from "mongodb"; import type { Conversation } from "$lib/types/Conversation"; import type { SharedConversation } from "$lib/types/SharedConversation"; import type { AbortedGeneration } from "$lib/types/AbortedGeneration"; import type { Settings } from "$lib/types/Settings"; import type { User } from "$lib/types/User"; import type { MessageEvent } from "$lib/types/MessageEvent"; import type { Session } from "$lib/types/Session"; import type { Assistant } from "$lib/types/Assistant"; import type { Report } from "$lib/types/Report"; import type { ConversationStats } from "$lib/types/ConversationStats"; import type { MigrationResult } from "$lib/types/MigrationResult"; import type { Semaphore } from "$lib/types/Semaphore"; if (!MONGODB_URL) { throw new Error( "Please specify the MONGODB_URL environment variable inside .env.local. Set it to mongodb://localhost:27017 if you are running MongoDB locally, or to a MongoDB Atlas free instance for example." ); } export const CONVERSATION_STATS_COLLECTION = "conversations.stats"; const client = new MongoClient(MONGODB_URL, { directConnection: MONGODB_DIRECT_CONNECTION === "true", }); export const connectPromise = client.connect().catch(console.error); export function getCollections(mongoClient: MongoClient) { const db = mongoClient.db(MONGODB_DB_NAME + (import.meta.env.MODE === "test" ? "-test" : "")); const conversations = db.collection<Conversation>("conversations"); const conversationStats = db.collection<ConversationStats>(CONVERSATION_STATS_COLLECTION); const assistants = db.collection<Assistant>("assistants"); const reports = db.collection<Report>("reports"); const sharedConversations = db.collection<SharedConversation>("sharedConversations"); const abortedGenerations = db.collection<AbortedGeneration>("abortedGenerations"); const settings = db.collection<Settings>("settings"); const users = db.collection<User>("users"); const sessions = db.collection<Session>("sessions"); const messageEvents = db.collection<MessageEvent>("messageEvents"); const bucket = new GridFSBucket(db, { bucketName: "files" }); const migrationResults = db.collection<MigrationResult>("migrationResults"); const semaphores = db.collection<Semaphore>("semaphores"); return { conversations, conversationStats, assistants, reports, sharedConversations, abortedGenerations, settings, users, sessions, messageEvents, bucket, migrationResults, semaphores, }; } const db = client.db(MONGODB_DB_NAME + (import.meta.env.MODE === "test" ? "-test" : "")); const collections = getCollections(client); const { conversations, conversationStats, assistants, reports, sharedConversations, abortedGenerations, settings, users, sessions, messageEvents, semaphores, } = collections; export { client, db, collections }; client.on("open", () => { conversations .createIndex( { sessionId: 1, updatedAt: -1 }, { partialFilterExpression: { sessionId: { $exists: true } } } ) .catch(console.error); conversations .createIndex( { userId: 1, updatedAt: -1 }, { partialFilterExpression: { userId: { $exists: true } } } ) .catch(console.error); conversations .createIndex( { "message.id": 1, "message.ancestors": 1 }, { partialFilterExpression: { userId: { $exists: true } } } ) .catch(console.error); // To do stats on conversations conversations.createIndex({ updatedAt: 1 }).catch(console.error); // Not strictly necessary, could use _id, but more convenient. 
Also for stats conversations.createIndex({ createdAt: 1 }).catch(console.error); // To do stats on conversation messages conversations.createIndex({ "messages.createdAt": 1 }, { sparse: true }).catch(console.error); // Unique index for stats conversationStats .createIndex( { type: 1, "date.field": 1, "date.span": 1, "date.at": 1, distinct: 1, }, { unique: true } ) .catch(console.error); // Allow easy check of last computed stat for given type/dateField conversationStats .createIndex({ type: 1, "date.field": 1, "date.at": 1, }) .catch(console.error); abortedGenerations.createIndex({ updatedAt: 1 }, { expireAfterSeconds: 30 }).catch(console.error); abortedGenerations.createIndex({ conversationId: 1 }, { unique: true }).catch(console.error); sharedConversations.createIndex({ hash: 1 }, { unique: true }).catch(console.error); settings.createIndex({ sessionId: 1 }, { unique: true, sparse: true }).catch(console.error); settings.createIndex({ userId: 1 }, { unique: true, sparse: true }).catch(console.error); settings.createIndex({ assistants: 1 }).catch(console.error); users.createIndex({ hfUserId: 1 }, { unique: true }).catch(console.error); users.createIndex({ sessionId: 1 }, { unique: true, sparse: true }).catch(console.error); // No unicity because due to renames & outdated info from oauth provider, there may be the same username on different users users.createIndex({ username: 1 }).catch(console.error); messageEvents.createIndex({ createdAt: 1 }, { expireAfterSeconds: 60 }).catch(console.error); sessions.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 0 }).catch(console.error); sessions.createIndex({ sessionId: 1 }, { unique: true }).catch(console.error); assistants.createIndex({ createdById: 1, userCount: -1 }).catch(console.error); assistants.createIndex({ userCount: 1 }).catch(console.error); assistants.createIndex({ featured: 1, userCount: -1 }).catch(console.error); assistants.createIndex({ modelId: 1, userCount: -1 }).catch(console.error); assistants.createIndex({ searchTokens: 1 }).catch(console.error); reports.createIndex({ assistantId: 1 }).catch(console.error); reports.createIndex({ createdBy: 1, assistantId: 1 }).catch(console.error); // Unique index for semaphore and migration results semaphores.createIndex({ key: 1 }, { unique: true }).catch(console.error); semaphores.createIndex({ createdAt: 1 }, { expireAfterSeconds: 60 }).catch(console.error); });
chat-ui/src/lib/server/database.ts/0
{ "file_path": "chat-ui/src/lib/server/database.ts", "repo_id": "chat-ui", "token_count": 1994 }
46
import { smallModel } from "$lib/server/models";
import type { Conversation } from "$lib/types/Conversation";

export async function generateFromDefaultEndpoint({
	messages,
	preprompt,
}: {
	messages: Omit<Conversation["messages"][0], "id">[];
	preprompt?: string;
}): Promise<string> {
	const endpoint = await smallModel.getEndpoint();

	const tokenStream = await endpoint({ messages, preprompt });

	for await (const output of tokenStream) {
		// If generated_text is not set, the generation is not done yet.
		if (output.generated_text) {
			let generated_text = output.generated_text;
			for (const stop of [...(smallModel.parameters?.stop ?? []), "<|endoftext|>"]) {
				if (generated_text.endsWith(stop)) {
					generated_text = generated_text.slice(0, -stop.length).trimEnd();
				}
			}
			return generated_text;
		}
	}
	throw new Error("Generation failed");
}
chat-ui/src/lib/server/generateFromDefaultEndpoint.ts/0
{ "file_path": "chat-ui/src/lib/server/generateFromDefaultEndpoint.ts", "repo_id": "chat-ui", "token_count": 289 }
47
import { writable } from "svelte/store"; export const pendingMessage = writable< | { content: string; files: File[]; } | undefined >();
chat-ui/src/lib/stores/pendingMessage.ts/0
{ "file_path": "chat-ui/src/lib/stores/pendingMessage.ts", "repo_id": "chat-ui", "token_count": 56 }
48
import type { Timestamps } from "./Timestamps"; export interface Semaphore extends Timestamps { key: string; }
chat-ui/src/lib/types/Semaphore.ts/0
{ "file_path": "chat-ui/src/lib/types/Semaphore.ts", "repo_id": "chat-ui", "token_count": 35 }
49
export function formatUserCount(userCount: number): string { const userCountRanges: { min: number; max: number; label: string }[] = [ { min: 0, max: 1, label: "1" }, { min: 2, max: 9, label: "1-10" }, { min: 10, max: 49, label: "10+" }, { min: 50, max: 99, label: "50+" }, { min: 100, max: 299, label: "100+" }, { min: 300, max: 499, label: "300+" }, { min: 500, max: 999, label: "500+" }, { min: 1_000, max: 2_999, label: "1k+" }, { min: 3_000, max: 4_999, label: "3k+" }, { min: 5_000, max: 9_999, label: "5k+" }, { min: 10_000, max: Infinity, label: "10k+" }, ]; const range = userCountRanges.find(({ min, max }) => userCount >= min && userCount <= max); return range?.label ?? ""; }
chat-ui/src/lib/utils/formatUserCount.ts/0
{ "file_path": "chat-ui/src/lib/utils/formatUserCount.ts", "repo_id": "chat-ui", "token_count": 308 }
50
import { collections } from "$lib/server/database"; import { ObjectId } from "mongodb"; import { describe, expect, it } from "vitest"; import { insertLegacyConversation, insertSideBranchesConversation } from "./treeHelpers.spec"; import { addChildren } from "./addChildren"; import type { Message } from "$lib/types/Message"; const newMessage: Omit<Message, "id"> = { content: "new message", from: "user", }; Object.freeze(newMessage); describe("addChildren", async () => { it("should let you append on legacy conversations", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const convLength = conv.messages.length; addChildren(conv, newMessage, conv.messages[conv.messages.length - 1].id); expect(conv.messages.length).toEqual(convLength + 1); }); it("should not let you create branches on legacy conversations", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); expect(() => addChildren(conv, newMessage, conv.messages[0].id)).toThrow(); }); it("should not let you create a message that already exists", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const messageThatAlreadyExists: Message = { id: conv.messages[0].id, content: "new message", from: "user", }; expect(() => addChildren(conv, messageThatAlreadyExists, conv.messages[0].id)).toThrow(); }); it("should let you create branches on conversations with subtrees", async () => { const convId = await insertSideBranchesConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const nChildren = conv.messages[0].children?.length; if (!nChildren) throw new Error("No children found"); addChildren(conv, newMessage, conv.messages[0].id); expect(conv.messages[0].children?.length).toEqual(nChildren + 1); }); it("should let you create a new leaf", async () => { const convId = await insertSideBranchesConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const parentId = conv.messages[conv.messages.length - 1].id; const nChildren = conv.messages[conv.messages.length - 1].children?.length; if (nChildren === undefined) throw new Error("No children found"); expect(nChildren).toEqual(0); addChildren(conv, newMessage, parentId); expect(conv.messages[conv.messages.length - 2].children?.length).toEqual(nChildren + 1); }); it("should let you append to an empty conversation without specifying a parentId", async () => { const conv = { _id: new ObjectId(), rootMessageId: undefined, messages: [] as Message[], }; addChildren(conv, newMessage); expect(conv.messages.length).toEqual(1); expect(conv.rootMessageId).toEqual(conv.messages[0].id); }); it("should throw if you don't specify a parentId in a conversation with messages", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); expect(() => addChildren(conv, newMessage)).toThrow(); }); it("should return the id of the new message", async 
() => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); expect(addChildren(conv, newMessage, conv.messages[conv.messages.length - 1].id)).toEqual( conv.messages[conv.messages.length - 1].id ); }); });
chat-ui/src/lib/utils/tree/addChildren.spec.ts/0
{ "file_path": "chat-ui/src/lib/utils/tree/addChildren.spec.ts", "repo_id": "chat-ui", "token_count": 1301 }
51
import { json } from "@sveltejs/kit"; import type { ConversationStats } from "$lib/types/ConversationStats"; import { CONVERSATION_STATS_COLLECTION, collections } from "$lib/server/database.js"; // Triger like this: // curl -X POST "http://localhost:5173/chat/admin/stats/compute" -H "Authorization: Bearer <ADMIN_API_SECRET>" export async function POST() { for (const span of ["day", "week", "month"] as const) { computeStats({ dateField: "updatedAt", type: "conversation", span }).catch(console.error); computeStats({ dateField: "createdAt", type: "conversation", span }).catch(console.error); computeStats({ dateField: "createdAt", type: "message", span }).catch(console.error); } return json({}, { status: 202 }); } async function computeStats(params: { dateField: ConversationStats["date"]["field"]; span: ConversationStats["date"]["span"]; type: ConversationStats["type"]; }) { const lastComputed = await collections.conversationStats.findOne( { "date.field": params.dateField, "date.span": params.span, type: params.type }, { sort: { "date.at": -1 } } ); // If the last computed week is at the beginning of the last computed month, we need to include some days from the previous month // In those cases we need to compute the stats from before the last month as everything is one aggregation const minDate = lastComputed ? lastComputed.date.at : new Date(0); console.log("Computing stats for", params.type, params.span, params.dateField, "from", minDate); const dateField = params.type === "message" ? "messages." + params.dateField : params.dateField; const pipeline = [ { $match: { [dateField]: { $gte: minDate }, }, }, { $project: { [dateField]: 1, sessionId: 1, userId: 1, }, }, ...(params.type === "message" ? [ { $unwind: "$messages", }, { $match: { [dateField]: { $gte: minDate }, }, }, ] : []), { $sort: { [dateField]: 1, }, }, { $facet: { userId: [ { $match: { userId: { $exists: true }, }, }, { $group: { _id: { at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, userId: "$userId", }, }, }, { $group: { _id: "$_id.at", count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "userId", count: 1, }, }, ], sessionId: [ { $match: { sessionId: { $exists: true }, }, }, { $group: { _id: { at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, sessionId: "$sessionId", }, }, }, { $group: { _id: "$_id.at", count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "sessionId", count: 1, }, }, ], userOrSessionId: [ { $group: { _id: { at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, userOrSessionId: { $ifNull: ["$userId", "$sessionId"] }, }, }, }, { $group: { _id: "$_id.at", count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "userOrSessionId", count: 1, }, }, ], _id: [ { $group: { _id: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "_id", count: 1, }, }, ], }, }, { $project: { stats: { $concatArrays: ["$userId", "$sessionId", "$userOrSessionId", "$_id"], }, }, }, { $unwind: "$stats", }, { $replaceRoot: { newRoot: "$stats", }, }, { $set: { type: params.type, }, }, { $merge: { into: CONVERSATION_STATS_COLLECTION, on: ["date.at", "type", "date.span", "date.field", "distinct"], whenMatched: "replace", whenNotMatched: "insert", }, }, ]; 
await collections.conversations.aggregate(pipeline, { allowDiskUse: true }).next(); console.log("Computed stats for", params.type, params.span, params.dateField); }
chat-ui/src/routes/admin/stats/compute/+server.ts/0
{ "file_path": "chat-ui/src/routes/admin/stats/compute/+server.ts", "repo_id": "chat-ui", "token_count": 2379 }
52
import { authCondition } from "$lib/server/auth"; import { collections } from "$lib/server/database"; import { error } from "@sveltejs/kit"; import { ObjectId } from "mongodb"; import { z } from "zod"; export async function POST({ params, request, locals }) { const { score } = z .object({ score: z.number().int().min(-1).max(1), }) .parse(await request.json()); const conversationId = new ObjectId(params.id); const messageId = params.messageId; const document = await collections.conversations.updateOne( { _id: conversationId, ...authCondition(locals), "messages.id": messageId, }, { ...(score !== 0 ? { $set: { "messages.$.score": score, }, } : { $unset: { "messages.$.score": "" } }), } ); if (!document.matchedCount) { throw error(404, "Message not found"); } return new Response(); }
chat-ui/src/routes/conversation/[id]/message/[messageId]/vote/+server.ts/0
{ "file_path": "chat-ui/src/routes/conversation/[id]/message/[messageId]/vote/+server.ts", "repo_id": "chat-ui", "token_count": 337 }
53
import { redirect } from "@sveltejs/kit"; export const load = async ({ params }) => { throw redirect(302, "../conversation/" + params.id); };
chat-ui/src/routes/r/[id]/+page.ts/0
{ "file_path": "chat-ui/src/routes/r/[id]/+page.ts", "repo_id": "chat-ui", "token_count": 46 }
54
<script lang="ts"> import { base } from "$app/paths"; import { clickOutside } from "$lib/actions/clickOutside"; import { afterNavigate, goto } from "$app/navigation"; import { useSettingsStore } from "$lib/stores/settings"; import CarbonCheckmark from "~icons/carbon/checkmark"; import { fade, fly } from "svelte/transition"; let previousPage: string = base; afterNavigate(({ from }) => { if (!from?.url.pathname.includes("settings")) { previousPage = from?.url.toString() || previousPage; } }); const settings = useSettingsStore(); </script> <div class="fixed inset-0 z-20 flex items-center justify-center bg-black/80 backdrop-blur-sm dark:bg-black/50" in:fade > <dialog in:fly={{ y: 100 }} open use:clickOutside={() => { goto(previousPage); }} class="h-[95dvh] w-[90dvw] overflow-hidden rounded-2xl bg-white shadow-2xl outline-none sm:h-[80dvh] xl:w-[1200px] 2xl:h-[70dvh]" > <slot /> {#if $settings.recentlySaved} <div class="absolute bottom-4 right-4 m-2 flex items-center gap-1.5 rounded-full border border-gray-300 bg-gray-200 px-3 py-1 text-black" > <CarbonCheckmark class="text-green-500" /> Saved </div> {/if} </dialog> </div>
chat-ui/src/routes/settings/+layout.svelte/0
{ "file_path": "chat-ui/src/routes/settings/+layout.svelte", "repo_id": "chat-ui", "token_count": 482 }
55
import adapter from "@sveltejs/adapter-node"; import { vitePreprocess } from "@sveltejs/kit/vite"; import dotenv from "dotenv"; dotenv.config({ path: "./.env.local" }); dotenv.config({ path: "./.env" }); process.env.PUBLIC_VERSION = process.env.npm_package_version; /** @type {import('@sveltejs/kit').Config} */ const config = { // Consult https://kit.svelte.dev/docs/integrations#preprocessors // for more information about preprocessors preprocess: vitePreprocess(), kit: { adapter: adapter(), paths: { base: process.env.APP_BASE || "", }, csrf: { // handled in hooks.server.ts, because we can have multiple valid origins checkOrigin: false, }, }, }; export default config;
chat-ui/svelte.config.js/0
{ "file_path": "chat-ui/svelte.config.js", "repo_id": "chat-ui", "token_count": 253 }
56
import json import os from dataclasses import dataclass import numpy as np import pyarrow as pa import datasets from utils import get_duration SPEED_TEST_N_EXAMPLES = 100_000_000_000 SPEED_TEST_CHUNK_SIZE = 10_000 RESULTS_BASEPATH, RESULTS_FILENAME = os.path.split(__file__) RESULTS_FILE_PATH = os.path.join(RESULTS_BASEPATH, "results", RESULTS_FILENAME.replace(".py", ".json")) def generate_100B_dataset(num_examples: int, chunk_size: int) -> datasets.Dataset: table = pa.Table.from_pydict({"col": [0] * chunk_size}) table = pa.concat_tables([table] * (num_examples // chunk_size)) return datasets.Dataset(table, fingerprint="table_100B") @dataclass class RandIter: low: int high: int size: int seed: int def __post_init__(self): rng = np.random.default_rng(self.seed) self._sampled_values = rng.integers(low=self.low, high=self.high, size=self.size).tolist() def __iter__(self): return iter(self._sampled_values) def __len__(self): return self.size @get_duration def get_first_row(dataset: datasets.Dataset): _ = dataset[0] @get_duration def get_last_row(dataset: datasets.Dataset): _ = dataset[-1] @get_duration def get_batch_of_1024_rows(dataset: datasets.Dataset): _ = dataset[range(len(dataset) // 2, len(dataset) // 2 + 1024)] @get_duration def get_batch_of_1024_random_rows(dataset: datasets.Dataset): _ = dataset[RandIter(0, len(dataset), 1024, seed=42)] def benchmark_table_100B(): times = {"num examples": SPEED_TEST_N_EXAMPLES} functions = (get_first_row, get_last_row, get_batch_of_1024_rows, get_batch_of_1024_random_rows) print("generating dataset") dataset = generate_100B_dataset(num_examples=SPEED_TEST_N_EXAMPLES, chunk_size=SPEED_TEST_CHUNK_SIZE) print("Functions") for func in functions: print(func.__name__) times[func.__name__] = func(dataset) with open(RESULTS_FILE_PATH, "wb") as f: f.write(json.dumps(times).encode("utf-8")) if __name__ == "__main__": # useful to run the profiler benchmark_table_100B()
datasets/benchmarks/benchmark_getitem_100B.py/0
{ "file_path": "datasets/benchmarks/benchmark_getitem_100B.py", "repo_id": "datasets", "token_count": 867 }
57
# Datasets 🤝 Arrow

## What is Arrow?

[Arrow](https://arrow.apache.org/) enables large amounts of data to be processed and moved quickly. It is a specific data format that stores data in a columnar memory layout. This provides several significant advantages:

* Arrow's standard format allows [zero-copy reads](https://en.wikipedia.org/wiki/Zero-copy) which removes virtually all serialization overhead.
* Arrow is language-agnostic so it supports different programming languages.
* Arrow is column-oriented so it is faster at querying and processing slices or columns of data.
* Arrow allows for copy-free hand-offs to standard machine learning tools such as NumPy, Pandas, PyTorch, and TensorFlow.
* Arrow supports many, possibly nested, column types.

## Memory-mapping

🤗 Datasets uses Arrow for its local caching system. It allows datasets to be backed by an on-disk cache, which is memory-mapped for fast lookup.
This architecture allows for large datasets to be used on machines with relatively small device memory.

For example, loading the full English Wikipedia dataset only takes a few MB of RAM:

```python
>>> import os; import psutil; import timeit
>>> from datasets import load_dataset

# Process.memory_info is expressed in bytes, so convert to megabytes
>>> mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
>>> wiki = load_dataset("wikipedia", "20220301.en", split="train")
>>> mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)

>>> print(f"RAM memory used: {(mem_after - mem_before)} MB")
RAM memory used: 50 MB
```

This is possible because the Arrow data is actually memory-mapped from disk, and not loaded in memory.
Memory-mapping allows access to data on disk, and leverages virtual memory capabilities for fast lookups.

## Performance

Iterating over a memory-mapped dataset using Arrow is fast. Iterating over Wikipedia on a laptop gives you speeds of 1-3 Gbit/s:

```python
>>> s = """batch_size = 1000
... for batch in wiki.iter(batch_size):
...     ...
... """

>>> elapsed_time = timeit.timeit(stmt=s, number=1, globals=globals())
>>> print(f"Time to iterate over the {wiki.dataset_size >> 30} GB dataset: {elapsed_time:.1f} sec, "
...       f"ie. {float(wiki.dataset_size >> 27)/elapsed_time:.1f} Gb/s")
Time to iterate over the 18 GB dataset: 31.8 sec, ie. 4.8 Gb/s
```
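As a small sketch of the copy-free hand-offs mentioned above (reusing the `wiki` dataset loaded earlier on this page; how much copying is avoided depends on the column types), the same memory-mapped Arrow data can be exposed to other tools:

```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.en", split="train")

# The Arrow table backing the dataset is exposed directly.
table = wiki.data

# A formatted view hands rows and batches to NumPy ("numpy"), PyTorch ("torch"),
# TensorFlow ("tf") or pandas ("pandas") on access, reusing the Arrow buffers
# where the column types allow it.
wiki_np = wiki.with_format("numpy")
first_row = wiki_np[0]

# A small slice can also be converted to a pandas DataFrame for inspection.
df = wiki.select(range(1_000)).to_pandas()
```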
datasets/docs/source/about_arrow.md/0
{ "file_path": "datasets/docs/source/about_arrow.md", "repo_id": "datasets", "token_count": 682 }
58
# Depth estimation Depth estimation datasets are used to train a model to approximate the relative distance of every pixel in an image from the camera, also known as depth. The applications enabled by these datasets primarily lie in areas like visual machine perception and perception in robotics. Example applications include mapping streets for self-driving cars. This guide will show you how to apply transformations to a depth estimation dataset. Before you start, make sure you have up-to-date versions of `albumentations` installed: ```bash pip install -U albumentations ``` [Albumentations](https://albumentations.ai/) is a Python library for performing data augmentation for computer vision. It supports various computer vision tasks such as image classification, object detection, segmentation, and keypoint estimation. This guide uses the [NYU Depth V2](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2) dataset which is comprised of video sequences from various indoor scenes, recorded by RGB and depth cameras. The dataset consists of scenes from 3 cities and provides images along with their depth maps as labels. Load the `train` split of the dataset and take a look at an example: ```py >>> from datasets import load_dataset >>> train_dataset = load_dataset("sayakpaul/nyu_depth_v2", split="train") >>> index = 17 >>> example = train_dataset[index] >>> example {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=640x480>, 'depth_map': <PIL.TiffImagePlugin.TiffImageFile image mode=F size=640x480>} ``` The dataset has two fields: * `image`: a PIL PNG image object with `uint8` data type. * `depth_map`: a PIL Tiff image object with `float32` data type which is the depth map of the image. It is mention-worthy that JPEG/PNG format can only store `uint8` or `uint16` data. As the depth map is `float32` data, it can't be stored using PNG/JPEG. However, we can save the depth map using TIFF format as it supports a wider range of data types, including `float32` data. Next, check out an image with: ```py >>> example["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_sample.png"> </div> Before we look at the depth map, we need to first convert its data type to `uint8` using `.convert('RGB')` as PIL can't display `float32` images. Now take a look at its corresponding depth map: ```py >>> example["depth_map"].convert("RGB") ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_target.png"> </div> It's all black! You'll need to add some color to the depth map to visualize it properly. To do that, either we can apply color automatically during display using `plt.imshow()` or create a colored depth map using `plt.cm` and then display it. In this example, we have used the latter one, as we can save/write the colored depth map later. (the utility below is taken from the [FastDepth repository](https://github.com/dwofk/fast-depth/blob/master/utils.py)). ```py >>> import numpy as np >>> import matplotlib.pyplot as plt >>> cmap = plt.cm.viridis >>> def colored_depthmap(depth, d_min=None, d_max=None): ... if d_min is None: ... d_min = np.min(depth) ... if d_max is None: ... d_max = np.max(depth) ... depth_relative = (depth - d_min) / (d_max - d_min) ... return 255 * cmap(depth_relative)[:,:,:3] >>> def show_depthmap(depth_map): ... if not isinstance(depth_map, np.ndarray): ... depth_map = np.array(depth_map) ... 
if depth_map.ndim == 3: ... depth_map = depth_map.squeeze() ... d_min = np.min(depth_map) ... d_max = np.max(depth_map) ... depth_map = colored_depthmap(depth_map, d_min, d_max) ... plt.imshow(depth_map.astype("uint8")) ... plt.axis("off") ... plt.show() >>> show_depthmap(example["depth_map"]) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_target_viz.png"> </div> You can also visualize several different images and their corresponding depth maps. ```py >>> def merge_into_row(input_image, depth_target): ... if not isinstance(input_image, np.ndarray): ... input_image = np.array(input_image) ... ... d_min = np.min(depth_target) ... d_max = np.max(depth_target) ... depth_target_col = colored_depthmap(depth_target, d_min, d_max) ... img_merge = np.hstack([input_image, depth_target_col]) ... ... return img_merge >>> random_indices = np.random.choice(len(train_dataset), 9).tolist() >>> plt.figure(figsize=(15, 6)) >>> for i, idx in enumerate(random_indices): ... example = train_dataset[idx] ... ax = plt.subplot(3, 3, i + 1) ... image_viz = merge_into_row( ... example["image"], example["depth_map"] ... ) ... plt.imshow(image_viz.astype("uint8")) ... plt.axis("off") ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_collage.png"> </div> Now apply some augmentations with `albumentations`. The augmentation transformations include: * Random horizontal flipping * Random cropping * Random brightness and contrast * Random gamma correction * Random hue saturation ```py >>> import albumentations as A >>> crop_size = (448, 576) >>> transforms = [ ... A.HorizontalFlip(p=0.5), ... A.RandomCrop(crop_size[0], crop_size[1]), ... A.RandomBrightnessContrast(), ... A.RandomGamma(), ... A.HueSaturationValue() ... ] ``` Additionally, define a mapping to better reflect the target key name. ```py >>> additional_targets = {"depth": "mask"} >>> aug = A.Compose(transforms=transforms, additional_targets=additional_targets) ``` With `additional_targets` defined, you can pass the target depth maps to the `depth` argument of `aug` instead of `mask`. You'll notice this change in the `apply_transforms()` function defined below. Create a function to apply the transformation to the images as well as their depth maps: ```py >>> def apply_transforms(examples): ... transformed_images, transformed_maps = [], [] ... for image, depth_map in zip(examples["image"], examples["depth_map"]): ... image, depth_map = np.array(image), np.array(depth_map) ... transformed = aug(image=image, depth=depth_map) ... transformed_images.append(transformed["image"]) ... transformed_maps.append(transformed["depth"]) ... ... examples["pixel_values"] = transformed_images ... examples["labels"] = transformed_maps ... 
return examples ``` Use the [`~Dataset.set_transform`] function to apply the transformation on-the-fly to batches of the dataset to consume less disk space: ```py >>> train_dataset.set_transform(apply_transforms) ``` You can verify the transformation worked by indexing into the `pixel_values` and `labels` of an example image: ```py >>> example = train_dataset[index] >>> plt.imshow(example["pixel_values"]) >>> plt.axis("off") >>> plt.show() ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_sample_aug.png"> </div> Visualize the same transformation on the image's corresponding depth map: ```py >>> show_depthmap(example["labels"]) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_target_aug.png"> </div> You can also visualize multiple training samples reusing the previous `random_indices`: ```py >>> plt.figure(figsize=(15, 6)) >>> for i, idx in enumerate(random_indices): ... ax = plt.subplot(3, 3, i + 1) ... example = train_dataset[idx] ... image_viz = merge_into_row( ... example["pixel_values"], example["labels"] ... ) ... plt.imshow(image_viz.astype("uint8")) ... plt.axis("off") ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/depth_est_aug_collage.png"> </div>
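If you would rather materialize the augmented examples than compute them on the fly with [`~Dataset.set_transform`], a rough alternative (assuming the `apply_transforms` function defined above, and at the cost of extra disk space) is to write them to the cache with [`~Dataset.map`]:

```py
>>> # call this on a dataset that does not have the on-the-fly transform set
>>> augmented_dataset = train_dataset.map(apply_transforms, batched=True)
```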
datasets/docs/source/depth_estimation.mdx/0
{ "file_path": "datasets/docs/source/depth_estimation.mdx", "repo_id": "datasets", "token_count": 2848 }
59
# Load text data This guide shows you how to load text datasets. To learn how to load any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration-2 font-semibold" href="./loading">general loading guide</a>. Text files are one of the most common file types for storing a dataset. By default, ๐Ÿค— Datasets samples a text file line by line to build the dataset. ```py >>> from datasets import load_dataset >>> dataset = load_dataset("text", data_files={"train": ["my_text_1.txt", "my_text_2.txt"], "test": "my_test_file.txt"}) # Load from a directory >>> dataset = load_dataset("text", data_dir="path/to/text/dataset") ``` To sample a text file by paragraph or even an entire document, use the `sample_by` parameter: ```py # Sample by paragraph >>> dataset = load_dataset("text", data_files={"train": "my_train_file.txt", "test": "my_test_file.txt"}, sample_by="paragraph") # Sample by document >>> dataset = load_dataset("text", data_files={"train": "my_train_file.txt", "test": "my_test_file.txt"}, sample_by="document") ``` You can also use grep patterns to load specific files: ```py >>> from datasets import load_dataset >>> c4_subset = load_dataset("allenai/c4", data_files="en/c4-train.0000*-of-01024.json.gz") ``` To load remote text files via HTTP, pass the URLs instead: ```py >>> dataset = load_dataset("text", data_files="https://huggingface.co/datasets/lhoestq/test/resolve/main/some_text.txt") ```
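For corpora that are too large to download or read eagerly, streaming is one option; a small sketch reusing the file names above (which are placeholders) is:

```py
>>> dataset = load_dataset("text", data_files={"train": "my_train_file.txt"}, streaming=True)
>>> next(iter(dataset["train"]))  # yields one line at a time without loading the whole file
{'text': '...'}
```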
datasets/docs/source/nlp_load.mdx/0
{ "file_path": "datasets/docs/source/nlp_load.mdx", "repo_id": "datasets", "token_count": 482 }
60
# Troubleshooting

This guide aims to provide you with the tools and knowledge required to navigate some common issues. If the suggestions listed in this guide do not cover your situation, please refer to the [Asking for Help](#asking-for-help) section to learn where to find help with your specific issue.

## Issues when uploading datasets with `push_to_hub`

### Authentication issues

If you are experiencing authentication issues when sharing a dataset on 🤗 Hub using [`Dataset.push_to_hub`] and a Hugging Face access token:

* Make sure that the Hugging Face token you're using to authenticate yourself is a token with **write** permission.
* On OSX, it may help to clean up all the huggingface.co passwords on your keychain access, as well as reconfigure `git config --global credential.helper osxkeychain`, before using `huggingface-cli login`.

Alternatively, you can use SSH keys to authenticate yourself - read more in the [🤗 Hub documentation](https://huggingface.co/docs/hub/security-git-ssh).

### Lost connection on large dataset upload

When uploading large datasets to the Hub, if the number of dataset shards is large, it can create too many commits for the Hub in a short period. This will result in a connection error. The connection error can also be caused by an HTTP 500 error returned by the AWS S3 bucket that the Hub uses internally. In either situation, you can re-run [`Dataset.push_to_hub`] to proceed with the dataset upload. The Hub will check the SHAs of already uploaded shards to avoid reuploading them. We are working on making the upload process more robust to transient errors, so updating to the latest library version is always a good idea.

### `Too Many Requests`

Uploading large datasets via `push_to_hub()` can result in an error:

```bash
HfHubHTTPError: 429 Client Error: Too Many Requests for url: ...
You have exceeded our hourly quotas for action: commit. We invite you to retry later.
```

If you encounter this issue, you need to upgrade the `datasets` library to the latest version (or at least `2.15.0`).

## Issues when creating datasets from custom data

### Loading images and audio from a folder

When creating a dataset from a folder, one of the most common issues is that the file structure does not follow the expected format, or there's an issue with the metadata file.

Learn more about the required folder structure in the corresponding documentation pages:

* [AudioFolder](https://huggingface.co/docs/datasets/audio_dataset#audiofolder)
* [ImageFolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)

### Pickling issues

#### Pickling issues when using `Dataset.from_generator`

When creating a dataset, [`IterableDataset.from_generator`] and [`Dataset.from_generator`] expect a "picklable" generator function. This is required to hash the function using [`pickle`](https://docs.python.org/3/library/pickle.html) to be able to cache the dataset on disk.

While generator functions are generally "picklable", note that generator objects are not. So if you're using a generator object, you will encounter a `TypeError` like this:

```bash
TypeError: cannot pickle 'generator' object
```

This error can also occur when using a generator function that uses a global object that is not "picklable", such as a DB connection, for example. If that's the case, you can initialize such an object directly inside the generator function to avoid this error.

#### Pickling issues with `Dataset.map`

Pickling errors can also happen in the multiprocess [`Dataset.map`] - objects are pickled to be passed to child processes. If the objects used in the transformation are not picklable, it's not possible to cache the result of `map`, which leads to an error being raised.

Here are some ways to address this issue:

* A universal solution to pickle issues is to make sure the objects (or generator classes) are picklable manually by implementing `__getstate__` / `__setstate__` / `__reduce__`.
* You can also provide your own unique hash in `map` with the `new_fingerprint` argument.
* You can also disable caching by calling `datasets.disable_caching()`, however, this is undesirable - [read more about the importance of the cache](cache).

## Asking for help

If the above troubleshooting advice did not help you resolve your issue, reach out to the community and the team for help.

### Forums

Ask for help on the Hugging Face forums - post your question in the [🤗Datasets category](https://discuss.huggingface.co/c/datasets/10). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved!

### Discord

Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you.

### Community Discussions on 🤗 Hub

If you are facing issues creating a custom dataset with a script on the Hub, you can ask the Hugging Face team for help by opening a discussion in the Community tab of your dataset with this message:

```text
# Dataset review request for <Dataset name>

## Description

<brief description of the dataset>

## Files to review

- file1
- file2
- ...

cc @lhoestq @polinaeterna @mariosasko @albertvillanova
```

### GitHub Issues

Finally, if you suspect you have found a bug related to the library itself, create an Issue on the 🤗 Datasets [GitHub repository](https://github.com/huggingface/datasets/issues). Include context regarding the bug: a code snippet to reproduce it, details about your environment and data, etc. to help us figure out what's wrong and how we can fix it.
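To illustrate the pickling guidance from the section above, here is a small sketch (the file name is a placeholder) of the difference between passing a generator function and a generator object to [`Dataset.from_generator`]:

```python
from datasets import Dataset


def gen():
    # a picklable generator *function*: open non-picklable resources inside it
    with open("my_data.txt", encoding="utf-8") as f:
        for line in f:
            yield {"text": line.strip()}


ds = Dataset.from_generator(gen)      # OK: the function itself is hashed for caching
# ds = Dataset.from_generator(gen())  # TypeError: cannot pickle 'generator' object
```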
datasets/docs/source/troubleshoot.mdx/0
{ "file_path": "datasets/docs/source/troubleshoot.mdx", "repo_id": "datasets", "token_count": 1470 }
61
# Metric Card for CER

## Metric description

Character error rate (CER) is a common metric of the performance of an automatic speech recognition (ASR) system. CER is similar to Word Error Rate (WER), but operates on characters instead of words.

Character error rate can be computed as:

`CER = (S + D + I) / N = (S + D + I) / (S + D + C)`

where

`S` is the number of substitutions,
`D` is the number of deletions,
`I` is the number of insertions,
`C` is the number of correct characters,
`N` is the number of characters in the reference (`N=S+D+C`).

## How to use

The metric takes two inputs: references (a list of references for each speech input) and predictions (a list of transcriptions to score).

```python
from datasets import load_metric
cer = load_metric("cer")
cer_score = cer.compute(predictions=predictions, references=references)
```

## Output values

This metric outputs a float representing the character error rate.

```
print(cer_score)
0.34146341463414637
```

The **lower** the CER value, the **better** the performance of the ASR system, with a CER of 0 being a perfect score. However, CER's output is not always a number between 0 and 1, in particular when there is a high number of insertions (see [Examples](#Examples) below).

### Values from popular papers

This metric is highly dependent on the content and quality of the dataset, and therefore users can expect very different values for the same model but on different datasets.

Multilingual datasets such as [Common Voice](https://huggingface.co/datasets/common_voice) report different CERs depending on the language, ranging from 0.02-0.03 for languages such as French and Italian, to 0.05-0.07 for English (see [here](https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonVoice/ASR/CTC) for more values).

## Examples

Perfect match between prediction and reference:

```python
from datasets import load_metric
cer = load_metric("cer")
predictions = ["hello world", "good night moon"]
references = ["hello world", "good night moon"]
cer_score = cer.compute(predictions=predictions, references=references)
print(cer_score)
0.0
```

Partial match between prediction and reference:

```python
from datasets import load_metric
cer = load_metric("cer")
predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]
cer_score = cer.compute(predictions=predictions, references=references)
print(cer_score)
0.34146341463414637
```

No match between prediction and reference:

```python
from datasets import load_metric
cer = load_metric("cer")
predictions = ["hello"]
references = ["gracias"]
cer_score = cer.compute(predictions=predictions, references=references)
print(cer_score)
1.0
```

CER above 1 due to insertion errors:

```python
from datasets import load_metric
cer = load_metric("cer")
predictions = ["hello world"]
references = ["hello"]
cer_score = cer.compute(predictions=predictions, references=references)
print(cer_score)
1.2
```

## Limitations and bias

CER is useful for comparing different models for tasks such as automatic speech recognition (ASR) and optical character recognition (OCR), especially for multilingual datasets where WER is not suitable given the diversity of languages. However, CER provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.

Also, in some cases, instead of reporting the raw CER, a normalized CER is reported where the number of mistakes is divided by the sum of the number of edit operations (`I` + `S` + `D`) and `C` (the number of correct characters), which results in CER values that fall within the range of 0–100%.

## Citation

```bibtex
@inproceedings{morris2004,
  author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
  year = {2004},
  month = {01},
  pages = {},
  title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
}
```

## Further References

- [Hugging Face Tasks -- Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition)
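To make the `CER = (S + D + I) / N` formula from the metric description concrete, here is a minimal, self-contained sketch (a simplification for illustration, not the metric's actual implementation) based on character-level Levenshtein distance:

```python
def simple_cer(prediction: str, reference: str) -> float:
    """Sketch of CER = (S + D + I) / N computed with a character-level edit-distance DP."""
    n, m = len(reference), len(prediction)
    # dp[i][j] = minimal number of edits to turn reference[:i] into prediction[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # i deletions
    for j in range(m + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == prediction[j - 1] else 1  # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[n][m] / n  # (S + D + I) divided by the number of reference characters


print(simple_cer("hello world", "hello world"))  # 0.0
```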
datasets/metrics/cer/README.md/0
{ "file_path": "datasets/metrics/cer/README.md", "repo_id": "datasets", "token_count": 1192 }
62
"""Official evaluation script for CUAD dataset.""" import argparse import json import re import string import sys import numpy as np IOU_THRESH = 0.5 def get_jaccard(prediction, ground_truth): remove_tokens = [".", ",", ";", ":"] for token in remove_tokens: ground_truth = ground_truth.replace(token, "") prediction = prediction.replace(token, "") ground_truth, prediction = ground_truth.lower(), prediction.lower() ground_truth, prediction = ground_truth.replace("/", " "), prediction.replace("/", " ") ground_truth, prediction = set(ground_truth.split(" ")), set(prediction.split(" ")) intersection = ground_truth.intersection(prediction) union = ground_truth.union(prediction) jaccard = len(intersection) / len(union) return jaccard def normalize_answer(s): """Lower text and remove punctuation, articles and extra whitespace.""" def remove_articles(text): return re.sub(r"\b(a|an|the)\b", " ", text) def white_space_fix(text): return " ".join(text.split()) def remove_punc(text): exclude = set(string.punctuation) return "".join(ch for ch in text if ch not in exclude) def lower(text): return text.lower() return white_space_fix(remove_articles(remove_punc(lower(s)))) def compute_precision_recall(predictions, ground_truths, qa_id): tp, fp, fn = 0, 0, 0 substr_ok = "Parties" in qa_id # first check if ground truth is empty if len(ground_truths) == 0: if len(predictions) > 0: fp += len(predictions) # false positive for each one else: for ground_truth in ground_truths: assert len(ground_truth) > 0 # check if there is a match match_found = False for pred in predictions: if substr_ok: is_match = get_jaccard(pred, ground_truth) >= IOU_THRESH or ground_truth in pred else: is_match = get_jaccard(pred, ground_truth) >= IOU_THRESH if is_match: match_found = True if match_found: tp += 1 else: fn += 1 # now also get any fps by looping through preds for pred in predictions: # Check if there's a match. if so, don't count (don't want to double count based on the above) # but if there's no match, then this is a false positive. # (Note: we get the true positives in the above loop instead of this loop so that we don't double count # multiple predictions that are matched with the same answer.) match_found = False for ground_truth in ground_truths: assert len(ground_truth) > 0 if substr_ok: is_match = get_jaccard(pred, ground_truth) >= IOU_THRESH or ground_truth in pred else: is_match = get_jaccard(pred, ground_truth) >= IOU_THRESH if is_match: match_found = True if not match_found: fp += 1 precision = tp / (tp + fp) if tp + fp > 0 else np.nan recall = tp / (tp + fn) if tp + fn > 0 else np.nan return precision, recall def process_precisions(precisions): """ Processes precisions to ensure that precision and recall don't both get worse. 
Assumes the list precision is sorted in order of recalls """ precision_best = precisions[::-1] for i in range(1, len(precision_best)): precision_best[i] = max(precision_best[i - 1], precision_best[i]) precisions = precision_best[::-1] return precisions def get_aupr(precisions, recalls): processed_precisions = process_precisions(precisions) aupr = np.trapz(processed_precisions, recalls) if np.isnan(aupr): return 0 return aupr def get_prec_at_recall(precisions, recalls, recall_thresh): """Assumes recalls are sorted in increasing order""" processed_precisions = process_precisions(precisions) prec_at_recall = 0 for prec, recall in zip(processed_precisions, recalls): if recall >= recall_thresh: prec_at_recall = prec break return prec_at_recall def exact_match_score(prediction, ground_truth): return normalize_answer(prediction) == normalize_answer(ground_truth) def metric_max_over_ground_truths(metric_fn, predictions, ground_truths): score = 0 for pred in predictions: for ground_truth in ground_truths: score = metric_fn(pred, ground_truth) if score == 1: # break the loop when one prediction matches the ground truth break if score == 1: break return score def evaluate(dataset, predictions): f1 = exact_match = total = 0 precisions = [] recalls = [] for article in dataset: for paragraph in article["paragraphs"]: for qa in paragraph["qas"]: total += 1 if qa["id"] not in predictions: message = "Unanswered question " + qa["id"] + " will receive score 0." print(message, file=sys.stderr) continue ground_truths = [x["text"] for x in qa["answers"]] prediction = predictions[qa["id"]] precision, recall = compute_precision_recall(prediction, ground_truths, qa["id"]) precisions.append(precision) recalls.append(recall) if precision == 0 and recall == 0: f1 += 0 else: f1 += 2 * (precision * recall) / (precision + recall) exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths) precisions = [x for _, x in sorted(zip(recalls, precisions))] recalls.sort() f1 = 100.0 * f1 / total exact_match = 100.0 * exact_match / total aupr = get_aupr(precisions, recalls) prec_at_90_recall = get_prec_at_recall(precisions, recalls, recall_thresh=0.9) prec_at_80_recall = get_prec_at_recall(precisions, recalls, recall_thresh=0.8) return { "exact_match": exact_match, "f1": f1, "aupr": aupr, "prec_at_80_recall": prec_at_80_recall, "prec_at_90_recall": prec_at_90_recall, } if __name__ == "__main__": parser = argparse.ArgumentParser(description="Evaluation for CUAD") parser.add_argument("dataset_file", help="Dataset file") parser.add_argument("prediction_file", help="Prediction File") args = parser.parse_args() with open(args.dataset_file) as dataset_file: dataset_json = json.load(dataset_file) dataset = dataset_json["data"] with open(args.prediction_file) as prediction_file: predictions = json.load(prediction_file) print(json.dumps(evaluate(dataset, predictions)))
datasets/metrics/cuad/evaluate.py/0
{ "file_path": "datasets/metrics/cuad/evaluate.py", "repo_id": "datasets", "token_count": 3035 }
63
# Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mahalanobis metric."""

import numpy as np

import datasets


_DESCRIPTION = """
Compute the Mahalanobis Distance

The Mahalanobis distance is the distance between a point and a distribution, not between two distinct points.
It is effectively a multivariate equivalent of the Euclidean distance.
It was introduced by Prof. P. C. Mahalanobis in 1936
and has been used in various statistical applications ever since
[source: https://www.machinelearningplus.com/statistics/mahalanobis-distance/]
"""

_CITATION = """\
@article{de2000mahalanobis,
  title={The mahalanobis distance},
  author={De Maesschalck, Roy and Jouan-Rimbaud, Delphine and Massart, D{\'e}sir{\'e} L},
  journal={Chemometrics and intelligent laboratory systems},
  volume={50},
  number={1},
  pages={1--18},
  year={2000},
  publisher={Elsevier}
}
"""

_KWARGS_DESCRIPTION = """
Args:
    X: List of datapoints to be compared with the `reference_distribution`.
    reference_distribution: List of datapoints from the reference distribution we want to compare to.
Returns:
    mahalanobis: The Mahalanobis distance for each datapoint in `X`.
Examples:

    >>> mahalanobis_metric = datasets.load_metric("mahalanobis")
    >>> results = mahalanobis_metric.compute(reference_distribution=[[0, 1], [1, 0]], X=[[0, 1]])
    >>> print(results)
    {'mahalanobis': array([0.5])}
"""


@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class Mahalanobis(datasets.Metric):
    def _info(self):
        return datasets.MetricInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            features=datasets.Features(
                {
                    "X": datasets.Sequence(datasets.Value("float", id="sequence"), id="X"),
                }
            ),
        )

    def _compute(self, X, reference_distribution):
        # convert to numpy arrays
        X = np.array(X)
        reference_distribution = np.array(reference_distribution)

        # make sure the arrays are 2D
        if len(X.shape) != 2:
            raise ValueError("Expected `X` to be a 2D vector")
        if len(reference_distribution.shape) != 2:
            raise ValueError("Expected `reference_distribution` to be a 2D vector")
        if reference_distribution.shape[0] < 2:
            raise ValueError(
                "Expected `reference_distribution` to be a 2D vector with more than one element in the first dimension"
            )

        # get the Mahalanobis distance for each datapoint in X,
        # subtracting the per-feature mean of the reference distribution
        X_minus_mu = X - np.mean(reference_distribution, axis=0)
        cov = np.cov(reference_distribution.T)
        try:
            inv_covmat = np.linalg.inv(cov)
        except np.linalg.LinAlgError:
            # fall back to the pseudo-inverse when the covariance matrix is singular
            inv_covmat = np.linalg.pinv(cov)
        left_term = np.dot(X_minus_mu, inv_covmat)
        mahal_dist = np.dot(left_term, X_minus_mu.T).diagonal()
        return {"mahalanobis": mahal_dist}
datasets/metrics/mahalanobis/mahalanobis.py/0
{ "file_path": "datasets/metrics/mahalanobis/mahalanobis.py", "repo_id": "datasets", "token_count": 1363 }
64
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Precision metric.""" from sklearn.metrics import precision_score import datasets _DESCRIPTION = """ Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation: Precision = TP / (TP + FP) where TP is the True positives (i.e. the examples correctly labeled as positive) and FP is the False positive examples (i.e. the examples incorrectly labeled as positive). """ _KWARGS_DESCRIPTION = """ Args: predictions (`list` of `int`): Predicted class labels. references (`list` of `int`): Actual class labels. labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None. pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1. average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`. - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary. - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall. - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification). sample_weight (`list` of `float`): Sample weights Defaults to None. zero_division (`int` or `string`): Sets the value to return when there is a zero division. Defaults to 'warn'. - 0: Returns 0 when there is a zero division. - 1: Returns 1 when there is a zero division. - 'warn': Raises warnings and then returns 0 when there is a zero division. Returns: precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. 
Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better. Examples: Example 1-A simple binary example >>> precision_metric = datasets.load_metric("precision") >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0]) >>> print(results) {'precision': 0.5} Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`. >>> precision_metric = datasets.load_metric("precision") >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0) >>> print(round(results['precision'], 2)) 0.67 Example 3-The same simple binary example as in Example 1, but with `sample_weight` included. >>> precision_metric = datasets.load_metric("precision") >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3]) >>> print(results) {'precision': 0.23529411764705882} Example 4-A multiclass example, with different values for the `average` input. >>> predictions = [0, 2, 1, 0, 0, 1] >>> references = [0, 1, 2, 0, 1, 2] >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro') >>> print(results) {'precision': 0.2222222222222222} >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro') >>> print(results) {'precision': 0.3333333333333333} >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted') >>> print(results) {'precision': 0.2222222222222222} >>> results = precision_metric.compute(predictions=predictions, references=references, average=None) >>> print([round(res, 2) for res in results['precision']]) [0.67, 0.0, 0.0] """ _CITATION = """ @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={12}, pages={2825--2830}, year={2011} } """ @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class Precision(datasets.Metric): def _info(self): return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Sequence(datasets.Value("int32")), "references": datasets.Sequence(datasets.Value("int32")), } if self.config_name == "multilabel" else { "predictions": datasets.Value("int32"), "references": datasets.Value("int32"), } ), reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html"], ) def _compute( self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None, zero_division="warn", ): score = precision_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, zero_division=zero_division, ) return {"precision": float(score) if score.size == 1 else score}
datasets/metrics/precision/precision.py/0
{ "file_path": "datasets/metrics/precision/precision.py", "repo_id": "datasets", "token_count": 2663 }
65
"""Official evaluation script for v1.1 of the SQuAD dataset.""" import argparse import json import re import string import sys from collections import Counter def normalize_answer(s): """Lower text and remove punctuation, articles and extra whitespace.""" def remove_articles(text): return re.sub(r"\b(a|an|the)\b", " ", text) def white_space_fix(text): return " ".join(text.split()) def remove_punc(text): exclude = set(string.punctuation) return "".join(ch for ch in text if ch not in exclude) def lower(text): return text.lower() return white_space_fix(remove_articles(remove_punc(lower(s)))) def f1_score(prediction, ground_truth): prediction_tokens = normalize_answer(prediction).split() ground_truth_tokens = normalize_answer(ground_truth).split() common = Counter(prediction_tokens) & Counter(ground_truth_tokens) num_same = sum(common.values()) if num_same == 0: return 0 precision = 1.0 * num_same / len(prediction_tokens) recall = 1.0 * num_same / len(ground_truth_tokens) f1 = (2 * precision * recall) / (precision + recall) return f1 def exact_match_score(prediction, ground_truth): return normalize_answer(prediction) == normalize_answer(ground_truth) def metric_max_over_ground_truths(metric_fn, prediction, ground_truths): scores_for_ground_truths = [] for ground_truth in ground_truths: score = metric_fn(prediction, ground_truth) scores_for_ground_truths.append(score) return max(scores_for_ground_truths) def evaluate(dataset, predictions): f1 = exact_match = total = 0 for article in dataset: for paragraph in article["paragraphs"]: for qa in paragraph["qas"]: total += 1 if qa["id"] not in predictions: message = "Unanswered question " + qa["id"] + " will receive score 0." print(message, file=sys.stderr) continue ground_truths = [x["text"] for x in qa["answers"]] prediction = predictions[qa["id"]] exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths) f1 += metric_max_over_ground_truths(f1_score, prediction, ground_truths) exact_match = 100.0 * exact_match / total f1 = 100.0 * f1 / total return {"exact_match": exact_match, "f1": f1} if __name__ == "__main__": expected_version = "1.1" parser = argparse.ArgumentParser(description="Evaluation for SQuAD " + expected_version) parser.add_argument("dataset_file", help="Dataset file") parser.add_argument("prediction_file", help="Prediction File") args = parser.parse_args() with open(args.dataset_file) as dataset_file: dataset_json = json.load(dataset_file) if dataset_json["version"] != expected_version: print( "Evaluation expects v-" + expected_version + ", but got dataset with v-" + dataset_json["version"], file=sys.stderr, ) dataset = dataset_json["data"] with open(args.prediction_file) as prediction_file: predictions = json.load(prediction_file) print(json.dumps(evaluate(dataset, predictions)))
datasets/metrics/squad/evaluate.py/0
{ "file_path": "datasets/metrics/squad/evaluate.py", "repo_id": "datasets", "token_count": 1337 }
66
# Metric Card for XTREME-S ## Metric Description The XTREME-S metric aims to evaluate model performance on the Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark. This benchmark was designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval. ## How to Use There are two steps: (1) loading the XTREME-S metric relevant to the subset of the benchmark being used for evaluation; and (2) calculating the metric. 1. **Loading the relevant XTREME-S metric** : the subsets of XTREME-S are the following: `mls`, `voxpopuli`, `covost2`, `fleurs-asr`, `fleurs-lang_id`, `minds14` and `babel`. More information about the different subsets can be found on the [XTREME-S benchmark page](https://huggingface.co/datasets/google/xtreme_s). ```python >>> from datasets import load_metric >>> xtreme_s_metric = datasets.load_metric('xtreme_s', 'mls') ``` 2. **Calculating the metric**: the metric takes two inputs : - `predictions`: a list of predictions to score, with each prediction a `str`. - `references`: a list of lists of references for each translation, with each reference a `str`. ```python >>> references = ["it is sunny here", "paper and pen are essentials"] >>> predictions = ["it's sunny", "paper pen are essential"] >>> results = xtreme_s_metric.compute(predictions=predictions, references=references) ``` It also has two optional arguments: - `bleu_kwargs`: a `dict` of keywords to be passed when computing the `bleu` metric for the `covost2` subset. Keywords can be one of `smooth_method`, `smooth_value`, `force`, `lowercase`, `tokenize`, `use_effective_order`. - `wer_kwargs`: optional dict of keywords to be passed when computing `wer` and `cer`, which are computed for the `mls`, `fleurs-asr`, `voxpopuli`, and `babel` subsets. Keywords are `concatenate_texts`. ## Output values The output of the metric depends on the XTREME-S subset chosen, consisting of a dictionary that contains one or several of the following metrics: - `accuracy`: the proportion of correct predictions among the total number of cases processed, with a range between 0 and 1 (see [accuracy](https://huggingface.co/metrics/accuracy) for more information). This is returned for the `fleurs-lang_id` and `minds14` subsets. - `f1`: the harmonic mean of the precision and recall (see [F1 score](https://huggingface.co/metrics/f1) for more information). Its range is 0-1 -- its lowest possible value is 0, if either the precision or the recall is 0, and its highest possible value is 1.0, which means perfect precision and recall. It is returned for the `minds14` subset. - `wer`: Word error rate (WER) is a common metric of the performance of an automatic speech recognition system. The lower the value, the better the performance of the ASR system, with a WER of 0 being a perfect score (see [WER score](https://huggingface.co/metrics/wer) for more information). It is returned for the `mls`, `fleurs-asr`, `voxpopuli` and `babel` subsets of the benchmark. - `cer`: Character error rate (CER) is similar to WER, but operates on character instead of word. The lower the CER value, the better the performance of the ASR system, with a CER of 0 being a perfect score (see [CER score](https://huggingface.co/metrics/cer) for more information). It is returned for the `mls`, `fleurs-asr`, `voxpopuli` and `babel` subsets of the benchmark. 
- `bleu`: the BLEU score, calculated according to the SacreBLEU metric approach. It can take any value between 0.0 and 100.0, inclusive, with higher values being better (see [SacreBLEU](https://huggingface.co/metrics/sacrebleu) for more details). This is returned for the `covost2` subset. ### Values from popular papers The [original XTREME-S paper](https://arxiv.org/pdf/2203.10752.pdf) reported average WERs ranging from 9.2 to 14.6, a BLEU score of 20.6, an accuracy of 73.3 and F1 score of 86.9, depending on the subsets of the dataset tested on. ## Examples For the `mls` subset (which outputs `wer` and `cer`): ```python >>> from datasets import load_metric >>> xtreme_s_metric = datasets.load_metric('xtreme_s', 'mls') >>> references = ["it is sunny here", "paper and pen are essentials"] >>> predictions = ["it's sunny", "paper pen are essential"] >>> results = xtreme_s_metric.compute(predictions=predictions, references=references) >>> print({k: round(v, 2) for k, v in results.items()}) {'wer': 0.56, 'cer': 0.27} ``` For the `covost2` subset (which outputs `bleu`): ```python >>> from datasets import load_metric >>> xtreme_s_metric = datasets.load_metric('xtreme_s', 'covost2') >>> references = ["bonjour paris", "il est necessaire de faire du sport de temps en temp"] >>> predictions = ["bonjour paris", "il est important de faire du sport souvent"] >>> results = xtreme_s_metric.compute(predictions=predictions, references=references) >>> print({k: round(v, 2) for k, v in results.items()}) {'bleu': 31.65} ``` For the `fleurs-lang_id` subset (which outputs `accuracy`): ```python >>> from datasets import load_metric >>> xtreme_s_metric = datasets.load_metric('xtreme_s', 'fleurs-lang_id') >>> references = [0, 1, 0, 0, 1] >>> predictions = [0, 1, 1, 0, 0] >>> results = xtreme_s_metric.compute(predictions=predictions, references=references) >>> print({k: round(v, 2) for k, v in results.items()}) {'accuracy': 0.6} ``` For the `minds14` subset (which outputs `f1` and `accuracy`): ```python >>> from datasets import load_metric >>> xtreme_s_metric = datasets.load_metric('xtreme_s', 'minds14') >>> references = [0, 1, 0, 0, 1] >>> predictions = [0, 1, 1, 0, 0] >>> results = xtreme_s_metric.compute(predictions=predictions, references=references) >>> print({k: round(v, 2) for k, v in results.items()}) {'f1': 0.58, 'accuracy': 0.6} ``` ## Limitations and bias This metric works only with datasets that have the same format as the [XTREME-S dataset](https://huggingface.co/datasets/google/xtreme_s). While the XTREME-S dataset is meant to represent a variety of languages and tasks, it has inherent biases: it is missing many languages that are important and under-represented in NLP datasets. It also has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech, which results in a mismatch between performance obtained in a read-speech setting and a more noisy setting (in production or live deployment, for instance). ## Citation ```bibtex @article{conneau2022xtreme, title={XTREME-S: Evaluating Cross-lingual Speech Representations}, author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others}, journal={arXiv preprint arXiv:2203.10752}, year={2022} } ``` ## Further References - [XTREME-S dataset](https://huggingface.co/datasets/google/xtreme_s) - [XTREME-S github repository](https://github.com/google-research/xtreme)
datasets/metrics/xtreme_s/README.md/0
{ "file_path": "datasets/metrics/xtreme_s/README.md", "repo_id": "datasets", "token_count": 2218 }
67
import platform from argparse import ArgumentParser import fsspec import huggingface_hub import pandas import pyarrow from datasets import __version__ as version from datasets.commands import BaseDatasetsCLICommand def info_command_factory(_): return EnvironmentCommand() class EnvironmentCommand(BaseDatasetsCLICommand): @staticmethod def register_subcommand(parser: ArgumentParser): download_parser = parser.add_parser("env", help="Print relevant system environment info.") download_parser.set_defaults(func=info_command_factory) def run(self): info = { "`datasets` version": version, "Platform": platform.platform(), "Python version": platform.python_version(), "`huggingface_hub` version": huggingface_hub.__version__, "PyArrow version": pyarrow.__version__, "Pandas version": pandas.__version__, "`fsspec` version": fsspec.__version__, } print("\nCopy-and-paste the text below in your GitHub issue.\n") print(self.format_dict(info)) return info @staticmethod def format_dict(d): return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n"
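This command is exposed through the `datasets-cli` entry point, so the environment report can be produced with (the values shown depend on your machine):

```bash
datasets-cli env
# Copy-and-paste the text below in your GitHub issue.
#
# - `datasets` version: ...
# - Platform: ...
# - Python version: ...
# - `huggingface_hub` version: ...
```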
datasets/src/datasets/commands/env.py/0
{ "file_path": "datasets/src/datasets/commands/env.py", "repo_id": "datasets", "token_count": 476 }
68
import os import sys import warnings from dataclasses import dataclass, field from io import BytesIO from typing import TYPE_CHECKING, Any, ClassVar, Dict, List, Optional, Union import numpy as np import pyarrow as pa from .. import config from ..download.download_config import DownloadConfig from ..download.streaming_download_manager import xopen from ..table import array_cast from ..utils.file_utils import is_local_path from ..utils.py_utils import first_non_null_value, no_op_if_value_is_null, string_to_dict if TYPE_CHECKING: import PIL.Image from .features import FeatureType _IMAGE_COMPRESSION_FORMATS: Optional[List[str]] = None _NATIVE_BYTEORDER = "<" if sys.byteorder == "little" else ">" # Origin: https://github.com/python-pillow/Pillow/blob/698951e19e19972aeed56df686868f1329981c12/src/PIL/Image.py#L3126 minus "|i1" which values are not preserved correctly when saving and loading an image _VALID_IMAGE_ARRAY_DTPYES = [ np.dtype("|b1"), np.dtype("|u1"), np.dtype("<u2"), np.dtype(">u2"), np.dtype("<i2"), np.dtype(">i2"), np.dtype("<u4"), np.dtype(">u4"), np.dtype("<i4"), np.dtype(">i4"), np.dtype("<f4"), np.dtype(">f4"), np.dtype("<f8"), np.dtype(">f8"), ] @dataclass class Image: """Image [`Feature`] to read image data from an image file. Input: The Image feature accepts as input: - A `str`: Absolute path to the image file (i.e. random access is allowed). - A `dict` with the keys: - `path`: String with relative path of the image file to the archive file. - `bytes`: Bytes of the image file. This is useful for archived files with sequential access. - An `np.ndarray`: NumPy array representing an image. - A `PIL.Image.Image`: PIL image object. Args: decode (`bool`, defaults to `True`): Whether to decode the image data. If `False`, returns the underlying dictionary in the format `{"path": image_path, "bytes": image_bytes}`. Examples: ```py >>> from datasets import load_dataset, Image >>> ds = load_dataset("beans", split="train") >>> ds.features["image"] Image(decode=True, id=None) >>> ds[0]["image"] <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x15E52E7F0> >>> ds = ds.cast_column('image', Image(decode=False)) {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/healthy/healthy_train.85.jpg'} ``` """ decode: bool = True id: Optional[str] = None # Automatically constructed dtype: ClassVar[str] = "PIL.Image.Image" pa_type: ClassVar[Any] = pa.struct({"bytes": pa.binary(), "path": pa.string()}) _type: str = field(default="Image", init=False, repr=False) def __call__(self): return self.pa_type def encode_example(self, value: Union[str, bytes, dict, np.ndarray, "PIL.Image.Image"]) -> dict: """Encode example into a format for Arrow. Args: value (`str`, `np.ndarray`, `PIL.Image.Image` or `dict`): Data passed as input to Image feature. 
Returns: `dict` with "path" and "bytes" fields """ if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") if isinstance(value, list): value = np.array(value) if isinstance(value, str): return {"path": value, "bytes": None} elif isinstance(value, bytes): return {"path": None, "bytes": value} elif isinstance(value, np.ndarray): # convert the image array to PNG/TIFF bytes return encode_np_array(value) elif isinstance(value, PIL.Image.Image): # convert the PIL image to bytes (default format is PNG/TIFF) return encode_pil_image(value) elif value.get("path") is not None and os.path.isfile(value["path"]): # we set "bytes": None to not duplicate the data if they're already available locally return {"bytes": None, "path": value.get("path")} elif value.get("bytes") is not None or value.get("path") is not None: # store the image bytes, and path is used to infer the image format using the file extension return {"bytes": value.get("bytes"), "path": value.get("path")} else: raise ValueError( f"An image sample should have one of 'path' or 'bytes' but they are missing or None in {value}." ) def decode_example(self, value: dict, token_per_repo_id=None) -> "PIL.Image.Image": """Decode example image file into image data. Args: value (`str` or `dict`): A string with the absolute image file path, a dictionary with keys: - `path`: String with absolute or relative image file path. - `bytes`: The bytes of the image file. token_per_repo_id (`dict`, *optional*): To access and decode image files from private repositories on the Hub, you can pass a dictionary repo_id (`str`) -> token (`bool` or `str`). Returns: `PIL.Image.Image` """ if not self.decode: raise RuntimeError("Decoding is disabled for this feature. Please use Image(decode=True) instead.") if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support decoding images, please install 'Pillow'.") if token_per_repo_id is None: token_per_repo_id = {} path, bytes_ = value["path"], value["bytes"] if bytes_ is None: if path is None: raise ValueError(f"An image should have one of 'path' or 'bytes' but both are None in {value}.") else: if is_local_path(path): image = PIL.Image.open(path) else: source_url = path.split("::")[-1] pattern = ( config.HUB_DATASETS_URL if source_url.startswith(config.HF_ENDPOINT) else config.HUB_DATASETS_HFFS_URL ) try: repo_id = string_to_dict(source_url, pattern)["repo_id"] token = token_per_repo_id.get(repo_id) except ValueError: token = None download_config = DownloadConfig(token=token) with xopen(path, "rb", download_config=download_config) as f: bytes_ = BytesIO(f.read()) image = PIL.Image.open(bytes_) else: image = PIL.Image.open(BytesIO(bytes_)) image.load() # to avoid "Too many open files" errors return image def flatten(self) -> Union["FeatureType", Dict[str, "FeatureType"]]: """If in the decodable state, return the feature itself, otherwise flatten the feature into a dictionary.""" from .features import Value return ( self if self.decode else { "bytes": Value("binary"), "path": Value("string"), } ) def cast_storage(self, storage: Union[pa.StringArray, pa.StructArray, pa.ListArray]) -> pa.StructArray: """Cast an Arrow array to the Image arrow storage type. 
The Arrow types that can be converted to the Image pyarrow storage type are: - `pa.string()` - it must contain the "path" data - `pa.binary()` - it must contain the image bytes - `pa.struct({"bytes": pa.binary()})` - `pa.struct({"path": pa.string()})` - `pa.struct({"bytes": pa.binary(), "path": pa.string()})` - order doesn't matter - `pa.list(*)` - it must contain the image array data Args: storage (`Union[pa.StringArray, pa.StructArray, pa.ListArray]`): PyArrow array to cast. Returns: `pa.StructArray`: Array in the Image arrow storage type, that is `pa.struct({"bytes": pa.binary(), "path": pa.string()})`. """ if pa.types.is_string(storage.type): bytes_array = pa.array([None] * len(storage), type=pa.binary()) storage = pa.StructArray.from_arrays([bytes_array, storage], ["bytes", "path"], mask=storage.is_null()) elif pa.types.is_binary(storage.type): path_array = pa.array([None] * len(storage), type=pa.string()) storage = pa.StructArray.from_arrays([storage, path_array], ["bytes", "path"], mask=storage.is_null()) elif pa.types.is_struct(storage.type): if storage.type.get_field_index("bytes") >= 0: bytes_array = storage.field("bytes") else: bytes_array = pa.array([None] * len(storage), type=pa.binary()) if storage.type.get_field_index("path") >= 0: path_array = storage.field("path") else: path_array = pa.array([None] * len(storage), type=pa.string()) storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=storage.is_null()) elif pa.types.is_list(storage.type): bytes_array = pa.array( [encode_np_array(np.array(arr))["bytes"] if arr is not None else None for arr in storage.to_pylist()], type=pa.binary(), ) path_array = pa.array([None] * len(storage), type=pa.string()) storage = pa.StructArray.from_arrays( [bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null() ) return array_cast(storage, self.pa_type) def embed_storage(self, storage: pa.StructArray) -> pa.StructArray: """Embed image files into the Arrow array. Args: storage (`pa.StructArray`): PyArrow array to embed. Returns: `pa.StructArray`: Array in the Image arrow storage type, that is `pa.struct({"bytes": pa.binary(), "path": pa.string()})`. 
""" @no_op_if_value_is_null def path_to_bytes(path): with xopen(path, "rb") as f: bytes_ = f.read() return bytes_ bytes_array = pa.array( [ (path_to_bytes(x["path"]) if x["bytes"] is None else x["bytes"]) if x is not None else None for x in storage.to_pylist() ], type=pa.binary(), ) path_array = pa.array( [os.path.basename(path) if path is not None else None for path in storage.field("path").to_pylist()], type=pa.string(), ) storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null()) return array_cast(storage, self.pa_type) def list_image_compression_formats() -> List[str]: if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") global _IMAGE_COMPRESSION_FORMATS if _IMAGE_COMPRESSION_FORMATS is None: PIL.Image.init() _IMAGE_COMPRESSION_FORMATS = list(set(PIL.Image.OPEN.keys()) & set(PIL.Image.SAVE.keys())) return _IMAGE_COMPRESSION_FORMATS def image_to_bytes(image: "PIL.Image.Image") -> bytes: """Convert a PIL Image object to bytes using native compression if possible, otherwise use PNG/TIFF compression.""" buffer = BytesIO() if image.format in list_image_compression_formats(): format = image.format else: format = "PNG" if image.mode in ["1", "L", "LA", "RGB", "RGBA"] else "TIFF" image.save(buffer, format=format) return buffer.getvalue() def encode_pil_image(image: "PIL.Image.Image") -> dict: if hasattr(image, "filename") and image.filename != "": return {"path": image.filename, "bytes": None} else: return {"path": None, "bytes": image_to_bytes(image)} def encode_np_array(array: np.ndarray) -> dict: if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") dtype = array.dtype dtype_byteorder = dtype.byteorder if dtype.byteorder != "=" else _NATIVE_BYTEORDER dtype_kind = dtype.kind dtype_itemsize = dtype.itemsize dest_dtype = None # Multi-channel array case (only np.dtype("|u1") is allowed) if array.shape[2:]: if dtype_kind not in ["u", "i"]: raise TypeError( f"Unsupported array dtype {dtype} for image encoding. Only {dest_dtype} is supported for multi-channel arrays." ) dest_dtype = np.dtype("|u1") if dtype != dest_dtype: warnings.warn(f"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'") # Exact match elif dtype in _VALID_IMAGE_ARRAY_DTPYES: dest_dtype = dtype else: # Downcast the type within the kind (np.can_cast(from_type, to_type, casting="same_kind") doesn't behave as expected, so do it manually) while dtype_itemsize >= 1: dtype_str = dtype_byteorder + dtype_kind + str(dtype_itemsize) if np.dtype(dtype_str) in _VALID_IMAGE_ARRAY_DTPYES: dest_dtype = np.dtype(dtype_str) warnings.warn(f"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'") break else: dtype_itemsize //= 2 if dest_dtype is None: raise TypeError( f"Cannot downcast dtype {dtype} to a valid image dtype. 
Valid image dtypes: {_VALID_IMAGE_ARRAY_DTPYES}" ) image = PIL.Image.fromarray(array.astype(dest_dtype)) return {"path": None, "bytes": image_to_bytes(image)} def objects_to_list_of_image_dicts( objs: Union[List[str], List[dict], List[np.ndarray], List["PIL.Image.Image"]], ) -> List[dict]: """Encode a list of objects into a format suitable for creating an extension array of type `ImageExtensionType`.""" if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") if objs: _, obj = first_non_null_value(objs) if isinstance(obj, str): return [{"path": obj, "bytes": None} if obj is not None else None for obj in objs] if isinstance(obj, np.ndarray): obj_to_image_dict_func = no_op_if_value_is_null(encode_np_array) return [obj_to_image_dict_func(obj) for obj in objs] elif isinstance(obj, PIL.Image.Image): obj_to_image_dict_func = no_op_if_value_is_null(encode_pil_image) return [obj_to_image_dict_func(obj) for obj in objs] else: return objs else: return objs
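

# A minimal usage sketch of the encoding helpers above, assuming Pillow and NumPy are
# installed. The array shape and the "cat.png" file name are illustrative only; run it
# with `python -m datasets.features.image` so the relative imports resolve.
if __name__ == "__main__":
    example_array = np.zeros((4, 4, 3), dtype=np.uint8)  # tiny black RGB image
    encoded = encode_np_array(example_array)
    # encode_np_array goes through Pillow, so "bytes" holds compressed (PNG) data here
    assert encoded["path"] is None and encoded["bytes"]

    # Strings are kept as path references instead of being read eagerly
    assert objects_to_list_of_image_dicts(["cat.png", None]) == [
        {"path": "cat.png", "bytes": None},
        None,
    ]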
datasets/src/datasets/features/image.py/0
{ "file_path": "datasets/src/datasets/features/image.py", "repo_id": "datasets", "token_count": 6802 }
69
from abc import ABC, abstractmethod from typing import Optional, Union from .. import Dataset, DatasetDict, Features, IterableDataset, IterableDatasetDict, NamedSplit from ..utils.typing import NestedDataStructureLike, PathLike class AbstractDatasetReader(ABC): def __init__( self, path_or_paths: Optional[NestedDataStructureLike[PathLike]] = None, split: Optional[NamedSplit] = None, features: Optional[Features] = None, cache_dir: str = None, keep_in_memory: bool = False, streaming: bool = False, num_proc: Optional[int] = None, **kwargs, ): self.path_or_paths = path_or_paths self.split = split if split or isinstance(path_or_paths, dict) else "train" self.features = features self.cache_dir = cache_dir self.keep_in_memory = keep_in_memory self.streaming = streaming self.num_proc = num_proc self.kwargs = kwargs @abstractmethod def read(self) -> Union[Dataset, DatasetDict, IterableDataset, IterableDatasetDict]: pass class AbstractDatasetInputStream(ABC): def __init__( self, features: Optional[Features] = None, cache_dir: str = None, keep_in_memory: bool = False, streaming: bool = False, num_proc: Optional[int] = None, **kwargs, ): self.features = features self.cache_dir = cache_dir self.keep_in_memory = keep_in_memory self.streaming = streaming self.num_proc = num_proc self.kwargs = kwargs @abstractmethod def read(self) -> Union[Dataset, IterableDataset]: pass
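

# A minimal, hypothetical reader built on the abstract base classes above, showing how
# the stored attributes are typically consumed by `read()`. It is not one of the real
# readers shipped with the library; the class name and the use of `Dataset.from_dict`
# are assumptions made for illustration only.
class InMemoryDictInputStream(AbstractDatasetInputStream):
    def __init__(self, data: dict, features: Optional[Features] = None, **kwargs):
        super().__init__(features=features, **kwargs)
        self.data = data

    def read(self) -> Dataset:
        # Build an in-memory Dataset, honouring the optional `features` schema
        return Dataset.from_dict(self.data, features=self.features)


# Example: InMemoryDictInputStream({"text": ["a", "b"]}).read() yields a 2-row Dataset.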
datasets/src/datasets/io/abc.py/0
{ "file_path": "datasets/src/datasets/io/abc.py", "repo_id": "datasets", "token_count": 721 }
70
import copy
import os
from functools import partial
from itertools import groupby
from typing import TYPE_CHECKING, Callable, Iterator, List, Optional, Tuple, TypeVar, Union

import numpy as np
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.types

from . import config
from .utils.logging import get_logger


if TYPE_CHECKING:
    from .features.features import Features, FeatureType


logger = get_logger(__name__)


def inject_arrow_table_documentation(arrow_table_method):
    def wrapper(fn):
        fn.__doc__ = arrow_table_method.__doc__ + (fn.__doc__ if fn.__doc__ is not None else "")
        fn.__doc__ = fn.__doc__.replace("pyarrow.Table", "Table")
        if hasattr(arrow_table_method, "__annotations__"):
            fn.__annotations__ = arrow_table_method.__annotations__
        return fn

    return wrapper


def _in_memory_arrow_table_from_file(filename: str) -> pa.Table:
    in_memory_stream = pa.input_stream(filename)
    opened_stream = pa.ipc.open_stream(in_memory_stream)
    pa_table = opened_stream.read_all()
    return pa_table


def _in_memory_arrow_table_from_buffer(buffer: pa.Buffer) -> pa.Table:
    stream = pa.BufferReader(buffer)
    opened_stream = pa.ipc.open_stream(stream)
    table = opened_stream.read_all()
    return table


def _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:
    memory_mapped_stream = pa.memory_map(filename)
    return pa.ipc.open_stream(memory_mapped_stream)


def read_schema_from_file(filename: str) -> pa.Schema:
    """
    Infer the Arrow table schema from a file without loading the whole file into memory.
    Useful especially for very big files.
    """
    with pa.memory_map(filename) as memory_mapped_stream:
        schema = pa.ipc.open_stream(memory_mapped_stream).schema
    return schema


def _memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:
    opened_stream = _memory_mapped_record_batch_reader_from_file(filename)
    pa_table = opened_stream.read_all()
    return pa_table


def _deepcopy(x, memo: dict):
    """deepcopy a regular class instance"""
    cls = x.__class__
    result = cls.__new__(cls)
    memo[id(x)] = result
    for k, v in x.__dict__.items():
        setattr(result, k, copy.deepcopy(v, memo))
    return result


def _interpolation_search(arr: List[int], x: int) -> int:
    """
    Return the position i of a sorted array so that arr[i] <= x < arr[i+1]

    Args:
        arr (`List[int]`): non-empty sorted list of integers
        x (`int`): query

    Returns:
        `int`: the position i so that arr[i] <= x < arr[i+1]

    Raises:
        `IndexError`: if the array is empty or if the query is outside the array values
    """
    i, j = 0, len(arr) - 1
    while i < j and arr[i] <= x < arr[j]:
        k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
        if arr[k] <= x < arr[k + 1]:
            return k
        elif arr[k] < x:
            i, j = k + 1, j
        else:
            i, j = i, k
    raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")


class IndexedTableMixin:
    def __init__(self, table: pa.Table):
        self._schema: pa.Schema = table.schema
        self._batches: List[pa.RecordBatch] = [
            recordbatch for recordbatch in table.to_batches() if len(recordbatch) > 0
        ]
        self._offsets: np.ndarray = np.cumsum([0] + [len(b) for b in self._batches], dtype=np.int64)

    def fast_gather(self, indices: Union[List[int], np.ndarray]) -> pa.Table:
        """
        Create a pa.Table by gathering the records at the specified indices.
Should be faster than pa.concat_tables(table.fast_slice(int(i) % table.num_rows, 1) for i in indices) since NumPy can compute the binary searches in parallel, highly optimized C """ if not len(indices): raise ValueError("Indices must be non-empty") batch_indices = np.searchsorted(self._offsets, indices, side="right") - 1 return pa.Table.from_batches( [ self._batches[batch_idx].slice(i - self._offsets[batch_idx], 1) for batch_idx, i in zip(batch_indices, indices) ], schema=self._schema, ) def fast_slice(self, offset=0, length=None) -> pa.Table: """ Slice the Table using interpolation search. The behavior is the same as `pyarrow.Table.slice` but it's significantly faster. Interpolation search is used to find the start and end indexes of the batches we want to keep. The batches to keep are then concatenated to form the sliced Table. """ if offset < 0: raise IndexError("Offset must be non-negative") elif offset >= self._offsets[-1] or (length is not None and length <= 0): return pa.Table.from_batches([], schema=self._schema) i = _interpolation_search(self._offsets, offset) if length is None or length + offset >= self._offsets[-1]: batches = self._batches[i:] batches[0] = batches[0].slice(offset - self._offsets[i]) else: j = _interpolation_search(self._offsets, offset + length - 1) batches = self._batches[i : j + 1] batches[-1] = batches[-1].slice(0, offset + length - self._offsets[j]) batches[0] = batches[0].slice(offset - self._offsets[i]) return pa.Table.from_batches(batches, schema=self._schema) class Table(IndexedTableMixin): """ Wraps a pyarrow Table by using composition. This is the base class for `InMemoryTable`, `MemoryMappedTable` and `ConcatenationTable`. It implements all the basic attributes/methods of the pyarrow Table class except the Table transforms: `slice, filter, flatten, combine_chunks, cast, add_column, append_column, remove_column, set_column, rename_columns` and `drop`. The implementation of these methods differs for the subclasses. """ def __init__(self, table: pa.Table): super().__init__(table) self.table = table def __deepcopy__(self, memo: dict): # arrow tables are immutable, so there's no need to copy self.table # moreover calling deepcopy on a pyarrow table seems to make pa.total_allocated_bytes() decrease for some reason # by adding it to the memo, self.table won't be copied memo[id(self.table)] = self.table # same for the recordbatches used by the index memo[id(self._batches)] = list(self._batches) return _deepcopy(self, memo) def validate(self, *args, **kwargs): """ Perform validation checks. An exception is raised if validation fails. By default only cheap validation checks are run. Pass `full=True` for thorough validation checks (potentially `O(n)`). Args: full (`bool`, defaults to `False`): If `True`, run expensive checks, otherwise cheap checks only. Raises: `pa.lib.ArrowInvalid`: if validation fails """ return self.table.validate(*args, **kwargs) def equals(self, *args, **kwargs): """ Check if contents of two tables are equal. Args: other ([`~datasets.table.Table`]): Table to compare against. check_metadata `bool`, defaults to `False`): Whether schema metadata equality should be checked as well. Returns: `bool` """ args = tuple(arg.table if isinstance(arg, Table) else arg for arg in args) kwargs = {k: v.table if isinstance(v, Table) else v for k, v in kwargs} return self.table.equals(*args, **kwargs) def to_batches(self, *args, **kwargs): """ Convert Table to list of (contiguous) `RecordBatch` objects. 
Args: max_chunksize (`int`, defaults to `None`): Maximum size for `RecordBatch` chunks. Individual chunks may be smaller depending on the chunk layout of individual columns. Returns: `List[pyarrow.RecordBatch]` """ return self.table.to_batches(*args, **kwargs) def to_pydict(self, *args, **kwargs): """ Convert the Table to a `dict` or `OrderedDict`. Returns: `dict` """ return self.table.to_pydict(*args, **kwargs) def to_pylist(self, *args, **kwargs): """ Convert the Table to a list Returns: `list` """ return self.table.to_pylist(*args, **kwargs) def to_pandas(self, *args, **kwargs): """ Convert to a pandas-compatible NumPy array or DataFrame, as appropriate. Args: memory_pool (`MemoryPool`, defaults to `None`): Arrow MemoryPool to use for allocations. Uses the default memory pool is not passed. strings_to_categorical (`bool`, defaults to `False`): Encode string (UTF8) and binary types to `pandas.Categorical`. categories (`list`, defaults to `empty`): List of fields that should be returned as `pandas.Categorical`. Only applies to table-like data structures. zero_copy_only (`bool`, defaults to `False`): Raise an `ArrowException` if this function call would require copying the underlying data. integer_object_nulls (`bool`, defaults to `False`): Cast integers with nulls to objects. date_as_object (`bool`, defaults to `True`): Cast dates to objects. If `False`, convert to `datetime64[ns]` dtype. timestamp_as_object (`bool`, defaults to `False`): Cast non-nanosecond timestamps (`np.datetime64`) to objects. This is useful if you have timestamps that don't fit in the normal date range of nanosecond timestamps (1678 CE-2262 CE). If `False`, all timestamps are converted to `datetime64[ns]` dtype. use_threads (`bool`, defaults to `True`): Whether to parallelize the conversion using multiple threads. deduplicate_objects (`bool`, defaults to `False`): Do not create multiple copies Python objects when created, to save on memory use. Conversion will be slower. ignore_metadata (`bool`, defaults to `False`): If `True`, do not use the 'pandas' metadata to reconstruct the DataFrame index, if present. safe (`bool`, defaults to `True`): For certain data types, a cast is needed in order to store the data in a pandas DataFrame or Series (e.g. timestamps are always stored as nanoseconds in pandas). This option controls whether it is a safe cast or not. split_blocks (`bool`, defaults to `False`): If `True`, generate one internal "block" for each column when creating a pandas.DataFrame from a `RecordBatch` or `Table`. While this can temporarily reduce memory note that various pandas operations can trigger "consolidation" which may balloon memory use. self_destruct (`bool`, defaults to `False`): EXPERIMENTAL: If `True`, attempt to deallocate the originating Arrow memory while converting the Arrow object to pandas. If you use the object after calling `to_pandas` with this option it will crash your program. types_mapper (`function`, defaults to `None`): A function mapping a pyarrow DataType to a pandas `ExtensionDtype`. This can be used to override the default pandas type for conversion of built-in pyarrow types or in absence of `pandas_metadata` in the Table schema. The function receives a pyarrow DataType and is expected to return a pandas `ExtensionDtype` or `None` if the default conversion should be used for that type. If you have a dictionary mapping, you can pass `dict.get` as function. 
Returns: `pandas.Series` or `pandas.DataFrame`: `pandas.Series` or `pandas.DataFrame` depending on type of object """ return self.table.to_pandas(*args, **kwargs) def to_string(self, *args, **kwargs): return self.table.to_string(*args, **kwargs) def to_reader(self, max_chunksize: Optional[int] = None): """ Convert the Table to a RecordBatchReader. Note that this method is zero-copy, it merely exposes the same data under a different API. Args: max_chunksize (`int`, defaults to `None`) Maximum size for RecordBatch chunks. Individual chunks may be smaller depending on the chunk layout of individual columns. Returns: `pyarrow.RecordBatchReader` """ return self.table.to_reader(max_chunksize=max_chunksize) def field(self, *args, **kwargs): """ Select a schema field by its column name or numeric index. Args: i (`Union[int, str]`): The index or name of the field to retrieve. Returns: `pyarrow.Field` """ return self.table.field(*args, **kwargs) def column(self, *args, **kwargs): """ Select a column by its column name, or numeric index. Args: i (`Union[int, str]`): The index or name of the column to retrieve. Returns: `pyarrow.ChunkedArray` """ return self.table.column(*args, **kwargs) def itercolumns(self, *args, **kwargs): """ Iterator over all columns in their numerical order. Yields: `pyarrow.ChunkedArray` """ return self.table.itercolumns(*args, **kwargs) @property def schema(self): """ Schema of the table and its columns. Returns: `pyarrow.Schema` """ return self.table.schema @property def columns(self): """ List of all columns in numerical order. Returns: `List[pa.ChunkedArray]` """ return self.table.columns @property def num_columns(self): """ Number of columns in this table. Returns: int """ return self.table.num_columns @property def num_rows(self): """ Number of rows in this table. Due to the definition of a table, all columns have the same number of rows. Returns: int """ return self.table.num_rows @property def shape(self): """ Dimensions of the table: (#rows, #columns). Returns: `(int, int)`: Number of rows and number of columns. """ return self.table.shape @property def nbytes(self): """ Total number of bytes consumed by the elements of the table. """ return self.table.nbytes @property def column_names(self): """ Names of the table's columns. """ return self.table.column_names def __eq__(self, other): return self.equals(other) def __getitem__(self, i): return self.table[i] def __len__(self): return len(self.table) def __repr__(self): return self.table.__repr__().replace("pyarrow.Table", self.__class__.__name__) def __str__(self): return self.table.__str__().replace("pyarrow.Table", self.__class__.__name__) def slice(self, *args, **kwargs): """ Compute zero-copy slice of this Table. Args: offset (`int`, defaults to `0`): Offset from start of table to slice. length (`int`, defaults to `None`): Length of slice (default is until end of table starting from offset). Returns: `datasets.table.Table` """ raise NotImplementedError() def filter(self, *args, **kwargs): """ Select records from a Table. See `pyarrow.compute.filter` for full usage. """ raise NotImplementedError() def flatten(self, *args, **kwargs): """ Flatten this Table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. 
        Returns:
            `datasets.table.Table`
        """
        raise NotImplementedError()

    def combine_chunks(self, *args, **kwargs):
        """
        Make a new table by combining the chunks this table has.

        All the underlying chunks in the `ChunkedArray` of each column are
        concatenated into zero or one chunk.

        Args:
            memory_pool (`MemoryPool`, defaults to `None`):
                For memory allocations, if required, otherwise use default pool.

        Returns:
            `datasets.table.Table`
        """
        raise NotImplementedError()

    def cast(self, *args, **kwargs):
        """
        Cast table values to another schema.

        Args:
            target_schema (`Schema`):
                Schema to cast to, the names and order of fields must match.
            safe (`bool`, defaults to `True`):
                Check for overflows or other unsafe conversions.

        Returns:
            `datasets.table.Table`
        """
        raise NotImplementedError()

    def replace_schema_metadata(self, *args, **kwargs):
        """
        EXPERIMENTAL: Create shallow copy of table by replacing schema
        key-value metadata with the indicated new metadata (which may be None,
        which deletes any existing metadata).

        Args:
            metadata (`dict`, defaults to `None`):

        Returns:
            `datasets.table.Table`: shallow_copy
        """
        raise NotImplementedError()

    def add_column(self, *args, **kwargs):
        """
        Add column to Table at position.

        A new table is returned with the column added, the original table
        object is left unchanged.

        Args:
            i (`int`):
                Index to place the column at.
            field_ (`Union[str, pyarrow.Field]`):
                If a string is passed then the type is deduced from the column data.
            column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
                Column data.

        Returns:
            `datasets.table.Table`: New table with the passed column added.
        """
        raise NotImplementedError()

    def append_column(self, *args, **kwargs):
        """
        Append column at end of columns.

        Args:
            field_ (`Union[str, pyarrow.Field]`):
                If a string is passed then the type is deduced from the column data.
            column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
                Column data.

        Returns:
            `datasets.table.Table`: New table with the passed column added.
        """
        raise NotImplementedError()

    def remove_column(self, *args, **kwargs):
        """
        Create new Table with the indicated column removed.

        Args:
            i (`int`):
                Index of column to remove.

        Returns:
            `datasets.table.Table`: New table without the column.
        """
        raise NotImplementedError()

    def set_column(self, *args, **kwargs):
        """
        Replace column in Table at position.

        Args:
            i (`int`):
                Index to place the column at.
            field_ (`Union[str, pyarrow.Field]`):
                If a string is passed then the type is deduced from the column data.
            column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
                Column data.

        Returns:
            `datasets.table.Table`: New table with the passed column set.
        """
        raise NotImplementedError()

    def rename_columns(self, *args, **kwargs):
        """
        Create new table with columns renamed to provided names.
        """
        raise NotImplementedError()

    def drop(self, *args, **kwargs):
        """
        Drop one or more columns and return a new table.

        Args:
            columns (`List[str]`):
                List of field names referencing existing columns.

        Raises:
            `KeyError` : if any of the passed columns name are not existing.

        Returns:
            `datasets.table.Table`: New table without the columns.
        """
        raise NotImplementedError()

    def select(self, *args, **kwargs):
        """
        Select columns of the table.

        Returns a new table with the specified columns, and metadata preserved.

        Args:
            columns (:obj:`Union[List[str], List[int]]`):
                The column names or integer indices to select.

        Returns:
            `datasets.table.Table`: table with only a subset of the columns
        """
        raise NotImplementedError()


class TableBlock(Table):
    """
    `TableBlock` is the allowed class inside a `ConcatenationTable`.
    Only `MemoryMappedTable` and `InMemoryTable` are `TableBlock`.
    This is because we don't want a `ConcatenationTable` made out of other `ConcatenationTables`.
    """

    pass


class InMemoryTable(TableBlock):
    """
    The table is said to be in-memory when it is loaded into the user's RAM.

    Pickling it copies all the data in memory.
    Its implementation is simple and uses the underlying pyarrow Table methods directly.

    This is different from the `MemoryMapped` table, for which pickling doesn't copy all the
    data in memory. For a `MemoryMapped`, unpickling instead reloads the table from the disk.

    `InMemoryTable` must be used when data fit in memory, while `MemoryMapped` are reserved for
    data bigger than memory or when you want the memory footprint of your application to stay low.
    """

    @classmethod
    def from_file(cls, filename: str):
        table = _in_memory_arrow_table_from_file(filename)
        return cls(table)

    @classmethod
    def from_buffer(cls, buffer: pa.Buffer):
        table = _in_memory_arrow_table_from_buffer(buffer)
        return cls(table)

    @classmethod
    def from_pandas(cls, *args, **kwargs):
        """
        Convert pandas.DataFrame to an Arrow Table.

        The column types in the resulting Arrow Table are inferred from the
        dtypes of the pandas.Series in the DataFrame. In the case of non-object
        Series, the NumPy dtype is translated to its Arrow equivalent. In the
        case of `object`, we need to guess the datatype by looking at the
        Python objects in this Series.

        Be aware that Series of the `object` dtype don't carry enough
        information to always lead to a meaningful Arrow type. In the case that
        we cannot infer a type, e.g. because the DataFrame is of length 0 or
        the Series only contains `None/nan` objects, the type is set to
        null. This behavior can be avoided by constructing an explicit schema
        and passing it to this function.

        Args:
            df (`pandas.DataFrame`):
            schema (`pyarrow.Schema`, *optional*):
                The expected schema of the Arrow Table. This can be used to
                indicate the type of columns if we cannot infer it automatically.
                If passed, the output will have exactly this schema. Columns
                specified in the schema that are not found in the DataFrame columns
                or its index will raise an error. Additional columns or index
                levels in the DataFrame which are not specified in the schema will
                be ignored.
            preserve_index (`bool`, *optional*):
                Whether to store the index as an additional column in the resulting
                `Table`. The default of None will store the index as a column,
                except for RangeIndex which is stored as metadata only. Use
                `preserve_index=True` to force it to be stored as a column.
            nthreads (`int`, defaults to `None` (may use up to system CPU count threads)):
                If greater than 1, convert columns to Arrow in parallel using
                indicated number of threads.
            columns (`List[str]`, *optional*):
                List of columns to be converted. If `None`, use all columns.
            safe (`bool`, defaults to `True`):
                Check for overflows or other unsafe conversions.

        Returns:
            `datasets.table.Table`:

        Examples:
        ```python
        >>> import pandas as pd
        >>> import pyarrow as pa
        >>> df = pd.DataFrame({
        ...     'int': [1, 2],
        ...     'str': ['a', 'b']
        ... })
        >>> pa.Table.from_pandas(df)
        <pyarrow.lib.Table object at 0x7f05d1fb1b40>
        ```
        """
        return cls(pa.Table.from_pandas(*args, **kwargs))

    @classmethod
    def from_arrays(cls, *args, **kwargs):
        """
        Construct a Table from Arrow arrays.

        Args:
            arrays (`List[Union[pyarrow.Array, pyarrow.ChunkedArray]]`):
                Equal-length arrays that should form the table.
            names (`List[str]`, *optional*):
                Names for the table columns. If not passed, schema must be passed.
schema (`Schema`, defaults to `None`): Schema for the created table. If not passed, names must be passed. metadata (`Union[dict, Mapping]`, defaults to `None`): Optional metadata for the schema (if inferred). Returns: `datasets.table.Table` """ return cls(pa.Table.from_arrays(*args, **kwargs)) @classmethod def from_pydict(cls, *args, **kwargs): """ Construct a Table from Arrow arrays or columns. Args: mapping (`Union[dict, Mapping]`): A mapping of strings to Arrays or Python lists. schema (`Schema`, defaults to `None`): If not passed, will be inferred from the Mapping values metadata (`Union[dict, Mapping]`, defaults to `None`): Optional metadata for the schema (if inferred). Returns: `datasets.table.Table` """ return cls(pa.Table.from_pydict(*args, **kwargs)) @classmethod def from_pylist(cls, mapping, *args, **kwargs): """ Construct a Table from list of rows / dictionaries. Args: mapping (`List[dict]`): A mapping of strings to row values. schema (`Schema`, defaults to `None`): If not passed, will be inferred from the Mapping values metadata (`Union[dict, Mapping]`, defaults to `None`): Optional metadata for the schema (if inferred). Returns: `datasets.table.Table` """ return cls(pa.Table.from_pylist(mapping, *args, **kwargs)) @classmethod def from_batches(cls, *args, **kwargs): """ Construct a Table from a sequence or iterator of Arrow `RecordBatches`. Args: batches (`Union[Sequence[pyarrow.RecordBatch], Iterator[pyarrow.RecordBatch]]`): Sequence of `RecordBatch` to be converted, all schemas must be equal. schema (`Schema`, defaults to `None`): If not passed, will be inferred from the first `RecordBatch`. Returns: `datasets.table.Table`: """ return cls(pa.Table.from_batches(*args, **kwargs)) def slice(self, offset=0, length=None): """ Compute zero-copy slice of this Table. Args: offset (`int`, defaults to `0`): Offset from start of table to slice. length (`int`, defaults to `None`): Length of slice (default is until end of table starting from offset). Returns: `datasets.table.Table` """ # Use fast slicing here return InMemoryTable(self.fast_slice(offset=offset, length=length)) def filter(self, *args, **kwargs): """ Select records from a Table. See `pyarrow.compute.filter` for full usage. """ return InMemoryTable(self.table.filter(*args, **kwargs)) def flatten(self, *args, **kwargs): """ Flatten this Table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. Returns: `datasets.table.Table` """ return InMemoryTable(table_flatten(self.table, *args, **kwargs)) def combine_chunks(self, *args, **kwargs): """ Make a new table by combining the chunks this table has. All the underlying chunks in the `ChunkedArray` of each column are concatenated into zero or one chunk. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. Returns: `datasets.table.Table` """ return InMemoryTable(self.table.combine_chunks(*args, **kwargs)) def cast(self, *args, **kwargs): """ Cast table values to another schema. Args: target_schema (`Schema`): Schema to cast to, the names and order of fields must match. safe (`bool`, defaults to `True`): Check for overflows or other unsafe conversions. 
Returns: `datasets.table.Table` """ return InMemoryTable(table_cast(self.table, *args, **kwargs)) def replace_schema_metadata(self, *args, **kwargs): """ EXPERIMENTAL: Create shallow copy of table by replacing schema key-value metadata with the indicated new metadata (which may be `None`, which deletes any existing metadata). Args: metadata (`dict`, defaults to `None`): Returns: `datasets.table.Table`: shallow_copy """ return InMemoryTable(self.table.replace_schema_metadata(*args, **kwargs)) def add_column(self, *args, **kwargs): """ Add column to Table at position. A new table is returned with the column added, the original table object is left unchanged. Args: i (`int`): Index to place the column at. field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column added. """ return InMemoryTable(self.table.add_column(*args, **kwargs)) def append_column(self, *args, **kwargs): """ Append column at end of columns. Args: field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column added. """ return InMemoryTable(self.table.append_column(*args, **kwargs)) def remove_column(self, *args, **kwargs): """ Create new Table with the indicated column removed. Args: i (`int`): Index of column to remove. Returns: `datasets.table.Table`: New table without the column. """ return InMemoryTable(self.table.remove_column(*args, **kwargs)) def set_column(self, *args, **kwargs): """ Replace column in Table at position. Args: i (`int`): Index to place the column at. field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column set. """ return InMemoryTable(self.table.set_column(*args, **kwargs)) def rename_columns(self, *args, **kwargs): """ Create new table with columns renamed to provided names. """ return InMemoryTable(self.table.rename_columns(*args, **kwargs)) def drop(self, *args, **kwargs): """ Drop one or more columns and return a new table. Args: columns (`List[str]`): List of field names referencing existing columns. Raises: `KeyError` : if any of the passed columns name are not existing. Returns: `datasets.table.Table`: New table without the columns. """ return InMemoryTable(self.table.drop(*args, **kwargs)) def select(self, *args, **kwargs): """ Select columns of the table. Returns a new table with the specified columns, and metadata preserved. Args: columns (:obj:`Union[List[str], List[int]]`): The column names or integer indices to select. Returns: :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved. """ return InMemoryTable(self.table.select(*args, **kwargs)) # The MemoryMappedTable needs replays to properly reload tables from the disk Replay = Tuple[str, tuple, dict] class MemoryMappedTable(TableBlock): """ The table is said memory mapped when it doesn't use the user's RAM but loads the data from the disk instead. Pickling it doesn't copy the data into memory. Instead, only the path to the memory mapped arrow file is pickled, as well as the list of transforms to "replay" when reloading the table from the disk. 
Its implementation requires to store an history of all the transforms that were applied to the underlying pyarrow Table, so that they can be "replayed" when reloading the Table from the disk. This is different from the `InMemoryTable` table, for which pickling does copy all the data in memory. `InMemoryTable` must be used when data fit in memory, while `MemoryMapped` are reserved for data bigger than memory or when you want the memory footprint of your application to stay low. """ def __init__(self, table: pa.Table, path: str, replays: Optional[List[Replay]] = None): super().__init__(table) self.path = os.path.abspath(path) self.replays: List[Replay] = replays if replays is not None else [] @classmethod def from_file(cls, filename: str, replays=None): table = _memory_mapped_arrow_table_from_file(filename) table = cls._apply_replays(table, replays) return cls(table, filename, replays) def __getstate__(self): return {"path": self.path, "replays": self.replays} def __setstate__(self, state): path = state["path"] replays = state["replays"] table = _memory_mapped_arrow_table_from_file(path) table = self._apply_replays(table, replays) MemoryMappedTable.__init__(self, table, path=path, replays=replays) @staticmethod def _apply_replays(table: pa.Table, replays: Optional[List[Replay]] = None) -> pa.Table: if replays is not None: for name, args, kwargs in replays: if name == "cast": table = table_cast(table, *args, **kwargs) elif name == "flatten": table = table_flatten(table, *args, **kwargs) else: table = getattr(table, name)(*args, **kwargs) return table def _append_replay(self, replay: Replay) -> List[Replay]: replays = copy.deepcopy(self.replays) replays.append(replay) return replays def slice(self, offset=0, length=None): """ Compute zero-copy slice of this Table. Args: offset (`int`, defaults to `0`): Offset from start of table to slice. length (`int`, defaults to `None`): Length of slice (default is until end of table starting from offset). Returns: `datasets.table.Table` """ replay = ("slice", (offset, length), {}) replays = self._append_replay(replay) # Use fast slicing here return MemoryMappedTable(self.fast_slice(offset=offset, length=length), self.path, replays) def filter(self, *args, **kwargs): """ Select records from a Table. See `pyarrow.compute.filter` for full usage. """ replay = ("filter", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.filter(*args, **kwargs), self.path, replays) def flatten(self, *args, **kwargs): """ Flatten this Table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. Returns: `datasets.table.Table` """ replay = ("flatten", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(table_flatten(self.table, *args, **kwargs), self.path, replays) def combine_chunks(self, *args, **kwargs): """ Make a new table by combining the chunks this table has. All the underlying chunks in the ChunkedArray of each column are concatenated into zero or one chunk. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. 
Returns: `datasets.table.Table` """ replay = ("combine_chunks", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.combine_chunks(*args, **kwargs), self.path, replays) def cast(self, *args, **kwargs): """ Cast table values to another schema Args: target_schema (`Schema`): Schema to cast to, the names and order of fields must match. safe (`bool`, defaults to `True`): Check for overflows or other unsafe conversions. Returns: `datasets.table.Table` """ replay = ("cast", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays) def replace_schema_metadata(self, *args, **kwargs): """ EXPERIMENTAL: Create shallow copy of table by replacing schema key-value metadata with the indicated new metadata (which may be None, which deletes any existing metadata. Args: metadata (`dict`, defaults to `None`): Returns: `datasets.table.Table`: shallow_copy """ replay = ("replace_schema_metadata", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.replace_schema_metadata(*args, **kwargs), self.path, replays) def add_column(self, *args, **kwargs): """ Add column to Table at position. A new table is returned with the column added, the original table object is left unchanged. Args: i (`int`): Index to place the column at. field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column added. """ replay = ("add_column", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.add_column(*args, **kwargs), self.path, replays) def append_column(self, *args, **kwargs): """ Append column at end of columns. Args: field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column added. """ replay = ("append_column", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.append_column(*args, **kwargs), self.path, replays) def remove_column(self, *args, **kwargs): """ Create new Table with the indicated column removed. Args: i (`int`): Index of column to remove. Returns: `datasets.table.Table`: New table without the column. """ replay = ("remove_column", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.remove_column(*args, **kwargs), self.path, replays) def set_column(self, *args, **kwargs): """ Replace column in Table at position. Args: i (`int`): Index to place the column at. field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column set. """ replay = ("set_column", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.set_column(*args, **kwargs), self.path, replays) def rename_columns(self, *args, **kwargs): """ Create new table with columns renamed to provided names. 
""" replay = ("rename_columns", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.rename_columns(*args, **kwargs), self.path, replays) def drop(self, *args, **kwargs): """ Drop one or more columns and return a new table. Args: columns (`List[str]`): List of field names referencing existing columns. Raises: `KeyError` : if any of the passed columns name are not existing. Returns: `datasets.table.Table`: New table without the columns. """ replay = ("drop", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.drop(*args, **kwargs), self.path, replays) def select(self, *args, **kwargs): """ Select columns of the table. Returns a new table with the specified columns, and metadata preserved. Args: columns (:obj:`Union[List[str], List[int]]`): The column names or integer indices to select. Returns: :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved. """ replay = ("select", copy.deepcopy(args), copy.deepcopy(kwargs)) replays = self._append_replay(replay) return MemoryMappedTable(self.table.select(*args, **kwargs), self.path, replays) # A ConcatenationTable is the concatenation of several tables. # The ``blocks`` attributes stores a list of list of blocks. # The first axis concatenates the tables along the axis 0 (it appends rows), # while the second axis concatenates tables along the axis 1 (it appends columns). TableBlockContainer = TypeVar("TableBlockContainer", TableBlock, List[TableBlock], List[List[TableBlock]]) class ConcatenationTable(Table): """ The table comes from the concatenation of several tables called blocks. It enables concatenation on both axis 0 (append rows) and axis 1 (append columns). The underlying tables are called "blocks" and can be either `InMemoryTable` or `MemoryMappedTable` objects. This allows to combine tables that come from memory or that are memory mapped. When a `ConcatenationTable` is pickled, then each block is pickled: - the `InMemoryTable` objects are pickled by copying all the data in memory. - the MemoryMappedTable objects are pickled without copying the data into memory. Instead, only the path to the memory mapped arrow file is pickled, as well as the list of transforms to "replays" when reloading the table from the disk. Its implementation requires to store each block separately. The `blocks` attributes stores a list of list of blocks. The first axis concatenates the tables along the axis 0 (it appends rows), while the second axis concatenates tables along the axis 1 (it appends columns). If some columns are missing when concatenating on axis 0, they are filled with null values. This is done using `pyarrow.concat_tables(tables, promote=True)`. You can access the fully combined table by accessing the `ConcatenationTable.table` attribute, and the blocks by accessing the `ConcatenationTable.blocks` attribute. """ def __init__(self, table: pa.Table, blocks: List[List[TableBlock]]): super().__init__(table) self.blocks = blocks # Check that all the blocks have the right type. # Only InMemoryTable and MemoryMappedTable are allowed. for subtables in blocks: for subtable in subtables: if not isinstance(subtable, TableBlock): raise TypeError( "The blocks of a ConcatenationTable must be InMemoryTable or MemoryMappedTable objects" f", but got {subtable}." 
) def __getstate__(self): return {"blocks": self.blocks, "schema": self.table.schema} def __setstate__(self, state): blocks = state["blocks"] schema = state["schema"] table = self._concat_blocks_horizontally_and_vertically(blocks) if schema is not None and table.schema != schema: # We fix the columns by concatenating with an empty table with the right columns empty_table = pa.Table.from_batches([], schema=schema) # we set promote=True to fill missing columns with null values if config.PYARROW_VERSION.major < 14: table = pa.concat_tables([table, empty_table], promote=True) else: table = pa.concat_tables([table, empty_table], promote_options="default") ConcatenationTable.__init__(self, table, blocks=blocks) @staticmethod def _concat_blocks(blocks: List[Union[TableBlock, pa.Table]], axis: int = 0) -> pa.Table: pa_tables = [table.table if hasattr(table, "table") else table for table in blocks] if axis == 0: # we set promote=True to fill missing columns with null values if config.PYARROW_VERSION.major < 14: return pa.concat_tables(pa_tables, promote=True) else: return pa.concat_tables(pa_tables, promote_options="default") elif axis == 1: for i, table in enumerate(pa_tables): if i == 0: pa_table = table else: for name, col in zip(table.column_names, table.columns): pa_table = pa_table.append_column(name, col) return pa_table else: raise ValueError("'axis' must be either 0 or 1") @classmethod def _concat_blocks_horizontally_and_vertically(cls, blocks: List[List[TableBlock]]) -> pa.Table: pa_tables_to_concat_vertically = [] for i, tables in enumerate(blocks): if not tables: continue pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1) pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated) return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) @classmethod def _merge_blocks(cls, blocks: TableBlockContainer, axis: Optional[int] = None) -> TableBlockContainer: if axis is not None: merged_blocks = [] for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)): if is_in_memory: block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))] merged_blocks += list(block_group) else: # both merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks] if all(len(row_block) == 1 for row_block in merged_blocks): merged_blocks = cls._merge_blocks( [block for row_block in merged_blocks for block in row_block], axis=0 ) return merged_blocks @classmethod def _consolidate_blocks(cls, blocks: TableBlockContainer) -> TableBlockContainer: if isinstance(blocks, TableBlock): return blocks elif isinstance(blocks[0], TableBlock): return cls._merge_blocks(blocks, axis=0) else: return cls._merge_blocks(blocks) @classmethod def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable": blocks = cls._consolidate_blocks(blocks) if isinstance(blocks, TableBlock): table = blocks return cls(table.table, [[table]]) elif isinstance(blocks[0], TableBlock): table = cls._concat_blocks(blocks, axis=0) blocks = [[t] for t in blocks] return cls(table, blocks) else: table = cls._concat_blocks_horizontally_and_vertically(blocks) return cls(table, blocks) @classmethod def from_tables(cls, tables: List[Union[pa.Table, Table]], axis: int = 0) -> "ConcatenationTable": """Create `ConcatenationTable` from list of tables. Args: tables (list of `Table` or list of `pyarrow.Table`): List of tables. 
axis (`{0, 1}`, defaults to `0`, meaning over rows): Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns (horizontally). <Added version="1.6.0"/> """ def to_blocks(table: Union[pa.Table, Table]) -> List[List[TableBlock]]: if isinstance(table, pa.Table): return [[InMemoryTable(table)]] elif isinstance(table, ConcatenationTable): return copy.deepcopy(table.blocks) else: return [[table]] def _slice_row_block(row_block: List[TableBlock], length: int) -> Tuple[List[TableBlock], List[TableBlock]]: sliced = [table.slice(0, length) for table in row_block] remainder = [table.slice(length, len(row_block[0]) - length) for table in row_block] return sliced, remainder def _split_both_like( result: List[List[TableBlock]], blocks: List[List[TableBlock]] ) -> Tuple[List[List[TableBlock]], List[List[TableBlock]]]: """ Make sure each row_block contain the same num_rows to be able to concatenate them on axis=1. To do so, we modify both blocks sets to have the same row_blocks boundaries. For example, if `result` has 2 row_blocks of 3 rows and `blocks` has 3 row_blocks of 2 rows, we modify both to have 4 row_blocks of size 2, 1, 1 and 2: [ x x x | x x x ] + [ y y | y y | y y ] ----------------------------- = [ x x | x | x | x x ] [ y y | y | y | y y ] """ result, blocks = list(result), list(blocks) new_result, new_blocks = [], [] while result and blocks: # we slice the longest row block to save two row blocks of same length # and we replace the long row block by its remainder if necessary if len(result[0][0]) > len(blocks[0][0]): new_blocks.append(blocks[0]) sliced, result[0] = _slice_row_block(result[0], len(blocks.pop(0)[0])) new_result.append(sliced) elif len(result[0][0]) < len(blocks[0][0]): new_result.append(result[0]) sliced, blocks[0] = _slice_row_block(blocks[0], len(result.pop(0)[0])) new_blocks.append(sliced) else: new_result.append(result.pop(0)) new_blocks.append(blocks.pop(0)) if result or blocks: raise ValueError("Failed to concatenate on axis=1 because tables don't have the same number of rows") return new_result, new_blocks def _extend_blocks( result: List[List[TableBlock]], blocks: List[List[TableBlock]], axis: int = 0 ) -> List[List[TableBlock]]: if axis == 0: result.extend(blocks) elif axis == 1: # We make sure each row_block have the same num_rows result, blocks = _split_both_like(result, blocks) for i, row_block in enumerate(blocks): result[i].extend(row_block) return result blocks = to_blocks(tables[0]) for table in tables[1:]: table_blocks = to_blocks(table) blocks = _extend_blocks(blocks, table_blocks, axis=axis) return cls.from_blocks(blocks) @property def _slices(self): offset = 0 for tables in self.blocks: length = len(tables[0]) yield (offset, length) offset += length def slice(self, offset=0, length=None): """ Compute zero-copy slice of this Table. Args: offset (`int`, defaults to `0`): Offset from start of table to slice. length (`int`, defaults to `None`): Length of slice (default is until end of table starting from offset). 
Returns: `datasets.table.Table` """ table = self.table.slice(offset, length=length) length = length if length is not None else self.num_rows - offset blocks = [] for tables in self.blocks: n_rows = len(tables[0]) if length == 0: break elif n_rows <= offset: offset = offset - n_rows elif n_rows <= offset + length: blocks.append([t.slice(offset) for t in tables]) length, offset = length + offset - n_rows, 0 else: blocks.append([t.slice(offset, length) for t in tables]) length, offset = 0, 0 return ConcatenationTable(table, blocks) def filter(self, mask, *args, **kwargs): """ Select records from a Table. See `pyarrow.compute.filter` for full usage. """ table = self.table.filter(mask, *args, **kwargs) blocks = [] for (offset, length), tables in zip(self._slices, self.blocks): submask = mask.slice(offset, length) blocks.append([t.filter(submask, *args, **kwargs) for t in tables]) return ConcatenationTable(table, blocks) def flatten(self, *args, **kwargs): """ Flatten this Table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. Returns: `datasets.table.Table` """ table = table_flatten(self.table, *args, **kwargs) blocks = [] for tables in self.blocks: blocks.append([t.flatten(*args, **kwargs) for t in tables]) return ConcatenationTable(table, blocks) def combine_chunks(self, *args, **kwargs): """ Make a new table by combining the chunks this table has. All the underlying chunks in the `ChunkedArray` of each column are concatenated into zero or one chunk. Args: memory_pool (`MemoryPool`, defaults to `None`): For memory allocations, if required, otherwise use default pool. Returns: `datasets.table.Table` """ table = self.table.combine_chunks(*args, **kwargs) blocks = [] for tables in self.blocks: blocks.append([t.combine_chunks(*args, **kwargs) for t in tables]) return ConcatenationTable(table, blocks) def cast(self, target_schema, *args, **kwargs): """ Cast table values to another schema. Args: target_schema (`Schema`): Schema to cast to, the names and order of fields must match. safe (`bool`, defaults to `True`): Check for overflows or other unsafe conversions. Returns: `datasets.table.Table` """ from .features import Features table = table_cast(self.table, target_schema, *args, **kwargs) target_features = Features.from_arrow_schema(target_schema) blocks = [] for subtables in self.blocks: new_tables = [] fields = list(target_schema) for subtable in subtables: subfields = [] for name in subtable.column_names: subfields.append(fields.pop(next(i for i, field in enumerate(fields) if field.name == name))) subfeatures = Features({subfield.name: target_features[subfield.name] for subfield in subfields}) subschema = subfeatures.arrow_schema new_tables.append(subtable.cast(subschema, *args, **kwargs)) blocks.append(new_tables) return ConcatenationTable(table, blocks) def replace_schema_metadata(self, *args, **kwargs): """ EXPERIMENTAL: Create shallow copy of table by replacing schema key-value metadata with the indicated new metadata (which may be `None`, which deletes any existing metadata). 
Args: metadata (`dict`, defaults to `None`): Returns: `datasets.table.Table`: shallow_copy """ table = self.table.replace_schema_metadata(*args, **kwargs) blocks = [] for tables in self.blocks: blocks.append([t.replace_schema_metadata(*args, **kwargs) for t in tables]) return ConcatenationTable(table, self.blocks) def add_column(self, *args, **kwargs): """ Add column to Table at position. A new table is returned with the column added, the original table object is left unchanged. Args: i (`int`): Index to place the column at. field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column added. """ raise NotImplementedError() def append_column(self, *args, **kwargs): """ Append column at end of columns. Args: field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column added. """ raise NotImplementedError() def remove_column(self, i, *args, **kwargs): """ Create new Table with the indicated column removed. Args: i (`int`): Index of column to remove. Returns: `datasets.table.Table`: New table without the column. """ table = self.table.remove_column(i, *args, **kwargs) name = self.table.column_names[i] blocks = [] for tables in self.blocks: blocks.append( [ t.remove_column(t.column_names.index(name), *args, **kwargs) if name in t.column_names else t for t in tables ] ) return ConcatenationTable(table, blocks) def set_column(self, *args, **kwargs): """ Replace column in Table at position. Args: i (`int`): Index to place the column at. field_ (`Union[str, pyarrow.Field]`): If a string is passed then the type is deduced from the column data. column (`Union[pyarrow.Array, List[pyarrow.Array]]`): Column data. Returns: `datasets.table.Table`: New table with the passed column set. """ raise NotImplementedError() def rename_columns(self, names, *args, **kwargs): """ Create new table with columns renamed to provided names. """ table = self.table.rename_columns(names, *args, **kwargs) names = dict(zip(self.table.column_names, names)) blocks = [] for tables in self.blocks: blocks.append( [t.rename_columns([names[name] for name in t.column_names], *args, **kwargs) for t in tables] ) return ConcatenationTable(table, blocks) def drop(self, columns, *args, **kwargs): """ Drop one or more columns and return a new table. Args: columns (`List[str]`): List of field names referencing existing columns. Raises: `KeyError` : if any of the passed columns name are not existing. Returns: `datasets.table.Table`: New table without the columns. """ table = self.table.drop(columns, *args, **kwargs) blocks = [] for tables in self.blocks: blocks.append([t.drop([c for c in columns if c in t.column_names], *args, **kwargs) for t in tables]) return ConcatenationTable(table, blocks) def select(self, columns, *args, **kwargs): """ Select columns of the table. Returns a new table with the specified columns, and metadata preserved. Args: columns (:obj:`Union[List[str], List[int]]`): The column names or integer indices to select. Returns: :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved. 
""" table = self.table.select(columns, *args, **kwargs) blocks = [] for tables in self.blocks: blocks.append([t.select([c for c in columns if c in t.column_names], *args, **kwargs) for t in tables]) return ConcatenationTable(table, blocks) def concat_tables(tables: List[Table], axis: int = 0) -> Table: """ Concatenate tables. Args: tables (list of `Table`): List of tables to be concatenated. axis (`{0, 1}`, defaults to `0`, meaning over rows): Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns (horizontally). <Added version="1.6.0"/> Returns: `datasets.table.Table`: If the number of input tables is > 1, then the returned table is a `datasets.table.ConcatenationTable`. Otherwise if there's only one table, it is returned as is. """ tables = list(tables) if len(tables) == 1: return tables[0] return ConcatenationTable.from_tables(tables, axis=axis) def list_table_cache_files(table: Table) -> List[str]: """ Get the cache files that are loaded by the table. Cache file are used when parts of the table come from the disk via memory mapping. Returns: `List[str]`: A list of paths to the cache files loaded by the table. """ if isinstance(table, ConcatenationTable): cache_files = [] for subtables in table.blocks: for subtable in subtables: cache_files += list_table_cache_files(subtable) return cache_files elif isinstance(table, MemoryMappedTable): return [table.path] else: return [] def _wrap_for_chunked_arrays(func): """Apply the function on each chunk of a `pyarrow.ChunkedArray`, or on the array directly""" def wrapper(array, *args, **kwargs): if isinstance(array, pa.ChunkedArray): return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) else: return func(array, *args, **kwargs) return wrapper def _are_list_values_of_length(array: pa.ListArray, length: int) -> bool: """Check if all the sub-lists of a `pa.ListArray` have the specified length.""" return pc.all(pc.equal(array.value_lengths(), length)).as_py() or array.null_count == len(array) def _combine_list_array_offsets_with_mask(array: pa.ListArray) -> pa.Array: """Add the null bitmap to the offsets of a `pa.ListArray`.""" offsets = array.offsets if array.null_count > 0: offsets = pa.concat_arrays( [ pc.replace_with_mask(offsets[:-1], array.is_null(), pa.nulls(len(array), pa.int32())), offsets[-1:], ] ) return offsets def _storage_type(type: pa.DataType) -> pa.DataType: """Convert a (possibly nested) `pa.ExtensionType` to its storage type.""" if isinstance(type, pa.ExtensionType): return _storage_type(type.storage_type) elif isinstance(type, pa.StructType): return pa.struct([pa.field(field.name, _storage_type(field.type)) for field in type]) elif isinstance(type, pa.ListType): return pa.list_(_storage_type(type.value_type)) elif isinstance(type, pa.FixedSizeListType): return pa.list_(_storage_type(type.value_type), type.list_size) return type @_wrap_for_chunked_arrays def array_cast(array: pa.Array, pa_type: pa.DataType, allow_number_to_str=True): """Improved version of `pa.Array.cast` It supports casting `pa.StructArray` objects to re-order the fields. It also let you control certain aspects of the casting, e.g. whether to disable numbers (`floats` or `ints`) to strings. Args: array (`pa.Array`): PyArrow array to cast pa_type (`pa.DataType`): Target PyArrow type allow_number_to_str (`bool`, defaults to `True`): Whether to allow casting numbers to strings. Defaults to `True`. 
Raises: `pa.ArrowInvalidError`: if the arrow data casting fails `TypeError`: if the target type is not supported, e.g. - if a field is missing - if casting from numbers to strings and `allow_number_to_str` is `False` Returns: `pa.Array`: the casted array """ _c = partial(array_cast, allow_number_to_str=allow_number_to_str) if isinstance(array, pa.ExtensionArray): array = array.storage if isinstance(pa_type, pa.ExtensionType): return pa_type.wrap_array(_c(array, pa_type.storage_type)) elif array.type == pa_type: return array elif pa.types.is_struct(array.type): if pa.types.is_struct(pa_type) and ({field.name for field in pa_type} == {field.name for field in array.type}): if array.type.num_fields == 0: return array arrays = [_c(array.field(field.name), field.type) for field in pa_type] return pa.StructArray.from_arrays(arrays, fields=list(pa_type), mask=array.is_null()) elif pa.types.is_list(array.type): if pa.types.is_fixed_size_list(pa_type): if _are_list_values_of_length(array, pa_type.list_size): if array.null_count > 0: # Ensure each null value in the array translates to [null] * pa_type.list_size in the array's values array array_type = array.type storage_type = _storage_type(array_type) if array_type != storage_type: # Temporarily convert to the storage type to support extension types in the slice operation array = _c(array, storage_type) array = pc.list_slice(array, 0, pa_type.list_size, return_fixed_size_list=True) array = _c(array, array_type) else: array = pc.list_slice(array, 0, pa_type.list_size, return_fixed_size_list=True) array_values = array.values if config.PYARROW_VERSION.major < 15: return pa.Array.from_buffers( pa_type, len(array), [array.is_valid().buffers()[1]], children=[_c(array_values, pa_type.value_type)], ) else: return pa.FixedSizeListArray.from_arrays( _c(array_values, pa_type.value_type), pa_type.list_size, mask=array.is_null() ) else: array_values = array.values[ array.offset * pa_type.list_size : (array.offset + len(array)) * pa_type.list_size ] return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size) elif pa.types.is_list(pa_type): # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError array_offsets = _combine_list_array_offsets_with_mask(array) return pa.ListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type)) elif pa.types.is_fixed_size_list(array.type): if pa.types.is_fixed_size_list(pa_type): if pa_type.list_size == array.type.list_size: array_values = array.values[ array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size ] if config.PYARROW_VERSION.major < 15: return pa.Array.from_buffers( pa_type, len(array), [array.is_valid().buffers()[1]], children=[_c(array_values, pa_type.value_type)], ) else: return pa.FixedSizeListArray.from_arrays( _c(array_values, pa_type.value_type), pa_type.list_size, mask=array.is_null() ) elif pa.types.is_list(pa_type): array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size return pa.ListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type), mask=array.is_null()) else: if ( not allow_number_to_str and pa.types.is_string(pa_type) and (pa.types.is_floating(array.type) or pa.types.is_integer(array.type)) ): raise TypeError( f"Couldn't cast array of type {array.type} to {pa_type} since allow_number_to_str is set to {allow_number_to_str}" ) if pa.types.is_null(pa_type) and not pa.types.is_null(array.type): raise
TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") return array.cast(pa_type) raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") @_wrap_for_chunked_arrays def cast_array_to_feature(array: pa.Array, feature: "FeatureType", allow_number_to_str=True): """Cast an array to the arrow type that corresponds to the requested feature type. For custom features like [`Audio`] or [`Image`], it takes into account the "cast_storage" methods they defined to enable casting from other arrow types. Args: array (`pa.Array`): The PyArrow array to cast. feature (`datasets.features.FeatureType`): The target feature type. allow_number_to_str (`bool`, defaults to `True`): Whether to allow casting numbers to strings. Defaults to `True`. Raises: `pa.ArrowInvalidError`: if the arrow data casting fails `TypeError`: if the target type is not supported according, e.g. - if a field is missing - if casting from numbers to strings and `allow_number_to_str` is `False` Returns: array (`pyarrow.Array`): the casted array """ from .features.features import Sequence, get_nested_type _c = partial(cast_array_to_feature, allow_number_to_str=allow_number_to_str) if isinstance(array, pa.ExtensionArray): array = array.storage if hasattr(feature, "cast_storage"): return feature.cast_storage(array) elif pa.types.is_struct(array.type): # feature must be a dict or Sequence(subfeatures_dict) if isinstance(feature, Sequence) and isinstance(feature.feature, dict): feature = { name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items() } if isinstance(feature, dict) and {field.name for field in array.type} == set(feature): if array.type.num_fields == 0: return array arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] return pa.StructArray.from_arrays(arrays, names=list(feature), mask=array.is_null()) elif pa.types.is_list(array.type): # feature must be either [subfeature] or Sequence(subfeature) if isinstance(feature, list): casted_array_values = _c(array.values, feature[0]) if casted_array_values.type == array.values.type: return array else: # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError array_offsets = _combine_list_array_offsets_with_mask(array) return pa.ListArray.from_arrays(array_offsets, casted_array_values) elif isinstance(feature, Sequence): if feature.length > -1: if _are_list_values_of_length(array, feature.length): if array.null_count > 0: # Ensure each null value in the array translates to [null] * pa_type.list_size in the array's values array array_type = array.type storage_type = _storage_type(array_type) if array_type != storage_type: # Temporarily convert to the storage type to support extension types in the slice operation array = array_cast(array, storage_type, allow_number_to_str=allow_number_to_str) array = pc.list_slice(array, 0, feature.length, return_fixed_size_list=True) array = array_cast(array, array_type, allow_number_to_str=allow_number_to_str) else: array = pc.list_slice(array, 0, feature.length, return_fixed_size_list=True) array_values = array.values casted_array_values = _c(array_values, feature.feature) if config.PYARROW_VERSION.major < 15: return pa.Array.from_buffers( pa.list_(casted_array_values.type, feature.length), len(array), [array.is_valid().buffers()[1]], children=[casted_array_values], ) else: return pa.FixedSizeListArray.from_arrays( casted_array_values, feature.length, mask=array.is_null() ) else: array_values 
= array.values[ array.offset * feature.length : (array.offset + len(array)) * feature.length ] return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length) else: casted_array_values = _c(array.values, feature.feature) if casted_array_values.type == array.values.type: return array else: # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError array_offsets = _combine_list_array_offsets_with_mask(array) return pa.ListArray.from_arrays(array_offsets, casted_array_values) elif pa.types.is_fixed_size_list(array.type): # feature must be either [subfeature] or Sequence(subfeature) if isinstance(feature, list): array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size return pa.ListArray.from_arrays(array_offsets, _c(array.values, feature[0]), mask=array.is_null()) elif isinstance(feature, Sequence): if feature.length > -1: if feature.length == array.type.list_size: array_values = array.values[ array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size ] casted_array_values = _c(array_values, feature.feature) if config.PYARROW_VERSION.major < 15: return pa.Array.from_buffers( pa.list_(casted_array_values.type, feature.length), len(array), [array.is_valid().buffers()[1]], children=[casted_array_values], ) else: return pa.FixedSizeListArray.from_arrays( casted_array_values, feature.length, mask=array.is_null() ) else: array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size return pa.ListArray.from_arrays(array_offsets, _c(array.values, feature.feature), mask=array.is_null()) if pa.types.is_null(array.type): return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str) elif not isinstance(feature, (Sequence, dict, list, tuple)): return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") @_wrap_for_chunked_arrays def embed_array_storage(array: pa.Array, feature: "FeatureType"): """Embed data into an arrays's storage. For custom features like Audio or Image, it takes into account the "embed_storage" methods they define to embed external data (e.g. an image file) into an array. <Added version="2.4.0"/> Args: array (`pa.Array`): The PyArrow array in which to embed data. feature (`datasets.features.FeatureType`): Array features. Raises: `TypeError`: if the target type is not supported according, e.g. 
- if a field is missing Returns: array (`pyarrow.Array`): the casted array """ from .features import Sequence _e = embed_array_storage if isinstance(array, pa.ExtensionArray): array = array.storage if hasattr(feature, "embed_storage"): return feature.embed_storage(array) elif pa.types.is_struct(array.type): # feature must be a dict or Sequence(subfeatures_dict) if isinstance(feature, Sequence) and isinstance(feature.feature, dict): feature = { name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items() } if isinstance(feature, dict): arrays = [_e(array.field(name), subfeature) for name, subfeature in feature.items()] return pa.StructArray.from_arrays(arrays, names=list(feature), mask=array.is_null()) elif pa.types.is_list(array.type): # feature must be either [subfeature] or Sequence(subfeature) # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError array_offsets = _combine_list_array_offsets_with_mask(array) if isinstance(feature, list): return pa.ListArray.from_arrays(array_offsets, _e(array.values, feature[0])) if isinstance(feature, Sequence) and feature.length == -1: return pa.ListArray.from_arrays(array_offsets, _e(array.values, feature.feature)) elif pa.types.is_fixed_size_list(array.type): # feature must be Sequence(subfeature) if isinstance(feature, Sequence) and feature.length > -1: array_values = array.values[ array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size ] embedded_array_values = _e(array_values, feature.feature) if config.PYARROW_VERSION.major < 15: return pa.Array.from_buffers( pa.list_(array_values.type, feature.length), len(array), [array.is_valid().buffers()[1]], children=[embedded_array_values], ) else: return pa.FixedSizeListArray.from_arrays(embedded_array_values, feature.length, mask=array.is_null()) if not isinstance(feature, (Sequence, dict, list, tuple)): return array raise TypeError(f"Couldn't embed array of type\n{array.type}\nwith\n{feature}") class CastError(ValueError): """When it's not possible to cast an Arrow table to a specific schema or set of features""" def __init__(self, *args, table_column_names: List[str], requested_column_names: List[str]) -> None: super().__init__(*args) self.table_column_names = table_column_names self.requested_column_names = requested_column_names def __reduce__(self): # Fix unpickling: TypeError: __init__() missing 2 required keyword-only arguments: 'table_column_names' and 'requested_column_names' return partial( CastError, table_column_names=self.table_column_names, requested_column_names=self.requested_column_names ), () def details(self): new_columns = set(self.table_column_names) - set(self.requested_column_names) missing_columns = set(self.requested_column_names) - set(self.table_column_names) if new_columns and missing_columns: return f"there are {len(new_columns)} new columns ({', '.join(new_columns)}) and {len(missing_columns)} missing columns ({', '.join(missing_columns)})." elif new_columns: return f"there are {len(new_columns)} new columns ({new_columns})" else: return f"there are {len(missing_columns)} missing columns ({missing_columns})" def cast_table_to_features(table: pa.Table, features: "Features"): """Cast a table to the arrow schema that corresponds to the requested features. Args: table (`pyarrow.Table`): PyArrow table to cast. features ([`Features`]): Target features. 
Returns: table (`pyarrow.Table`): the casted table """ if sorted(table.column_names) != sorted(features): raise CastError( f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match", table_column_names=table.column_names, requested_column_names=list(features), ) arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] return pa.Table.from_arrays(arrays, schema=features.arrow_schema) def cast_table_to_schema(table: pa.Table, schema: pa.Schema): """ Cast a table to the arrow schema. Different from `cast_table_to_features`, this method can preserve nullability. Args: table (`pa.Table`): PyArrow table to cast. schema (`pa.Schema`): Target PyArrow schema. Returns: `pa.Table`: the casted table """ from .features import Features features = Features.from_arrow_schema(schema) if sorted(table.column_names) != sorted(features): raise CastError( f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match", table_column_names=table.column_names, requested_column_names=list(features), ) arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] return pa.Table.from_arrays(arrays, schema=schema) def embed_table_storage(table: pa.Table): """ Embed external data into a table's storage. <Added version="2.4.0"/> Args: table (`pyarrow.Table`): PyArrow table in which to embed data. Returns: table (`pyarrow.Table`): the table with embedded data """ from .features.features import Features, require_storage_embed features = Features.from_arrow_schema(table.schema) arrays = [ embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name] for name, feature in features.items() ] return pa.Table.from_arrays(arrays, schema=features.arrow_schema) def table_cast(table: pa.Table, schema: pa.Schema): """ Improved version of `pa.Table.cast`. It supports casting to feature types stored in the schema metadata. Args: table (`pyarrow.Table`): PyArrow table to cast. schema (`pyarrow.Schema`): Target PyArrow schema. Returns: table (`pyarrow.Table`): the casted table """ if table.schema != schema: return cast_table_to_schema(table, schema) elif table.schema.metadata != schema.metadata: return table.replace_schema_metadata(schema.metadata) else: return table def table_flatten(table: pa.Table): """ Improved version of `pa.Table.flatten`. It behaves like `pa.Table.flatten` in the sense that it does a one-step flatten of the columns with a struct type into one column per struct field, but it updates the metadata and skips decodable features unless the `decode` attribute of these features is set to `False`. Args: table (`pa.Table`): PyArrow table to flatten.
Returns: `Table`: the flattened table """ from .features import Features features = Features.from_arrow_schema(table.schema) if any(hasattr(subfeature, "flatten") and subfeature.flatten() == subfeature for subfeature in features.values()): flat_arrays = [] flat_column_names = [] for field in table.schema: array = table.column(field.name) subfeature = features[field.name] if pa.types.is_struct(field.type) and ( not hasattr(subfeature, "flatten") or subfeature.flatten() != subfeature ): flat_arrays.extend(array.flatten()) flat_column_names.extend([f"{field.name}.{subfield.name}" for subfield in field.type]) else: flat_arrays.append(array) flat_column_names.append(field.name) flat_table = pa.Table.from_arrays( flat_arrays, names=flat_column_names, ) else: flat_table = table.flatten() # Preserve complex types in the metadata flat_features = features.flatten(max_depth=2) flat_features = Features({column_name: flat_features[column_name] for column_name in flat_table.column_names}) return flat_table.replace_schema_metadata(flat_features.arrow_schema.metadata) def table_visitor(table: pa.Table, function: Callable[[pa.Array], None]): """Visit all arrays in a table and apply a function to them. Args: table (`pyarrow.Table`): PyArrow table to visit. function (`Callable[[pa.Array], None]`): Function to apply to each array. """ from .features import Features, Sequence features = Features.from_arrow_schema(table.schema) def _visit(array, feature): if isinstance(array, pa.ChunkedArray): for chunk in array.chunks: _visit(chunk, feature) else: if isinstance(array, pa.ExtensionArray): array = array.storage function(array, feature) if pa.types.is_struct(array.type) and not hasattr(feature, "cast_storage"): if isinstance(feature, Sequence) and isinstance(feature.feature, dict): feature = { name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items() } for name, subfeature in feature.items(): _visit(array.field(name), subfeature) elif pa.types.is_list(array.type): if isinstance(feature, list): _visit(array.values, feature[0]) elif isinstance(feature, Sequence): _visit(array.values, feature.feature) for name, feature in features.items(): _visit(table[name], feature) def table_iter(table: Table, batch_size: int, drop_last_batch=False) -> Iterator[pa.Table]: """Iterate over sub-tables of size `batch_size`. Args: table (`pyarrow.Table`): PyArrow table to iterate over. batch_size (`int`): Size of each sub-table to yield. drop_last_batch (`bool`, defaults to `False`): Drop the last batch if it is smaller than `batch_size`. """ chunks_buffer = [] chunks_buffer_size = 0 for chunk in table.to_reader(max_chunksize=batch_size): if len(chunk) == 0: continue elif chunks_buffer_size + len(chunk) < batch_size: chunks_buffer.append(chunk) chunks_buffer_size += len(chunk) continue elif chunks_buffer_size + len(chunk) == batch_size: chunks_buffer.append(chunk) yield pa.Table.from_batches(chunks_buffer) chunks_buffer = [] chunks_buffer_size = 0 else: cropped_chunk_length = batch_size - chunks_buffer_size chunks_buffer.append(chunk.slice(0, cropped_chunk_length)) yield pa.Table.from_batches(chunks_buffer) chunks_buffer = [chunk.slice(cropped_chunk_length, len(chunk) - cropped_chunk_length)] chunks_buffer_size = len(chunk) - cropped_chunk_length if not drop_last_batch and chunks_buffer: yield pa.Table.from_batches(chunks_buffer)
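# --- Editor's note: a minimal usage sketch, not part of the original file ---
# The helpers above are module-level utilities of `datasets.table`. The snippet below
# illustrates `array_cast` and `table_iter` under the assumption that they are imported
# from `datasets.table` and that a pyarrow version providing `Table.to_reader` is installed.
import pyarrow as pa
from datasets.table import array_cast, table_iter

# `array_cast` falls back to `pa.Array.cast` for non-nested types, but can refuse
# number-to-string casts when `allow_number_to_str=False`, as documented above.
ints = pa.array([1, 2, 3])
as_strings = array_cast(ints, pa.string())  # -> ["1", "2", "3"]
try:
    array_cast(ints, pa.string(), allow_number_to_str=False)
except TypeError:
    pass  # refused, as described in the docstring

# `table_iter` yields sub-tables of `batch_size` rows without materializing everything at once.
table = pa.table({"value": list(range(10))})
for subtable in table_iter(table, batch_size=4):
    print(len(subtable))  # prints 4, 4, 2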
datasets/src/datasets/table.py/0
{ "file_path": "datasets/src/datasets/table.py", "repo_id": "datasets", "token_count": 41071 }
71
from typing import Callable def is_documented_by(function_with_docstring: Callable): """Decorator to share docstrings across common functions. Args: function_with_docstring (`Callable`): Name of the function with the docstring. """ def wrapper(target_function): target_function.__doc__ = function_with_docstring.__doc__ return target_function return wrapper
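# --- Editor's note: a minimal usage sketch, not part of the original file ---
# `is_documented_by` copies the docstring of a reference function onto another function.
# The function names below are made up for illustration.
def reference(x):
    """Return the square of `x`."""
    return x * x

@is_documented_by(reference)
def reference_fast(x):
    return x * x

assert reference_fast.__doc__ == reference.__doc__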
datasets/src/datasets/utils/doc_utils.py/0
{ "file_path": "datasets/src/datasets/utils/doc_utils.py", "repo_id": "datasets", "token_count": 137 }
72
{ "monolingual": "contains a single language", "multilingual": "contains multiple languages", "translation": "contains translated or aligned text", "other": "other type of language distribution" }
datasets/src/datasets/utils/resources/multilingualities.json/0
{ "file_path": "datasets/src/datasets/utils/resources/multilingualities.json", "repo_id": "datasets", "token_count": 55 }
73
# isort: skip_file # This is the module that test_patching.py uses to test patch_submodule() import os # noqa: F401 - this is just for tests import os as renamed_os # noqa: F401 - this is just for tests from os import path # noqa: F401 - this is just for tests from os import path as renamed_path # noqa: F401 - this is just for tests from os.path import join # noqa: F401 - this is just for tests from os.path import join as renamed_join # noqa: F401 - this is just for tests open = open # noqa we just need to have a builtin inside this module to test it properly
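# --- Editor's note: a minimal sketch, not part of the original file ---
# The aliases above all refer to the same underlying objects, so a submodule patch must
# reach every import style (module, renamed module, attribute, renamed attribute).
# Assuming this module is importable as `_test_patching` (e.g. with the tests directory on sys.path):
import _test_patching

assert _test_patching.os is _test_patching.renamed_os
assert _test_patching.path is _test_patching.renamed_path
assert _test_patching.join is _test_patching.renamed_join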
datasets/tests/_test_patching.py/0
{ "file_path": "datasets/tests/_test_patching.py", "repo_id": "datasets", "token_count": 175 }
74
import datetime from typing import List, Tuple from unittest import TestCase from unittest.mock import patch import numpy as np import pandas as pd import pyarrow as pa import pytest from datasets import Array2D from datasets.arrow_dataset import Dataset from datasets.features import Audio, ClassLabel, Features, Image, Sequence, Value from datasets.features.features import ( _align_features, _arrow_to_datasets_dtype, _cast_to_python_objects, _check_if_features_can_be_aligned, cast_to_python_objects, encode_nested_example, generate_from_dict, string_to_arrow, ) from datasets.features.translation import Translation, TranslationVariableLanguages from datasets.info import DatasetInfo from datasets.utils.py_utils import asdict from ..utils import require_jax, require_tf, require_torch class FeaturesTest(TestCase): def test_from_arrow_schema_simple(self): data = {"a": [{"b": {"c": "text"}}] * 10, "foo": [1] * 10} original_features = Features({"a": {"b": {"c": Value("string")}}, "foo": Value("int64")}) dset = Dataset.from_dict(data, features=original_features) new_features = dset.features new_dset = Dataset.from_dict(data, features=new_features) self.assertEqual(original_features.type, new_features.type) self.assertDictEqual(dset[0], new_dset[0]) self.assertDictEqual(dset[:], new_dset[:]) def test_from_arrow_schema_with_sequence(self): data = {"a": [{"b": {"c": ["text"]}}] * 10, "foo": [1] * 10} original_features = Features({"a": {"b": Sequence({"c": Value("string")})}, "foo": Value("int64")}) dset = Dataset.from_dict(data, features=original_features) new_features = dset.features new_dset = Dataset.from_dict(data, features=new_features) self.assertEqual(original_features.type, new_features.type) self.assertDictEqual(dset[0], new_dset[0]) self.assertDictEqual(dset[:], new_dset[:]) def test_string_to_arrow_bijection_for_primitive_types(self): supported_pyarrow_datatypes = [ pa.time32("s"), pa.time64("us"), pa.timestamp("s"), pa.timestamp("ns", tz="America/New_York"), pa.date32(), pa.date64(), pa.duration("s"), pa.decimal128(10, 2), pa.decimal256(40, -3), pa.string(), pa.int32(), pa.float64(), pa.array([datetime.time(1, 1, 1)]).type, # arrow type: DataType(time64[us]) ] for dt in supported_pyarrow_datatypes: self.assertEqual(dt, string_to_arrow(_arrow_to_datasets_dtype(dt))) unsupported_pyarrow_datatypes = [pa.list_(pa.float64())] for dt in unsupported_pyarrow_datatypes: with self.assertRaises(ValueError): string_to_arrow(_arrow_to_datasets_dtype(dt)) supported_datasets_dtypes = [ "time32[s]", "timestamp[ns]", "timestamp[ns, tz=+07:30]", "duration[us]", "decimal128(30, -4)", "int32", "float64", ] for sdt in supported_datasets_dtypes: self.assertEqual(sdt, _arrow_to_datasets_dtype(string_to_arrow(sdt))) unsupported_datasets_dtypes = [ "time32[ns]", "timestamp[blob]", "timestamp[[ns]]", "timestamp[ns, tz=[ns]]", "duration[[us]]", "decimal20(30, -4)", "int", ] for sdt in unsupported_datasets_dtypes: with self.assertRaises(ValueError): string_to_arrow(sdt) def test_feature_named_type(self): """reference: issue #1110""" features = Features({"_type": Value("string")}) ds_info = DatasetInfo(features=features) reloaded_features = Features.from_dict(asdict(ds_info)["features"]) assert features == reloaded_features def test_feature_named_self_as_kwarg(self): """reference: issue #5641""" features = Features(self=Value("string")) ds_info = DatasetInfo(features=features) reloaded_features = Features.from_dict(asdict(ds_info)["features"]) assert features == reloaded_features def 
test_class_label_feature_with_no_labels(self): """reference: issue #4681""" features = Features({"label": ClassLabel(names=[])}) ds_info = DatasetInfo(features=features) reloaded_features = Features.from_dict(asdict(ds_info)["features"]) assert features == reloaded_features def test_reorder_fields_as(self): features = Features( { "id": Value("string"), "document": { "title": Value("string"), "url": Value("string"), "html": Value("string"), "tokens": Sequence({"token": Value("string"), "is_html": Value("bool")}), }, "question": { "text": Value("string"), "tokens": Sequence(Value("string")), }, "annotations": Sequence( { "id": Value("string"), "long_answer": { "start_token": Value("int64"), "end_token": Value("int64"), "start_byte": Value("int64"), "end_byte": Value("int64"), }, "short_answers": Sequence( { "start_token": Value("int64"), "end_token": Value("int64"), "start_byte": Value("int64"), "end_byte": Value("int64"), "text": Value("string"), } ), "yes_no_answer": ClassLabel(names=["NO", "YES"]), } ), } ) other = Features( # same but with [] instead of sequences, and with a shuffled fields order { "id": Value("string"), "document": { "tokens": Sequence({"token": Value("string"), "is_html": Value("bool")}), "title": Value("string"), "url": Value("string"), "html": Value("string"), }, "question": { "text": Value("string"), "tokens": [Value("string")], }, "annotations": { "yes_no_answer": [ClassLabel(names=["NO", "YES"])], "id": [Value("string")], "long_answer": [ { "end_byte": Value("int64"), "start_token": Value("int64"), "end_token": Value("int64"), "start_byte": Value("int64"), } ], "short_answers": [ Sequence( { "text": Value("string"), "start_token": Value("int64"), "end_token": Value("int64"), "start_byte": Value("int64"), "end_byte": Value("int64"), } ) ], }, } ) expected = Features( { "id": Value("string"), "document": { "tokens": Sequence({"token": Value("string"), "is_html": Value("bool")}), "title": Value("string"), "url": Value("string"), "html": Value("string"), }, "question": { "text": Value("string"), "tokens": Sequence(Value("string")), }, "annotations": Sequence( { "yes_no_answer": ClassLabel(names=["NO", "YES"]), "id": Value("string"), "long_answer": { "end_byte": Value("int64"), "start_token": Value("int64"), "end_token": Value("int64"), "start_byte": Value("int64"), }, "short_answers": Sequence( { "text": Value("string"), "start_token": Value("int64"), "end_token": Value("int64"), "start_byte": Value("int64"), "end_byte": Value("int64"), } ), } ), } ) reordered_features = features.reorder_fields_as(other) self.assertDictEqual(reordered_features, expected) self.assertEqual(reordered_features.type, other.type) self.assertEqual(reordered_features.type, expected.type) self.assertNotEqual(reordered_features.type, features.type) def test_flatten(self): features = Features({"foo": {"bar1": Value("int32"), "bar2": {"foobar": Value("string")}}}) _features = features.copy() flattened_features = features.flatten() assert flattened_features == {"foo.bar1": Value("int32"), "foo.bar2.foobar": Value("string")} assert features == _features, "calling flatten shouldn't alter the current features" def test_flatten_with_sequence(self): features = Features({"foo": Sequence({"bar": {"my_value": Value("int32")}})}) _features = features.copy() flattened_features = features.flatten() assert flattened_features == {"foo.bar": [{"my_value": Value("int32")}]} assert features == _features, "calling flatten shouldn't alter the current features" def test_features_dicts_are_synced(self): def 
assert_features_dicts_are_synced(features: Features): assert ( hasattr(features, "_column_requires_decoding") and features.keys() == features._column_requires_decoding.keys() ) features = Features({"foo": Sequence({"bar": {"my_value": Value("int32")}})}) assert_features_dicts_are_synced(features) features["barfoo"] = Image() assert_features_dicts_are_synced(features) del features["barfoo"] assert_features_dicts_are_synced(features) features.update({"foobar": Value("string")}) assert_features_dicts_are_synced(features) features.pop("foobar") assert_features_dicts_are_synced(features) features.popitem() assert_features_dicts_are_synced(features) features.setdefault("xyz", Value("bool")) assert_features_dicts_are_synced(features) features.clear() assert_features_dicts_are_synced(features) def test_classlabel_init(tmp_path_factory): names = ["negative", "positive"] names_file = str(tmp_path_factory.mktemp("features") / "labels.txt") with open(names_file, "w", encoding="utf-8") as f: f.write("\n".join(names)) classlabel = ClassLabel(names=names) assert classlabel.names == names and classlabel.num_classes == len(names) classlabel = ClassLabel(names_file=names_file) assert classlabel.names == names and classlabel.num_classes == len(names) classlabel = ClassLabel(num_classes=len(names), names=names) assert classlabel.names == names and classlabel.num_classes == len(names) classlabel = ClassLabel(num_classes=len(names)) assert classlabel.names == [str(i) for i in range(len(names))] and classlabel.num_classes == len(names) with pytest.raises(ValueError): classlabel = ClassLabel(num_classes=len(names) + 1, names=names) with pytest.raises(ValueError): classlabel = ClassLabel(names=names, names_file=names_file) with pytest.raises(ValueError): classlabel = ClassLabel() with pytest.raises(TypeError): classlabel = ClassLabel(names=np.array(names)) def test_classlabel_str2int(): names = ["negative", "positive"] classlabel = ClassLabel(names=names) for label in names: assert classlabel.str2int(label) == names.index(label) with pytest.raises(ValueError): classlabel.str2int("__bad_label_name__") with pytest.raises(ValueError): classlabel.str2int(1) with pytest.raises(ValueError): classlabel.str2int(None) def test_classlabel_int2str(): names = ["negative", "positive"] classlabel = ClassLabel(names=names) for i in range(len(names)): assert classlabel.int2str(i) == names[i] with pytest.raises(ValueError): classlabel.int2str(len(names)) with pytest.raises(ValueError): classlabel.int2str(-1) with pytest.raises(ValueError): classlabel.int2str(None) def test_classlabel_cast_storage(): names = ["negative", "positive"] classlabel = ClassLabel(names=names) # from integers arr = pa.array([0, 1, -1, -100], type=pa.int64()) result = classlabel.cast_storage(arr) assert result.type == pa.int64() assert result.to_pylist() == [0, 1, -1, -100] arr = pa.array([0, 1, -1, -100], type=pa.int32()) result = classlabel.cast_storage(arr) assert result.type == pa.int64() assert result.to_pylist() == [0, 1, -1, -100] arr = pa.array([3]) with pytest.raises(ValueError): classlabel.cast_storage(arr) # from strings arr = pa.array(["negative", "positive"]) result = classlabel.cast_storage(arr) assert result.type == pa.int64() assert result.to_pylist() == [0, 1] arr = pa.array(["__label_that_doesnt_exist__"]) with pytest.raises(ValueError): classlabel.cast_storage(arr) # from nulls arr = pa.array([None]) result = classlabel.cast_storage(arr) assert result.type == pa.int64() assert result.to_pylist() == [None] # from empty arr = pa.array([], 
pa.int64()) result = classlabel.cast_storage(arr) assert result.type == pa.int64() assert result.to_pylist() == [] arr = pa.array([], pa.string()) result = classlabel.cast_storage(arr) assert result.type == pa.int64() assert result.to_pylist() == [] @pytest.mark.parametrize("class_label_arg", ["names", "names_file"]) def test_class_label_to_and_from_dict(class_label_arg, tmp_path_factory): names = ["negative", "positive"] names_file = str(tmp_path_factory.mktemp("features") / "labels.txt") with open(names_file, "w", encoding="utf-8") as f: f.write("\n".join(names)) if class_label_arg == "names": class_label = ClassLabel(names=names) elif class_label_arg == "names_file": class_label = ClassLabel(names_file=names_file) generated_class_label = generate_from_dict(asdict(class_label)) assert generated_class_label == class_label @pytest.mark.parametrize("inner_type", [Value("int32"), {"subcolumn": Value("int32")}]) def test_encode_nested_example_sequence_with_none(inner_type): schema = Sequence(inner_type) obj = None result = encode_nested_example(schema, obj) assert result is None def test_encode_batch_with_example_with_empty_first_elem(): features = Features( { "x": Sequence(Sequence(ClassLabel(names=["a", "b"]))), } ) encoded_batch = features.encode_batch( { "x": [ [["a"], ["b"]], [[], ["b"]], ] } ) assert encoded_batch == {"x": [[[0], [1]], [[], [1]]]} @pytest.mark.parametrize( "feature", [ Value("int32"), ClassLabel(num_classes=2), Translation(languages=["en", "fr"]), TranslationVariableLanguages(languages=["en", "fr"]), ], ) def test_dataset_feature_with_none(feature): data = {"col": [None]} features = Features({"col": feature}) dset = Dataset.from_dict(data, features=features) item = dset[0] assert item.keys() == {"col"} assert item["col"] is None batch = dset[:1] assert len(batch) == 1 assert batch.keys() == {"col"} assert isinstance(batch["col"], list) and all(item is None for item in batch["col"]) column = dset["col"] assert len(column) == 1 assert isinstance(column, list) and all(item is None for item in column) # nested tests data = {"col": [[None]]} features = Features({"col": Sequence(feature)}) dset = Dataset.from_dict(data, features=features) item = dset[0] assert item.keys() == {"col"} assert all(i is None for i in item["col"]) data = {"nested": [{"col": None}]} features = Features({"nested": {"col": feature}}) dset = Dataset.from_dict(data, features=features) item = dset[0] assert item.keys() == {"nested"} assert item["nested"].keys() == {"col"} assert item["nested"]["col"] is None def iternumpy(key1, value1, value2): if value1.dtype != value2.dtype: # check only for dtype raise AssertionError( f"dtype of '{key1}' key for casted object: {value1.dtype} and expected object: {value2.dtype} not matching" ) def dict_diff(d1: dict, d2: dict): # check if 2 dictionaries are equal np.testing.assert_equal(d1, d2) # sanity check if dict values are equal or not for (k1, v1), (k2, v2) in zip(d1.items(), d2.items()): # check if their values have same dtype or not if isinstance(v1, dict): # nested dictionary case dict_diff(v1, v2) elif isinstance(v1, np.ndarray): # checks if dtype and value of np.ndarray is equal iternumpy(k1, v1, v2) elif isinstance(v1, list): for element1, element2 in zip(v1, v2): # iterates over all elements of list if isinstance(element1, dict): dict_diff(element1, element2) elif isinstance(element1, np.ndarray): iternumpy(k1, element1, element2) class CastToPythonObjectsTest(TestCase): def test_cast_to_python_objects_list(self): obj = {"col_1": [{"vec": [1, 2, 3], "txt": 
"foo"}] * 3, "col_2": [[1, 2], [3, 4], [5, 6]]} expected_obj = {"col_1": [{"vec": [1, 2, 3], "txt": "foo"}] * 3, "col_2": [[1, 2], [3, 4], [5, 6]]} casted_obj = cast_to_python_objects(obj) self.assertDictEqual(casted_obj, expected_obj) def test_cast_to_python_objects_tuple(self): obj = {"col_1": [{"vec": (1, 2, 3), "txt": "foo"}] * 3, "col_2": [(1, 2), (3, 4), (5, 6)]} expected_obj = {"col_1": [{"vec": (1, 2, 3), "txt": "foo"}] * 3, "col_2": [(1, 2), (3, 4), (5, 6)]} casted_obj = cast_to_python_objects(obj) self.assertDictEqual(casted_obj, expected_obj) def test_cast_to_python_or_numpy(self): obj = {"col_1": [{"vec": np.arange(1, 4), "txt": "foo"}] * 3, "col_2": np.arange(1, 7).reshape(3, 2)} expected_obj = { "col_1": [{"vec": np.array([1, 2, 3]), "txt": "foo"}] * 3, "col_2": np.array([[1, 2], [3, 4], [5, 6]]), } casted_obj = cast_to_python_objects(obj) dict_diff(casted_obj, expected_obj) def test_cast_to_python_objects_series(self): obj = { "col_1": pd.Series([{"vec": [1, 2, 3], "txt": "foo"}] * 3), "col_2": pd.Series([[1, 2], [3, 4], [5, 6]]), } expected_obj = {"col_1": [{"vec": [1, 2, 3], "txt": "foo"}] * 3, "col_2": [[1, 2], [3, 4], [5, 6]]} casted_obj = cast_to_python_objects(obj) self.assertDictEqual(casted_obj, expected_obj) def test_cast_to_python_objects_dataframe(self): obj = pd.DataFrame({"col_1": [{"vec": [1, 2, 3], "txt": "foo"}] * 3, "col_2": [[1, 2], [3, 4], [5, 6]]}) expected_obj = {"col_1": [{"vec": [1, 2, 3], "txt": "foo"}] * 3, "col_2": [[1, 2], [3, 4], [5, 6]]} casted_obj = cast_to_python_objects(obj) self.assertDictEqual(casted_obj, expected_obj) def test_cast_to_python_objects_pandas_timestamp(self): obj = pd.Timestamp(2020, 1, 1) expected_obj = obj.to_pydatetime() casted_obj = cast_to_python_objects(obj) self.assertEqual(casted_obj, expected_obj) casted_obj = cast_to_python_objects(pd.Series([obj])) self.assertListEqual(casted_obj, [expected_obj]) casted_obj = cast_to_python_objects(pd.DataFrame({"a": [obj]})) self.assertDictEqual(casted_obj, {"a": [expected_obj]}) def test_cast_to_python_objects_pandas_timedelta(self): obj = pd.Timedelta(seconds=1) expected_obj = obj.to_pytimedelta() casted_obj = cast_to_python_objects(obj) self.assertEqual(casted_obj, expected_obj) casted_obj = cast_to_python_objects(pd.Series([obj])) self.assertListEqual(casted_obj, [expected_obj]) casted_obj = cast_to_python_objects(pd.DataFrame({"a": [obj]})) self.assertDictEqual(casted_obj, {"a": [expected_obj]}) @require_torch def test_cast_to_python_objects_torch(self): import torch obj = { "col_1": [{"vec": torch.tensor(np.arange(1, 4)), "txt": "foo"}] * 3, "col_2": torch.tensor(np.arange(1, 7).reshape(3, 2)), } expected_obj = { "col_1": [{"vec": np.array([1, 2, 3]), "txt": "foo"}] * 3, "col_2": np.array([[1, 2], [3, 4], [5, 6]]), } casted_obj = cast_to_python_objects(obj) dict_diff(casted_obj, expected_obj) @require_tf def test_cast_to_python_objects_tf(self): import tensorflow as tf obj = { "col_1": [{"vec": tf.constant(np.arange(1, 4)), "txt": "foo"}] * 3, "col_2": tf.constant(np.arange(1, 7).reshape(3, 2)), } expected_obj = { "col_1": [{"vec": np.array([1, 2, 3]), "txt": "foo"}] * 3, "col_2": np.array([[1, 2], [3, 4], [5, 6]]), } casted_obj = cast_to_python_objects(obj) dict_diff(casted_obj, expected_obj) @require_jax def test_cast_to_python_objects_jax(self): import jax.numpy as jnp obj = { "col_1": [{"vec": jnp.array(np.arange(1, 4)), "txt": "foo"}] * 3, "col_2": jnp.array(np.arange(1, 7).reshape(3, 2)), } assert obj["col_2"].dtype == jnp.int32 expected_obj = { "col_1": [{"vec": 
np.array([1, 2, 3], dtype=np.int32), "txt": "foo"}] * 3, "col_2": np.array([[1, 2], [3, 4], [5, 6]], dtype=np.int32), } casted_obj = cast_to_python_objects(obj) dict_diff(casted_obj, expected_obj) @patch("datasets.features.features._cast_to_python_objects", side_effect=_cast_to_python_objects) def test_dont_iterate_over_each_element_in_a_list(self, mocked_cast): obj = {"col_1": [[1, 2], [3, 4], [5, 6]]} cast_to_python_objects(obj) self.assertEqual(mocked_cast.call_count, 4) # 4 = depth of obj SIMPLE_FEATURES = [ Features(), Features({"a": Value("int32")}), Features({"a": Value("int32", id="my feature")}), Features({"a": Value("int32"), "b": Value("float64"), "c": Value("string")}), ] CUSTOM_FEATURES = [ Features({"label": ClassLabel(names=["negative", "positive"])}), Features({"array": Array2D(dtype="float32", shape=(4, 4))}), Features({"image": Image()}), Features({"audio": Audio()}), Features({"image": Image(decode=False)}), Features({"audio": Audio(decode=False)}), Features({"translation": Translation(["en", "fr"])}), Features({"translation": TranslationVariableLanguages(["en", "fr"])}), ] NESTED_FEATURES = [ Features({"foo": {}}), Features({"foo": {"bar": Value("int32")}}), Features({"foo": {"bar1": Value("int32"), "bar2": Value("float64")}}), Features({"foo": Sequence(Value("int32"))}), Features({"foo": Sequence({})}), Features({"foo": Sequence({"bar": Value("int32")})}), Features({"foo": [Value("int32")]}), Features({"foo": [{"bar": Value("int32")}]}), ] NESTED_CUSTOM_FEATURES = [ Features({"foo": {"bar": ClassLabel(names=["negative", "positive"])}}), Features({"foo": Sequence(ClassLabel(names=["negative", "positive"]))}), Features({"foo": Sequence({"bar": ClassLabel(names=["negative", "positive"])})}), Features({"foo": [ClassLabel(names=["negative", "positive"])]}), Features({"foo": [{"bar": ClassLabel(names=["negative", "positive"])}]}), ] @pytest.mark.parametrize("features", SIMPLE_FEATURES + CUSTOM_FEATURES + NESTED_FEATURES + NESTED_CUSTOM_FEATURES) def test_features_to_dict(features: Features): features_dict = features.to_dict() assert isinstance(features_dict, dict) reloaded = Features.from_dict(features_dict) assert features == reloaded @pytest.mark.parametrize("features", SIMPLE_FEATURES + CUSTOM_FEATURES + NESTED_FEATURES + NESTED_CUSTOM_FEATURES) def test_features_to_yaml_list(features: Features): features_yaml_list = features._to_yaml_list() assert isinstance(features_yaml_list, list) reloaded = Features._from_yaml_list(features_yaml_list) assert features == reloaded @pytest.mark.parametrize("features", SIMPLE_FEATURES + CUSTOM_FEATURES + NESTED_FEATURES + NESTED_CUSTOM_FEATURES) def test_features_to_arrow_schema(features: Features): arrow_schema = features.arrow_schema assert isinstance(arrow_schema, pa.Schema) reloaded = Features.from_arrow_schema(arrow_schema) assert features == reloaded NESTED_COMPARISON = [ [ [Features({"email": Value(dtype="string", id=None)}), Features({"email": Value(dtype="string", id=None)})], [Features({"email": Value(dtype="string", id=None)}), Features({"email": Value(dtype="string", id=None)})], ], [ [Features({"email": Value(dtype="string", id=None)}), Features({"email": Value(dtype="null", id=None)})], [Features({"email": Value(dtype="string", id=None)}), Features({"email": Value(dtype="string", id=None)})], ], [ [ Features({"speaker": {"email": Value(dtype="string", id=None)}}), Features({"speaker": {"email": Value(dtype="string", id=None)}}), ], [ Features({"speaker": {"email": Value(dtype="string", id=None)}}), Features({"speaker": 
{"email": Value(dtype="string", id=None)}}), ], ], [ [ Features({"speaker": {"email": Value(dtype="string", id=None)}}), Features({"speaker": {"email": Value(dtype="null", id=None)}}), ], [ Features({"speaker": {"email": Value(dtype="string", id=None)}}), Features({"speaker": {"email": Value(dtype="string", id=None)}}), ], ], ] @pytest.mark.parametrize("features", NESTED_COMPARISON) def test_features_alignment(features: Tuple[List[Features], Features]): inputs, expected = features _check_if_features_can_be_aligned(inputs) # Check that we can align, will raise otherwise. assert _align_features(inputs) == expected
datasets/tests/features/test_features.py/0
{ "file_path": "datasets/tests/features/test_features.py", "repo_id": "datasets", "token_count": 13111 }
75
import shutil import textwrap import librosa import numpy as np import pytest import soundfile as sf from datasets import Audio, ClassLabel, Features, Value from datasets.data_files import DataFilesDict, get_data_patterns from datasets.download.streaming_download_manager import StreamingDownloadManager from datasets.packaged_modules.audiofolder.audiofolder import AudioFolder from ..utils import require_sndfile @pytest.fixture def cache_dir(tmp_path): return str(tmp_path / "audiofolder_cache_dir") @pytest.fixture def data_files_with_labels_no_metadata(tmp_path, audio_file): data_dir = tmp_path / "data_files_with_labels_no_metadata" data_dir.mkdir(parents=True, exist_ok=True) subdir_class_0 = data_dir / "fr" subdir_class_0.mkdir(parents=True, exist_ok=True) subdir_class_1 = data_dir / "uk" subdir_class_1.mkdir(parents=True, exist_ok=True) audio_filename = subdir_class_0 / "audio_fr.wav" shutil.copyfile(audio_file, audio_filename) audio_filename2 = subdir_class_1 / "audio_uk.wav" shutil.copyfile(audio_file, audio_filename2) data_files_with_labels_no_metadata = DataFilesDict.from_patterns( get_data_patterns(str(data_dir)), data_dir.as_posix() ) return data_files_with_labels_no_metadata @pytest.fixture def audio_files_with_labels_and_duplicated_label_key_in_metadata(tmp_path, audio_file): data_dir = tmp_path / "audio_files_with_labels_and_label_key_in_metadata" data_dir.mkdir(parents=True, exist_ok=True) subdir_class_0 = data_dir / "fr" subdir_class_0.mkdir(parents=True, exist_ok=True) subdir_class_1 = data_dir / "uk" subdir_class_1.mkdir(parents=True, exist_ok=True) audio_filename = subdir_class_0 / "audio_fr.wav" shutil.copyfile(audio_file, audio_filename) audio_filename2 = subdir_class_1 / "audio_uk.wav" shutil.copyfile(audio_file, audio_filename2) audio_metadata_filename = tmp_path / data_dir / "metadata.jsonl" audio_metadata = textwrap.dedent( """\ {"file_name": "fr/audio_fr.wav", "text": "Audio in French", "label": "Fr"} {"file_name": "uk/audio_uk.wav", "text": "Audio in Ukrainian", "label": "Uk"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) return str(audio_filename), str(audio_filename2), str(audio_metadata_filename) @pytest.fixture def audio_file_with_metadata(tmp_path, audio_file): audio_filename = tmp_path / "audio_file.wav" shutil.copyfile(audio_file, audio_filename) audio_metadata_filename = tmp_path / "metadata.jsonl" audio_metadata = textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "Audio transcription"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) return str(audio_filename), str(audio_metadata_filename) @pytest.fixture def audio_files_with_metadata_that_misses_one_audio(tmp_path, audio_file): audio_filename = tmp_path / "audio_file.wav" shutil.copyfile(audio_file, audio_filename) audio_filename2 = tmp_path / "audio_file2.wav" shutil.copyfile(audio_file, audio_filename2) audio_metadata_filename = tmp_path / "metadata.jsonl" audio_metadata = textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "Audio transcription"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) return str(audio_filename), str(audio_filename2), str(audio_metadata_filename) @pytest.fixture def data_files_with_one_split_and_metadata(tmp_path, audio_file): data_dir = tmp_path / "audiofolder_data_dir_with_metadata" data_dir.mkdir(parents=True, exist_ok=True) subdir = data_dir / "subdir" subdir.mkdir(parents=True, exist_ok=True) audio_filename = data_dir 
/ "audio_file.wav" shutil.copyfile(audio_file, audio_filename) audio_filename2 = data_dir / "audio_file2.wav" shutil.copyfile(audio_file, audio_filename2) audio_filename3 = subdir / "audio_file3.wav" # in subdir shutil.copyfile(audio_file, audio_filename3) audio_metadata_filename = data_dir / "metadata.jsonl" audio_metadata = textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "First audio transcription"} {"file_name": "audio_file2.wav", "text": "Second audio transcription"} {"file_name": "subdir/audio_file3.wav", "text": "Third audio transcription (in subdir)"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) data_files_with_one_split_and_metadata = DataFilesDict.from_patterns( get_data_patterns(str(data_dir)), data_dir.as_posix() ) assert len(data_files_with_one_split_and_metadata) == 1 assert len(data_files_with_one_split_and_metadata["train"]) == 4 return data_files_with_one_split_and_metadata @pytest.fixture(params=["jsonl", "csv"]) def data_files_with_two_splits_and_metadata(request, tmp_path, audio_file): data_dir = tmp_path / "audiofolder_data_dir_with_metadata" data_dir.mkdir(parents=True, exist_ok=True) train_dir = data_dir / "train" train_dir.mkdir(parents=True, exist_ok=True) test_dir = data_dir / "test" test_dir.mkdir(parents=True, exist_ok=True) audio_filename = train_dir / "audio_file.wav" # train audio shutil.copyfile(audio_file, audio_filename) audio_filename2 = train_dir / "audio_file2.wav" # train audio shutil.copyfile(audio_file, audio_filename2) audio_filename3 = test_dir / "audio_file3.wav" # test audio shutil.copyfile(audio_file, audio_filename3) train_audio_metadata_filename = train_dir / f"metadata.{request.param}" audio_metadata = ( textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "First train audio transcription"} {"file_name": "audio_file2.wav", "text": "Second train audio transcription"} """ ) if request.param == "jsonl" else textwrap.dedent( """\ file_name,text audio_file.wav,First train audio transcription audio_file2.wav,Second train audio transcription """ ) ) with open(train_audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) test_audio_metadata_filename = test_dir / f"metadata.{request.param}" audio_metadata = ( textwrap.dedent( """\ {"file_name": "audio_file3.wav", "text": "Test audio transcription"} """ ) if request.param == "jsonl" else textwrap.dedent( """\ file_name,text audio_file3.wav,Test audio transcription """ ) ) with open(test_audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) data_files_with_two_splits_and_metadata = DataFilesDict.from_patterns( get_data_patterns(str(data_dir)), data_dir.as_posix() ) assert len(data_files_with_two_splits_and_metadata) == 2 assert len(data_files_with_two_splits_and_metadata["train"]) == 3 assert len(data_files_with_two_splits_and_metadata["test"]) == 2 return data_files_with_two_splits_and_metadata @pytest.fixture def data_files_with_zip_archives(tmp_path, audio_file): data_dir = tmp_path / "audiofolder_data_dir_with_zip_archives" data_dir.mkdir(parents=True, exist_ok=True) archive_dir = data_dir / "archive" archive_dir.mkdir(parents=True, exist_ok=True) subdir = archive_dir / "subdir" subdir.mkdir(parents=True, exist_ok=True) audio_filename = archive_dir / "audio_file.wav" shutil.copyfile(audio_file, audio_filename) audio_filename2 = subdir / "audio_file2.wav" # in subdir # make sure they're two different audios # Indeed we won't be able to compare the audio filenames, since the archive is 
not extracted in streaming mode array, sampling_rate = librosa.load(str(audio_filename), sr=16000) # original sampling rate is 44100 sf.write(str(audio_filename2), array, samplerate=16000) audio_metadata_filename = archive_dir / "metadata.jsonl" audio_metadata = textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "First audio transcription"} {"file_name": "subdir/audio_file2.wav", "text": "Second audio transcription (in subdir)"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) shutil.make_archive(str(archive_dir), "zip", archive_dir) shutil.rmtree(str(archive_dir)) data_files_with_zip_archives = DataFilesDict.from_patterns(get_data_patterns(str(data_dir)), data_dir.as_posix()) assert len(data_files_with_zip_archives) == 1 assert len(data_files_with_zip_archives["train"]) == 1 return data_files_with_zip_archives @require_sndfile # check that labels are inferred correctly from dir names def test_generate_examples_with_labels(data_files_with_labels_no_metadata, cache_dir): # there are no metadata.jsonl files in this test case audiofolder = AudioFolder(data_files=data_files_with_labels_no_metadata, cache_dir=cache_dir, drop_labels=False) audiofolder.download_and_prepare() assert audiofolder.info.features == Features({"audio": Audio(), "label": ClassLabel(names=["fr", "uk"])}) dataset = list(audiofolder.as_dataset()["train"]) label_feature = audiofolder.info.features["label"] assert dataset[0]["label"] == label_feature._str2int["fr"] assert dataset[1]["label"] == label_feature._str2int["uk"] @require_sndfile @pytest.mark.parametrize("drop_metadata", [None, True, False]) @pytest.mark.parametrize("drop_labels", [None, True, False]) def test_generate_examples_duplicated_label_key( audio_files_with_labels_and_duplicated_label_key_in_metadata, drop_metadata, drop_labels, cache_dir, caplog ): fr_audio_file, uk_audio_file, audio_metadata_file = audio_files_with_labels_and_duplicated_label_key_in_metadata audiofolder = AudioFolder( drop_metadata=drop_metadata, drop_labels=drop_labels, data_files=[fr_audio_file, uk_audio_file, audio_metadata_file], cache_dir=cache_dir, ) if drop_labels is False: # infer labels from directories even if metadata files are found audiofolder.download_and_prepare() warning_in_logs = any("ignoring metadata columns" in record.msg.lower() for record in caplog.records) assert warning_in_logs if drop_metadata is not True else not warning_in_logs dataset = audiofolder.as_dataset()["train"] assert audiofolder.info.features["label"] == ClassLabel(names=["fr", "uk"]) assert all(example["label"] in audiofolder.info.features["label"]._str2int.values() for example in dataset) else: audiofolder.download_and_prepare() dataset = audiofolder.as_dataset()["train"] if drop_metadata is not True: # labels are from metadata assert audiofolder.info.features["label"] == Value("string") assert all(example["label"] in ["Fr", "Uk"] for example in dataset) else: # drop both labels and metadata assert audiofolder.info.features == Features({"audio": Audio()}) assert all(example.keys() == {"audio"} for example in dataset) @require_sndfile @pytest.mark.parametrize("drop_metadata", [None, True, False]) @pytest.mark.parametrize("drop_labels", [None, True, False]) def test_generate_examples_drop_labels(data_files_with_labels_no_metadata, drop_metadata, drop_labels): audiofolder = AudioFolder( drop_metadata=drop_metadata, drop_labels=drop_labels, data_files=data_files_with_labels_no_metadata ) gen_kwargs = 
audiofolder._split_generators(StreamingDownloadManager())[0].gen_kwargs # removing the labels explicitly requires drop_labels=True assert gen_kwargs["add_labels"] is not bool(drop_labels) assert gen_kwargs["add_metadata"] is False # metadata files is not present in this case generator = audiofolder._generate_examples(**gen_kwargs) if not drop_labels: assert all( example.keys() == {"audio", "label"} and all(val is not None for val in example.values()) for _, example in generator ) else: assert all( example.keys() == {"audio"} and all(val is not None for val in example.values()) for _, example in generator ) @require_sndfile @pytest.mark.parametrize("drop_metadata", [None, True, False]) @pytest.mark.parametrize("drop_labels", [None, True, False]) def test_generate_examples_drop_metadata(audio_file_with_metadata, drop_metadata, drop_labels): audio_file, audio_metadata_file = audio_file_with_metadata audiofolder = AudioFolder( drop_metadata=drop_metadata, drop_labels=drop_labels, data_files={"train": [audio_file, audio_metadata_file]} ) gen_kwargs = audiofolder._split_generators(StreamingDownloadManager())[0].gen_kwargs # since the dataset has metadata, removing the metadata explicitly requires drop_metadata=True assert gen_kwargs["add_metadata"] is not bool(drop_metadata) # since the dataset has metadata, adding the labels explicitly requires drop_labels=False assert gen_kwargs["add_labels"] is (drop_labels is False) generator = audiofolder._generate_examples(**gen_kwargs) expected_columns = {"audio"} if gen_kwargs["add_metadata"]: expected_columns.add("text") if gen_kwargs["add_labels"]: expected_columns.add("label") result = [example for _, example in generator] assert len(result) == 1 example = result[0] assert example.keys() == expected_columns for column in expected_columns: assert example[column] is not None @require_sndfile @pytest.mark.parametrize("drop_metadata", [None, True, False]) def test_generate_examples_with_metadata_in_wrong_location(audio_file, audio_file_with_metadata, drop_metadata): _, audio_metadata_file = audio_file_with_metadata audiofolder = AudioFolder(drop_metadata=drop_metadata, data_files={"train": [audio_file, audio_metadata_file]}) gen_kwargs = audiofolder._split_generators(StreamingDownloadManager())[0].gen_kwargs generator = audiofolder._generate_examples(**gen_kwargs) if not drop_metadata: with pytest.raises(ValueError): list(generator) else: assert all( example.keys() == {"audio"} and all(val is not None for val in example.values()) for _, example in generator ) @require_sndfile @pytest.mark.parametrize("drop_metadata", [None, True, False]) def test_generate_examples_with_metadata_that_misses_one_audio( audio_files_with_metadata_that_misses_one_audio, drop_metadata ): audio_file, audio_file2, audio_metadata_file = audio_files_with_metadata_that_misses_one_audio if not drop_metadata: features = Features({"audio": Audio(), "text": Value("string")}) else: features = Features({"audio": Audio()}) audiofolder = AudioFolder( drop_metadata=drop_metadata, features=features, data_files={"train": [audio_file, audio_file2, audio_metadata_file]}, ) gen_kwargs = audiofolder._split_generators(StreamingDownloadManager())[0].gen_kwargs generator = audiofolder._generate_examples(**gen_kwargs) if not drop_metadata: with pytest.raises(ValueError): _ = list(generator) else: assert all( example.keys() == {"audio"} and all(val is not None for val in example.values()) for _, example in generator ) @require_sndfile @pytest.mark.parametrize("streaming", [False, True]) def 
test_data_files_with_metadata_and_single_split(streaming, cache_dir, data_files_with_one_split_and_metadata): data_files = data_files_with_one_split_and_metadata audiofolder = AudioFolder(data_files=data_files, cache_dir=cache_dir) audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() for split, data_files in data_files.items(): expected_num_of_audios = len(data_files) - 1 # don't count the metadata file assert split in datasets dataset = list(datasets[split]) assert len(dataset) == expected_num_of_audios # make sure each sample has its own audio and metadata assert len({example["audio"]["path"] for example in dataset}) == expected_num_of_audios assert len({example["text"] for example in dataset}) == expected_num_of_audios assert all(example["text"] is not None for example in dataset) @require_sndfile @pytest.mark.parametrize("streaming", [False, True]) def test_data_files_with_metadata_and_multiple_splits(streaming, cache_dir, data_files_with_two_splits_and_metadata): data_files = data_files_with_two_splits_and_metadata audiofolder = AudioFolder(data_files=data_files, cache_dir=cache_dir) audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() for split, data_files in data_files.items(): expected_num_of_audios = len(data_files) - 1 # don't count the metadata file assert split in datasets dataset = list(datasets[split]) assert len(dataset) == expected_num_of_audios # make sure each sample has its own audio and metadata assert len({example["audio"]["path"] for example in dataset}) == expected_num_of_audios assert len({example["text"] for example in dataset}) == expected_num_of_audios assert all(example["text"] is not None for example in dataset) @require_sndfile @pytest.mark.parametrize("streaming", [False, True]) def test_data_files_with_metadata_and_archives(streaming, cache_dir, data_files_with_zip_archives): audiofolder = AudioFolder(data_files=data_files_with_zip_archives, cache_dir=cache_dir) audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() for split, data_files in data_files_with_zip_archives.items(): num_of_archives = len(data_files) # the metadata file is inside the archive expected_num_of_audios = 2 * num_of_archives assert split in datasets dataset = list(datasets[split]) assert len(dataset) == expected_num_of_audios # make sure each sample has its own audio (all arrays are different) and metadata assert ( sum(np.array_equal(dataset[0]["audio"]["array"], example["audio"]["array"]) for example in dataset[1:]) == 0 ) assert len({example["text"] for example in dataset}) == expected_num_of_audios assert all(example["text"] is not None for example in dataset) @require_sndfile def test_data_files_with_wrong_metadata_file_name(cache_dir, tmp_path, audio_file): data_dir = tmp_path / "data_dir_with_bad_metadata" data_dir.mkdir(parents=True, exist_ok=True) shutil.copyfile(audio_file, data_dir / "audio_file.wav") audio_metadata_filename = data_dir / "bad_metadata.jsonl" # bad file audio_metadata = textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "Audio transcription"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) data_files_with_bad_metadata = DataFilesDict.from_patterns(get_data_patterns(str(data_dir)), data_dir.as_posix()) audiofolder = AudioFolder(data_files=data_files_with_bad_metadata, cache_dir=cache_dir) 
audiofolder.download_and_prepare() dataset = audiofolder.as_dataset(split="train") # check that there are no metadata, since the metadata file name doesn't have the right name assert "text" not in dataset.column_names @require_sndfile def test_data_files_with_wrong_audio_file_name_column_in_metadata_file(cache_dir, tmp_path, audio_file): data_dir = tmp_path / "data_dir_with_bad_metadata" data_dir.mkdir(parents=True, exist_ok=True) shutil.copyfile(audio_file, data_dir / "audio_file.wav") audio_metadata_filename = data_dir / "metadata.jsonl" audio_metadata = textwrap.dedent( # with bad column "bad_file_name" instead of "file_name" """\ {"bad_file_name_column": "audio_file.wav", "text": "Audio transcription"} """ ) with open(audio_metadata_filename, "w", encoding="utf-8") as f: f.write(audio_metadata) data_files_with_bad_metadata = DataFilesDict.from_patterns(get_data_patterns(str(data_dir)), data_dir.as_posix()) audiofolder = AudioFolder(data_files=data_files_with_bad_metadata, cache_dir=cache_dir) with pytest.raises(ValueError) as exc_info: audiofolder.download_and_prepare() assert "`file_name` must be present" in str(exc_info.value) @require_sndfile def test_data_files_with_with_metadata_in_different_formats(cache_dir, tmp_path, audio_file): data_dir = tmp_path / "data_dir_with_metadata_in_different_format" data_dir.mkdir(parents=True, exist_ok=True) shutil.copyfile(audio_file, data_dir / "audio_file.wav") audio_metadata_filename_jsonl = data_dir / "metadata.jsonl" audio_metadata_jsonl = textwrap.dedent( """\ {"file_name": "audio_file.wav", "text": "Audio transcription"} """ ) with open(audio_metadata_filename_jsonl, "w", encoding="utf-8") as f: f.write(audio_metadata_jsonl) audio_metadata_filename_csv = data_dir / "metadata.csv" audio_metadata_csv = textwrap.dedent( """\ file_name,text audio_file.wav,Audio transcription """ ) with open(audio_metadata_filename_csv, "w", encoding="utf-8") as f: f.write(audio_metadata_csv) data_files_with_bad_metadata = DataFilesDict.from_patterns(get_data_patterns(str(data_dir)), data_dir.as_posix()) audiofolder = AudioFolder(data_files=data_files_with_bad_metadata, cache_dir=cache_dir) with pytest.raises(ValueError) as exc_info: audiofolder.download_and_prepare() assert "metadata files with different extensions" in str(exc_info.value)
datasets/tests/packaged_modules/test_audiofolder.py/0
{ "file_path": "datasets/tests/packaged_modules/test_audiofolder.py", "repo_id": "datasets", "token_count": 8594 }
76
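For reference, the fixtures in the tests above mirror the directory layout that the packaged "audiofolder" loader consumes: audio files plus a metadata.jsonl whose "file_name" column points at each file. A minimal usage sketch under that assumption (the directory name and files below are hypothetical, not taken from the test file):

# Hypothetical layout:
# my_audio_dir/
#     metadata.jsonl   -> {"file_name": "audio_file.wav", "text": "First audio transcription"}
#     audio_file.wav
from datasets import load_dataset

dataset = load_dataset("audiofolder", data_dir="my_audio_dir", split="train")
# each example carries the decoded audio plus the metadata columns
print(dataset[0]["audio"]["sampling_rate"], dataset[0]["text"])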
from unittest import TestCase

from datasets import Sequence, Value
from datasets.arrow_dataset import Dataset


class DatasetListTest(TestCase):
    def _create_example_records(self):
        return [
            {"col_1": 3, "col_2": "a"},
            {"col_1": 2, "col_2": "b"},
            {"col_1": 1, "col_2": "c"},
            {"col_1": 0, "col_2": "d"},
        ]

    def _create_example_dict(self):
        data = {"col_1": [3, 2, 1, 0], "col_2": ["a", "b", "c", "d"]}
        return Dataset.from_dict(data)

    def test_create(self):
        example_records = self._create_example_records()
        dset = Dataset.from_list(example_records)
        self.assertListEqual(dset.column_names, ["col_1", "col_2"])
        for i, r in enumerate(dset):
            self.assertDictEqual(r, example_records[i])

    def test_list_dict_equivalent(self):
        example_records = self._create_example_records()
        dset = Dataset.from_list(example_records)
        dset_from_dict = Dataset.from_dict({k: [r[k] for r in example_records] for k in example_records[0]})
        self.assertEqual(dset.info, dset_from_dict.info)

    def test_uneven_records(self):  # checks what happens with missing columns
        uneven_records = [{"col_1": 1}, {"col_2": "x"}]
        dset = Dataset.from_list(uneven_records)
        self.assertDictEqual(dset[0], {"col_1": 1})
        self.assertDictEqual(dset[1], {"col_1": None})  # NB: first record is used for columns

    def test_variable_list_records(self):  # checks if the type can be inferred from the second record
        list_records = [{"col_1": []}, {"col_1": [1, 2]}]
        dset = Dataset.from_list(list_records)
        self.assertEqual(dset.info.features["col_1"], Sequence(Value("int64")))

    def test_create_empty(self):
        dset = Dataset.from_list([])
        self.assertEqual(len(dset), 0)
        self.assertListEqual(dset.column_names, [])
datasets/tests/test_dataset_list.py/0
{ "file_path": "datasets/tests/test_dataset_list.py", "repo_id": "datasets", "token_count": 875 }
77
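The tests above pin down the behaviour of Dataset.from_list. A minimal sketch of that API, using made-up column values in the spirit of the fixtures (not part of the test file itself):

from datasets import Dataset

# row-wise construction from a list of record dicts...
records = [{"col_1": 3, "col_2": "a"}, {"col_1": 2, "col_2": "b"}]
ds_from_list = Dataset.from_list(records)
# ...is equivalent to column-wise construction from the same data
ds_from_dict = Dataset.from_dict({"col_1": [3, 2], "col_2": ["a", "b"]})
assert ds_from_list.column_names == ds_from_dict.column_names == ["col_1", "col_2"]
assert ds_from_list[0] == {"col_1": 3, "col_2": "a"}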
import importlib import os import pickle import shutil import tempfile import time from hashlib import sha256 from multiprocessing import Pool from pathlib import Path from unittest import TestCase from unittest.mock import patch import dill import pyarrow as pa import pytest import requests import datasets from datasets import config, load_dataset, load_from_disk from datasets.arrow_dataset import Dataset from datasets.arrow_writer import ArrowWriter from datasets.builder import DatasetBuilder from datasets.config import METADATA_CONFIGS_FIELD from datasets.data_files import DataFilesDict, DataFilesPatternsDict from datasets.dataset_dict import DatasetDict, IterableDatasetDict from datasets.download.download_config import DownloadConfig from datasets.exceptions import DatasetNotFoundError from datasets.features import Features, Image, Value from datasets.iterable_dataset import IterableDataset from datasets.load import ( CachedDatasetModuleFactory, CachedMetricModuleFactory, GithubMetricModuleFactory, HubDatasetModuleFactoryWithoutScript, HubDatasetModuleFactoryWithParquetExport, HubDatasetModuleFactoryWithScript, LocalDatasetModuleFactoryWithoutScript, LocalDatasetModuleFactoryWithScript, LocalMetricModuleFactory, PackagedDatasetModuleFactory, infer_module_for_data_files_list, infer_module_for_data_files_list_in_archives, load_dataset_builder, resolve_trust_remote_code, ) from datasets.packaged_modules.audiofolder.audiofolder import AudioFolder, AudioFolderConfig from datasets.packaged_modules.imagefolder.imagefolder import ImageFolder, ImageFolderConfig from datasets.packaged_modules.parquet.parquet import ParquetConfig from datasets.utils import _datasets_server from datasets.utils.logging import INFO, get_logger from .utils import ( OfflineSimulationMode, assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases, offline, require_not_windows, require_pil, require_sndfile, set_current_working_directory_to_temp_dir, ) DATASET_LOADING_SCRIPT_NAME = "__dummy_dataset1__" DATASET_LOADING_SCRIPT_CODE = """ import os import datasets from datasets import DatasetInfo, Features, Split, SplitGenerator, Value class __DummyDataset1__(datasets.GeneratorBasedBuilder): def _info(self) -> DatasetInfo: return DatasetInfo(features=Features({"text": Value("string")})) def _split_generators(self, dl_manager): return [ SplitGenerator(Split.TRAIN, gen_kwargs={"filepath": os.path.join(dl_manager.manual_dir, "train.txt")}), SplitGenerator(Split.TEST, gen_kwargs={"filepath": os.path.join(dl_manager.manual_dir, "test.txt")}), ] def _generate_examples(self, filepath, **kwargs): with open(filepath, "r", encoding="utf-8") as f: for i, line in enumerate(f): yield i, {"text": line.strip()} """ SAMPLE_DATASET_IDENTIFIER = "hf-internal-testing/dataset_with_script" # has dataset script and also a parquet export SAMPLE_DATASET_IDENTIFIER2 = "hf-internal-testing/dataset_with_data_files" # only has data files SAMPLE_DATASET_IDENTIFIER3 = "hf-internal-testing/multi_dir_dataset" # has multiple data directories SAMPLE_DATASET_IDENTIFIER4 = "hf-internal-testing/imagefolder_with_metadata" # imagefolder with a metadata file outside of the train/test directories SAMPLE_DATASET_IDENTIFIER5 = "hf-internal-testing/imagefolder_with_metadata_no_splits" # imagefolder with a metadata file and no default split names in data files SAMPLE_NOT_EXISTING_DATASET_IDENTIFIER = "hf-internal-testing/_dummy" SAMPLE_DATASET_NAME_THAT_DOESNT_EXIST = "_dummy" SAMPLE_DATASET_NO_CONFIGS_IN_METADATA = 
"hf-internal-testing/audiofolder_no_configs_in_metadata" SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA = "hf-internal-testing/audiofolder_single_config_in_metadata" SAMPLE_DATASET_TWO_CONFIG_IN_METADATA = "hf-internal-testing/audiofolder_two_configs_in_metadata" SAMPLE_DATASET_TWO_CONFIG_IN_METADATA_WITH_DEFAULT = ( "hf-internal-testing/audiofolder_two_configs_in_metadata_with_default" ) METRIC_LOADING_SCRIPT_NAME = "__dummy_metric1__" METRIC_LOADING_SCRIPT_CODE = """ import datasets from datasets import MetricInfo, Features, Value class __DummyMetric1__(datasets.Metric): def _info(self): return MetricInfo(features=Features({"predictions": Value("int"), "references": Value("int")})) def _compute(self, predictions, references): return {"__dummy_metric1__": sum(int(p == r) for p, r in zip(predictions, references))} """ @pytest.fixture def data_dir(tmp_path): data_dir = tmp_path / "data_dir" data_dir.mkdir() with open(data_dir / "train.txt", "w") as f: f.write("foo\n" * 10) with open(data_dir / "test.txt", "w") as f: f.write("bar\n" * 10) return str(data_dir) @pytest.fixture def data_dir_with_arrow(tmp_path): data_dir = tmp_path / "data_dir" data_dir.mkdir() output_train = os.path.join(data_dir, "train.arrow") with ArrowWriter(path=output_train) as writer: writer.write_table(pa.Table.from_pydict({"col_1": ["foo"] * 10})) num_examples, num_bytes = writer.finalize() assert num_examples == 10 assert num_bytes > 0 output_test = os.path.join(data_dir, "test.arrow") with ArrowWriter(path=output_test) as writer: writer.write_table(pa.Table.from_pydict({"col_1": ["bar"] * 10})) num_examples, num_bytes = writer.finalize() assert num_examples == 10 assert num_bytes > 0 return str(data_dir) @pytest.fixture def data_dir_with_metadata(tmp_path): data_dir = tmp_path / "data_dir_with_metadata" data_dir.mkdir() with open(data_dir / "train.jpg", "wb") as f: f.write(b"train_image_bytes") with open(data_dir / "test.jpg", "wb") as f: f.write(b"test_image_bytes") with open(data_dir / "metadata.jsonl", "w") as f: f.write( """\ {"file_name": "train.jpg", "caption": "Cool tran image"} {"file_name": "test.jpg", "caption": "Cool test image"} """ ) return str(data_dir) @pytest.fixture def data_dir_with_single_config_in_metadata(tmp_path): data_dir = tmp_path / "data_dir_with_one_default_config_in_metadata" cats_data_dir = data_dir / "cats" cats_data_dir.mkdir(parents=True) dogs_data_dir = data_dir / "dogs" dogs_data_dir.mkdir(parents=True) with open(cats_data_dir / "cat.jpg", "wb") as f: f.write(b"this_is_a_cat_image_bytes") with open(dogs_data_dir / "dog.jpg", "wb") as f: f.write(b"this_is_a_dog_image_bytes") with open(data_dir / "README.md", "w") as f: f.write( f"""\ --- {METADATA_CONFIGS_FIELD}: - config_name: custom drop_labels: true --- """ ) return str(data_dir) @pytest.fixture def data_dir_with_config_and_data_files(tmp_path): data_dir = tmp_path / "data_dir_with_config_and_data_files" cats_data_dir = data_dir / "data" / "cats" cats_data_dir.mkdir(parents=True) dogs_data_dir = data_dir / "data" / "dogs" dogs_data_dir.mkdir(parents=True) with open(cats_data_dir / "cat.jpg", "wb") as f: f.write(b"this_is_a_cat_image_bytes") with open(dogs_data_dir / "dog.jpg", "wb") as f: f.write(b"this_is_a_dog_image_bytes") with open(data_dir / "README.md", "w") as f: f.write( f"""\ --- {METADATA_CONFIGS_FIELD}: - config_name: custom data_files: "data/**/*.jpg" --- """ ) return str(data_dir) @pytest.fixture def data_dir_with_two_config_in_metadata(tmp_path): data_dir = tmp_path / "data_dir_with_two_configs_in_metadata" cats_data_dir 
= data_dir / "cats" cats_data_dir.mkdir(parents=True) dogs_data_dir = data_dir / "dogs" dogs_data_dir.mkdir(parents=True) with open(cats_data_dir / "cat.jpg", "wb") as f: f.write(b"this_is_a_cat_image_bytes") with open(dogs_data_dir / "dog.jpg", "wb") as f: f.write(b"this_is_a_dog_image_bytes") with open(data_dir / "README.md", "w") as f: f.write( f"""\ --- {METADATA_CONFIGS_FIELD}: - config_name: "v1" drop_labels: true default: true - config_name: "v2" drop_labels: false --- """ ) return str(data_dir) @pytest.fixture def data_dir_with_data_dir_configs_in_metadata(tmp_path): data_dir = tmp_path / "data_dir_with_two_configs_in_metadata" cats_data_dir = data_dir / "cats" cats_data_dir.mkdir(parents=True) dogs_data_dir = data_dir / "dogs" dogs_data_dir.mkdir(parents=True) with open(cats_data_dir / "cat.jpg", "wb") as f: f.write(b"this_is_a_cat_image_bytes") with open(dogs_data_dir / "dog.jpg", "wb") as f: f.write(b"this_is_a_dog_image_bytes") @pytest.fixture def sub_data_dirs(tmp_path): data_dir2 = tmp_path / "data_dir2" relative_subdir1 = "subdir1" sub_data_dir1 = data_dir2 / relative_subdir1 sub_data_dir1.mkdir(parents=True) with open(sub_data_dir1 / "train.txt", "w") as f: f.write("foo\n" * 10) with open(sub_data_dir1 / "test.txt", "w") as f: f.write("bar\n" * 10) relative_subdir2 = "subdir2" sub_data_dir2 = tmp_path / data_dir2 / relative_subdir2 sub_data_dir2.mkdir(parents=True) with open(sub_data_dir2 / "train.txt", "w") as f: f.write("foo\n" * 10) with open(sub_data_dir2 / "test.txt", "w") as f: f.write("bar\n" * 10) return str(data_dir2), relative_subdir1 @pytest.fixture def complex_data_dir(tmp_path): data_dir = tmp_path / "complex_data_dir" data_dir.mkdir() (data_dir / "data").mkdir() with open(data_dir / "data" / "train.txt", "w") as f: f.write("foo\n" * 10) with open(data_dir / "data" / "test.txt", "w") as f: f.write("bar\n" * 10) with open(data_dir / "README.md", "w") as f: f.write("This is a readme") with open(data_dir / ".dummy", "w") as f: f.write("this is a dummy file that is not a data file") return str(data_dir) @pytest.fixture def dataset_loading_script_dir(tmp_path): script_name = DATASET_LOADING_SCRIPT_NAME script_dir = tmp_path / script_name script_dir.mkdir() script_path = script_dir / f"{script_name}.py" with open(script_path, "w") as f: f.write(DATASET_LOADING_SCRIPT_CODE) return str(script_dir) @pytest.fixture def dataset_loading_script_dir_readonly(tmp_path): script_name = DATASET_LOADING_SCRIPT_NAME script_dir = tmp_path / "readonly" / script_name script_dir.mkdir(parents=True) script_path = script_dir / f"{script_name}.py" with open(script_path, "w") as f: f.write(DATASET_LOADING_SCRIPT_CODE) dataset_loading_script_dir = str(script_dir) # Make this directory readonly os.chmod(dataset_loading_script_dir, 0o555) os.chmod(os.path.join(dataset_loading_script_dir, f"{script_name}.py"), 0o555) return dataset_loading_script_dir @pytest.fixture def metric_loading_script_dir(tmp_path): script_name = METRIC_LOADING_SCRIPT_NAME script_dir = tmp_path / script_name script_dir.mkdir() script_path = script_dir / f"{script_name}.py" with open(script_path, "w") as f: f.write(METRIC_LOADING_SCRIPT_CODE) return str(script_dir) @pytest.mark.parametrize( "data_files, expected_module, expected_builder_kwargs", [ (["train.csv"], "csv", {}), (["train.tsv"], "csv", {"sep": "\t"}), (["train.json"], "json", {}), (["train.jsonl"], "json", {}), (["train.parquet"], "parquet", {}), (["train.geoparquet"], "parquet", {}), (["train.gpq"], "parquet", {}), (["train.arrow"], "arrow", {}), 
(["train.txt"], "text", {}), (["uppercase.TXT"], "text", {}), (["unsupported.ext"], None, {}), ([""], None, {}), ], ) def test_infer_module_for_data_files(data_files, expected_module, expected_builder_kwargs): module, builder_kwargs = infer_module_for_data_files_list(data_files) assert module == expected_module assert builder_kwargs == expected_builder_kwargs @pytest.mark.parametrize( "data_file, expected_module", [ ("zip_csv_path", "csv"), ("zip_csv_with_dir_path", "csv"), ("zip_uppercase_csv_path", "csv"), ("zip_unsupported_ext_path", None), ], ) def test_infer_module_for_data_files_in_archives( data_file, expected_module, zip_csv_path, zip_csv_with_dir_path, zip_uppercase_csv_path, zip_unsupported_ext_path ): data_file_paths = { "zip_csv_path": zip_csv_path, "zip_csv_with_dir_path": zip_csv_with_dir_path, "zip_uppercase_csv_path": zip_uppercase_csv_path, "zip_unsupported_ext_path": zip_unsupported_ext_path, } data_files = [str(data_file_paths[data_file])] inferred_module, _ = infer_module_for_data_files_list_in_archives(data_files) assert inferred_module == expected_module class ModuleFactoryTest(TestCase): @pytest.fixture(autouse=True) def inject_fixtures( self, jsonl_path, data_dir, data_dir_with_metadata, data_dir_with_single_config_in_metadata, data_dir_with_config_and_data_files, data_dir_with_two_config_in_metadata, sub_data_dirs, dataset_loading_script_dir, metric_loading_script_dir, ): self._jsonl_path = jsonl_path self._data_dir = data_dir self._data_dir_with_metadata = data_dir_with_metadata self._data_dir_with_single_config_in_metadata = data_dir_with_single_config_in_metadata self._data_dir_with_config_and_data_files = data_dir_with_config_and_data_files self._data_dir_with_two_config_in_metadata = data_dir_with_two_config_in_metadata self._data_dir2 = sub_data_dirs[0] self._sub_data_dir = sub_data_dirs[1] self._dataset_loading_script_dir = dataset_loading_script_dir self._metric_loading_script_dir = metric_loading_script_dir def setUp(self): self.hf_modules_cache = tempfile.mkdtemp() self.cache_dir = tempfile.mkdtemp() self.download_config = DownloadConfig(cache_dir=self.cache_dir) self.dynamic_modules_path = datasets.load.init_dynamic_modules( name="test_datasets_modules_" + os.path.basename(self.hf_modules_cache), hf_modules_cache=self.hf_modules_cache, ) def test_HubDatasetModuleFactoryWithScript_dont_trust_remote_code(self): # "lhoestq/test" has a dataset script factory = HubDatasetModuleFactoryWithScript( "lhoestq/test", download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) with patch.object(config, "HF_DATASETS_TRUST_REMOTE_CODE", None): # this will be the default soon self.assertRaises(ValueError, factory.get_module) factory = HubDatasetModuleFactoryWithScript( "lhoestq/test", download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path, trust_remote_code=False, ) self.assertRaises(ValueError, factory.get_module) def test_HubDatasetModuleFactoryWithScript_with_github_dataset(self): # "wmt_t2t" has additional imports (internal) factory = HubDatasetModuleFactoryWithScript( "wmt_t2t", download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) def test_GithubMetricModuleFactory_with_internal_import(self): # "squad_v2" requires additional imports (internal) factory = 
GithubMetricModuleFactory( "squad_v2", download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None @pytest.mark.filterwarnings("ignore:GithubMetricModuleFactory is deprecated:FutureWarning") def test_GithubMetricModuleFactory_with_external_import(self): # "bleu" requires additional imports (external from github) factory = GithubMetricModuleFactory( "bleu", download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None def test_LocalMetricModuleFactory(self): path = os.path.join(self._metric_loading_script_dir, f"{METRIC_LOADING_SCRIPT_NAME}.py") factory = LocalMetricModuleFactory( path, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None def test_LocalDatasetModuleFactoryWithScript(self): path = os.path.join(self._dataset_loading_script_dir, f"{DATASET_LOADING_SCRIPT_NAME}.py") factory = LocalDatasetModuleFactoryWithScript( path, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None assert os.path.isdir(module_factory_result.builder_kwargs["base_path"]) def test_LocalDatasetModuleFactoryWithScript_dont_trust_remote_code(self): path = os.path.join(self._dataset_loading_script_dir, f"{DATASET_LOADING_SCRIPT_NAME}.py") factory = LocalDatasetModuleFactoryWithScript( path, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) with patch.object(config, "HF_DATASETS_TRUST_REMOTE_CODE", None): # this will be the default soon self.assertRaises(ValueError, factory.get_module) factory = LocalDatasetModuleFactoryWithScript( path, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path, trust_remote_code=False, ) self.assertRaises(ValueError, factory.get_module) def test_LocalDatasetModuleFactoryWithoutScript(self): factory = LocalDatasetModuleFactoryWithoutScript(self._data_dir) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None assert os.path.isdir(module_factory_result.builder_kwargs["base_path"]) def test_LocalDatasetModuleFactoryWithoutScript_with_data_dir(self): factory = LocalDatasetModuleFactoryWithoutScript(self._data_dir2, data_dir=self._sub_data_dir) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None builder_config = module_factory_result.builder_configs_parameters.builder_configs[0] assert ( builder_config.data_files is not None and len(builder_config.data_files["train"]) == 1 and len(builder_config.data_files["test"]) == 1 ) assert all( self._sub_data_dir in Path(data_file).parts for data_file in builder_config.data_files["train"] + builder_config.data_files["test"] ) def test_LocalDatasetModuleFactoryWithoutScript_with_metadata(self): factory = LocalDatasetModuleFactoryWithoutScript(self._data_dir_with_metadata) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None builder_config = 
module_factory_result.builder_configs_parameters.builder_configs[0] assert ( builder_config.data_files is not None and len(builder_config.data_files["train"]) > 0 and len(builder_config.data_files["test"]) > 0 ) assert any(Path(data_file).name == "metadata.jsonl" for data_file in builder_config.data_files["train"]) assert any(Path(data_file).name == "metadata.jsonl" for data_file in builder_config.data_files["test"]) def test_LocalDatasetModuleFactoryWithoutScript_with_single_config_in_metadata(self): factory = LocalDatasetModuleFactoryWithoutScript( self._data_dir_with_single_config_in_metadata, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None module_metadata_configs = module_factory_result.builder_configs_parameters.metadata_configs assert module_metadata_configs is not None assert len(module_metadata_configs) == 1 assert next(iter(module_metadata_configs)) == "custom" assert "drop_labels" in next(iter(module_metadata_configs.values())) assert next(iter(module_metadata_configs.values()))["drop_labels"] is True module_builder_configs = module_factory_result.builder_configs_parameters.builder_configs assert module_builder_configs is not None assert len(module_builder_configs) == 1 assert isinstance(module_builder_configs[0], ImageFolderConfig) assert module_builder_configs[0].name == "custom" assert module_builder_configs[0].data_files is not None assert isinstance(module_builder_configs[0].data_files, DataFilesPatternsDict) module_builder_configs[0]._resolve_data_files(self._data_dir_with_single_config_in_metadata, DownloadConfig()) assert isinstance(module_builder_configs[0].data_files, DataFilesDict) assert len(module_builder_configs[0].data_files) == 1 # one train split assert len(module_builder_configs[0].data_files["train"]) == 2 # two files assert module_builder_configs[0].drop_labels is True # parameter is passed from metadata # config named "default" is automatically considered to be a default config assert module_factory_result.builder_configs_parameters.default_config_name == "custom" # we don't pass config params to builder in builder_kwargs, they are stored in builder_configs directly assert "drop_labels" not in module_factory_result.builder_kwargs def test_LocalDatasetModuleFactoryWithoutScript_with_config_and_data_files(self): factory = LocalDatasetModuleFactoryWithoutScript( self._data_dir_with_config_and_data_files, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None module_metadata_configs = module_factory_result.builder_configs_parameters.metadata_configs builder_kwargs = module_factory_result.builder_kwargs assert module_metadata_configs is not None assert len(module_metadata_configs) == 1 assert next(iter(module_metadata_configs)) == "custom" assert "data_files" in next(iter(module_metadata_configs.values())) assert next(iter(module_metadata_configs.values()))["data_files"] == "data/**/*.jpg" assert "data_files" not in builder_kwargs def test_LocalDatasetModuleFactoryWithoutScript_data_dir_with_config_and_data_files(self): factory = LocalDatasetModuleFactoryWithoutScript(self._data_dir_with_config_and_data_files, data_dir="data") module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None module_metadata_configs = module_factory_result.builder_configs_parameters.metadata_configs builder_kwargs = module_factory_result.builder_kwargs assert 
module_metadata_configs is not None assert len(module_metadata_configs) == 1 assert next(iter(module_metadata_configs)) == "custom" assert "data_files" in next(iter(module_metadata_configs.values())) assert next(iter(module_metadata_configs.values()))["data_files"] == "data/**/*.jpg" assert "data_files" in builder_kwargs assert "train" in builder_kwargs["data_files"] assert len(builder_kwargs["data_files"]["train"]) == 2 def test_LocalDatasetModuleFactoryWithoutScript_with_two_configs_in_metadata(self): factory = LocalDatasetModuleFactoryWithoutScript( self._data_dir_with_two_config_in_metadata, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None module_metadata_configs = module_factory_result.builder_configs_parameters.metadata_configs assert module_metadata_configs is not None assert len(module_metadata_configs) == 2 assert list(module_metadata_configs) == ["v1", "v2"] assert "drop_labels" in module_metadata_configs["v1"] assert module_metadata_configs["v1"]["drop_labels"] is True assert "drop_labels" in module_metadata_configs["v2"] assert module_metadata_configs["v2"]["drop_labels"] is False module_builder_configs = module_factory_result.builder_configs_parameters.builder_configs assert module_builder_configs is not None assert len(module_builder_configs) == 2 module_builder_config_v1, module_builder_config_v2 = module_builder_configs assert module_builder_config_v1.name == "v1" assert module_builder_config_v2.name == "v2" assert isinstance(module_builder_config_v1, ImageFolderConfig) assert isinstance(module_builder_config_v2, ImageFolderConfig) assert isinstance(module_builder_config_v1.data_files, DataFilesPatternsDict) assert isinstance(module_builder_config_v2.data_files, DataFilesPatternsDict) module_builder_config_v1._resolve_data_files(self._data_dir_with_two_config_in_metadata, DownloadConfig()) module_builder_config_v2._resolve_data_files(self._data_dir_with_two_config_in_metadata, DownloadConfig()) assert isinstance(module_builder_config_v1.data_files, DataFilesDict) assert isinstance(module_builder_config_v2.data_files, DataFilesDict) assert sorted(module_builder_config_v1.data_files) == ["train"] assert len(module_builder_config_v1.data_files["train"]) == 2 assert sorted(module_builder_config_v2.data_files) == ["train"] assert len(module_builder_config_v2.data_files["train"]) == 2 assert module_builder_config_v1.drop_labels is True # parameter is passed from metadata assert module_builder_config_v2.drop_labels is False # parameter is passed from metadata assert ( module_factory_result.builder_configs_parameters.default_config_name == "v1" ) # it's marked as a default one in yaml # we don't pass config params to builder in builder_kwargs, they are stored in builder_configs directly assert "drop_labels" not in module_factory_result.builder_kwargs def test_PackagedDatasetModuleFactory(self): factory = PackagedDatasetModuleFactory( "json", data_files=self._jsonl_path, download_config=self.download_config ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None def test_PackagedDatasetModuleFactory_with_data_dir(self): factory = PackagedDatasetModuleFactory("json", data_dir=self._data_dir, download_config=self.download_config) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None data_files = module_factory_result.builder_kwargs.get("data_files") assert data_files is not 
None and len(data_files["train"]) > 0 and len(data_files["test"]) > 0 assert Path(data_files["train"][0]).parent.samefile(self._data_dir) assert Path(data_files["test"][0]).parent.samefile(self._data_dir) def test_PackagedDatasetModuleFactory_with_data_dir_and_metadata(self): factory = PackagedDatasetModuleFactory( "imagefolder", data_dir=self._data_dir_with_metadata, download_config=self.download_config ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None data_files = module_factory_result.builder_kwargs.get("data_files") assert data_files is not None and len(data_files["train"]) > 0 and len(data_files["test"]) > 0 assert Path(data_files["train"][0]).parent.samefile(self._data_dir_with_metadata) assert Path(data_files["test"][0]).parent.samefile(self._data_dir_with_metadata) assert any(Path(data_file).name == "metadata.jsonl" for data_file in data_files["train"]) assert any(Path(data_file).name == "metadata.jsonl" for data_file in data_files["test"]) @pytest.mark.integration def test_HubDatasetModuleFactoryWithoutScript(self): factory = HubDatasetModuleFactoryWithoutScript( SAMPLE_DATASET_IDENTIFIER2, download_config=self.download_config ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) @pytest.mark.integration def test_HubDatasetModuleFactoryWithoutScript_with_data_dir(self): data_dir = "data2" factory = HubDatasetModuleFactoryWithoutScript( SAMPLE_DATASET_IDENTIFIER3, data_dir=data_dir, download_config=self.download_config ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None builder_config = module_factory_result.builder_configs_parameters.builder_configs[0] assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) assert ( builder_config.data_files is not None and len(builder_config.data_files["train"]) == 1 and len(builder_config.data_files["test"]) == 1 ) assert all( data_dir in Path(data_file).parts for data_file in builder_config.data_files["train"] + builder_config.data_files["test"] ) @pytest.mark.integration def test_HubDatasetModuleFactoryWithoutScript_with_metadata(self): factory = HubDatasetModuleFactoryWithoutScript( SAMPLE_DATASET_IDENTIFIER4, download_config=self.download_config ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None builder_config = module_factory_result.builder_configs_parameters.builder_configs[0] assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) assert ( builder_config.data_files is not None and len(builder_config.data_files["train"]) > 0 and len(builder_config.data_files["test"]) > 0 ) assert any(Path(data_file).name == "metadata.jsonl" for data_file in builder_config.data_files["train"]) assert any(Path(data_file).name == "metadata.jsonl" for data_file in builder_config.data_files["test"]) factory = HubDatasetModuleFactoryWithoutScript( SAMPLE_DATASET_IDENTIFIER5, download_config=self.download_config ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None builder_config = module_factory_result.builder_configs_parameters.builder_configs[0] assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) assert ( 
builder_config.data_files is not None and len(builder_config.data_files) == 1 and len(builder_config.data_files["train"]) > 0 ) assert any(Path(data_file).name == "metadata.jsonl" for data_file in builder_config.data_files["train"]) @pytest.mark.integration def test_HubDatasetModuleFactoryWithoutScript_with_one_default_config_in_metadata(self): factory = HubDatasetModuleFactoryWithoutScript( SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA, download_config=self.download_config, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) module_metadata_configs = module_factory_result.builder_configs_parameters.metadata_configs assert module_metadata_configs is not None assert len(module_metadata_configs) == 1 assert next(iter(module_metadata_configs)) == "custom" assert "drop_labels" in next(iter(module_metadata_configs.values())) assert next(iter(module_metadata_configs.values()))["drop_labels"] is True module_builder_configs = module_factory_result.builder_configs_parameters.builder_configs assert module_builder_configs is not None assert len(module_builder_configs) == 1 assert isinstance(module_builder_configs[0], AudioFolderConfig) assert module_builder_configs[0].name == "custom" assert module_builder_configs[0].data_files is not None assert isinstance(module_builder_configs[0].data_files, DataFilesPatternsDict) module_builder_configs[0]._resolve_data_files( module_factory_result.builder_kwargs["base_path"], DownloadConfig() ) assert isinstance(module_builder_configs[0].data_files, DataFilesDict) assert sorted(module_builder_configs[0].data_files) == ["test", "train"] assert len(module_builder_configs[0].data_files["train"]) == 3 assert len(module_builder_configs[0].data_files["test"]) == 3 assert module_builder_configs[0].drop_labels is True # parameter is passed from metadata # config named "default" is automatically considered to be a default config assert module_factory_result.builder_configs_parameters.default_config_name == "custom" # we don't pass config params to builder in builder_kwargs, they are stored in builder_configs directly assert "drop_labels" not in module_factory_result.builder_kwargs @pytest.mark.integration def test_HubDatasetModuleFactoryWithoutScript_with_two_configs_in_metadata(self): datasets_names = [SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, SAMPLE_DATASET_TWO_CONFIG_IN_METADATA_WITH_DEFAULT] for dataset_name in datasets_names: factory = HubDatasetModuleFactoryWithoutScript(dataset_name, download_config=self.download_config) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None module_metadata_configs = module_factory_result.builder_configs_parameters.metadata_configs assert module_metadata_configs is not None assert len(module_metadata_configs) == 2 assert list(module_metadata_configs) == ["v1", "v2"] assert "drop_labels" in module_metadata_configs["v1"] assert module_metadata_configs["v1"]["drop_labels"] is True assert "drop_labels" in module_metadata_configs["v2"] assert module_metadata_configs["v2"]["drop_labels"] is False module_builder_configs = module_factory_result.builder_configs_parameters.builder_configs assert module_builder_configs is not None assert len(module_builder_configs) == 2 module_builder_config_v1, module_builder_config_v2 = module_builder_configs assert module_builder_config_v1.name == "v1" assert module_builder_config_v2.name 
== "v2" assert isinstance(module_builder_config_v1, AudioFolderConfig) assert isinstance(module_builder_config_v2, AudioFolderConfig) assert isinstance(module_builder_config_v1.data_files, DataFilesPatternsDict) assert isinstance(module_builder_config_v2.data_files, DataFilesPatternsDict) module_builder_config_v1._resolve_data_files( module_factory_result.builder_kwargs["base_path"], DownloadConfig() ) module_builder_config_v2._resolve_data_files( module_factory_result.builder_kwargs["base_path"], DownloadConfig() ) assert isinstance(module_builder_config_v1.data_files, DataFilesDict) assert isinstance(module_builder_config_v2.data_files, DataFilesDict) assert sorted(module_builder_config_v1.data_files) == ["test", "train"] assert len(module_builder_config_v1.data_files["train"]) == 3 assert len(module_builder_config_v1.data_files["test"]) == 3 assert sorted(module_builder_config_v2.data_files) == ["test", "train"] assert len(module_builder_config_v2.data_files["train"]) == 2 assert len(module_builder_config_v2.data_files["test"]) == 1 assert module_builder_config_v1.drop_labels is True # parameter is passed from metadata assert module_builder_config_v2.drop_labels is False # parameter is passed from metadata # we don't pass config params to builder in builder_kwargs, they are stored in builder_configs directly assert "drop_labels" not in module_factory_result.builder_kwargs if dataset_name == SAMPLE_DATASET_TWO_CONFIG_IN_METADATA_WITH_DEFAULT: assert module_factory_result.builder_configs_parameters.default_config_name == "v1" else: assert module_factory_result.builder_configs_parameters.default_config_name is None @pytest.mark.integration def test_HubDatasetModuleFactoryWithScript(self): factory = HubDatasetModuleFactoryWithScript( SAMPLE_DATASET_IDENTIFIER, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None assert module_factory_result.builder_kwargs["base_path"].startswith(config.HF_ENDPOINT) @pytest.mark.integration def test_HubDatasetModuleFactoryWithParquetExport(self): factory = HubDatasetModuleFactoryWithParquetExport( SAMPLE_DATASET_IDENTIFIER, download_config=self.download_config, ) module_factory_result = factory.get_module() assert module_factory_result.module_path == "datasets.packaged_modules.parquet.parquet" assert module_factory_result.builder_configs_parameters.builder_configs assert isinstance(module_factory_result.builder_configs_parameters.builder_configs[0], ParquetConfig) module_factory_result.builder_configs_parameters.builder_configs[0]._resolve_data_files( base_path="", download_config=self.download_config ) assert module_factory_result.builder_configs_parameters.builder_configs[0].data_files == { "train": [ "hf://datasets/hf-internal-testing/dataset_with_script@8f965694d611974ef8661618ada1b5aeb1072915/default/train/0000.parquet" ], "validation": [ "hf://datasets/hf-internal-testing/dataset_with_script@8f965694d611974ef8661618ada1b5aeb1072915/default/validation/0000.parquet" ], } @pytest.mark.integration def test_HubDatasetModuleFactoryWithParquetExport_errors_on_wrong_sha(self): factory = HubDatasetModuleFactoryWithParquetExport( SAMPLE_DATASET_IDENTIFIER, download_config=self.download_config, revision="1a21ac5846fc3f36ad5f128740c58932d3d7806f", ) factory.get_module() factory = HubDatasetModuleFactoryWithParquetExport( SAMPLE_DATASET_IDENTIFIER, download_config=self.download_config, revision="wrong_sha", ) 
with self.assertRaises(_datasets_server.DatasetsServerError): factory.get_module() @pytest.mark.integration def test_CachedDatasetModuleFactory(self): name = SAMPLE_DATASET_IDENTIFIER2 load_dataset_builder(name, cache_dir=self.cache_dir).download_and_prepare() for offline_mode in OfflineSimulationMode: with offline(offline_mode): factory = CachedDatasetModuleFactory( name, cache_dir=self.cache_dir, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None def test_CachedDatasetModuleFactory_with_script(self): path = os.path.join(self._dataset_loading_script_dir, f"{DATASET_LOADING_SCRIPT_NAME}.py") factory = LocalDatasetModuleFactoryWithScript( path, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() for offline_mode in OfflineSimulationMode: with offline(offline_mode): factory = CachedDatasetModuleFactory( DATASET_LOADING_SCRIPT_NAME, dynamic_modules_path=self.dynamic_modules_path, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None @pytest.mark.filterwarnings("ignore:LocalMetricModuleFactory is deprecated:FutureWarning") @pytest.mark.filterwarnings("ignore:CachedMetricModuleFactory is deprecated:FutureWarning") def test_CachedMetricModuleFactory(self): path = os.path.join(self._metric_loading_script_dir, f"{METRIC_LOADING_SCRIPT_NAME}.py") factory = LocalMetricModuleFactory( path, download_config=self.download_config, dynamic_modules_path=self.dynamic_modules_path ) module_factory_result = factory.get_module() for offline_mode in OfflineSimulationMode: with offline(offline_mode): factory = CachedMetricModuleFactory( METRIC_LOADING_SCRIPT_NAME, dynamic_modules_path=self.dynamic_modules_path, ) module_factory_result = factory.get_module() assert importlib.import_module(module_factory_result.module_path) is not None @pytest.mark.parametrize( "factory_class", [ CachedDatasetModuleFactory, CachedMetricModuleFactory, GithubMetricModuleFactory, HubDatasetModuleFactoryWithoutScript, HubDatasetModuleFactoryWithScript, LocalDatasetModuleFactoryWithoutScript, LocalDatasetModuleFactoryWithScript, LocalMetricModuleFactory, PackagedDatasetModuleFactory, ], ) def test_module_factories(factory_class): name = "dummy_name" factory = factory_class(name) assert factory.name == name @pytest.mark.integration class LoadTest(TestCase): @pytest.fixture(autouse=True) def inject_fixtures(self, caplog): self._caplog = caplog def setUp(self): self.hf_modules_cache = tempfile.mkdtemp() self.cache_dir = tempfile.mkdtemp() self.dynamic_modules_path = datasets.load.init_dynamic_modules( name="test_datasets_modules2", hf_modules_cache=self.hf_modules_cache ) def tearDown(self): shutil.rmtree(self.hf_modules_cache) shutil.rmtree(self.cache_dir) def _dummy_module_dir(self, modules_dir, dummy_module_name, dummy_code): assert dummy_module_name.startswith("__") module_dir = os.path.join(modules_dir, dummy_module_name) os.makedirs(module_dir, exist_ok=True) module_path = os.path.join(module_dir, dummy_module_name + ".py") with open(module_path, "w") as f: f.write(dummy_code) return module_dir def test_dataset_module_factory(self): with tempfile.TemporaryDirectory() as tmp_dir: # prepare module from directory path dummy_code = "MY_DUMMY_VARIABLE = 'hello there'" module_dir = self._dummy_module_dir(tmp_dir, "__dummy_module_name1__", dummy_code) dataset_module = datasets.load.dataset_module_factory( module_dir, 
dynamic_modules_path=self.dynamic_modules_path ) dummy_module = importlib.import_module(dataset_module.module_path) self.assertEqual(dummy_module.MY_DUMMY_VARIABLE, "hello there") self.assertEqual(dataset_module.hash, sha256(dummy_code.encode("utf-8")).hexdigest()) # prepare module from file path + check resolved_file_path dummy_code = "MY_DUMMY_VARIABLE = 'general kenobi'" module_dir = self._dummy_module_dir(tmp_dir, "__dummy_module_name1__", dummy_code) module_path = os.path.join(module_dir, "__dummy_module_name1__.py") dataset_module = datasets.load.dataset_module_factory( module_path, dynamic_modules_path=self.dynamic_modules_path ) dummy_module = importlib.import_module(dataset_module.module_path) self.assertEqual(dummy_module.MY_DUMMY_VARIABLE, "general kenobi") self.assertEqual(dataset_module.hash, sha256(dummy_code.encode("utf-8")).hexdigest()) # missing module for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): with self.assertRaises( (DatasetNotFoundError, ConnectionError, requests.exceptions.ConnectionError) ): datasets.load.dataset_module_factory( "__missing_dummy_module_name__", dynamic_modules_path=self.dynamic_modules_path ) @pytest.mark.integration def test_offline_dataset_module_factory(self): repo_id = SAMPLE_DATASET_IDENTIFIER2 builder = load_dataset_builder(repo_id, cache_dir=self.cache_dir) builder.download_and_prepare() for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): self._caplog.clear() # allow provide the repo id without an explicit path to remote or local actual file dataset_module = datasets.load.dataset_module_factory(repo_id, cache_dir=self.cache_dir) self.assertEqual(dataset_module.module_path, "datasets.packaged_modules.cache.cache") self.assertIn("Using the latest cached version of the dataset", self._caplog.text) def test_offline_dataset_module_factory_with_script(self): with tempfile.TemporaryDirectory() as tmp_dir: dummy_code = "MY_DUMMY_VARIABLE = 'hello there'" module_dir = self._dummy_module_dir(tmp_dir, "__dummy_module_name2__", dummy_code) dataset_module_1 = datasets.load.dataset_module_factory( module_dir, dynamic_modules_path=self.dynamic_modules_path ) time.sleep(0.1) # make sure there's a difference in the OS update time of the python file dummy_code = "MY_DUMMY_VARIABLE = 'general kenobi'" module_dir = self._dummy_module_dir(tmp_dir, "__dummy_module_name2__", dummy_code) dataset_module_2 = datasets.load.dataset_module_factory( module_dir, dynamic_modules_path=self.dynamic_modules_path ) for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): self._caplog.clear() # allow provide the module name without an explicit path to remote or local actual file dataset_module_3 = datasets.load.dataset_module_factory( "__dummy_module_name2__", dynamic_modules_path=self.dynamic_modules_path ) # it loads the most recent version of the module self.assertEqual(dataset_module_2.module_path, dataset_module_3.module_path) self.assertNotEqual(dataset_module_1.module_path, dataset_module_3.module_path) self.assertIn("Using the latest cached version of the module", self._caplog.text) def test_load_dataset_from_hub(self): with self.assertRaises(DatasetNotFoundError) as context: datasets.load_dataset("_dummy") self.assertIn( "Dataset '_dummy' doesn't exist on the Hub", str(context.exception), ) with self.assertRaises(DatasetNotFoundError) as context: datasets.load_dataset("_dummy", revision="0.0.0") self.assertIn( "Dataset '_dummy' 
doesn't exist on the Hub", str(context.exception), ) self.assertIn( "at revision '0.0.0'", str(context.exception), ) for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): with self.assertRaises(ConnectionError) as context: datasets.load_dataset("_dummy") if offline_simulation_mode != OfflineSimulationMode.HF_DATASETS_OFFLINE_SET_TO_1: self.assertIn( "Couldn't reach '_dummy' on the Hub", str(context.exception), ) def test_load_dataset_namespace(self): with self.assertRaises(DatasetNotFoundError) as context: datasets.load_dataset("hf-internal-testing/_dummy") self.assertIn( "hf-internal-testing/_dummy", str(context.exception), ) for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): with self.assertRaises(ConnectionError) as context: datasets.load_dataset("hf-internal-testing/_dummy") self.assertIn("hf-internal-testing/_dummy", str(context.exception), msg=offline_simulation_mode) @pytest.mark.integration def test_load_dataset_builder_with_metadata(): builder = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER4) assert isinstance(builder, ImageFolder) assert builder.config.name == "default" assert builder.config.data_files is not None assert builder.config.drop_metadata is None with pytest.raises(ValueError): builder = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER4, "non-existing-config") @pytest.mark.integration def test_load_dataset_builder_config_kwargs_passed_as_arguments(): builder_default = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER4) builder_custom = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER4, drop_metadata=True) assert builder_custom.config.drop_metadata != builder_default.config.drop_metadata assert builder_custom.config.drop_metadata is True @pytest.mark.integration def test_load_dataset_builder_with_two_configs_in_metadata(): builder = datasets.load_dataset_builder(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "v1") assert isinstance(builder, AudioFolder) assert builder.config.name == "v1" assert builder.config.data_files is not None with pytest.raises(ValueError): datasets.load_dataset_builder(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA) with pytest.raises(ValueError): datasets.load_dataset_builder(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "non-existing-config") @pytest.mark.parametrize("serializer", [pickle, dill]) def test_load_dataset_builder_with_metadata_configs_pickable(serializer): builder = datasets.load_dataset_builder(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA) builder_unpickled = serializer.loads(serializer.dumps(builder)) assert builder.BUILDER_CONFIGS == builder_unpickled.BUILDER_CONFIGS assert list(builder_unpickled.builder_configs) == ["custom"] assert isinstance(builder_unpickled.builder_configs["custom"], AudioFolderConfig) builder2 = datasets.load_dataset_builder(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "v1") builder2_unpickled = serializer.loads(serializer.dumps(builder2)) assert builder2.BUILDER_CONFIGS == builder2_unpickled.BUILDER_CONFIGS != builder_unpickled.BUILDER_CONFIGS assert list(builder2_unpickled.builder_configs) == ["v1", "v2"] assert isinstance(builder2_unpickled.builder_configs["v1"], AudioFolderConfig) assert isinstance(builder2_unpickled.builder_configs["v2"], AudioFolderConfig) def test_load_dataset_builder_for_absolute_script_dir(dataset_loading_script_dir, data_dir): builder = datasets.load_dataset_builder(dataset_loading_script_dir, data_dir=data_dir) assert isinstance(builder, DatasetBuilder) assert builder.name == 
DATASET_LOADING_SCRIPT_NAME assert builder.dataset_name == DATASET_LOADING_SCRIPT_NAME assert builder.info.features == Features({"text": Value("string")}) def test_load_dataset_builder_for_relative_script_dir(dataset_loading_script_dir, data_dir): with set_current_working_directory_to_temp_dir(): relative_script_dir = DATASET_LOADING_SCRIPT_NAME shutil.copytree(dataset_loading_script_dir, relative_script_dir) builder = datasets.load_dataset_builder(relative_script_dir, data_dir=data_dir) assert isinstance(builder, DatasetBuilder) assert builder.name == DATASET_LOADING_SCRIPT_NAME assert builder.dataset_name == DATASET_LOADING_SCRIPT_NAME assert builder.info.features == Features({"text": Value("string")}) def test_load_dataset_builder_for_script_path(dataset_loading_script_dir, data_dir): builder = datasets.load_dataset_builder( os.path.join(dataset_loading_script_dir, DATASET_LOADING_SCRIPT_NAME + ".py"), data_dir=data_dir ) assert isinstance(builder, DatasetBuilder) assert builder.name == DATASET_LOADING_SCRIPT_NAME assert builder.dataset_name == DATASET_LOADING_SCRIPT_NAME assert builder.info.features == Features({"text": Value("string")}) def test_load_dataset_builder_for_absolute_data_dir(complex_data_dir): builder = datasets.load_dataset_builder(complex_data_dir) assert isinstance(builder, DatasetBuilder) assert builder.name == "text" assert builder.dataset_name == Path(complex_data_dir).name assert builder.config.name == "default" assert isinstance(builder.config.data_files, DataFilesDict) assert len(builder.config.data_files["train"]) > 0 assert len(builder.config.data_files["test"]) > 0 def test_load_dataset_builder_for_relative_data_dir(complex_data_dir): with set_current_working_directory_to_temp_dir(): relative_data_dir = "relative_data_dir" shutil.copytree(complex_data_dir, relative_data_dir) builder = datasets.load_dataset_builder(relative_data_dir) assert isinstance(builder, DatasetBuilder) assert builder.name == "text" assert builder.dataset_name == relative_data_dir assert builder.config.name == "default" assert isinstance(builder.config.data_files, DataFilesDict) assert len(builder.config.data_files["train"]) > 0 assert len(builder.config.data_files["test"]) > 0 @pytest.mark.integration def test_load_dataset_builder_for_community_dataset_with_script(): builder = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER) assert isinstance(builder, DatasetBuilder) assert builder.name == "parquet" assert builder.dataset_name == SAMPLE_DATASET_IDENTIFIER.split("/")[-1] assert builder.config.name == "default" assert builder.info.features == Features({"text": Value("string")}) namespace = SAMPLE_DATASET_IDENTIFIER[: SAMPLE_DATASET_IDENTIFIER.index("/")] assert builder._relative_data_dir().startswith(namespace) assert builder.__module__.startswith("datasets.") @pytest.mark.integration def test_load_dataset_builder_for_community_dataset_with_script_no_parquet_export(): with patch.object(config, "USE_PARQUET_EXPORT", False): builder = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER) assert isinstance(builder, DatasetBuilder) assert builder.name == SAMPLE_DATASET_IDENTIFIER.split("/")[-1] assert builder.dataset_name == SAMPLE_DATASET_IDENTIFIER.split("/")[-1] assert builder.config.name == "default" assert builder.info.features == Features({"text": Value("string")}) namespace = SAMPLE_DATASET_IDENTIFIER[: SAMPLE_DATASET_IDENTIFIER.index("/")] assert builder._relative_data_dir().startswith(namespace) assert SAMPLE_DATASET_IDENTIFIER.replace("/", "--") in builder.__module__ 
@pytest.mark.integration def test_load_dataset_builder_use_parquet_export_if_dont_trust_remote_code_keeps_features(): dataset_name = "food101" builder = datasets.load_dataset_builder(dataset_name, trust_remote_code=False) assert isinstance(builder, DatasetBuilder) assert builder.name == "parquet" assert builder.dataset_name == dataset_name assert builder.config.name == "default" assert list(builder.info.features) == ["image", "label"] assert builder.info.features["image"] == Image() @pytest.mark.integration def test_load_dataset_builder_for_community_dataset_without_script(): builder = datasets.load_dataset_builder(SAMPLE_DATASET_IDENTIFIER2) assert isinstance(builder, DatasetBuilder) assert builder.name == "text" assert builder.dataset_name == SAMPLE_DATASET_IDENTIFIER2.split("/")[-1] assert builder.config.name == "default" assert isinstance(builder.config.data_files, DataFilesDict) assert len(builder.config.data_files["train"]) > 0 assert len(builder.config.data_files["test"]) > 0 def test_load_dataset_builder_fail(): with pytest.raises(DatasetNotFoundError): datasets.load_dataset_builder("blabla") @pytest.mark.parametrize("keep_in_memory", [False, True]) def test_load_dataset_local_script(dataset_loading_script_dir, data_dir, keep_in_memory, caplog): with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase(): dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir, keep_in_memory=keep_in_memory) assert isinstance(dataset, DatasetDict) assert all(isinstance(d, Dataset) for d in dataset.values()) assert len(dataset) == 2 assert isinstance(next(iter(dataset["train"])), dict) def test_load_dataset_cached_local_script(dataset_loading_script_dir, data_dir, caplog): dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir) assert isinstance(dataset, DatasetDict) assert all(isinstance(d, Dataset) for d in dataset.values()) assert len(dataset) == 2 assert isinstance(next(iter(dataset["train"])), dict) for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): caplog.clear() # Load dataset from cache dataset = datasets.load_dataset(DATASET_LOADING_SCRIPT_NAME, data_dir=data_dir) assert len(dataset) == 2 assert "Using the latest cached version of the module" in caplog.text assert isinstance(next(iter(dataset["train"])), dict) with pytest.raises(DatasetNotFoundError) as exc_info: datasets.load_dataset(SAMPLE_DATASET_NAME_THAT_DOESNT_EXIST) assert f"Dataset '{SAMPLE_DATASET_NAME_THAT_DOESNT_EXIST}' doesn't exist on the Hub" in str(exc_info.value) @pytest.mark.integration @pytest.mark.parametrize("stream_from_cache, ", [False, True]) def test_load_dataset_cached_from_hub(stream_from_cache, caplog): dataset = load_dataset(SAMPLE_DATASET_IDENTIFIER3) assert isinstance(dataset, DatasetDict) assert all(isinstance(d, Dataset) for d in dataset.values()) assert len(dataset) == 2 assert isinstance(next(iter(dataset["train"])), dict) for offline_simulation_mode in list(OfflineSimulationMode): with offline(offline_simulation_mode): caplog.clear() # Load dataset from cache dataset = datasets.load_dataset(SAMPLE_DATASET_IDENTIFIER3, streaming=stream_from_cache) assert len(dataset) == 2 assert "Using the latest cached version of the dataset" in caplog.text assert isinstance(next(iter(dataset["train"])), dict) with pytest.raises(DatasetNotFoundError) as exc_info: datasets.load_dataset(SAMPLE_DATASET_NAME_THAT_DOESNT_EXIST) assert f"Dataset '{SAMPLE_DATASET_NAME_THAT_DOESNT_EXIST}' doesn't exist on the Hub" in 
str(exc_info.value) def test_load_dataset_streaming(dataset_loading_script_dir, data_dir): dataset = load_dataset(dataset_loading_script_dir, streaming=True, data_dir=data_dir) assert isinstance(dataset, IterableDatasetDict) assert all(isinstance(d, IterableDataset) for d in dataset.values()) assert len(dataset) == 2 assert isinstance(next(iter(dataset["train"])), dict) def test_load_dataset_streaming_gz_json(jsonl_gz_path): data_files = jsonl_gz_path ds = load_dataset("json", split="train", data_files=data_files, streaming=True) assert isinstance(ds, IterableDataset) ds_item = next(iter(ds)) assert ds_item == {"col_1": "0", "col_2": 0, "col_3": 0.0} @pytest.mark.integration @pytest.mark.parametrize( "path", ["sample.jsonl", "sample.jsonl.gz", "sample.tar", "sample.jsonl.xz", "sample.zip", "sample.jsonl.zst"] ) def test_load_dataset_streaming_compressed_files(path): repo_id = "hf-internal-testing/compressed_files" data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{path}" if data_files[-3:] in ("zip", "tar"): # we need to glob "*" inside archives data_files = data_files[-3:] + "://*::" + data_files return # TODO(QL, albert): re-add support for streaming ZIP and TAR archives ds = load_dataset("json", split="train", data_files=data_files, streaming=True) assert isinstance(ds, IterableDataset) ds_item = next(iter(ds)) assert ds_item == { "tokens": ["Ministeri", "de", "Justícia", "d'Espanya"], "ner_tags": [1, 2, 2, 2], "langs": ["ca", "ca", "ca", "ca"], "spans": ["PER: Ministeri de Justícia d'Espanya"], } @pytest.mark.parametrize("path_extension", ["csv", "csv.bz2"]) @pytest.mark.parametrize("streaming", [False, True]) def test_load_dataset_streaming_csv(path_extension, streaming, csv_path, bz2_csv_path): paths = {"csv": csv_path, "csv.bz2": bz2_csv_path} data_files = str(paths[path_extension]) features = Features({"col_1": Value("string"), "col_2": Value("int32"), "col_3": Value("float32")}) ds = load_dataset("csv", split="train", data_files=data_files, features=features, streaming=streaming) assert isinstance(ds, IterableDataset if streaming else Dataset) ds_item = next(iter(ds)) assert ds_item == {"col_1": "0", "col_2": 0, "col_3": 0.0} @pytest.mark.parametrize("streaming", [False, True]) @pytest.mark.parametrize("data_file", ["zip_csv_path", "zip_csv_with_dir_path", "csv_path"]) def test_load_dataset_zip_csv(data_file, streaming, zip_csv_path, zip_csv_with_dir_path, csv_path): data_file_paths = { "zip_csv_path": zip_csv_path, "zip_csv_with_dir_path": zip_csv_with_dir_path, "csv_path": csv_path, } data_files = str(data_file_paths[data_file]) expected_size = 8 if data_file.startswith("zip") else 4 features = Features({"col_1": Value("string"), "col_2": Value("int32"), "col_3": Value("float32")}) ds = load_dataset("csv", split="train", data_files=data_files, features=features, streaming=streaming) if streaming: ds_item_counter = 0 for ds_item in ds: if ds_item_counter == 0: assert ds_item == {"col_1": "0", "col_2": 0, "col_3": 0.0} ds_item_counter += 1 assert ds_item_counter == expected_size else: assert ds.shape[0] == expected_size ds_item = next(iter(ds)) assert ds_item == {"col_1": "0", "col_2": 0, "col_3": 0.0} @pytest.mark.parametrize("streaming", [False, True]) @pytest.mark.parametrize("data_file", ["zip_jsonl_path", "zip_jsonl_with_dir_path", "jsonl_path"]) def test_load_dataset_zip_jsonl(data_file, streaming, zip_jsonl_path, zip_jsonl_with_dir_path, jsonl_path): data_file_paths = { "zip_jsonl_path": zip_jsonl_path, "zip_jsonl_with_dir_path":
zip_jsonl_with_dir_path, "jsonl_path": jsonl_path, } data_files = str(data_file_paths[data_file]) expected_size = 8 if data_file.startswith("zip") else 4 features = Features({"col_1": Value("string"), "col_2": Value("int32"), "col_3": Value("float32")}) ds = load_dataset("json", split="train", data_files=data_files, features=features, streaming=streaming) if streaming: ds_item_counter = 0 for ds_item in ds: if ds_item_counter == 0: assert ds_item == {"col_1": "0", "col_2": 0, "col_3": 0.0} ds_item_counter += 1 assert ds_item_counter == expected_size else: assert ds.shape[0] == expected_size ds_item = next(iter(ds)) assert ds_item == {"col_1": "0", "col_2": 0, "col_3": 0.0} @pytest.mark.parametrize("streaming", [False, True]) @pytest.mark.parametrize("data_file", ["zip_text_path", "zip_text_with_dir_path", "text_path"]) def test_load_dataset_zip_text(data_file, streaming, zip_text_path, zip_text_with_dir_path, text_path): data_file_paths = { "zip_text_path": zip_text_path, "zip_text_with_dir_path": zip_text_with_dir_path, "text_path": text_path, } data_files = str(data_file_paths[data_file]) expected_size = 8 if data_file.startswith("zip") else 4 ds = load_dataset("text", split="train", data_files=data_files, streaming=streaming) if streaming: ds_item_counter = 0 for ds_item in ds: if ds_item_counter == 0: assert ds_item == {"text": "0"} ds_item_counter += 1 assert ds_item_counter == expected_size else: assert ds.shape[0] == expected_size ds_item = next(iter(ds)) assert ds_item == {"text": "0"} @pytest.mark.parametrize("streaming", [False, True]) def test_load_dataset_arrow(streaming, data_dir_with_arrow): ds = load_dataset("arrow", split="train", data_dir=data_dir_with_arrow, streaming=streaming) expected_size = 10 if streaming: ds_item_counter = 0 for ds_item in ds: if ds_item_counter == 0: assert ds_item == {"col_1": "foo"} ds_item_counter += 1 assert ds_item_counter == 10 else: assert ds.num_rows == 10 assert ds.shape[0] == expected_size ds_item = next(iter(ds)) assert ds_item == {"col_1": "foo"} def test_load_dataset_text_with_unicode_new_lines(text_path_with_unicode_new_lines): data_files = str(text_path_with_unicode_new_lines) ds = load_dataset("text", split="train", data_files=data_files) assert ds.num_rows == 3 def test_load_dataset_with_unsupported_extensions(text_dir_with_unsupported_extension): data_files = str(text_dir_with_unsupported_extension) ds = load_dataset("text", split="train", data_files=data_files) assert ds.num_rows == 4 @pytest.mark.integration def test_loading_from_the_datasets_hub(): with tempfile.TemporaryDirectory() as tmp_dir: with load_dataset(SAMPLE_DATASET_IDENTIFIER, cache_dir=tmp_dir) as dataset: assert len(dataset["train"]) == 2 assert len(dataset["validation"]) == 3 @pytest.mark.integration def test_loading_from_the_datasets_hub_with_token(): true_request = requests.Session().request def assert_auth(method, url, *args, headers, **kwargs): assert headers["authorization"] == "Bearer foo" return true_request(method, url, *args, headers=headers, **kwargs) with patch("requests.Session.request") as mock_request: mock_request.side_effect = assert_auth with tempfile.TemporaryDirectory() as tmp_dir: with offline(): with pytest.raises((ConnectionError, requests.exceptions.ConnectionError)): load_dataset(SAMPLE_NOT_EXISTING_DATASET_IDENTIFIER, cache_dir=tmp_dir, token="foo") mock_request.assert_called() @pytest.mark.integration def test_load_streaming_private_dataset(hf_token, hf_private_dataset_repo_txt_data): ds = load_dataset(hf_private_dataset_repo_txt_data, 
streaming=True, token=hf_token) assert next(iter(ds)) is not None @pytest.mark.integration def test_load_dataset_builder_private_dataset(hf_token, hf_private_dataset_repo_txt_data): builder = load_dataset_builder(hf_private_dataset_repo_txt_data, token=hf_token) assert isinstance(builder, DatasetBuilder) @pytest.mark.integration def test_load_streaming_private_dataset_with_zipped_data(hf_token, hf_private_dataset_repo_zipped_txt_data): ds = load_dataset(hf_private_dataset_repo_zipped_txt_data, streaming=True, token=hf_token) assert next(iter(ds)) is not None @pytest.mark.integration def test_load_dataset_config_kwargs_passed_as_arguments(): ds_default = load_dataset(SAMPLE_DATASET_IDENTIFIER4) ds_custom = load_dataset(SAMPLE_DATASET_IDENTIFIER4, drop_metadata=True) assert list(ds_default["train"].features) == ["image", "caption"] assert list(ds_custom["train"].features) == ["image"] @require_sndfile @pytest.mark.integration def test_load_hub_dataset_without_script_with_single_config_in_metadata(): # load the same dataset but with no configurations (=with default parameters) ds = load_dataset(SAMPLE_DATASET_NO_CONFIGS_IN_METADATA) assert list(ds["train"].features) == ["audio", "label"] # assert label feature is here as expected by default assert len(ds["train"]) == 5 and len(ds["test"]) == 4 ds2 = load_dataset(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA) # single config -> no need to specify it assert list(ds2["train"].features) == ["audio"] # assert param `drop_labels=True` from metadata is passed assert len(ds2["train"]) == 3 and len(ds2["test"]) == 3 ds3 = load_dataset(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA, "custom") assert list(ds3["train"].features) == ["audio"] # assert param `drop_labels=True` from metadata is passed assert len(ds3["train"]) == 3 and len(ds3["test"]) == 3 with pytest.raises(ValueError): # no config named "default" _ = load_dataset(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA, "default") @require_sndfile @pytest.mark.integration def test_load_hub_dataset_without_script_with_two_config_in_metadata(): ds = load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "v1") assert list(ds["train"].features) == ["audio"] # assert param `drop_labels=True` from metadata is passed assert len(ds["train"]) == 3 and len(ds["test"]) == 3 ds2 = load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "v2") assert list(ds2["train"].features) == [ "audio", "label", ] # assert param `drop_labels=False` from metadata is passed assert len(ds2["train"]) == 2 and len(ds2["test"]) == 1 with pytest.raises(ValueError): # config is required but not specified _ = load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA) with pytest.raises(ValueError): # no config named "default" _ = load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "default") ds_with_default = load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA_WITH_DEFAULT) # it's a dataset with the same data but "v1" config is marked as a default one assert list(ds_with_default["train"].features) == list(ds["train"].features) assert len(ds_with_default["train"]) == len(ds["train"]) and len(ds_with_default["test"]) == len(ds["test"]) @require_sndfile @pytest.mark.integration def test_load_hub_dataset_without_script_with_metadata_config_in_parallel(): # assert it doesn't fail (pickling of dynamically created class works) ds = load_dataset(SAMPLE_DATASET_SINGLE_CONFIG_IN_METADATA, num_proc=2) assert "label" not in ds["train"].features # assert param `drop_labels=True` from metadata is passed assert len(ds["train"]) == 3 and len(ds["test"]) == 3 ds = 
load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "v1", num_proc=2) assert "label" not in ds["train"].features # assert param `drop_labels=True` from metadata is passed assert len(ds["train"]) == 3 and len(ds["test"]) == 3 ds = load_dataset(SAMPLE_DATASET_TWO_CONFIG_IN_METADATA, "v2", num_proc=2) assert "label" in ds["train"].features assert len(ds["train"]) == 2 and len(ds["test"]) == 1 @require_pil @pytest.mark.integration @pytest.mark.parametrize("streaming", [True]) def test_load_dataset_private_zipped_images(hf_private_dataset_repo_zipped_img_data, hf_token, streaming): ds = load_dataset(hf_private_dataset_repo_zipped_img_data, split="train", streaming=streaming, token=hf_token) assert isinstance(ds, IterableDataset if streaming else Dataset) ds_items = list(ds) assert len(ds_items) == 2 def test_load_dataset_then_move_then_reload(dataset_loading_script_dir, data_dir, tmp_path, caplog): cache_dir1 = tmp_path / "cache1" cache_dir2 = tmp_path / "cache2" dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir, split="train", cache_dir=cache_dir1) fingerprint1 = dataset._fingerprint del dataset os.rename(cache_dir1, cache_dir2) caplog.clear() with caplog.at_level(INFO, logger=get_logger().name): dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir, split="train", cache_dir=cache_dir2) assert "Found cached dataset" in caplog.text assert dataset._fingerprint == fingerprint1, "for the caching mechanism to work, fingerprint should stay the same" dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir, split="test", cache_dir=cache_dir2) assert dataset._fingerprint != fingerprint1 def test_load_dataset_builder_then_edit_then_load_again(tmp_path: Path): dataset_dir = tmp_path / "test_load_dataset_then_edit_then_load_again" dataset_dir.mkdir() with open(dataset_dir / "train.txt", "w") as f: f.write("Hello there") dataset_builder = load_dataset_builder(str(dataset_dir)) with open(dataset_dir / "train.txt", "w") as f: f.write("General Kenobi !") edited_dataset_builder = load_dataset_builder(str(dataset_dir)) assert dataset_builder.cache_dir != edited_dataset_builder.cache_dir def test_load_dataset_readonly(dataset_loading_script_dir, dataset_loading_script_dir_readonly, data_dir, tmp_path): cache_dir1 = tmp_path / "cache1" cache_dir2 = tmp_path / "cache2" dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir, split="train", cache_dir=cache_dir1) fingerprint1 = dataset._fingerprint del dataset # Load readonly dataset and check that the fingerprint is the same. dataset = load_dataset(dataset_loading_script_dir_readonly, data_dir=data_dir, split="train", cache_dir=cache_dir2) assert dataset._fingerprint == fingerprint1, "Cannot load a dataset in a readonly folder." 
@pytest.mark.parametrize("max_in_memory_dataset_size", ["default", 0, 50, 500]) def test_load_dataset_local_with_default_in_memory( max_in_memory_dataset_size, dataset_loading_script_dir, data_dir, monkeypatch ): current_dataset_size = 148 if max_in_memory_dataset_size == "default": max_in_memory_dataset_size = 0 # default else: monkeypatch.setattr(datasets.config, "IN_MEMORY_MAX_SIZE", max_in_memory_dataset_size) if max_in_memory_dataset_size: expected_in_memory = current_dataset_size < max_in_memory_dataset_size else: expected_in_memory = False with assert_arrow_memory_increases() if expected_in_memory else assert_arrow_memory_doesnt_increase(): dataset = load_dataset(dataset_loading_script_dir, data_dir=data_dir) assert (dataset["train"].dataset_size < max_in_memory_dataset_size) is expected_in_memory @pytest.mark.parametrize("max_in_memory_dataset_size", ["default", 0, 100, 1000]) def test_load_from_disk_with_default_in_memory( max_in_memory_dataset_size, dataset_loading_script_dir, data_dir, tmp_path, monkeypatch ): current_dataset_size = 512 # arrow file size = 512, in-memory dataset size = 148 if max_in_memory_dataset_size == "default": max_in_memory_dataset_size = 0 # default else: monkeypatch.setattr(datasets.config, "IN_MEMORY_MAX_SIZE", max_in_memory_dataset_size) if max_in_memory_dataset_size: expected_in_memory = current_dataset_size < max_in_memory_dataset_size else: expected_in_memory = False dset = load_dataset(dataset_loading_script_dir, data_dir=data_dir, keep_in_memory=True) dataset_path = os.path.join(tmp_path, "saved_dataset") dset.save_to_disk(dataset_path) with assert_arrow_memory_increases() if expected_in_memory else assert_arrow_memory_doesnt_increase(): _ = load_from_disk(dataset_path) @pytest.mark.integration def test_remote_data_files(): repo_id = "hf-internal-testing/raw_jsonl" filename = "wikiann-bn-validation.jsonl" data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{filename}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) assert isinstance(ds, IterableDataset) ds_item = next(iter(ds)) assert ds_item.keys() == {"langs", "ner_tags", "spans", "tokens"} @pytest.mark.parametrize("deleted", [False, True]) def test_load_dataset_deletes_extracted_files(deleted, jsonl_gz_path, tmp_path): data_files = jsonl_gz_path cache_dir = tmp_path / "cache" if deleted: download_config = DownloadConfig(delete_extracted=True, cache_dir=cache_dir / "downloads") ds = load_dataset( "json", split="train", data_files=data_files, cache_dir=cache_dir, download_config=download_config ) else: # default ds = load_dataset("json", split="train", data_files=data_files, cache_dir=cache_dir) assert ds[0] == {"col_1": "0", "col_2": 0, "col_3": 0.0} assert ( [path for path in (cache_dir / "downloads" / "extracted").iterdir() if path.suffix != ".lock"] == [] ) is deleted def distributed_load_dataset(args): data_name, tmp_dir, datafiles = args dataset = load_dataset(data_name, cache_dir=tmp_dir, data_files=datafiles) return dataset def test_load_dataset_distributed(tmp_path, csv_path): num_workers = 5 args = "csv", str(tmp_path), csv_path with Pool(processes=num_workers) as pool: # start num_workers processes datasets = pool.map(distributed_load_dataset, [args] * num_workers) assert len(datasets) == num_workers assert all(len(dataset) == len(datasets[0]) > 0 for dataset in datasets) assert len(datasets[0].cache_files) > 0 assert all(dataset.cache_files == datasets[0].cache_files for dataset in datasets) def 
distributed_load_dataset_with_script(args): data_name, tmp_dir, download_mode = args dataset = load_dataset(data_name, cache_dir=tmp_dir, download_mode=download_mode) return dataset @require_not_windows # windows doesn't support overwriting Arrow files from other processes @pytest.mark.parametrize("download_mode", [None, "force_redownload"]) def test_load_dataset_distributed_with_script(tmp_path, download_mode): # we need to check in the "force_redownload" case # since in `_copy_script_and_other_resources_in_importable_dir()` we might delete the directory # containing the .py file while the other processes use it num_workers = 5 args = (SAMPLE_DATASET_IDENTIFIER, str(tmp_path), download_mode) with Pool(processes=num_workers) as pool: # start num_workers processes datasets = pool.map(distributed_load_dataset_with_script, [args] * num_workers) assert len(datasets) == num_workers assert all(len(dataset) == len(datasets[0]) > 0 for dataset in datasets) assert len(datasets[0].cache_files) > 0 assert all(dataset.cache_files == datasets[0].cache_files for dataset in datasets) def test_load_dataset_with_storage_options(mockfs): with mockfs.open("data.txt", "w") as f: f.write("Hello there\n") f.write("General Kenobi !") data_files = {"train": ["mock://data.txt"]} ds = load_dataset("text", data_files=data_files, storage_options=mockfs.storage_options) assert list(ds["train"]) == [{"text": "Hello there"}, {"text": "General Kenobi !"}] @require_pil def test_load_dataset_with_storage_options_with_decoding(mockfs, image_file): import PIL.Image filename = os.path.basename(image_file) with mockfs.open(filename, "wb") as fout: with open(image_file, "rb") as fin: fout.write(fin.read()) data_files = {"train": ["mock://" + filename]} ds = load_dataset("imagefolder", data_files=data_files, storage_options=mockfs.storage_options) assert len(ds["train"]) == 1 assert isinstance(ds["train"][0]["image"], PIL.Image.Image) def test_load_dataset_without_script_with_zip(zip_csv_path): path = str(zip_csv_path.parent) ds = load_dataset(path) assert list(ds.keys()) == ["train"] assert ds["train"].column_names == ["col_1", "col_2", "col_3"] assert ds["train"].num_rows == 8 assert ds["train"][0] == {"col_1": 0, "col_2": 0, "col_3": 0.0} @pytest.mark.parametrize("trust_remote_code, expected", [(False, False), (True, True), (None, True)]) def test_resolve_trust_remote_code(trust_remote_code, expected): assert resolve_trust_remote_code(trust_remote_code, repo_id="dummy") is expected @pytest.mark.parametrize("trust_remote_code, expected", [(False, False), (True, True), (None, ValueError)]) def test_resolve_trust_remote_code_future(trust_remote_code, expected): with patch.object(config, "HF_DATASETS_TRUST_REMOTE_CODE", None): # this will be the default soon if isinstance(expected, bool): resolve_trust_remote_code(trust_remote_code, repo_id="dummy") is expected else: with pytest.raises(expected): resolve_trust_remote_code(trust_remote_code, repo_id="dummy") @pytest.mark.integration def test_reload_old_cache_from_2_15(tmp_path: Path): cache_dir = tmp_path / "test_reload_old_cache_from_2_15" builder_cache_dir = ( cache_dir / "polinaeterna___audiofolder_two_configs_in_metadata/v2-374bfde4f55442bc/0.0.0/7896925d64deea5d" ) builder_cache_dir.mkdir(parents=True) arrow_path = builder_cache_dir / "audiofolder_two_configs_in_metadata-train.arrow" dataset_info_path = builder_cache_dir / "dataset_info.json" with dataset_info_path.open("w") as f: f.write("{}") arrow_path.touch() builder = load_dataset_builder( 
"polinaeterna/audiofolder_two_configs_in_metadata", "v2", data_files="v2/train/*", cache_dir=cache_dir.as_posix(), ) assert builder.cache_dir == builder_cache_dir.as_posix() # old cache from 2.15 builder = load_dataset_builder( "polinaeterna/audiofolder_two_configs_in_metadata", "v2", cache_dir=cache_dir.as_posix() ) assert ( builder.cache_dir == ( cache_dir / "polinaeterna___audiofolder_two_configs_in_metadata" / "v2" / "0.0.0" / str(builder.hash) ).as_posix() ) # new cache @pytest.mark.integration def test_update_dataset_card_data_with_standalone_yaml(): # Labels defined in .huggingface.yml because they are too long to be in README.md from datasets.utils.metadata import MetadataConfigs with patch( "datasets.utils.metadata.MetadataConfigs.from_dataset_card_data", side_effect=MetadataConfigs.from_dataset_card_data, ) as card_data_read_mock: builder = load_dataset_builder("datasets-maintainers/dataset-with-standalone-yaml") assert card_data_read_mock.call_args.args[0]["license"] is not None # from README.md assert card_data_read_mock.call_args.args[0]["dataset_info"] is not None # from standalone yaml assert card_data_read_mock.call_args.args[0]["tags"] == ["test"] # standalone yaml has precedence assert isinstance( builder.info.features["label"], datasets.ClassLabel ) # correctly loaded from long labels list in standalone yaml
datasets/tests/test_load.py/0
{ "file_path": "datasets/tests/test_load.py", "repo_id": "datasets", "token_count": 35068 }
78
import fnmatch import gc import os import shutil import tempfile import textwrap import time import unittest from io import BytesIO from pathlib import Path from unittest.mock import patch import numpy as np import pytest from huggingface_hub import DatasetCard, HfApi from datasets import ( Audio, ClassLabel, Dataset, DatasetDict, DownloadManager, Features, Image, Value, load_dataset, load_dataset_builder, ) from datasets.config import METADATA_CONFIGS_FIELD from datasets.data_files import get_data_patterns from datasets.packaged_modules.folder_based_builder.folder_based_builder import ( FolderBasedBuilder, FolderBasedBuilderConfig, ) from datasets.utils.file_utils import cached_path from datasets.utils.hub import hf_hub_url from tests.fixtures.hub import CI_HUB_ENDPOINT, CI_HUB_USER, CI_HUB_USER_TOKEN from tests.utils import for_all_test_methods, require_pil, require_sndfile, xfail_if_500_502_http_error pytestmark = pytest.mark.integration @for_all_test_methods(xfail_if_500_502_http_error) @pytest.mark.usefixtures("ci_hub_config", "ci_hfh_hf_hub_url") class TestPushToHub: _api = HfApi(endpoint=CI_HUB_ENDPOINT) _token = CI_HUB_USER_TOKEN def test_push_dataset_dict_to_hub_no_token(self, temporary_repo, set_ci_hub_access_token): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there is a single file on the repository that has the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset")) assert files == [".gitattributes", "README.md", "data/train-00000-of-00001.parquet"] def test_push_dataset_dict_to_hub_name_without_namespace(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name.split("/")[-1], token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there is a single file on the repository that has the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset")) assert files == [".gitattributes", "README.md", "data/train-00000-of-00001.parquet"] def test_push_dataset_dict_to_hub_datasets_with_different_features(self, cleanup_repo): ds_train = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_test = Dataset.from_dict({"x": [True, False, True], "y": ["a", "b", "c"]}) local_ds = DatasetDict({"train": ds_train, "test": ds_test}) ds_name = f"{CI_HUB_USER}/test-{int(time.time() * 10e6)}" try: with pytest.raises(ValueError): local_ds.push_to_hub(ds_name.split("/")[-1], token=self._token) except AssertionError: cleanup_repo(ds_name) raise def test_push_dataset_dict_to_hub_private(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token, private=True) hub_ds = load_dataset(ds_name, download_mode="force_redownload", token=self._token) assert 
local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there is a single file on the repository that has the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [".gitattributes", "README.md", "data/train-00000-of-00001.parquet"] def test_push_dataset_dict_to_hub(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there is a single file on the repository that has the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [".gitattributes", "README.md", "data/train-00000-of-00001.parquet"] def test_push_dataset_dict_to_hub_with_pull_request(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token, create_pr=True) hub_ds = load_dataset(ds_name, revision="refs/pr/1", download_mode="force_redownload") assert local_ds["train"].features == hub_ds["train"].features assert list(local_ds.keys()) == list(hub_ds.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there is a single file on the repository that has the correct name files = sorted( self._api.list_repo_files(ds_name, revision="refs/pr/1", repo_type="dataset", token=self._token) ) assert files == [".gitattributes", "README.md", "data/train-00000-of-00001.parquet"] def test_push_dataset_dict_to_hub_with_revision(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token, revision="dev") hub_ds = load_dataset(ds_name, revision="dev", download_mode="force_redownload") assert local_ds["train"].features == hub_ds["train"].features assert list(local_ds.keys()) == list(hub_ds.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there is a single file on the repository that has the correct name files = sorted(self._api.list_repo_files(ds_name, revision="dev", repo_type="dataset", token=self._token)) assert files == [".gitattributes", "README.md", "data/train-00000-of-00001.parquet"] def test_push_dataset_dict_to_hub_multiple_files(self, temporary_repo): ds = Dataset.from_dict({"x": list(range(1000)), "y": list(range(1000))}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: with patch("datasets.config.MAX_SHARD_SIZE", "16KB"): local_ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there are two files on the repository that have the correct name files = sorted(self._api.list_repo_files(ds_name, 
repo_type="dataset", token=self._token)) assert files == [ ".gitattributes", "README.md", "data/train-00000-of-00002.parquet", "data/train-00001-of-00002.parquet", ] def test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size(self, temporary_repo): ds = Dataset.from_dict({"x": list(range(1000)), "y": list(range(1000))}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token, max_shard_size="16KB") hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there are two files on the repository that have the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [ ".gitattributes", "README.md", "data/train-00000-of-00002.parquet", "data/train-00001-of-00002.parquet", ] def test_push_dataset_dict_to_hub_multiple_files_with_num_shards(self, temporary_repo): ds = Dataset.from_dict({"x": list(range(1000)), "y": list(range(1000))}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token, num_shards={"train": 2}) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there are two files on the repository that have the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [ ".gitattributes", "README.md", "data/train-00000-of-00002.parquet", "data/train-00001-of-00002.parquet", ] def test_push_dataset_dict_to_hub_with_multiple_commits(self, temporary_repo): ds = Dataset.from_dict({"x": list(range(1000)), "y": list(range(1000))}) local_ds = DatasetDict({"train": ds}) with temporary_repo() as ds_name: self._api.create_repo(ds_name, token=self._token, repo_type="dataset") num_commits_before_push = len(self._api.list_repo_commits(ds_name, repo_type="dataset", token=self._token)) with patch("datasets.config.MAX_SHARD_SIZE", "16KB"), patch( "datasets.config.UPLOADS_MAX_NUMBER_PER_COMMIT", 1 ): local_ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features # Ensure that there are two files on the repository that have the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [ ".gitattributes", "README.md", "data/train-00000-of-00002.parquet", "data/train-00001-of-00002.parquet", ] num_commits_after_push = len(self._api.list_repo_commits(ds_name, repo_type="dataset", token=self._token)) assert num_commits_after_push - num_commits_before_push > 1 def test_push_dataset_dict_to_hub_overwrite_files(self, temporary_repo): ds = Dataset.from_dict({"x": list(range(1000)), "y": list(range(1000))}) ds2 = Dataset.from_dict({"x": list(range(100)), "y": list(range(100))}) local_ds = DatasetDict({"train": ds, "random": ds2}) # Push to hub two times, but the second time with a larger amount of files. 
# Verify that the new files contain the correct dataset. with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token) with tempfile.TemporaryDirectory() as tmp: # Add a file starting with "data" to ensure it doesn't get deleted. path = Path(tmp) / "datafile.txt" with open(path, "w") as f: f.write("Bogus file") self._api.upload_file( path_or_fileobj=str(path), path_in_repo="datafile.txt", repo_id=ds_name, repo_type="dataset", token=self._token, ) local_ds.push_to_hub(ds_name, token=self._token, max_shard_size=500 << 5) # Ensure that there are two files on the repository that have the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [ ".gitattributes", "README.md", "data/random-00000-of-00001.parquet", "data/train-00000-of-00002.parquet", "data/train-00001-of-00002.parquet", "datafile.txt", ] self._api.delete_file("datafile.txt", repo_id=ds_name, repo_type="dataset", token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features del hub_ds # To ensure the reference to the memory-mapped Arrow file is dropped to avoid the PermissionError on Windows gc.collect() # Push to hub two times, but the second time with fewer files. # Verify that the new files contain the correct dataset and that non-necessary files have been deleted. with temporary_repo(ds_name): local_ds.push_to_hub(ds_name, token=self._token, max_shard_size=500 << 5) with tempfile.TemporaryDirectory() as tmp: # Add a file starting with "data" to ensure it doesn't get deleted. path = Path(tmp) / "datafile.txt" with open(path, "w") as f: f.write("Bogus file") self._api.upload_file( path_or_fileobj=str(path), path_in_repo="datafile.txt", repo_id=ds_name, repo_type="dataset", token=self._token, ) local_ds.push_to_hub(ds_name, token=self._token) # Ensure that there are two files on the repository that have the correct name files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset", token=self._token)) assert files == [ ".gitattributes", "README.md", "data/random-00000-of-00001.parquet", "data/train-00000-of-00001.parquet", "datafile.txt", ] # Keeping the "datafile.txt" breaks the load_dataset to think it's a text-based dataset self._api.delete_file("datafile.txt", repo_id=ds_name, repo_type="dataset", token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features def test_push_dataset_to_hub(self, temporary_repo): local_ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, split="train", token=self._token) local_ds_dict = {"train": local_ds} hub_ds_dict = load_dataset(ds_name, download_mode="force_redownload") assert list(local_ds_dict.keys()) == list(hub_ds_dict.keys()) for ds_split_name in local_ds_dict.keys(): local_ds = local_ds_dict[ds_split_name] hub_ds = hub_ds_dict[ds_split_name] assert local_ds.column_names == hub_ds.column_names assert list(local_ds.features.keys()) == list(hub_ds.features.keys()) assert local_ds.features == hub_ds.features def test_push_dataset_to_hub_custom_features(self, temporary_repo): features = 
Features({"x": Value("int64"), "y": ClassLabel(names=["neg", "pos"])}) ds = Dataset.from_dict({"x": [1, 2, 3], "y": [0, 0, 1]}, features=features) with temporary_repo() as ds_name: ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, split="train", download_mode="force_redownload") assert ds.column_names == hub_ds.column_names assert list(ds.features.keys()) == list(hub_ds.features.keys()) assert ds.features == hub_ds.features assert ds[:] == hub_ds[:] @require_sndfile def test_push_dataset_to_hub_custom_features_audio(self, temporary_repo): audio_path = os.path.join(os.path.dirname(__file__), "features", "data", "test_audio_44100.wav") data = {"x": [audio_path, None], "y": [0, -1]} features = Features({"x": Audio(), "y": Value("int32")}) ds = Dataset.from_dict(data, features=features) for embed_external_files in [True, False]: with temporary_repo() as ds_name: ds.push_to_hub(ds_name, embed_external_files=embed_external_files, token=self._token) hub_ds = load_dataset(ds_name, split="train", download_mode="force_redownload") assert ds.column_names == hub_ds.column_names assert list(ds.features.keys()) == list(hub_ds.features.keys()) assert ds.features == hub_ds.features np.testing.assert_equal(ds[0]["x"]["array"], hub_ds[0]["x"]["array"]) assert ds[1] == hub_ds[1] # don't test hub_ds[0] since audio decoding might be slightly different hub_ds = hub_ds.cast_column("x", Audio(decode=False)) elem = hub_ds[0]["x"] path, bytes_ = elem["path"], elem["bytes"] assert isinstance(path, str) assert os.path.basename(path) == "test_audio_44100.wav" assert bool(bytes_) == embed_external_files @require_pil def test_push_dataset_to_hub_custom_features_image(self, temporary_repo): image_path = os.path.join(os.path.dirname(__file__), "features", "data", "test_image_rgb.jpg") data = {"x": [image_path, None], "y": [0, -1]} features = Features({"x": Image(), "y": Value("int32")}) ds = Dataset.from_dict(data, features=features) for embed_external_files in [True, False]: with temporary_repo() as ds_name: ds.push_to_hub(ds_name, embed_external_files=embed_external_files, token=self._token) hub_ds = load_dataset(ds_name, split="train", download_mode="force_redownload") assert ds.column_names == hub_ds.column_names assert list(ds.features.keys()) == list(hub_ds.features.keys()) assert ds.features == hub_ds.features assert ds[:] == hub_ds[:] hub_ds = hub_ds.cast_column("x", Image(decode=False)) elem = hub_ds[0]["x"] path, bytes_ = elem["path"], elem["bytes"] assert isinstance(path, str) assert bool(bytes_) == embed_external_files @require_pil def test_push_dataset_to_hub_custom_features_image_list(self, temporary_repo): image_path = os.path.join(os.path.dirname(__file__), "features", "data", "test_image_rgb.jpg") data = {"x": [[image_path], [image_path, image_path]], "y": [0, -1]} features = Features({"x": [Image()], "y": Value("int32")}) ds = Dataset.from_dict(data, features=features) for embed_external_files in [True, False]: with temporary_repo() as ds_name: ds.push_to_hub(ds_name, embed_external_files=embed_external_files, token=self._token) hub_ds = load_dataset(ds_name, split="train", download_mode="force_redownload") assert ds.column_names == hub_ds.column_names assert list(ds.features.keys()) == list(hub_ds.features.keys()) assert ds.features == hub_ds.features assert ds[:] == hub_ds[:] hub_ds = hub_ds.cast_column("x", [Image(decode=False)]) elem = hub_ds[0]["x"][0] path, bytes_ = elem["path"], elem["bytes"] assert isinstance(path, str) assert bool(bytes_) == embed_external_files def 
test_push_dataset_dict_to_hub_custom_features(self, temporary_repo): features = Features({"x": Value("int64"), "y": ClassLabel(names=["neg", "pos"])}) ds = Dataset.from_dict({"x": [1, 2, 3], "y": [0, 0, 1]}, features=features) local_ds = DatasetDict({"test": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["test"].features.keys()) == list(hub_ds["test"].features.keys()) assert local_ds["test"].features == hub_ds["test"].features def test_push_dataset_to_hub_custom_splits(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) with temporary_repo() as ds_name: ds.push_to_hub(ds_name, split="random", token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert ds.column_names == hub_ds["random"].column_names assert list(ds.features.keys()) == list(hub_ds["random"].features.keys()) assert ds.features == hub_ds["random"].features def test_push_dataset_to_hub_multiple_splits_one_by_one(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) with temporary_repo() as ds_name: ds.push_to_hub(ds_name, split="train", token=self._token) ds.push_to_hub(ds_name, split="test", token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert sorted(hub_ds) == ["test", "train"] assert ds.column_names == hub_ds["train"].column_names assert list(ds.features.keys()) == list(hub_ds["train"].features.keys()) assert ds.features == hub_ds["train"].features def test_push_dataset_dict_to_hub_custom_splits(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"random": ds}) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["random"].features.keys()) == list(hub_ds["random"].features.keys()) assert local_ds["random"].features == hub_ds["random"].features @unittest.skip("This test cannot pass until iterable datasets have push to hub") def test_push_streaming_dataset_dict_to_hub(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) local_ds = DatasetDict({"train": ds}) with tempfile.TemporaryDirectory() as tmp: local_ds.save_to_disk(tmp) local_ds = load_dataset(tmp, streaming=True) with temporary_repo() as ds_name: local_ds.push_to_hub(ds_name, token=self._token) hub_ds = load_dataset(ds_name, download_mode="force_redownload") assert local_ds.column_names == hub_ds.column_names assert list(local_ds["train"].features.keys()) == list(hub_ds["train"].features.keys()) assert local_ds["train"].features == hub_ds["train"].features def test_push_multiple_dataset_configs_to_hub_load_dataset_builder(self, temporary_repo): ds_default = Dataset.from_dict({"a": [0], "b": [1]}) ds_config1 = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_config2 = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) with temporary_repo() as ds_name: ds_default.push_to_hub(ds_name, token=self._token) ds_config1.push_to_hub(ds_name, "config1", token=self._token) ds_config2.push_to_hub(ds_name, "config2", token=self._token) ds_builder_default = load_dataset_builder(ds_name, download_mode="force_redownload") # default config assert len(ds_builder_default.BUILDER_CONFIGS) == 3 assert 
len(ds_builder_default.config.data_files["train"]) == 1 assert fnmatch.fnmatch( ds_builder_default.config.data_files["train"][0], "*/data/train-*", ) ds_builder_config1 = load_dataset_builder(ds_name, "config1", download_mode="force_redownload") assert len(ds_builder_config1.BUILDER_CONFIGS) == 3 assert len(ds_builder_config1.config.data_files["train"]) == 1 assert fnmatch.fnmatch( ds_builder_config1.config.data_files["train"][0], "*/config1/train-*", ) ds_builder_config2 = load_dataset_builder(ds_name, "config2", download_mode="force_redownload") assert len(ds_builder_config2.BUILDER_CONFIGS) == 3 assert len(ds_builder_config2.config.data_files["train"]) == 1 assert fnmatch.fnmatch( ds_builder_config2.config.data_files["train"][0], "*/config2/train-*", ) with pytest.raises(ValueError): # no config 'config3' load_dataset_builder(ds_name, "config3", download_mode="force_redownload") def test_push_multiple_dataset_configs_to_hub_load_dataset(self, temporary_repo): ds_default = Dataset.from_dict({"a": [0], "b": [1]}) ds_config1 = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_config2 = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) with temporary_repo() as ds_name: ds_default.push_to_hub(ds_name, token=self._token) ds_config1.push_to_hub(ds_name, "config1", token=self._token) ds_config2.push_to_hub(ds_name, "config2", token=self._token) files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset")) assert files == [ ".gitattributes", "README.md", "config1/train-00000-of-00001.parquet", "config2/train-00000-of-00001.parquet", "data/train-00000-of-00001.parquet", ] hub_ds_default = load_dataset(ds_name, download_mode="force_redownload") hub_ds_config1 = load_dataset(ds_name, "config1", download_mode="force_redownload") hub_ds_config2 = load_dataset(ds_name, "config2", download_mode="force_redownload") # only "train" split assert len(hub_ds_default) == len(hub_ds_config1) == len(hub_ds_config2) == 1 assert ds_default.column_names == hub_ds_default["train"].column_names == ["a", "b"] assert ds_config1.column_names == hub_ds_config1["train"].column_names == ["x", "y"] assert ds_config2.column_names == hub_ds_config2["train"].column_names == ["foo", "bar"] assert ds_default.features == hub_ds_default["train"].features assert ds_config1.features == hub_ds_config1["train"].features assert ds_config2.features == hub_ds_config2["train"].features assert ds_default.num_rows == hub_ds_default["train"].num_rows == 1 assert ds_config1.num_rows == hub_ds_config1["train"].num_rows == 3 assert ds_config2.num_rows == hub_ds_config2["train"].num_rows == 2 with pytest.raises(ValueError): # no config 'config3' load_dataset(ds_name, "config3", download_mode="force_redownload") @pytest.mark.parametrize("specific_default_config_name", [False, True]) def test_push_multiple_dataset_configs_to_hub_readme_metadata_content( self, specific_default_config_name, temporary_repo ): ds_default = Dataset.from_dict({"a": [0], "b": [2]}) ds_config1 = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_config2 = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) with temporary_repo() as ds_name: if specific_default_config_name: ds_default.push_to_hub(ds_name, config_name="config0", set_default=True, token=self._token) else: ds_default.push_to_hub(ds_name, token=self._token) ds_config1.push_to_hub(ds_name, "config1", token=self._token) ds_config2.push_to_hub(ds_name, "config2", token=self._token) # check that configs args was correctly pushed to README.md ds_readme_path = cached_path(hf_hub_url(ds_name, 
"README.md")) dataset_card_data = DatasetCard.load(ds_readme_path).data assert METADATA_CONFIGS_FIELD in dataset_card_data assert isinstance(dataset_card_data[METADATA_CONFIGS_FIELD], list) assert sorted(dataset_card_data[METADATA_CONFIGS_FIELD], key=lambda x: x["config_name"]) == ( [ { "config_name": "config0", "data_files": [ {"split": "train", "path": "config0/train-*"}, ], "default": True, }, ] if specific_default_config_name else [] ) + [ { "config_name": "config1", "data_files": [ {"split": "train", "path": "config1/train-*"}, ], }, { "config_name": "config2", "data_files": [ {"split": "train", "path": "config2/train-*"}, ], }, ] + ( [] if specific_default_config_name else [ { "config_name": "default", "data_files": [ {"split": "train", "path": "data/train-*"}, ], }, ] ) def test_push_multiple_dataset_dict_configs_to_hub_load_dataset_builder(self, temporary_repo): ds_default = Dataset.from_dict({"a": [0], "b": [1]}) ds_config1 = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_config2 = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) ds_default = DatasetDict({"random": ds_default}) ds_config1 = DatasetDict({"random": ds_config1}) ds_config2 = DatasetDict({"random": ds_config2}) with temporary_repo() as ds_name: ds_default.push_to_hub(ds_name, token=self._token) ds_config1.push_to_hub(ds_name, "config1", token=self._token) ds_config2.push_to_hub(ds_name, "config2", token=self._token) ds_builder_default = load_dataset_builder(ds_name, download_mode="force_redownload") # default config assert len(ds_builder_default.BUILDER_CONFIGS) == 3 assert len(ds_builder_default.config.data_files["random"]) == 1 assert fnmatch.fnmatch( ds_builder_default.config.data_files["random"][0], "*/data/random-*", ) ds_builder_config1 = load_dataset_builder(ds_name, "config1", download_mode="force_redownload") assert len(ds_builder_config1.BUILDER_CONFIGS) == 3 assert len(ds_builder_config1.config.data_files["random"]) == 1 assert fnmatch.fnmatch( ds_builder_config1.config.data_files["random"][0], "*/config1/random-*", ) ds_builder_config2 = load_dataset_builder(ds_name, "config2", download_mode="force_redownload") assert len(ds_builder_config2.BUILDER_CONFIGS) == 3 assert len(ds_builder_config2.config.data_files["random"]) == 1 assert fnmatch.fnmatch( ds_builder_config2.config.data_files["random"][0], "*/config2/random-*", ) with pytest.raises(ValueError): # no config named 'config3' load_dataset_builder(ds_name, "config3", download_mode="force_redownload") def test_push_multiple_dataset_dict_configs_to_hub_load_dataset(self, temporary_repo): ds_default = Dataset.from_dict({"a": [0], "b": [1]}) ds_config1 = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_config2 = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) ds_default = DatasetDict({"train": ds_default, "random": ds_default}) ds_config1 = DatasetDict({"train": ds_config1, "random": ds_config1}) ds_config2 = DatasetDict({"train": ds_config2, "random": ds_config2}) with temporary_repo() as ds_name: ds_default.push_to_hub(ds_name, token=self._token) ds_config1.push_to_hub(ds_name, "config1", token=self._token) ds_config2.push_to_hub(ds_name, "config2", token=self._token) files = sorted(self._api.list_repo_files(ds_name, repo_type="dataset")) assert files == [ ".gitattributes", "README.md", "config1/random-00000-of-00001.parquet", "config1/train-00000-of-00001.parquet", "config2/random-00000-of-00001.parquet", "config2/train-00000-of-00001.parquet", "data/random-00000-of-00001.parquet", "data/train-00000-of-00001.parquet", ] hub_ds_default = 
load_dataset(ds_name, download_mode="force_redownload") hub_ds_config1 = load_dataset(ds_name, "config1", download_mode="force_redownload") hub_ds_config2 = load_dataset(ds_name, "config2", download_mode="force_redownload") # two splits expected_splits = ["random", "train"] assert len(hub_ds_default) == len(hub_ds_config1) == len(hub_ds_config2) == 2 assert sorted(hub_ds_default) == sorted(hub_ds_config1) == sorted(hub_ds_config2) == expected_splits for split in expected_splits: assert ds_default[split].column_names == hub_ds_default[split].column_names == ["a", "b"] assert ds_config1[split].column_names == hub_ds_config1[split].column_names == ["x", "y"] assert ds_config2[split].column_names == hub_ds_config2[split].column_names == ["foo", "bar"] assert ds_default[split].features == hub_ds_default[split].features assert ds_config1[split].features == hub_ds_config1[split].features assert ds_config2[split].features == hub_ds_config2[split].features assert ds_default[split].num_rows == hub_ds_default[split].num_rows == 1 assert ds_config1[split].num_rows == hub_ds_config1[split].num_rows == 3 assert ds_config2[split].num_rows == hub_ds_config2[split].num_rows == 2 with pytest.raises(ValueError): # no config 'config3' load_dataset(ds_name, "config3", download_mode="force_redownload") @pytest.mark.parametrize("specific_default_config_name", [False, True]) def test_push_multiple_dataset_dict_configs_to_hub_readme_metadata_content( self, specific_default_config_name, temporary_repo ): ds_default = Dataset.from_dict({"a": [0], "b": [1]}) ds_config1 = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_config2 = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) ds_default = DatasetDict({"train": ds_default, "random": ds_default}) ds_config1 = DatasetDict({"train": ds_config1, "random": ds_config1}) ds_config2 = DatasetDict({"train": ds_config2, "random": ds_config2}) with temporary_repo() as ds_name: if specific_default_config_name: ds_default.push_to_hub(ds_name, config_name="config0", set_default=True, token=self._token) else: ds_default.push_to_hub(ds_name, token=self._token) ds_config1.push_to_hub(ds_name, "config1", token=self._token) ds_config2.push_to_hub(ds_name, "config2", token=self._token) # check that configs args was correctly pushed to README.md ds_readme_path = cached_path(hf_hub_url(ds_name, "README.md")) dataset_card_data = DatasetCard.load(ds_readme_path).data assert METADATA_CONFIGS_FIELD in dataset_card_data assert isinstance(dataset_card_data[METADATA_CONFIGS_FIELD], list) assert sorted(dataset_card_data[METADATA_CONFIGS_FIELD], key=lambda x: x["config_name"]) == ( [ { "config_name": "config0", "data_files": [ {"split": "train", "path": "config0/train-*"}, {"split": "random", "path": "config0/random-*"}, ], "default": True, }, ] if specific_default_config_name else [] ) + [ { "config_name": "config1", "data_files": [ {"split": "train", "path": "config1/train-*"}, {"split": "random", "path": "config1/random-*"}, ], }, { "config_name": "config2", "data_files": [ {"split": "train", "path": "config2/train-*"}, {"split": "random", "path": "config2/random-*"}, ], }, ] + ( [] if specific_default_config_name else [ { "config_name": "default", "data_files": [ {"split": "train", "path": "data/train-*"}, {"split": "random", "path": "data/random-*"}, ], }, ] ) def test_push_dataset_to_hub_with_config_no_metadata_configs(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_another_config = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) parquet_buf =
BytesIO() ds.to_parquet(parquet_buf) parquet_content = parquet_buf.getvalue() with temporary_repo() as ds_name: self._api.create_repo(ds_name, token=self._token, repo_type="dataset") # old push_to_hub was uploading the parquet files only - without metadata configs self._api.upload_file( path_or_fileobj=parquet_content, path_in_repo="data/train-00000-of-00001.parquet", repo_id=ds_name, repo_type="dataset", token=self._token, ) ds_another_config.push_to_hub(ds_name, "another_config", token=self._token) ds_builder = load_dataset_builder(ds_name, download_mode="force_redownload") assert len(ds_builder.config.data_files) == 1 assert len(ds_builder.config.data_files["train"]) == 1 assert fnmatch.fnmatch(ds_builder.config.data_files["train"][0], "*/data/train-00000-of-00001.parquet") ds_another_config_builder = load_dataset_builder( ds_name, "another_config", download_mode="force_redownload" ) assert len(ds_another_config_builder.config.data_files) == 1 assert len(ds_another_config_builder.config.data_files["train"]) == 1 assert fnmatch.fnmatch( ds_another_config_builder.config.data_files["train"][0], "*/another_config/train-00000-of-00001.parquet", ) def test_push_dataset_dict_to_hub_with_config_no_metadata_configs(self, temporary_repo): ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]}) ds_another_config = Dataset.from_dict({"foo": [1, 2], "bar": [4, 5]}) parquet_buf = BytesIO() ds.to_parquet(parquet_buf) parquet_content = parquet_buf.getvalue() local_ds_another_config = DatasetDict({"random": ds_another_config}) with temporary_repo() as ds_name: self._api.create_repo(ds_name, token=self._token, repo_type="dataset") # old push_to_hub was uploading the parquet files only - without metadata configs self._api.upload_file( path_or_fileobj=parquet_content, path_in_repo="data/random-00000-of-00001.parquet", repo_id=ds_name, repo_type="dataset", token=self._token, ) local_ds_another_config.push_to_hub(ds_name, "another_config", token=self._token) ds_builder = load_dataset_builder(ds_name, download_mode="force_redownload") assert len(ds_builder.config.data_files) == 1 assert len(ds_builder.config.data_files["random"]) == 1 assert fnmatch.fnmatch(ds_builder.config.data_files["random"][0], "*/data/random-00000-of-00001.parquet") ds_another_config_builder = load_dataset_builder( ds_name, "another_config", download_mode="force_redownload" ) assert len(ds_another_config_builder.config.data_files) == 1 assert len(ds_another_config_builder.config.data_files["random"]) == 1 assert fnmatch.fnmatch( ds_another_config_builder.config.data_files["random"][0], "*/another_config/random-00000-of-00001.parquet", ) class DummyFolderBasedBuilder(FolderBasedBuilder): BASE_FEATURE = dict BASE_COLUMN_NAME = "base" BUILDER_CONFIG_CLASS = FolderBasedBuilderConfig EXTENSIONS = [".txt"] # CLASSIFICATION_TASK = TextClassification(text_column="base", label_column="label") @pytest.fixture(params=[".jsonl", ".csv"]) def text_file_with_metadata(request, tmp_path, text_file): metadata_filename_extension = request.param data_dir = tmp_path / "data_dir" data_dir.mkdir() text_file_path = data_dir / "file.txt" shutil.copyfile(text_file, text_file_path) metadata_file_path = data_dir / f"metadata{metadata_filename_extension}" metadata = textwrap.dedent( """\ {"file_name": "file.txt", "additional_feature": "Dummy file"} """ if metadata_filename_extension == ".jsonl" else """\ file_name,additional_feature file.txt,Dummy file """ ) with open(metadata_file_path, "w", encoding="utf-8") as f: f.write(metadata) return text_file_path, 
metadata_file_path @for_all_test_methods(xfail_if_500_502_http_error) @pytest.mark.usefixtures("ci_hub_config", "ci_hfh_hf_hub_url") class TestLoadFromHub: _api = HfApi(endpoint=CI_HUB_ENDPOINT) _token = CI_HUB_USER_TOKEN def test_load_dataset_with_metadata_file(self, temporary_repo, text_file_with_metadata, tmp_path): text_file_path, metadata_file_path = text_file_with_metadata data_dir_path = text_file_path.parent cache_dir_path = tmp_path / ".cache" cache_dir_path.mkdir() with temporary_repo() as repo_id: self._api.create_repo(repo_id, token=self._token, repo_type="dataset") self._api.upload_folder( folder_path=str(data_dir_path), repo_id=repo_id, repo_type="dataset", token=self._token, ) data_files = [ f"hf://datasets/{repo_id}/{text_file_path.name}", f"hf://datasets/{repo_id}/{metadata_file_path.name}", ] builder = DummyFolderBasedBuilder( dataset_name=repo_id.split("/")[-1], data_files=data_files, cache_dir=str(cache_dir_path) ) download_manager = DownloadManager() gen_kwargs = builder._split_generators(download_manager)[0].gen_kwargs generator = builder._generate_examples(**gen_kwargs) result = [example for _, example in generator] assert len(result) == 1 def test_get_data_patterns(self, temporary_repo, tmp_path): repo_dir = tmp_path / "test_get_data_patterns" data_dir = repo_dir / "data" data_dir.mkdir(parents=True) data_file = data_dir / "train-00001-of-00009.parquet" data_file.touch() with temporary_repo() as repo_id: self._api.create_repo(repo_id, token=self._token, repo_type="dataset") self._api.upload_folder( folder_path=str(repo_dir), repo_id=repo_id, repo_type="dataset", token=self._token, ) data_file_patterns = get_data_patterns(f"hf://datasets/{repo_id}") assert data_file_patterns == { "train": ["data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"] }
datasets/tests/test_upstream_hub.py/0
{ "file_path": "datasets/tests/test_upstream_hub.py", "repo_id": "datasets", "token_count": 22798 }
79
<jupyter_start><jupyter_text>Unit 3: Deep Q-Learning with Atari Games ๐Ÿ‘พ using RL Baselines3 ZooIn this notebook, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.We're using the [RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) with no extensions such as Double-DQN, Dueling-DQN, and Prioritized Experience Replay.โฌ‡๏ธ Here is an example of what **you will achieve** โฌ‡๏ธ<jupyter_code>%%html <video controls autoplay><source src="https://huggingface.co/ThomasSimonini/ppo-SpaceInvadersNoFrameskip-v4/resolve/main/replay.mp4" type="video/mp4"></video><jupyter_output><empty_output><jupyter_text>๐ŸŽฎ Environments: - [SpacesInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/)You can see the difference between Space Invaders versions here ๐Ÿ‘‰ https://gymnasium.farama.org/environments/atari/space_invaders/variants ๐Ÿ“š RL-Library: - [RL-Baselines3-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo) Objectives of this notebook ๐Ÿ†At the end of the notebook, you will:- Be able to understand deeper **how RL Baselines3 Zoo works**.- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score ๐Ÿ”ฅ. This notebook is from Deep Reinforcement Learning Course In this free course, you will:- ๐Ÿ“– Study Deep Reinforcement Learning in **theory and practice**.- ๐Ÿง‘โ€๐Ÿ’ป Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.- ๐Ÿค– Train **agents in unique environments** And more check ๐Ÿ“š the syllabus ๐Ÿ‘‰ https://simoninithomas.github.io/deep-rl-courseDonโ€™t forget to **sign up to the course** (we are collecting your email to be able toย **send you the links when each Unit is published and give you information about the challenges and updates).**The best way to keep in touch is to join our discord server to exchange with the community and with us ๐Ÿ‘‰๐Ÿป https://discord.gg/ydHrjt3WP5 Prerequisites ๐Ÿ—๏ธBefore diving into the notebook, you need to:๐Ÿ”ฒ ๐Ÿ“š **[Study Deep Q-Learning by reading Unit 3](https://huggingface.co/deep-rl-course/unit3/introduction)** ๐Ÿค— We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues). 
Let's train a Deep Q-Learning agent playing Atari's Space Invaders 👾 and upload it to the Hub. We strongly recommend students **to use Google Colab for the hands-on exercises instead of running them on their personal computers**. By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**. To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**. To find your result, go to the leaderboard and find your model: **the result = mean_reward - std of reward**. For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process A word of advice 💡 It's better to run this colab in a copy on your Google Drive, so that **if it times out** you still have the saved notebook on your Google Drive and won't need to start everything from scratch. To do that you can either do `Ctrl + S` or `File > Save a copy in Google Drive.` Also, we're going to **train it for 90 minutes with 1M timesteps**. Typing `!nvidia-smi` will tell you what GPU you're using. And if you want to train for more steps, such as 10 million, it will take about 9 hours, potentially resulting in Colab timing out. In that case, I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`. Set the GPU 💪 - To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type` - `Hardware Accelerator > GPU` Install RL-Baselines3 Zoo and its dependencies 📚 If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and it's not a critical error**; it's a version conflict, but the packages we need are installed.<jupyter_code># For now we install this update of RL-Baselines3 Zoo !pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf<jupyter_output><empty_output><jupyter_text>IF AND ONLY IF THE VERSION ABOVE DOES NOT EXIST ANYMORE. UNCOMMENT AND INSTALL THE ONE BELOW<jupyter_code>#!pip install rl_zoo3==2.0.0a9 !apt-get install swig cmake ffmpeg<jupyter_output><empty_output><jupyter_text>To be able to use Atari games in Gymnasium we need to install the atari package, and accept-rom-license to download the ROM files (the game files).<jupyter_code>!pip install gymnasium[atari] !pip install gymnasium[accept-rom-license]<jupyter_output><empty_output><jupyter_text>Create a virtual display 🔽 During the notebook, we'll need to generate a replay video. To do so, with Colab, **we need to have a virtual screen to be able to render the environment** (and thus record the frames). Hence the following cell will install the libraries and create and run a virtual screen 🖥<jupyter_code>%%capture !apt install python-opengl !apt install xvfb !pip3 install pyvirtualdisplay # Virtual display from pyvirtualdisplay import Display virtual_display = Display(visible=0, size=(1400, 900)) virtual_display.start()<jupyter_output><empty_output><jupyter_text>Train our Deep Q-Learning Agent to Play Space Invaders 👾 To train an agent with RL-Baselines3-Zoo, we just need to do two things: 1.
Create a hyperparameter config file that will contain our training hyperparameters called `dqn.yml`.This is a template example:```SpaceInvadersNoFrameskip-v4: env_wrapper: - stable_baselines3.common.atari_wrappers.AtariWrapper frame_stack: 4 policy: 'CnnPolicy' n_timesteps: !!float 1e6 buffer_size: 100000 learning_rate: !!float 1e-4 batch_size: 32 learning_starts: 100000 target_update_interval: 1000 train_freq: 4 gradient_steps: 1 exploration_fraction: 0.1 exploration_final_eps: 0.01 If True, you need to deactivate handle_timeout_termination in the replay_buffer_kwargs optimize_memory_usage: False``` Here we see that:- We use the `Atari Wrapper` that preprocess the input (Frame reduction ,grayscale, stack 4 frames)- We use `CnnPolicy`, since we use Convolutional layers to process the frames- We train it for 10 million `n_timesteps` - Memory (Experience Replay) size is 100000, aka the amount of experience steps you saved to train again your agent with.๐Ÿ’ก My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours, which could likely result in Colab timing out. I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`. In terms of hyperparameters optimization, my advice is to focus on these 3 hyperparameters:- `learning_rate`- `buffer_size (Experience Memory size)`- `batch_size`As a good practice, you need to **check the documentation to understand what each hyperparameters does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.htmlparameters 2. We start the training and save the models on `logs` folder ๐Ÿ“- Define the algorithm after `--algo`, where we save the model after `-f` and where the hyperparameter config is after `-c`.<jupyter_code>!python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________<jupyter_output><empty_output><jupyter_text>Solution<jupyter_code>!python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml<jupyter_output><empty_output><jupyter_text>Let's evaluate our agent ๐Ÿ‘€- RL-Baselines3-Zoo provides `enjoy.py`, a python script to evaluate our agent. In most RL libraries, we call the evaluation script `enjoy.py`.- Let's evaluate it for 5000 timesteps ๐Ÿ”ฅ<jupyter_code>!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/<jupyter_output><empty_output><jupyter_text>Solution<jupyter_code>!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/<jupyter_output><empty_output><jupyter_text>Publish our trained model on the Hub ๐Ÿš€Now that we saw we got good results after the training, we can publish our trained model on the hub ๐Ÿค— with one line of code. 
By using `rl_zoo3.push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**. This way: - You can **showcase your work** 🔥 - You can **visualize your agent playing** 👀 - You can **share with the community an agent that others can use** 💾 - You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard To be able to share your model with the community there are three more steps to follow: 1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join 2️⃣ Sign in, and then store your authentication token from the Hugging Face website. - Create a new token (https://huggingface.co/settings/tokens) **with write role** - Copy the token - Run the cell below and paste the token<jupyter_code>from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub. notebook_login() !git config --global credential.helper store<jupyter_output><empty_output><jupyter_text>If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` 3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥 Let's run the `push_to_hub.py` script to upload our trained agent to the Hub. `--repo-name`: The name of the repo. `-orga`: Your Hugging Face username. `-f`: Where the trained model folder is (in our case `logs`).<jupyter_code>!python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name _____________________ -orga _____________________ -f logs/<jupyter_output><empty_output><jupyter_text>Solution<jupyter_code>!python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name dqn-SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/<jupyter_output><empty_output><jupyter_text>Congrats 🥳 you've just trained and uploaded your first Deep Q-Learning agent using RL-Baselines-3 Zoo. The script above should have displayed a link to a model repository such as https://huggingface.co/ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4. When you go to this link, you can: - See a **video preview of your agent** at the right.
- Click "Files and versions" to see all the files in the repository.- Click "Use in stable-baselines3" to get a code snippet that shows how to load the model.- A model card (`README.md` file) which gives a description of the model and the hyperparameters you used.Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent.**Compare the results of your agents with your classmates** using the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) ๐Ÿ† Load a powerful trained model ๐Ÿ”ฅ- The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents on the Hub**.You can find them here: ๐Ÿ‘‰ https://huggingface.co/sb3Some examples:- Asteroids: https://huggingface.co/sb3/dqn-AsteroidsNoFrameskip-v4- Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4- Breakout: https://huggingface.co/sb3/dqn-BreakoutNoFrameskip-v4- Road Runner: https://huggingface.co/sb3/dqn-RoadRunnerNoFrameskip-v4Let's load an agent playing Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4<jupyter_code>%%html <video controls autoplay><source src="https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4/resolve/main/replay.mp4" type="video/mp4"></video><jupyter_output><empty_output><jupyter_text>1. We download the model using `rl_zoo3.load_from_hub`, and place it in a new folder that we can call `rl_trained`<jupyter_code># Download model and save it into the logs/ folder !python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga sb3 -f rl_trained/<jupyter_output><empty_output><jupyter_text>2. Let's evaluate if for 5000 timesteps<jupyter_code>!python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render<jupyter_output><empty_output>
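<jupyter_text>(Note added to this page, not part of the original notebook) If you want to inspect the downloaded policy yourself instead of going through `rl_zoo3.enjoy`, you can load it directly with Stable-Baselines3. The cell below is only a minimal sketch: the exact location of the `.zip` file inside `rl_trained/` is an assumption (rl_zoo3 typically saves it as `<folder>/<algo>/<env_id>_<run_id>/<env_id>.zip`), so check your `rl_trained/` folder and adjust `model_path` accordingly.<jupyter_code># Illustrative sketch (assumptions flagged below) — not required for the hands-on
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Hypothetical path: verify the actual file name created in rl_trained/
model_path = "rl_trained/dqn/BeamRiderNoFrameskip-v4_1/BeamRiderNoFrameskip-v4.zip"
model = DQN.load(model_path)

# Rebuild the same preprocessing used at training time: Atari wrappers + 4 stacked frames
env = make_atari_env("BeamRiderNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
episode_reward = 0.0
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
    episode_reward += float(rewards[0])
    if dones[0]:
        break
print("Episode reward:", episode_reward)<jupyter_output><empty_output>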
deep-rl-class/notebooks/unit3/unit3.ipynb/0
{ "file_path": "deep-rl-class/notebooks/unit3/unit3.ipynb", "repo_id": "deep-rl-class", "token_count": 3982 }
80
# Conclusion [[conclusion]] Congrats on finishing this unit! **That was the biggest one**, and there was a lot of information. And congrats on finishing the tutorial. You've just trained your first Deep RL agents and shared them with the community! 🥳 It's **normal if you still feel confused by some of these elements**. This was the same for me and for everyone who has studied RL. **Take time to really grasp the material** before continuing. It's important to master these elements and have a solid foundation before entering the fun part. Naturally, during the course, we're going to use and explain these terms again, but it's better to understand them before diving into the next units. In the next (bonus) unit, we're going to reinforce what we just learned by **training Huggy the Dog to fetch a stick**. You will then be able to play with him 🤗. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy.jpg" alt="Huggy"/> Finally, we would love **to hear what you think of the course and how we can improve it**. If you have some feedback, please 👉 [fill in this form](https://forms.gle/BzKXWzLAGZESGNaE9) ### Keep Learning, stay awesome 🤗
deep-rl-class/units/en/unit1/conclusion.mdx/0
{ "file_path": "deep-rl-class/units/en/unit1/conclusion.mdx", "repo_id": "deep-rl-class", "token_count": 349 }
81
# Hands-on [[hands-on]] <CourseFloatingBanner classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit2/unit2.ipynb"} ]} askForHelpUrl="http://hf.co/join/discord" /> Now that we studied the Q-Learning algorithm, let's implement it from scratch and train our Q-Learning agent in two environments: 1. [Frozen-Lake-v1 (non-slippery and slippery version)](https://gymnasium.farama.org/environments/toy_text/frozen_lake/) โ˜ƒ๏ธ : where our agent will need toย **go from the starting state (S) to the goal state (G)**ย by walking only on frozen tiles (F) and avoiding holes (H). 2. [An autonomous taxi](https://gymnasium.farama.org/environments/toy_text/taxi/) ๐Ÿš– will needย **to learn to navigate**ย a city toย **transport its passengers from point A to point B.** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/envs.gif" alt="Environments"/> Thanks to a [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard), you'll be able to compare your results with other classmates and exchange the best practices to improve your agent's scores. Who will win the challenge for Unit 2? To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push your trained Taxi model to the Hub and **get a result of >= 4.5**. To find your result, go to the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model, **the result = mean_reward - std of reward** For more information about the certification process, check this section ๐Ÿ‘‰ https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process And you can check your progress here ๐Ÿ‘‰ https://huggingface.co/spaces/ThomasSimonini/Check-my-progress-Deep-RL-Course **To start the hands-on click on the Open In Colab button** ๐Ÿ‘‡ : [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit2/unit2.ipynb) We strongly **recommend students use Google Colab for the hands-on exercises** instead of running them on their personal computers. By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects** of setting up your environments. # Unit 2: Q-Learning with FrozenLake-v1 โ›„ and Taxi-v3 ๐Ÿš• <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/thumbnail.jpg" alt="Unit 2 Thumbnail"> In this notebook, **you'll code your first Reinforcement Learning agent from scratch** to play FrozenLake โ„๏ธ using Q-Learning, share it with the community, and experiment with different configurations. 
โฌ‡๏ธ Here is an example of what **you will achieve in just a couple of minutes.** โฌ‡๏ธ <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/envs.gif" alt="Environments"/> ### ๐ŸŽฎ Environments: - [FrozenLake-v1](https://gymnasium.farama.org/environments/toy_text/frozen_lake/) - [Taxi-v3](https://gymnasium.farama.org/environments/toy_text/taxi/) ### ๐Ÿ“š RL-Library: - Python and NumPy - [Gymnasium](https://gymnasium.farama.org/) We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues). ## Objectives of this notebook ๐Ÿ† At the end of the notebook, you will: - Be able to use **Gymnasium**, the environment library. - Be able to code a Q-Learning agent from scratch. - Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score ๐Ÿ”ฅ. ## This notebook is from the Deep Reinforcement Learning Course <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/deep-rl-course-illustration.jpg" alt="Deep RL Course illustration"/> In this free course, you will: - ๐Ÿ“– Study Deep Reinforcement Learning in **theory and practice**. - ๐Ÿง‘โ€๐Ÿ’ป Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0. - ๐Ÿค– Train **agents in unique environments** And more check ๐Ÿ“š the syllabus ๐Ÿ‘‰ https://simoninithomas.github.io/deep-rl-course Donโ€™t forget to **<a href="http://eepurl.com/ic5ZUD">sign up to the course</a>** (we are collecting your email to be able toย **send you the links when each Unit is published and give you information about the challenges and updates).** The best way to keep in touch is to join our discord server to exchange with the community and with us ๐Ÿ‘‰๐Ÿป https://discord.gg/ydHrjt3WP5 ## Prerequisites ๐Ÿ—๏ธ Before diving into the notebook, you need to: ๐Ÿ”ฒ ๐Ÿ“š **Study [Q-Learning by reading Unit 2](https://huggingface.co/deep-rl-course/unit2/introduction)** ๐Ÿค— ## A small recap of Q-Learning *Q-Learning* **is the RL algorithm that**: - Trains *Q-Function*, an **action-value function** that is encoded, in internal memory, by a *Q-table* **that contains all the state-action pair values.** - Given a state and action, our Q-Function **will search the Q-table for the corresponding value.** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-function-2.jpg" alt="Q function" width="100%"/> - When the training is done, **we have an optimal Q-Function, so an optimal Q-Table.** - And if we **have an optimal Q-function**, we have an optimal policy, since we **know for each state, the best action to take.** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/link-value-policy.jpg" alt="Link value policy" width="100%"/> But, in the beginning,ย our **Q-Table is useless since it gives arbitrary value for each state-action pairย (most of the time we initialize the Q-Table to 0 values)**. 
But, as weโ€™llย explore the environment and update our Q-Table it will give us better and better approximations <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/q-learning.jpeg" alt="q-learning.jpeg" width="100%"/> This is the Q-Learning pseudocode: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-learning-2.jpg" alt="Q-Learning" width="100%"/> # Let's code our first Reinforcement Learning algorithm ๐Ÿš€ To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push your trained Taxi model to the Hub and **get a result of >= 4.5**. To find your result, go to the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model, **the result = mean_reward - std of reward** For more information about the certification process, check this section ๐Ÿ‘‰ https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process ## Install dependencies and create a virtual display ๐Ÿ”ฝ In the notebook, we'll need to generate a replay video. To do so, with Colab, **we need to have a virtual screen to render the environment** (and thus record the frames). Hence the following cell will install the libraries and create and run a virtual screen ๐Ÿ–ฅ Weโ€™ll install multiple ones: - `gymnasium`: Contains the FrozenLake-v1 โ›„ and Taxi-v3 ๐Ÿš• environments. - `pygame`: Used for the FrozenLake-v1 and Taxi-v3 UI. - `numpy`: Used for handling our Q-table. The Hugging Face Hub ๐Ÿค— works as a central place where anyone can share and explore models and datasets. It has versioning, metrics, visualizations and other features that will allow you to easily collaborate with others. You can see here all the Deep RL models available (if they use Q Learning) here ๐Ÿ‘‰ https://huggingface.co/models?other=q-learning ```bash pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit2/requirements-unit2.txt ``` ```bash sudo apt-get update sudo apt-get install -y python3-opengl apt install ffmpeg xvfb pip3 install pyvirtualdisplay ``` To make sure the new installed libraries are used, **sometimes it's required to restart the notebook runtime**. The next cell will force the **runtime to crash, so you'll need to connect again and run the code starting from here**. Thanks to this trick, **we will be able to run our virtual screen.** ```python import os os.kill(os.getpid(), 9) ``` ```python # Virtual display from pyvirtualdisplay import Display virtual_display = Display(visible=0, size=(1400, 900)) virtual_display.start() ``` ## Import the packages ๐Ÿ“ฆ In addition to the installed libraries, we also use: - `random`: To generate random numbers (that will be useful for epsilon-greedy policy). - `imageio`: To generate a replay video. 
```python import numpy as np import gymnasium as gym import random import imageio import os import tqdm import pickle5 as pickle from tqdm.notebook import tqdm ``` We're now ready to code our Q-Learning algorithm ๐Ÿ”ฅ # Part 1: Frozen Lake โ›„ (non slippery version) ## Create and understand [FrozenLake environment โ›„]((https://gymnasium.farama.org/environments/toy_text/frozen_lake/) --- ๐Ÿ’ก A good habit when you start to use an environment is to check its documentation ๐Ÿ‘‰ https://gymnasium.farama.org/environments/toy_text/frozen_lake/ --- We're going to train our Q-Learning agent **to navigate from the starting state (S) to the goal state (G) by walking only on frozen tiles (F) and avoid holes (H)**. We can have two sizes of environment: - `map_name="4x4"`: a 4x4 grid version - `map_name="8x8"`: a 8x8 grid version The environment has two modes: - `is_slippery=False`: The agent always moves **in the intended direction** due to the non-slippery nature of the frozen lake (deterministic). - `is_slippery=True`: The agent **may not always move in the intended direction** due to the slippery nature of the frozen lake (stochastic). For now let's keep it simple with the 4x4 map and non-slippery. We add a parameter called `render_mode` that specifies how the environment should be visualised. In our case because we **want to record a video of the environment at the end, we need to set render_mode to rgb_array**. As [explained in the documentation](https://gymnasium.farama.org/api/env/#gymnasium.Env.render) โ€œrgb_arrayโ€: Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image. ```python # Create the FrozenLake-v1 environment using 4x4 map and non-slippery version and render_mode="rgb_array" env = gym.make() # TODO use the correct parameters ``` ### Solution ```python env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False, render_mode="rgb_array") ``` You can create your own custom grid like this: ```python desc=["SFFF", "FHFH", "FFFH", "HFFG"] gym.make('FrozenLake-v1', desc=desc, is_slippery=True) ``` but we'll use the default environment for now. ### Let's see what the Environment looks like: ```python # We create our environment with gym.make("<name_of_the_environment>")- `is_slippery=False`: The agent always moves in the intended direction due to the non-slippery nature of the frozen lake (deterministic). print("_____OBSERVATION SPACE_____ \n") print("Observation Space", env.observation_space) print("Sample observation", env.observation_space.sample()) # Get a random observation ``` We see with `Observation Space Shape Discrete(16)` that the observation is an integer representing the **agentโ€™s current position as current_row * ncols + current_col (where both the row and col start at 0)**. For example, the goal position in the 4x4 map can be calculated as follows: 3 * 4 + 3 = 15. The number of possible observations is dependent on the size of the map. 
**For example, the 4x4 map has 16 possible observations.** For instance, this is what state = 0 looks like: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/frozenlake.png" alt="FrozenLake"> ```python print("\n _____ACTION SPACE_____ \n") print("Action Space Shape", env.action_space.n) print("Action Space Sample", env.action_space.sample()) # Take a random action ``` The action space (the set of possible actions the agent can take) is discrete with 4 actions available ๐ŸŽฎ: - 0: GO LEFT - 1: GO DOWN - 2: GO RIGHT - 3: GO UP Reward function ๐Ÿ’ฐ: - Reach goal: +1 - Reach hole: 0 - Reach frozen: 0 ## Create and Initialize the Q-table ๐Ÿ—„๏ธ (๐Ÿ‘€ Step 1 of the pseudocode) <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-learning-2.jpg" alt="Q-Learning" width="100%"/> It's time to initialize our Q-table! To know how many rows (states) and columns (actions) to use, we need to know the action and observation space. We already know their values from before, but we'll want to obtain them programmatically so that our algorithm generalizes for different environments. Gym provides us a way to do that: `env.action_space.n` and `env.observation_space.n` ```python state_space = print("There are ", state_space, " possible states") action_space = print("There are ", action_space, " possible actions") ``` ```python # Let's create our Qtable of size (state_space, action_space) and initialized each values at 0 using np.zeros. np.zeros needs a tuple (a,b) def initialize_q_table(state_space, action_space): Qtable = return Qtable ``` ```python Qtable_frozenlake = initialize_q_table(state_space, action_space) ``` ### Solution ```python state_space = env.observation_space.n print("There are ", state_space, " possible states") action_space = env.action_space.n print("There are ", action_space, " possible actions") ``` ```python # Let's create our Qtable of size (state_space, action_space) and initialized each values at 0 using np.zeros def initialize_q_table(state_space, action_space): Qtable = np.zeros((state_space, action_space)) return Qtable ``` ```python Qtable_frozenlake = initialize_q_table(state_space, action_space) ``` ## Define the greedy policy ๐Ÿค– Remember we have two policies since Q-Learning is an **off-policy** algorithm. This means we're using a **different policy for acting and updating the value function**. - Epsilon-greedy policy (acting policy) - Greedy-policy (updating policy) The greedy policy will also be the final policy we'll have when the Q-learning agent completes training. The greedy policy is used to select an action using the Q-table. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/off-on-4.jpg" alt="Q-Learning" width="100%"/> ```python def greedy_policy(Qtable, state): # Exploitation: take the action with the highest state, action value action = return action ``` #### Solution ```python def greedy_policy(Qtable, state): # Exploitation: take the action with the highest state, action value action = np.argmax(Qtable[state][:]) return action ``` ## Define the epsilon-greedy policy ๐Ÿค– Epsilon-greedy is the training policy that handles the exploration/exploitation trade-off. The idea with epsilon-greedy: - With *probability 1โ€Š-โ€Šษ›* : **we do exploitation** (i.e. our agent selects the action with the highest state-action pair value). 
- With *probability ษ›*: we do **exploration** (trying a random action). As the training continues, we progressively **reduce the epsilon value since we will need less and less exploration and more exploitation.** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-learning-4.jpg" alt="Q-Learning" width="100%"/> ```python def epsilon_greedy_policy(Qtable, state, epsilon): # Randomly generate a number between 0 and 1 random_num = # if random_num > greater than epsilon --> exploitation if random_num > epsilon: # Take the action with the highest value given a state # np.argmax can be useful here action = # else --> exploration else: action = # Take a random action return action ``` #### Solution ```python def epsilon_greedy_policy(Qtable, state, epsilon): # Randomly generate a number between 0 and 1 random_num = random.uniform(0, 1) # if random_num > greater than epsilon --> exploitation if random_num > epsilon: # Take the action with the highest value given a state # np.argmax can be useful here action = greedy_policy(Qtable, state) # else --> exploration else: action = env.action_space.sample() return action ``` ## Define the hyperparameters โš™๏ธ The exploration related hyperparamters are some of the most important ones. - We need to make sure that our agent **explores enough of the state space** to learn a good value approximation. To do that, we need to have progressive decay of the epsilon. - If you decrease epsilon too fast (too high decay_rate), **you take the risk that your agent will be stuck**, since your agent didn't explore enough of the state space and hence can't solve the problem. ```python # Training parameters n_training_episodes = 10000 # Total training episodes learning_rate = 0.7 # Learning rate # Evaluation parameters n_eval_episodes = 100 # Total number of test episodes # Environment parameters env_id = "FrozenLake-v1" # Name of the environment max_steps = 99 # Max steps per episode gamma = 0.95 # Discounting rate eval_seed = [] # The evaluation seed of the environment # Exploration parameters max_epsilon = 1.0 # Exploration probability at start min_epsilon = 0.05 # Minimum exploration probability decay_rate = 0.0005 # Exponential decay rate for exploration prob ``` ## Create the training loop method <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-learning-2.jpg" alt="Q-Learning" width="100%"/> The training loop goes like this: ``` For episode in the total of training episodes: Reduce epsilon (since we need less and less exploration) Reset the environment For step in max timesteps: Choose the action At using epsilon greedy policy Take the action (a) and observe the outcome state(s') and reward (r) Update the Q-value Q(s,a) using Bellman equation Q(s,a) + lr [R(s,a) + gamma * max Q(s',a') - Q(s,a)] If done, finish the episode Our next state is the new state ``` ```python def train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable): for episode in tqdm(range(n_training_episodes)): # Reduce epsilon (because we need less and less exploration) epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-decay_rate*episode) # Reset the environment state, info = env.reset() step = 0 terminated = False truncated = False # repeat for step in range(max_steps): # Choose the action At using epsilon greedy policy action = # Take action At and observe Rt+1 and St+1 # Take the action (a) and observe the outcome state(s') and reward (r) 
new_state, reward, terminated, truncated, info = # Update Q(s,a):= Q(s,a) + lr [R(s,a) + gamma * max Q(s',a') - Q(s,a)] Qtable[state][action] = # If terminated or truncated finish the episode if terminated or truncated: break # Our next state is the new state state = new_state return Qtable ``` #### Solution ```python def train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable): for episode in tqdm(range(n_training_episodes)): # Reduce epsilon (because we need less and less exploration) epsilon = min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode) # Reset the environment state, info = env.reset() step = 0 terminated = False truncated = False # repeat for step in range(max_steps): # Choose the action At using epsilon greedy policy action = epsilon_greedy_policy(Qtable, state, epsilon) # Take action At and observe Rt+1 and St+1 # Take the action (a) and observe the outcome state(s') and reward (r) new_state, reward, terminated, truncated, info = env.step(action) # Update Q(s,a):= Q(s,a) + lr [R(s,a) + gamma * max Q(s',a') - Q(s,a)] Qtable[state][action] = Qtable[state][action] + learning_rate * ( reward + gamma * np.max(Qtable[new_state]) - Qtable[state][action] ) # If terminated or truncated finish the episode if terminated or truncated: break # Our next state is the new state state = new_state return Qtable ``` ## Train the Q-Learning agent ๐Ÿƒ ```python Qtable_frozenlake = train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable_frozenlake) ``` ## Let's see what our Q-Learning table looks like now ๐Ÿ‘€ ```python Qtable_frozenlake ``` ## The evaluation method ๐Ÿ“ - We defined the evaluation method that we're going to use to test our Q-Learning agent. ```python def evaluate_agent(env, max_steps, n_eval_episodes, Q, seed): """ Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward. :param env: The evaluation environment :param n_eval_episodes: Number of episode to evaluate the agent :param Q: The Q-table :param seed: The evaluation seed array (for taxi-v3) """ episode_rewards = [] for episode in tqdm(range(n_eval_episodes)): if seed: state, info = env.reset(seed=seed[episode]) else: state, info = env.reset() step = 0 truncated = False terminated = False total_rewards_ep = 0 for step in range(max_steps): # Take the action (index) that have the maximum expected future reward given that state action = greedy_policy(Q, state) new_state, reward, terminated, truncated, info = env.step(action) total_rewards_ep += reward if terminated or truncated: break state = new_state episode_rewards.append(total_rewards_ep) mean_reward = np.mean(episode_rewards) std_reward = np.std(episode_rewards) return mean_reward, std_reward ``` ## Evaluate our Q-Learning agent ๐Ÿ“ˆ - Usually, you should have a mean reward of 1.0 - The **environment is relatively easy** since the state space is really small (16). What you can try to do is [to replace it with the slippery version](https://www.gymlibrary.dev/environments/toy_text/frozen_lake/), which introduces stochasticity, making the environment more complex. ```python # Evaluate our Agent mean_reward, std_reward = evaluate_agent(env, max_steps, n_eval_episodes, Qtable_frozenlake, eval_seed) print(f"Mean_reward={mean_reward:.2f} +/- {std_reward:.2f}") ``` ## Publish our trained model to the Hub ๐Ÿ”ฅ Now that we saw good results after the training, **we can publish our trained model to the Hub ๐Ÿค— with one line of code**. 
Here's an example of a Model Card: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/modelcard.png" alt="Model card" width="100%"/> Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent. #### Do not modify this code ```python from huggingface_hub import HfApi, snapshot_download from huggingface_hub.repocard import metadata_eval_result, metadata_save from pathlib import Path import datetime import json ``` ```python def record_video(env, Qtable, out_directory, fps=1): """ Generate a replay video of the agent :param env :param Qtable: Qtable of our agent :param out_directory :param fps: how many frame per seconds (with taxi-v3 and frozenlake-v1 we use 1) """ images = [] terminated = False truncated = False state, info = env.reset(seed=random.randint(0, 500)) img = env.render() images.append(img) while not terminated or truncated: # Take the action (index) that have the maximum expected future reward given that state action = np.argmax(Qtable[state][:]) state, reward, terminated, truncated, info = env.step( action ) # We directly put next_state = state for recording logic img = env.render() images.append(img) imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps) ``` ```python def push_to_hub(repo_id, model, env, video_fps=1, local_repo_path="hub"): """ Evaluate, Generate a video and Upload a model to Hugging Face Hub. This method does the complete pipeline: - It evaluates the model - It generates the model card - It generates a replay video of the agent - It pushes everything to the Hub :param repo_id: repo_id: id of the model repository from the Hugging Face Hub :param env :param video_fps: how many frame per seconds to record our video replay (with taxi-v3 and frozenlake-v1 we use 1) :param local_repo_path: where the local repository is """ _, repo_name = repo_id.split("/") eval_env = env api = HfApi() # Step 1: Create the repo repo_url = api.create_repo( repo_id=repo_id, exist_ok=True, ) # Step 2: Download files repo_local_path = Path(snapshot_download(repo_id=repo_id)) # Step 3: Save the model if env.spec.kwargs.get("map_name"): model["map_name"] = env.spec.kwargs.get("map_name") if env.spec.kwargs.get("is_slippery", "") == False: model["slippery"] = False # Pickle the model with open((repo_local_path) / "q-learning.pkl", "wb") as f: pickle.dump(model, f) # Step 4: Evaluate the model and build JSON with evaluation metrics mean_reward, std_reward = evaluate_agent( eval_env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"] ) evaluate_data = { "env_id": model["env_id"], "mean_reward": mean_reward, "n_eval_episodes": model["n_eval_episodes"], "eval_datetime": datetime.datetime.now().isoformat(), } # Write a JSON file called "results.json" that will contain the # evaluation results with open(repo_local_path / "results.json", "w") as outfile: json.dump(evaluate_data, outfile) # Step 5: Create the model card env_name = model["env_id"] if env.spec.kwargs.get("map_name"): env_name += "-" + env.spec.kwargs.get("map_name") if env.spec.kwargs.get("is_slippery", "") == False: env_name += "-" + "no_slippery" metadata = {} metadata["tags"] = [env_name, "q-learning", "reinforcement-learning", "custom-implementation"] # Add metrics eval = metadata_eval_result( model_pretty_name=repo_name, task_pretty_name="reinforcement-learning", 
task_id="reinforcement-learning", metrics_pretty_name="mean_reward", metrics_id="mean_reward", metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}", dataset_pretty_name=env_name, dataset_id=env_name, ) # Merges both dictionaries metadata = {**metadata, **eval} model_card = f""" # **Q-Learning** Agent playing1 **{env_id}** This is a trained model of a **Q-Learning** agent playing **{env_id}** . ## Usage model = load_from_hub(repo_id="{repo_id}", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) """ evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) readme_path = repo_local_path / "README.md" readme = "" print(readme_path.exists()) if readme_path.exists(): with readme_path.open("r", encoding="utf8") as f: readme = f.read() else: readme = model_card with readme_path.open("w", encoding="utf-8") as f: f.write(readme) # Save our metrics to Readme metadata metadata_save(readme_path, metadata) # Step 6: Record a video video_path = repo_local_path / "replay.mp4" record_video(env, model["qtable"], video_path, video_fps) # Step 7. Push everything to the Hub api.upload_folder( repo_id=repo_id, folder_path=repo_local_path, path_in_repo=".", ) print("Your model is pushed to the Hub. You can view your model here: ", repo_url) ``` ### . By using `push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the Hub**. This way: - You can **showcase our work** ๐Ÿ”ฅ - You can **visualize your agent playing** ๐Ÿ‘€ - You can **share an agent with the community that others can use** ๐Ÿ’พ - You can **access a leaderboard ๐Ÿ† to see how well your agent is performing compared to your classmates** ๐Ÿ‘‰ https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard To be able to share your model with the community there are three more steps to follow: 1๏ธโƒฃ (If it's not already done) create an account to HF โžก https://huggingface.co/join 2๏ธโƒฃ Sign in and then, you need to store your authentication token from the Hugging Face website. - Create a new token (https://huggingface.co/settings/tokens) **with write role** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg" alt="Create HF Token"> ```python from huggingface_hub import notebook_login notebook_login() ``` If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` (or `login`) 3๏ธโƒฃ We're now ready to push our trained agent to the ๐Ÿค— Hub ๐Ÿ”ฅ using `push_to_hub()` function - Let's create **the model dictionary that contains the hyperparameters and the Q_table**. ```python model = { "env_id": env_id, "max_steps": max_steps, "n_training_episodes": n_training_episodes, "n_eval_episodes": n_eval_episodes, "eval_seed": eval_seed, "learning_rate": learning_rate, "gamma": gamma, "max_epsilon": max_epsilon, "min_epsilon": min_epsilon, "decay_rate": decay_rate, "qtable": Qtable_frozenlake, } ``` Let's fill the `push_to_hub` function: - `repo_id`: the name of the Hugging Face Hub Repository that will be created/updated ` (repo_id = {username}/{repo_name})` ๐Ÿ’ก A good `repo_id` is `{username}/q-{env_id}` - `model`: our model dictionary containing the hyperparameters and the Qtable. - `env`: the environment. 
- `commit_message`: message of the commit ```python model ``` ```python username = "" # FILL THIS repo_name = "q-FrozenLake-v1-4x4-noSlippery" push_to_hub(repo_id=f"{username}/{repo_name}", model=model, env=env) ``` Congrats ๐Ÿฅณ you've just implemented from scratch, trained, and uploaded your first Reinforcement Learning agent. FrozenLake-v1 no_slippery is very simple environment, let's try a harder one ๐Ÿ”ฅ. # Part 2: Taxi-v3 ๐Ÿš– ## Create and understand [Taxi-v3 ๐Ÿš•](https://gymnasium.farama.org/environments/toy_text/taxi/) --- ๐Ÿ’ก A good habit when you start to use an environment is to check its documentation ๐Ÿ‘‰ https://gymnasium.farama.org/environments/toy_text/taxi/ --- In `Taxi-v3` ๐Ÿš•, there are four designated locations in the grid world indicated by R(ed), G(reen), Y(ellow), and B(lue). When the episode starts, **the taxi starts off at a random square** and the passenger is at a random location. The taxi drives to the passengerโ€™s location, **picks up the passenger**, drives to the passengerโ€™s destination (another one of the four specified locations), and then **drops off the passenger**. Once the passenger is dropped off, the episode ends. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/taxi.png" alt="Taxi"> ```python env = gym.make("Taxi-v3", render_mode="rgb_array") ``` There are **500 discrete states since there are 25 taxi positions, 5 possible locations of the passenger** (including the case when the passenger is in the taxi), and **4 destination locations.** ```python state_space = env.observation_space.n print("There are ", state_space, " possible states") ``` ```python action_space = env.action_space.n print("There are ", action_space, " possible actions") ``` The action space (the set of possible actions the agent can take) is discrete with **6 actions available ๐ŸŽฎ**: - 0: move south - 1: move north - 2: move east - 3: move west - 4: pickup passenger - 5: drop off passenger Reward function ๐Ÿ’ฐ: - -1 per step unless other reward is triggered. - +20 delivering passenger. - -10 executing โ€œpickupโ€ and โ€œdrop-offโ€ actions illegally. 
```python # Create our Q table with state_size rows and action_size columns (500x6) Qtable_taxi = initialize_q_table(state_space, action_space) print(Qtable_taxi) print("Q-table shape: ", Qtable_taxi.shape) ``` ## Define the hyperparameters โš™๏ธ โš  DO NOT MODIFY EVAL_SEED: the eval_seed array **allows us to evaluate your agent with the same taxi starting positions for every classmate** ```python # Training parameters n_training_episodes = 25000 # Total training episodes learning_rate = 0.7 # Learning rate # Evaluation parameters n_eval_episodes = 100 # Total number of test episodes # DO NOT MODIFY EVAL_SEED eval_seed = [ 16, 54, 165, 177, 191, 191, 120, 80, 149, 178, 48, 38, 6, 125, 174, 73, 50, 172, 100, 148, 146, 6, 25, 40, 68, 148, 49, 167, 9, 97, 164, 176, 61, 7, 54, 55, 161, 131, 184, 51, 170, 12, 120, 113, 95, 126, 51, 98, 36, 135, 54, 82, 45, 95, 89, 59, 95, 124, 9, 113, 58, 85, 51, 134, 121, 169, 105, 21, 30, 11, 50, 65, 12, 43, 82, 145, 152, 97, 106, 55, 31, 85, 38, 112, 102, 168, 123, 97, 21, 83, 158, 26, 80, 63, 5, 81, 32, 11, 28, 148, ] # Evaluation seed, this ensures that all classmates agents are trained on the same taxi starting position # Each seed has a specific starting state # Environment parameters env_id = "Taxi-v3" # Name of the environment max_steps = 99 # Max steps per episode gamma = 0.95 # Discounting rate # Exploration parameters max_epsilon = 1.0 # Exploration probability at start min_epsilon = 0.05 # Minimum exploration probability decay_rate = 0.005 # Exponential decay rate for exploration prob ``` ## Train our Q-Learning agent ๐Ÿƒ ```python Qtable_taxi = train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable_taxi) Qtable_taxi ``` ## Create a model dictionary ๐Ÿ’พ and publish our trained model to the Hub ๐Ÿ”ฅ - We create a model dictionary that will contain all the training hyperparameters for reproducibility and the Q-Table. ```python model = { "env_id": env_id, "max_steps": max_steps, "n_training_episodes": n_training_episodes, "n_eval_episodes": n_eval_episodes, "eval_seed": eval_seed, "learning_rate": learning_rate, "gamma": gamma, "max_epsilon": max_epsilon, "min_epsilon": min_epsilon, "decay_rate": decay_rate, "qtable": Qtable_taxi, } ``` ```python username = "" # FILL THIS repo_name = "" # FILL THIS push_to_hub(repo_id=f"{username}/{repo_name}", model=model, env=env) ``` Now that it's on the Hub, you can compare the results of your Taxi-v3 with your classmates using the leaderboard ๐Ÿ† ๐Ÿ‘‰ https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/taxi-leaderboard.png" alt="Taxi Leaderboard"> # Part 3: Load from Hub ๐Ÿ”ฝ What's amazing with Hugging Face Hub ๐Ÿค— is that you can easily load powerful models from the community. Loading a saved model from the Hub is really easy: 1. You go https://huggingface.co/models?other=q-learning to see the list of all the q-learning saved models. 2. You select one and copy its repo_id <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/copy-id.png" alt="Copy id"> 3. Then we just need to use `load_from_hub` with: - The repo_id - The filename: the saved model inside the repo. 
#### Do not modify this code ```python from urllib.error import HTTPError from huggingface_hub import hf_hub_download def load_from_hub(repo_id: str, filename: str) -> str: """ Download a model from Hugging Face Hub. :param repo_id: id of the model repository from the Hugging Face Hub :param filename: name of the model zip file from the repository """ # Get the model from the Hub, download and cache the model on your local disk pickle_model = hf_hub_download(repo_id=repo_id, filename=filename) with open(pickle_model, "rb") as f: downloaded_model_file = pickle.load(f) return downloaded_model_file ``` ### . ```python model = load_from_hub(repo_id="ThomasSimonini/q-Taxi-v3", filename="q-learning.pkl") # Try to use another model print(model) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ``` ```python model = load_from_hub( repo_id="ThomasSimonini/q-FrozenLake-v1-no-slippery", filename="q-learning.pkl" ) # Try to use another model env = gym.make(model["env_id"], is_slippery=False) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ``` ## Some additional challenges ๐Ÿ† The best way to learn **is to try things on your own**! As you saw, the current agent is not doing great. As a first suggestion, you can train for more steps. With 1,000,000 steps, we saw some great results! In the [Leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top? Here are some ideas to climb up the leaderboard: * Train more steps * Try different hyperparameters by looking at what your classmates have done. * **Push your new trained model** on the Hub ๐Ÿ”ฅ Are walking on ice and driving taxis too boring to you? Try to **change the environment**, why not use FrozenLake-v1 slippery version? Check how they work [using the gymnasium documentation](https://gymnasium.farama.org/) and have fun ๐ŸŽ‰. _____________________________________________________________________ Congrats ๐Ÿฅณ, you've just implemented, trained, and uploaded your first Reinforcement Learning agent. Understanding Q-Learning is an **important step to understanding value-based methods.** In the next Unit with Deep Q-Learning, we'll see that while creating and updating a Q-table was a good strategy โ€” **however, it is not scalable.** For instance, imagine you create an agent that learns to play Doom. <img src="https://vizdoom.cs.put.edu.pl/user/pages/01.tutorial/basic.png" alt="Doom"/> Doom is a large environment with a huge state space (millions of different states). Creating and updating a Q-table for that environment would not be efficient. That's why we'll study Deep Q-Learning in the next unit, an algorithm **where we use a neural network that approximates, given a state, the different Q-values for each action.** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/atari-envs.gif" alt="Environments"/> See you in Unit 3! ๐Ÿ”ฅ ## Keep learning, stay awesome ๐Ÿค—
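P.S. (an addition to this page, not an official exercise): if you are curious what "a neural network that approximates, given a state, the different Q-values for each action" can look like in code, here is a minimal, illustrative PyTorch sketch. The state size, layer sizes and number of actions below are made up for the example; in Unit 3 this idea is applied properly to Atari frames with convolutional layers and the tricks (experience replay, fixed Q-target) that make it train stably.

```python
# Illustrative sketch only: a tiny Q-network that maps a state vector to one
# Q-value per action — the core idea behind Deep Q-Learning.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),  # one Q-value per action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


q_net = QNetwork(state_dim=8, n_actions=4)  # made-up dimensions
state = torch.randn(1, 8)                   # a dummy state vector
q_values = q_net(state)                     # shape (1, 4): one value per action
greedy_action = q_values.argmax(dim=1)      # acting greedily = argmax over Q-values
print(q_values, greedy_action)
```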
deep-rl-class/units/en/unit2/hands-on.mdx/0
{ "file_path": "deep-rl-class/units/en/unit2/hands-on.mdx", "repo_id": "deep-rl-class", "token_count": 13932 }
82
# Glossary

This is a community-created glossary. Contributions are welcome!

- **Tabular Method:** Type of problem in which the state and action spaces are small enough for the value functions to be represented as arrays and tables. **Q-learning** is an example of a tabular method since a table is used to represent the values for the different state-action pairs.

- **Deep Q-Learning:** Method that trains a neural network to approximate, given a state, the different **Q-values** for each possible action at that state. It is used to solve problems where the observation space is too big to apply a tabular Q-Learning approach.

- **Temporal Limitation** is a difficulty presented when the environment state is represented by frames. A frame by itself does not provide temporal information. In order to obtain temporal information, we need to **stack** a number of frames together.

- **Phases of Deep Q-Learning:**
    - **Sampling:** Actions are performed, and observed experience tuples are stored in a **replay memory**.
    - **Training:** Batches of tuples are selected randomly and the neural network updates its weights using gradient descent.

- **Solutions to stabilize Deep Q-Learning:**
    - **Experience Replay:** A replay memory is created to save experience samples that can be reused during training. This allows the agent to learn from the same experiences multiple times. Also, it helps the agent avoid forgetting previous experiences as it gets new ones.
        - **Random sampling** from the replay buffer removes correlations in the observation sequences and prevents action values from oscillating or diverging catastrophically.
    - **Fixed Q-Target:** In order to calculate the **Q-Target** we need to estimate the discounted optimal **Q-value** of the next state by using the Bellman equation. The problem is that the same network weights are used to calculate the **Q-Target** and the **Q-value**. This means that every time we modify the **Q-value**, the **Q-Target** also moves with it. To avoid this issue, a separate network with fixed parameters is used for estimating the Temporal Difference Target. The target network is updated by copying parameters from our Deep Q-Network every **C steps**.
    - **Double DQN:** Method to handle **overestimation** of **Q-Values**. This solution uses two networks to decouple the action selection from the target **Value generation**:
        - **DQN Network** to select the best action to take for the next state (the action with the highest **Q-Value**)
        - **Target Network** to calculate the target **Q-Value** of taking that action at the next state.
      This approach reduces **Q-Value** overestimation, helps the agent train faster, and leads to more stable learning.

If you want to improve the course, you can [open a Pull Request.](https://github.com/huggingface/deep-rl-class/pulls)

This glossary was made possible thanks to:

- [Dario Paez](https://github.com/dario248)
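To make the **Fixed Q-Target** and **Double DQN** entries above more concrete, here is a minimal PyTorch-style sketch of how the two targets can be computed. This is an illustration only, not code from the course notebooks: `q_network` and `target_network` are assumed to be two networks with identical architectures, and `dones` is assumed to be a float tensor of 0./1. flags.

```python
import torch

def dqn_targets(target_network, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        # Fixed Q-Target: the bootstrap value comes from the frozen target network
        next_q = target_network(next_states).max(dim=1).values
        return rewards + gamma * next_q * (1.0 - dones)

def double_dqn_targets(q_network, target_network, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        # 1) Select the best next action with the online DQN network...
        best_actions = q_network(next_states).argmax(dim=1, keepdim=True)
        # 2) ...but evaluate that action with the target network
        next_q = target_network(next_states).gather(1, best_actions).squeeze(1)
        return rewards + gamma * next_q * (1.0 - dones)

# Every C steps, the target network is refreshed from the online network:
# target_network.load_state_dict(q_network.state_dict())
```

Everything else (sampling from the replay memory, the loss between these targets and the online network's predictions) stays the same between the two variants.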
deep-rl-class/units/en/unit3/glossary.mdx/0
{ "file_path": "deep-rl-class/units/en/unit3/glossary.mdx", "repo_id": "deep-rl-class", "token_count": 721 }
83
# (Optional) What is Curiosity in Deep Reinforcement Learning? This is an (optional) introduction to Curiosity. If you want to learn more, you can read two additional articles where we dive into the mathematical details: - [Curiosity-Driven Learning through Next State Prediction](https://medium.com/data-from-the-trenches/curiosity-driven-learning-through-next-state-prediction-f7f4e2f592fa) - [Random Network Distillation: a new take on Curiosity-Driven Learning](https://medium.com/data-from-the-trenches/curiosity-driven-learning-through-random-network-distillation-488ffd8e5938) ## Two Major Problems in Modern RL To understand what Curiosity is, we first need to understand the two major problems with RL: First, the *sparse rewards problem:* that is, **most rewards do not contain information, and hence are set to zero**. Remember that RL is based on the *reward hypothesis*, which is the idea that each goal can be described as the maximization of the rewards. Therefore, rewards act as feedback for RL agents; **if they donโ€™t receive any, their knowledge of which action is appropriate (or not) cannot change**. <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/curiosity1.png" alt="Curiosity"/> <figcaption>Source: Thanks to the reward, our agent knows that this action at that state was good</figcaption> </figure> For instance, in [Vizdoom](https://vizdoom.cs.put.edu.pl/), a set of environments based on the game Doom โ€œDoomMyWayHome,โ€ your agent is only rewarded **if it finds the vest**. However, the vest is far away from your starting point, so most of your rewards will be zero. Therefore, if our agent does not receive useful feedback (dense rewards), it will take much longer to learn an optimal policy, and **it can spend time turning around without finding the goal**. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/curiosity2.png" alt="Curiosity"/> The second big problem is that **the extrinsic reward function is handmade; in each environment, a human has to implement a reward function**. But how we can scale that in big and complex environments? ## So what is Curiosity? A solution to these problems is **to develop a reward function intrinsic to the agent, i.e., generated by the agent itself**. The agent will act as a self-learner since it will be the student and its own feedback master. **This intrinsic reward mechanism is known as Curiosity** because this reward pushes the agent to explore states that are novel/unfamiliar. To achieve that, our agent will receive a high reward when exploring new trajectories. This reward is inspired by how humans act. ** We naturally have an intrinsic desire to explore environments and discover new things**. There are different ways to calculate this intrinsic reward. The classical approach (Curiosity through next-state prediction) is to calculate Curiosity **as the error of our agent in predicting the next state, given the current state and action taken**. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/curiosity3.png" alt="Curiosity"/> Because the idea of Curiosity is to **encourage our agent to perform actions that reduce the uncertainty in the agentโ€™s ability to predict the consequences of its actions** (uncertainty will be higher in areas where the agent has spent less time or in areas with complex dynamics). 
If the agent spends a lot of time in these states, it will be good at predicting the next state (low Curiosity). On the other hand, if itโ€™s in a new, unexplored state, it will be hard to predict the following state (high Curiosity).

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/curiosity4.png" alt="Curiosity"/>

Using Curiosity will push our agent to favor transitions with high prediction error (which will be higher in areas where the agent has spent less time, or in areas with complex dynamics) and **consequently better explore our environment**.

There are also **other curiosity calculation methods**. ML-Agents uses a more advanced one called Curiosity through random network distillation. This is out of the scope of this tutorial, but if youโ€™re interested, [I wrote an article explaining it in detail](https://medium.com/data-from-the-trenches/curiosity-driven-learning-through-random-network-distillation-488ffd8e5938).
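To make "Curiosity as next-state prediction error" concrete, here is a small, hypothetical PyTorch sketch. This is not the ML-Agents implementation (which, as mentioned above, uses random network distillation); the network sizes, the `scale` factor, and the assumption that actions are float tensors (e.g. one-hot encoded) are all illustrative choices.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next state from the current state and action."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_reward(forward_model, state, action, next_state, scale=0.01):
    # High prediction error -> unfamiliar transition -> high intrinsic reward
    predicted_next = forward_model(state, action)
    return scale * (predicted_next - next_state).pow(2).mean(dim=-1)

# The agent then maximizes: total_reward = extrinsic_reward + curiosity_reward(...)
```

The forward model itself is trained on the agent's collected transitions, so states it has visited often become easy to predict and stop being "interesting".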
deep-rl-class/units/en/unit5/curiosity.mdx/0
{ "file_path": "deep-rl-class/units/en/unit5/curiosity.mdx", "repo_id": "deep-rl-class", "token_count": 1153 }
84
# Hands-on Now that you learned the basics of multi-agents, you're ready to train your first agents in a multi-agent system: **a 2vs2 soccer team that needs to beat the opponent team**. And youโ€™re going to participate in AI vs. AI challenges where your trained agent will compete against other classmatesโ€™ **agents every day and be ranked on a new leaderboard.** To validate this hands-on for the certification process, you just need to push a trained model. There **are no minimal results to attain to validate it.** For more information about the certification process, check this section ๐Ÿ‘‰ [https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process) This hands-on will be different since to get correct results **you need to train your agents from 4 hours to 8 hours**. And given the risk of timeout in Colab, we advise you to train on your computer. You donโ€™t need a supercomputer: a simple laptop is good enough for this exercise. Let's get started! ๐Ÿ”ฅ ## What is AI vs. AI? AI vs. AI is an open-source tool we developed at Hugging Face to compete agents on the Hub against one another in a multi-agent setting. These models are then ranked in a leaderboard. The idea of this tool is to have a robust evaluation tool: **by evaluating your agent with a lot of others, youโ€™ll get a good idea of the quality of your policy.** More precisely, AI vs. AI is three tools: - A *matchmaking process* defining the matches (which model against which) and running the model fights using a background task in the Space. - A *leaderboard* getting the match history results and displaying the modelsโ€™ ELO ratings: [https://huggingface.co/spaces/huggingface-projects/AIvsAI-SoccerTwos](https://huggingface.co/spaces/huggingface-projects/AIvsAI-SoccerTwos) - A *Space demo* to visualize your agents playing against others: [https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos](https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos) In addition to these three tools, your classmate cyllum created a ๐Ÿค— SoccerTwos Challenge Analytics where you can check the detailed match results of a model: [https://huggingface.co/spaces/cyllum/soccertwos-analytics](https://huggingface.co/spaces/cyllum/soccertwos-analytics) We're [wrote a blog post to explain this AI vs. AI tool in detail](https://huggingface.co/blog/aivsai), but to give you the big picture it works this way: - Every four hours, our algorithm **fetches all the available models for a given environment (in our case ML-Agents-SoccerTwos).** - It creates a **queue of matches with the matchmaking algorithm.** - We simulate the match in a Unity headless process and **gather the match result** (1 if the first model won, 0.5 if itโ€™s a draw, 0 if the second model won) in a Dataset. - Then, when all matches from the matches queue are done, **we update the ELO score for each model and update the leaderboard.** ### Competition Rules This first AI vs. AI competition **is an experiment**: the goal is to improve the tool in the future with your feedback. So some **breakups can happen during the challenge**. But don't worry **all the results are saved in a dataset so we can always restart the calculation correctly without losing information**. In order for your model to get correctly evaluated against others you need to follow these rules: 1. **You can't change the observation space or action space of the agent.** By doing that your model will not work during evaluation. 
2. You **can't use a custom trainer for now,** you need to use the Unity MLAgents ones. 3. We provide executables to train your agents. You can also use the Unity Editor if you prefer **, but to avoid bugs, we advise that you use our executables**. What will make the difference during this challenge are **the hyperparameters you choose**. We're constantly trying to improve our tutorials, soย **if you find some issues in this notebook**, pleaseย [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues). ### Chat with your classmates, share advice and ask questions on Discord - We created a new channel called `ai-vs-ai-challenge` to exchange advice and ask questions. - If you didnโ€™t join the discord server yet, you can [join here](https://discord.gg/ydHrjt3WP5) ## Step 0: Install MLAgents and download the correct executable We advise you to use [conda](https://docs.conda.io/en/latest/) as a package manager and create a new environment. With conda, we create a new environment called rl with **Python 3.10.12**: ```bash conda create --name rl python=3.10.12 conda activate rl ``` To be able to train our agents correctly and push to the Hub, we need to install ML-Agents ```bash git clone https://github.com/Unity-Technologies/ml-agents ``` When the cloning is done (it takes 2.63 GB), we go inside the repository and install the package ```bash cd ml-agents pip install -e ./ml-agents-envs pip install -e ./ml-agents ``` Finally, you need to install git-lfs: https://git-lfs.com/ Now that itโ€™s installed, we need to add the environment training executable. Based on your operating system you need to download one of them, unzip it and place it in a new folder inside `ml-agents` that you call `training-envs-executables` At the end your executable should be in `ml-agents/training-envs-executables/SoccerTwos` Windows: Download [this executable](https://drive.google.com/file/d/1sqFxbEdTMubjVktnV4C6ICjp89wLhUcP/view?usp=sharing) Linux (Ubuntu): Download [this executable](https://drive.google.com/file/d/1KuqBKYiXiIcU4kNMqEzhgypuFP5_45CL/view?usp=sharing) Mac: Download [this executable](https://drive.google.com/drive/folders/1h7YB0qwjoxxghApQdEUQmk95ZwIDxrPG?usp=share_link) โš  For Mac you need also to call this `xattr -cr training-envs-executables/SoccerTwos/SoccerTwos.app` to be able to run SoccerTwos ## Step 1: Understand the environment The environment is called `SoccerTwos`. The Unity MLAgents Team made it. 
You can find its documentation [here](https://github.com/Unity-Technologies/ml-agents/blob/develop/docs/Learning-Environment-Examples.md#soccer-twos) The goal in this environment **is to get the ball into the opponent's goal while preventing the ball from entering your own goal.** <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/soccertwos.gif" alt="SoccerTwos"/> <figcaption>This environment was made by the <a href="https://github.com/Unity-Technologies/ml-agents"> Unity MLAgents Team</a></figcaption> </figure> ### The reward function The reward function is: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/soccerreward.png" alt="SoccerTwos Reward"/> ### The observation space The observation space is composed of vectors of size 336: - 11 ray-casts forward distributed over 120 degrees (264 state dimensions) - 3 ray-casts backward distributed over 90 degrees (72 state dimensions) - Both of these ray-casts can detect 6 objects: - Ball - Blue Goal - Purple Goal - Wall - Blue Agent - Purple Agent ### The action space The action space is three discrete branches: <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/socceraction.png" alt="SoccerTwos Action"/> ## Step 2: Understand MA-POCA We know how to train agents to play against others: **we can use self-play.** This is a perfect technique for a 1vs1. But in our case weโ€™re 2vs2, and each team has 2 agents. How then can we **train cooperative behavior for groups of agents?** As explained in the [Unity Blog](https://blog.unity.com/technology/ml-agents-v20-release-now-supports-training-complex-cooperative-behaviors), agents typically receive a reward as a group (+1 - penalty) when the team scores a goal. This implies that **every agent on the team is rewarded even if each agent didnโ€™t contribute the same to the win**, which makes it difficult to learn what to do independently. The Unity MLAgents team developed the solution in a new multi-agent trainer called *MA-POCA (Multi-Agent POsthumous Credit Assignment)*. The idea is simple but powerful: a centralized critic **processes the states of all agents in the team to estimate how well each agent is doing**. Think of this critic as a coach. This allows each agent to **make decisions based only on what it perceives locally**, and **simultaneously evaluate how good its behavior is in the context of the whole group**. <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/mapoca.png" alt="MA POCA"/> <figcaption>This illustrates MA-POCAโ€™s centralized learning and decentralized execution. Source: <a href="https://blog.unity.com/technology/ml-agents-plays-dodgeball">MLAgents Plays Dodgeball</a> </figcaption> </figure> The solution then is to use Self-Play with an MA-POCA trainer (called poca). The poca trainer will help us to train cooperative behavior and self-play to win against an opponent team. If you want to dive deeper into this MA-POCA algorithm, you need to read the paper they published [here](https://arxiv.org/pdf/2111.05992.pdf) and the sources we put on the additional readings section. ## Step 3: Define the config file We already learned in [Unit 5](https://huggingface.co/deep-rl-course/unit5/introduction) that in ML-Agents, you define **the training hyperparameters in `config.yaml` files.** There are multiple hyperparameters. 
To understand them better, you should read the explanations for each of them in **[the documentation](https://github.com/Unity-Technologies/ml-agents/blob/release_20_docs/docs/Training-Configuration-File.md)**

The config file weโ€™re going to use here is in `./config/poca/SoccerTwos.yaml`. It looks like this:

```yaml
behaviors:
  SoccerTwos:
    trainer_type: poca
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: constant
    network_settings:
      normalize: false
      hidden_units: 512
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 5000000
    time_horizon: 1000
    summary_freq: 10000
    self_play:
      save_steps: 50000
      team_change: 200000
      swap_steps: 2000
      window: 10
      play_against_latest_model_ratio: 0.5
      initial_elo: 1200.0
```

Compared to Pyramids or SnowballTarget, we have new hyperparameters with a self-play part. How you modify them can be critical in getting good results.

The advice I can give you here is to check the explanation and recommended value for each parameter (especially the self-play ones) against **[the documentation](https://github.com/Unity-Technologies/ml-agents/blob/release_20_docs/docs/Training-Configuration-File.md).**

Now that youโ€™ve modified your config file, youโ€™re ready to train your agents.

## Step 4: Start the training

To train the agents, we need to **launch mlagents-learn and select the executable containing the environment.**

We define four parameters:

1. `mlagents-learn <config>`: the path where the hyperparameter config file is.
2. `--env`: where the environment executable is.
3. `--run-id`: the name you want to give to your training run id.
4. `--no-graphics`: to not launch the visualization during the training.

Depending on your hardware, 5M timesteps (the recommended value, but you can also try 10M) will take 5 to 8 hours of training. You can continue using your computer in the meantime, but I advise deactivating the computer standby mode to prevent the training from being stopped.

Depending on the executable you use (Windows, Ubuntu, Mac), the training command will look like this (your executable path can be different, so donโ€™t hesitate to check before running):

```bash
mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos.exe --run-id="SoccerTwos" --no-graphics
```

The executable contains 8 copies of SoccerTwos.

โš ๏ธ Itโ€™s normal if you donโ€™t see a big increase of the ELO score (and even a decrease below 1200) before 2M timesteps, since your agents will spend most of their time moving randomly on the field before being able to score.

โš ๏ธ You can stop the training with Ctrl + C, but beware of typing this command only once to stop the training, since MLAgents needs to generate a final .onnx file before closing the run.

## Step 5: **Push the agent to the Hugging Face Hub**

Now that we trained our agents, weโ€™re **ready to push them to the Hub to be able to participate in the AI vs. AI challenge and visualize them playing on your browser ๐Ÿ”ฅ.**

To be able to share your model with the community, there are three more steps to follow:

1๏ธโƒฃ (If itโ€™s not already done) create an account on HF โžก [https://huggingface.co/join](https://huggingface.co/join)

2๏ธโƒฃ Sign in and store your authentication token from the Hugging Face website.
Create a new token (https://huggingface.co/settings/tokens)ย **with write role** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg" alt="Create HF Token"> Copy the token, run this, and paste the token ```bash huggingface-cli login ``` Then, we need to run `mlagents-push-to-hf`. And we define four parameters: 1. `-run-id`: the name of the training run id. 2. `-local-dir`: where the agent was saved, itโ€™s results/<run_id name>, so in my case results/First Training. 3. `-repo-id`: the name of the Hugging Face repo you want to create or update. Itโ€™s always <your huggingface username>/<the repo name> If the repo does not exist **it will be created automatically** 4. `--commit-message`: since HF repos are git repositories you need to give a commit message. In my case ```bash mlagents-push-to-hf --run-id="SoccerTwos" --local-dir="./results/SoccerTwos" --repo-id="ThomasSimonini/poca-SoccerTwos" --commit-message="First Push"` ``` ```bash mlagents-push-to-hf --run-id= # Add your run id --local-dir= # Your local dir --repo-id= # Your repo id --commit-message="First Push" ``` If everything worked you should see this at the end of the process (but with a different url ๐Ÿ˜†) : Your model is pushed to the Hub. You can view your model here: https://huggingface.co/ThomasSimonini/poca-SoccerTwos It's the link to your model. It contains a model card that explains how to use it, your Tensorboard, and your config file. **What's awesome is that it's a git repository, which means you can have different commits, update your repository with a new push, etc.** ## Step 6: Verify that your model is ready for AI vs AI Challenge Now that your model is pushed to the Hub, **itโ€™s going to be added automatically to the AI vs AI Challenge model pool.** It can take a little bit of time before your model is added to the leaderboard given we do a run of matches every 4h. But to ensure that everything works perfectly you need to check: 1. That you have this tag in your model: ML-Agents-SoccerTwos. This is the tag we use to select models to be added to the challenge pool. To do that go to your model and check the tags <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/verify1.png" alt="Verify"/> If itโ€™s not the case you just need to modify the readme and add it <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/verify2.png" alt="Verify"/> 2. That you have a `SoccerTwos.onnx` file <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/verify3.png" alt="Verify"/> We strongly suggest that you create a new model when you push to the Hub if you want to train it again or train a new version. ## Step 7: Visualize some match in our demo Now that your model is part of AI vs AI Challenge, **you can visualize how good it is compared to others**: https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos In order to do that, you just need to go to this demo: - Select your model as team blue (or team purple if you prefer) and another model to compete against. The best opponents to compare your model to are either whoever is on top of the leaderboard or the [baseline model](https://huggingface.co/unity/MLAgents-SoccerTwos) The matches you see live are not used in the calculation of your result **but they are a good way to visualize how good your agent is**. 
And don't hesitate to share the best score your agent gets on discord in the #rl-i-made-this channel ๐Ÿ”ฅ
deep-rl-class/units/en/unit7/hands-on.mdx/0
{ "file_path": "deep-rl-class/units/en/unit7/hands-on.mdx", "repo_id": "deep-rl-class", "token_count": 5036 }
85
# Conclusion [[conclusion]] Congrats on finishing this bonus unit! You can now sit and enjoy playing with your Huggy ๐Ÿถ. And don't **forget to spread the love by sharing Huggy with your friends ๐Ÿค—**. And if you share about it on social media, **please tag us @huggingface and me @simoninithomas** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy-cover.jpeg" alt="Huggy cover" width="100%"> Finally, we would love **to hear what you think of the course and how we can improve it**. If you have some feedback then please ๐Ÿ‘‰ [fill out this form](https://forms.gle/BzKXWzLAGZESGNaE9) ### Keep Learning, stay awesome ๐Ÿค—
deep-rl-class/units/en/unitbonus1/conclusion.mdx/0
{ "file_path": "deep-rl-class/units/en/unitbonus1/conclusion.mdx", "repo_id": "deep-rl-class", "token_count": 227 }
86
# Model Based Reinforcement Learning (MBRL)

Model-based reinforcement learning only differs from its model-free counterpart in learning a *dynamics model*, but that has substantial downstream effects on how the decisions are made.

The dynamics model usually models the environment transition dynamics, \\( s_{t+1} = f_\theta (s_t, a_t) \\), but things like inverse dynamics models (mapping from states to actions) or reward models (predicting rewards) can be used in this framework.

## Simple definition

- There is an agent that repeatedly tries to solve a problem, **accumulating state and action data**.
- With that data, the agent creates a structured learning tool, *a dynamics model*, to reason about the world.
- With the dynamics model, the agent **decides how to act by predicting the future**.
- With those actions, **the agent collects more data, improves said model, and hopefully improves future actions**.

## Academic definition

Model-based reinforcement learning (MBRL) follows the framework of an agent interacting in an environment, **learning a model of said environment**, and then **leveraging the model for control (making decisions)**. Specifically, the agent acts in a Markov Decision Process (MDP) governed by a transition function \\( s_{t+1} = f (s_t , a_t) \\) and receives a reward at each step \\( r(s_t, a_t) \\). With a collected dataset \\( D := \\{ s_i, a_i, s_{i+1}, r_i \\} \\), the agent learns a model, \\( s_{t+1} = f_\theta (s_t , a_t) \\), **to minimize the negative log-likelihood of the transitions**.

We employ sample-based model-predictive control (MPC) using the learned dynamics model, which optimizes the expected reward over a finite, recursively predicted horizon, \\( \tau \\), from a set of actions sampled from a uniform distribution \\( U(a) \\) (see [paper](https://arxiv.org/pdf/2002.04523) or [paper](https://arxiv.org/pdf/2012.09156.pdf) or [paper](https://arxiv.org/pdf/2009.01221.pdf)).

## Further reading

For more information on MBRL, we recommend you check out the following resources:

- A [blog post on debugging MBRL](https://www.natolambert.com/writing/debugging-mbrl).
- A [recent review paper on MBRL](https://arxiv.org/abs/2006.16712).

## Author

This section was written by <a href="https://twitter.com/natolambert"> Nathan Lambert </a>
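As an illustration of the sample-based MPC loop described above, here is a small, hypothetical NumPy sketch of a "random shooting" planner. It is not taken from any particular MBRL codebase: `dynamics_model` (the learned \\( f_\theta \\)) and `reward_fn` are assumed to be batched callables that you provide, and the horizon and candidate counts are arbitrary example values.

```python
import numpy as np

def plan_action(dynamics_model, reward_fn, state, action_dim,
                horizon=15, n_candidates=1000, action_low=-1.0, action_high=1.0):
    # Sample candidate action sequences from a uniform distribution U(a)
    candidates = np.random.uniform(action_low, action_high,
                                   size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    states = np.repeat(state[None, :], n_candidates, axis=0)

    # Roll every candidate sequence forward through the learned model
    for t in range(horizon):
        actions = candidates[:, t, :]
        returns += reward_fn(states, actions)          # r(s_t, a_t), batched
        states = dynamics_model(states, actions)       # s_{t+1} = f_theta(s_t, a_t)

    # Keep only the first action of the best-scoring sequence
    best = np.argmax(returns)
    return candidates[best, 0, :]
```

At each environment step the agent executes only that first action and then replans, which is what makes this model-predictive control rather than open-loop planning.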
deep-rl-class/units/en/unitbonus3/model-based.mdx/0
{ "file_path": "deep-rl-class/units/en/unitbonus3/model-based.mdx", "repo_id": "deep-rl-class", "token_count": 641 }
87
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # How to contribute to Diffusers ๐Ÿงจ We โค๏ธ contributions from the open-source community! Everyone is welcome, and all types of participation โ€“not just codeโ€“ are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it! Everyone is encouraged to start by saying ๐Ÿ‘‹ in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out โ˜•. <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=Discord&logoColor=white"></a> Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions. We also recommend you become familiar with the [ethical guidelines](https://huggingface.co/docs/diffusers/conceptual/ethical_guidelines) that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. ## Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. * 1. Asking and answering questions on [the Diffusers discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers) or on [Discord](https://discord.gg/G7tWnz98XR). * 2. Opening new issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues/new/choose). * 3. Answering issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues). * 4. Fix a simple issue, marked by the "Good first issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). * 5. Contribute to the [documentation](https://github.com/huggingface/diffusers/tree/main/docs/source). * 6. Contribute a [Community Pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3Acommunity-examples). * 7. Contribute to the [examples](https://github.com/huggingface/diffusers/tree/main/examples). * 8. Fix a more difficult issue, marked by the "Good second issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22). * 9. 
Add a new pipeline, model, or scheduler, see ["New Pipeline/Model"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) and ["New scheduler"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) issues. For this contribution, please have a look at [Design Philosophy](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md). As said before, **all contributions are valuable to the community**. In the following, we will explain each contribution a bit more in detail. For all contributions 4-9, you will need to open a PR. It is explained in detail how to do so in [Opening a pull request](#how-to-open-a-pr). ### 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the [discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/) or on [Discord](https://discord.gg/G7tWnz98XR). Such questions and comments include (but are not limited to): - Reports of training or inference experiments in an attempt to share knowledge - Presentation of personal projects - Questions to non-official training examples - Project proposals - General feedback - Paper summaries - Asking for help on personal projects that build on top of the Diffusers library - General questions - Ethical questions regarding diffusion models - ... Every question that is asked on the forum or on Discord actively encourages the community to publicly share knowledge and might very well help a beginner in the future that has the same question you're having. Please do pose any questions you might have. In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. **Please** keep in mind that the more effort you put into asking or answering a question, the higher the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. In short, a high quality question or answer is *precise*, *concise*, *relevant*, *easy-to-understand*, *accessible*, and *well-formated/well-posed*. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section. **NOTE about channels**: [*The forum*](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it's easier to look up questions and answers that we posted some time ago. In addition, questions and answers posted in the forum can easily be linked to. In contrast, *Discord* has a chat-like format that invites fast back-and-forth communication. While it will most likely take less time for you to get an answer to your question on Discord, your question won't be visible anymore over time. Also, it's much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. 
If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. ### 2. Opening new issues on the GitHub issues tab The ๐Ÿงจ Diffusers library is robust and reliable thanks to the users who notify us of the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is **not** related to the **code of the Diffusers library** (including the documentation) should **not** be asked on GitHub, but rather on either the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR). **Please consider the following guidelines when opening a new issue**: - Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). - Please never report a new issue on another (related) issue. If another issue is highly related, please open a new issue nevertheless and link to the related issue. - Make sure your issue is written in English. Please use one of the great, free online translation services, such as [DeepL](https://www.deepl.com/translator) to translate from your native language to English if you are not comfortable in English. - Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that `python -c "import diffusers; print(diffusers.__version__)"` is higher or matches the latest Diffusers version. - Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. #### 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. This means in more detail: - Narrow the bug down as much as you can, **do not just dump your whole code file**. - Format your code. - Do not include any external libraries except for Diffusers depending on them. - **Always** provide all necessary information about your environment; for this, you can run: `diffusers-cli env` in your shell and copy-paste the displayed information to the issue. - Explain the issue. If the reader doesn't know what the issue is and why it is an issue, she cannot solve it. - **Always** make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. - If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the [Hub](https://huggingface.co) to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section. 
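For illustration, a minimal, self-contained reproduction could look like the following. This is a hypothetical template, not a real bug report: the checkpoint id, prompt, and settings are placeholders that you would replace with whatever actually triggers your issue.

```python
# Hypothetical minimal reproduction - adapt everything below to your actual issue
import torch
from diffusers import DiffusionPipeline

# Prefer a small, publicly accessible checkpoint so anyone can reproduce
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The smallest call that still triggers the behavior you are reporting
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=5).images[0]
image.save("repro.png")
```

Alongside the snippet, paste the full error message and the output of `diffusers-cli env` so maintainers can reproduce your exact setup.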
You can open a bug report [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&projects=&template=bug-report.yml). #### 2.2. Feature requests A world-class feature request addresses the following points: 1. Motivation first: * Is it related to a problem/frustration with the library? If so, please explain why. Providing a code snippet that demonstrates the problem is best. * Is it related to something you would need for a project? We'd love to hear about it! * Is it something you worked on and think could benefit the community? Awesome! Tell us what problem it solved for you. 2. Write a *full paragraph* describing the feature; 3. Provide a **code snippet** that demonstrates its future use; 4. In case this is related to a paper, please attach a link; 5. Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=). #### 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look [here](https://huggingface.co/docs/diffusers/conceptual/philosophy). If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=). #### 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide detail on why this part of the code is difficult to understand. You can open an issue about a technical question [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&template=bug-report.yml). #### 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: * Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. * Link to any of its open-source implementation. * Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don't forget to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=New+model%2Fpipeline%2Fscheduler&template=new-model-addition.yml). ### 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. Some tips to give a high-quality answer to an issue: - Be as concise and minimal as possible. - Stay on topic. 
An answer to the issue should concern the issue and only the issue. - Provide links to code, papers, or other sources that prove or encourage your point. - Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great help to the maintainers if you can answer such issues, encouraging the author of the issue to be more precise, provide the link to a duplicated issue or redirect them to [the forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR). If you have verified that the issued bug report is correct and requires a correction in the source code, please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the [Opening a pull request](#how-to-open-a-pr) section. ### 4. Fixing a "Good first issue" *Good first issues* are marked by the [Good first issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) label. Usually, the issue already explains how a potential solution should look so that it is easier to fix. If the issue hasn't been closed and you would like to try to fix this issue, you can just leave a message "I would like to try this issue.". There are usually three scenarios: - a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. - b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. - c.) There is already an open PR to fix the issue, but the issue hasn't been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. ### 5. Contribute to the documentation A good library **always** has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a **highly valuable contribution**. Contributing to the library can have many forms: - Correcting spelling or grammatical errors. - Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we are very happy if you take some time to correct it. - Correct the shape or dimensions of a docstring input or output tensor. - Clarify documentation that is hard to understand or incorrect. - Update outdated code examples. - Translating the documentation to another language. 
Anything displayed on [the official Diffusers doc page](https://huggingface.co/docs/diffusers/index) is part of the official documentation and can be corrected, adjusted in the respective [documentation source](https://github.com/huggingface/diffusers/tree/main/docs/source). Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally. ### 6. Contribute a community pipeline [Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) are usually the first point of contact between the Diffusers library and the user. Pipelines are examples of how to use Diffusers [models](https://huggingface.co/docs/diffusers/api/models/overview) and [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview). We support two types of pipelines: - Official Pipelines - Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code resides in [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines). In contrast, community pipelines are contributed and maintained purely by the **community** and are **not** tested. They reside in [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and while they can be accessed via the [PyPI diffusers package](https://pypi.org/project/diffusers/), their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all possible ways diffusion models can be used for inference, but some of them may be of interest to the community. Officially released diffusion pipelines, such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures high quality of maintenance, no backward-breaking code changes, and testing. More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a <name-of-the-community>.py file to [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and adapt the [examples/community/README.md](https://github.com/huggingface/diffusers/tree/main/examples/community/README.md) to include an example of the new pipeline. An example can be seen [here](https://github.com/huggingface/diffusers/pull/2400). Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the core package. ### 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in [examples](https://github.com/huggingface/diffusers/tree/main/examples). 
We support two types of training examples: - Official training examples - Research training examples Research training examples are located in [examples/research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) whereas official training examples include all folders under [examples](https://github.com/huggingface/diffusers/tree/main/examples) except the `research_projects` and `community` folders. The official training examples are maintained by the Diffusers' core maintainers whereas the research training examples are maintained by the community. This is because of the same reasons put forward in [6. Contribute a community pipeline](#6-contribute-a-community-pipeline) for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the `research_projects` folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the training examples, it is required to clone the repository: ```bash git clone https://github.com/huggingface/diffusers ``` as well as to install all additional dependencies required for training: ```bash pip install -r /examples/<your-example-folder>/requirements.txt ``` Therefore when adding an example, the `requirements.txt` file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example's training script. See, for example, the [DreamBooth `requirements.txt` file](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/requirements.txt). Training examples of the Diffusers library should adhere to the following philosophy: - All the code necessary to run the examples should be found in a single Python file. - One should be able to run the example from the command line with `python <your-example>.py --args`. - Examples should be kept simple and serve as **an example** on how to use Diffusers for training. The purpose of example scripts is **not** to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as [dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) to get an idea of how they should look like. We strongly advise contributors to make use of the [Accelerate library](https://github.com/huggingface/accelerate) as it's tightly integrated with Diffusers. Once an example script works, please make sure to add a comprehensive `README.md` that states how to use the example exactly. This README should include: - An example command on how to run the example script as shown [here e.g.](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#running-locally-with-pytorch). - A link to some training results (logs, models, ...) that show what the user can expect as shown [here e.g.](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5). 
- If you are adding a non-official/research training example, **please don't forget** to add a sentence that you are maintaining this training example which includes your git handle as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/intel_opts#diffusers-examples-with-intel-optimizations). If you are contributing to the official training examples, please also make sure to add a test to [examples/test_examples.py](https://github.com/huggingface/diffusers/blob/main/examples/test_examples.py). This is not necessary for non-official training examples. ### 8. Fixing a "Good second issue" *Good second issues* are marked by the [Good second issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22) label. Good second issues are usually more complicated to solve than [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). The issue description usually gives less guidance on how to fix the issue and requires a decent understanding of the library by the interested contributor. If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn't merged and try to open an improved PR. Good second issues are usually more difficult to get merged compared to good first issues, so don't hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. ### 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. They provide easy access to state-of-the-art diffusion technologies and thus allow the community to build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them if you don't know yet what specific component you would like to add: - [Model or pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) - [Scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) Before adding any of the three components, it is strongly recommended that you give the [Philosophy guide](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a [Feedback issue](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=) instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. 
Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don't hesitate to leave a message to ask for a first review or help. ## How to write a good issue **The better your issue is written, the higher the chances that it will be quickly resolved.** 1. Make sure that you've used the correct template for your issue. You can pick between *Bug Report*, *Feature Request*, *Feedback about API Design*, *New model/pipeline/scheduler addition*, *Forum*, or a blank issue. Make sure to pick the correct one when opening [a new issue](https://github.com/huggingface/diffusers/issues/new/choose). 2. **Be precise**: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write "Error in diffusers". 3. **Reproducibility**: No reproducible code snippet == no solution. If you encounter a bug, maintainers **have to be able to reproduce** it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, *i.e.* that there are no missing imports or missing links to images, ... Your issue should contain an error message **and** a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. 4. **Minimalistic**: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. 5. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. 6. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the [official GitHub formatting docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for more information. 7. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. 
Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. ## How to write a good PR 1. Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. 2. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of "also fixing another problem while we're adding it". It is much more difficult to review pull requests that solve multiple, unrelated problems at once. 3. If helpful, try to add a code snippet that displays an example of how your addition can be used. 4. The title of your pull request should be a summary of its contribution. 5. If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people consulting the issue know you are working on it); 6. To indicate a work in progress please prefix the title with `[WIP]`. These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged; 7. Try to formulate and format your text as explained in [How to write a good issue](#how-to-write-a-good-issue). 8. Make sure existing tests pass; 9. Add high-coverage tests. No quality testing = no merge. - If you are adding new `@slow` tests, make sure they pass using `RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`. CircleCI does not run the slow tests, but GitHub Actions does every night! 10. All public methods must have informative docstrings that work nicely with markdown. See [`pipeline_latent_diffusion.py`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example. 11. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images) to place these files. If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset. ## How to open a PR Before writing code, we strongly advise you to search through the existing PRs or issues to make sure that nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback. You will need basic `git` proficiency to be able to contribute to ๐Ÿงจ Diffusers. `git` is not the easiest tool to use but it has the greatest manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro Git](https://git-scm.com/book/en/v2) is a very good reference. Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L265)): 1. Fork the [repository](https://github.com/huggingface/diffusers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. 
Clone your fork to your local disk, and add the base repository as a remote: ```bash $ git clone git@github.com:<your GitHub handle>/diffusers.git $ cd diffusers $ git remote add upstream https://github.com/huggingface/diffusers.git ``` 3. Create a new branch to hold your development changes: ```bash $ git checkout -b a-descriptive-name-for-my-changes ``` **Do not** work on the `main` branch. 4. Set up a development environment by running the following command in a virtual environment: ```bash $ pip install -e ".[dev]" ``` If you have already cloned the repo, you might need to `git pull` to get the most recent changes in the library. 5. Develop the features on your branch. As you work on the features, you should make sure that the test suite passes. You should run the tests impacted by your changes like this: ```bash $ pytest tests/<TEST_TO_RUN>.py ``` Before you run the tests, please make sure you install the dependencies required for testing. You can do so with this command: ```bash $ pip install -e ".[test]" ``` You can also run the full test suite with the following command, but it takes a beefy machine to produce a result in a decent amount of time now that Diffusers has grown a lot. Here is the command for it: ```bash $ make test ``` ๐Ÿงจ Diffusers relies on `ruff` and `isort` to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can't be automated in one go with: ```bash $ make style ``` ๐Ÿงจ Diffusers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality control runs in CI, however, you can also run the same checks with: ```bash $ make quality ``` Once you're happy with your changes, add changed files using `git add` and make a commit with `git commit` to record your changes locally: ```bash $ git add modified_file.py $ git commit -m "A descriptive message about your changes." ``` It is a good idea to sync your copy of the code with the original repository regularly. This way you can quickly account for changes: ```bash $ git pull upstream main ``` Push the changes to your account using: ```bash $ git push -u origin a-descriptive-name-for-my-changes ``` 6. Once you are satisfied, go to the webpage of your fork on GitHub. Click on 'Pull request' to send your changes to the project maintainers for review. 7. It's ok if maintainers ask you for changes. It happens to core contributors too! So everyone can see the changes in the Pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request. ### Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests). We like `pytest` and `pytest-xdist` because it's faster. From the root of the repository, here's how to run tests with `pytest` for the library: ```bash $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ ``` In fact, that's how `make test` is implemented! You can specify a smaller set of tests in order to test only the feature you're working on. By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to `yes` to run them. This will download many gigabytes of models โ€” make sure you have enough disk space and a good Internet connection, or a lot of patience! 
```bash $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ ``` `unittest` is fully supported, here's how to run tests with it: ```bash $ python -m unittest discover -s tests -t . -v $ python -m unittest discover -s examples -t examples -v ``` ### Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, when syncing the main branch of a forked repository, please, follow these steps: 1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. 2. If a PR is absolutely necessary, use the following steps after checking out your branch: ```bash $ git checkout -b your-branch-for-syncing $ git pull --squash --no-commit upstream main $ git commit -m '<your message without GitHub references>' $ git push --set-upstream origin your-branch-for-syncing ``` ### Style guide For documentation strings, ๐Ÿงจ Diffusers follows the [Google style](https://google.github.io/styleguide/pyguide.html).
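For example, a docstring in that style might look like the following sketch (the function and argument names are invented purely for illustration):

```python
def rescale_latents(latents, scaling_factor=0.18215):
    """Rescales a batch of latents before decoding.

    Args:
        latents (`torch.Tensor`):
            Latent tensors of shape `(batch_size, channels, height, width)`.
        scaling_factor (`float`, *optional*, defaults to `0.18215`):
            The factor the latents were scaled by during encoding.

    Returns:
        `torch.Tensor`: The rescaled latents.
    """
    return latents / scaling_factor
```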
diffusers/CONTRIBUTING.md/0
{ "file_path": "diffusers/CONTRIBUTING.md", "repo_id": "diffusers", "token_count": 9909 }
88
import glob import subprocess import sys from typing import List sys.path.append(".") from benchmark_text_to_image import ALL_T2I_CKPTS # noqa: E402 PATTERN = "benchmark_*.py" class SubprocessCallException(Exception): pass # Taken from `test_examples_utils.py` def run_command(command: List[str], return_stdout=False): """ Runs `command` with `subprocess.check_output` and will potentially return the `stdout`. Will also properly capture if an error occurred while running `command` """ try: output = subprocess.check_output(command, stderr=subprocess.STDOUT) if return_stdout: if hasattr(output, "decode"): output = output.decode("utf-8") return output except subprocess.CalledProcessError as e: raise SubprocessCallException( f"Command `{' '.join(command)}` failed with the following error:\n\n{e.output.decode()}" ) from e def main(): python_files = glob.glob(PATTERN) for file in python_files: print(f"****** Running file: {file} ******") # Run with canonical settings. if file != "benchmark_text_to_image.py": command = f"python {file}" run_command(command.split()) command += " --run_compile" run_command(command.split()) # Run variants. for file in python_files: if file == "benchmark_text_to_image.py": for ckpt in ALL_T2I_CKPTS: command = f"python {file} --ckpt {ckpt}" if "turbo" in ckpt: command += " --num_inference_steps 1" run_command(command.split()) command += " --run_compile" run_command(command.split()) elif file == "benchmark_sd_img.py": for ckpt in ["stabilityai/stable-diffusion-xl-refiner-1.0", "stabilityai/sdxl-turbo"]: command = f"python {file} --ckpt {ckpt}" if ckpt == "stabilityai/sdxl-turbo": command += " --num_inference_steps 2" run_command(command.split()) command += " --run_compile" run_command(command.split()) elif file in ["benchmark_sd_inpainting.py", "benchmark_ip_adapters.py"]: sdxl_ckpt = "stabilityai/stable-diffusion-xl-base-1.0" command = f"python {file} --ckpt {sdxl_ckpt}" run_command(command.split()) command += " --run_compile" run_command(command.split()) elif file in ["benchmark_controlnet.py", "benchmark_t2i_adapter.py"]: sdxl_ckpt = ( "diffusers/controlnet-canny-sdxl-1.0" if "controlnet" in file else "TencentARC/t2i-adapter-canny-sdxl-1.0" ) command = f"python {file} --ckpt {sdxl_ckpt}" run_command(command.split()) command += " --run_compile" run_command(command.split()) if __name__ == "__main__": main()
diffusers/benchmarks/run_all.py/0
{ "file_path": "diffusers/benchmarks/run_all.py", "repo_id": "diffusers", "token_count": 1448 }
89
<!--Copyright 2024 The GLIGEN Authors and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN). The [`StableDiffusionGLIGENPipeline`] and [`StableDiffusionGLIGENTextImagePipeline`] can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with [`StableDiffusionGLIGENPipeline`], if input images are given, [`StableDiffusionGLIGENTextImagePipeline`] can insert objects described by text at the region defined by bounding boxes. Otherwise, it'll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It's trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the [paper](https://huggingface.co/papers/2301.07093) is: *Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGENโ€™s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.* <Tip> Make sure to check out the Stable Diffusion [Tips](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organizations! </Tip> [`StableDiffusionGLIGENPipeline`] was contributed by [Nikhil Gajendrakumar](https://github.com/nikhil-masterful) and [`StableDiffusionGLIGENTextImagePipeline`] was contributed by [Nguyแป…n Cรดng Tรบ Anh](https://github.com/tuanh123789). 
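For orientation, below is a rough text-plus-box usage sketch. The checkpoint name, prompt, phrases, and box coordinates are illustrative assumptions; check the card of the checkpoint you actually use for the exact model id and expected inputs.

```python
import torch
from diffusers import StableDiffusionGLIGENPipeline

# Assumed text-box generation checkpoint, shown for illustration only.
pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

prompt = "a waterfall and a modern high speed train in a beautiful forest"
# Bounding boxes are normalized (xmin, ymin, xmax, ymax) coordinates in [0, 1].
boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
phrases = ["a waterfall", "a modern high speed train"]

image = pipe(
    prompt=prompt,
    gligen_phrases=phrases,
    gligen_boxes=boxes,
    gligen_scheduled_sampling_beta=1,
    num_inference_steps=50,
).images[0]
image.save("gligen-text-box.png")
```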
## StableDiffusionGLIGENPipeline [[autodoc]] StableDiffusionGLIGENPipeline - all - __call__ - enable_vae_slicing - disable_vae_slicing - enable_vae_tiling - disable_vae_tiling - enable_model_cpu_offload - prepare_latents - enable_fuser ## StableDiffusionGLIGENTextImagePipeline [[autodoc]] StableDiffusionGLIGENTextImagePipeline - all - __call__ - enable_vae_slicing - disable_vae_slicing - enable_vae_tiling - disable_vae_tiling - enable_model_cpu_offload - prepare_latents - enable_fuser ## StableDiffusionPipelineOutput [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
diffusers/docs/source/en/api/pipelines/stable_diffusion/gligen.md/0
{ "file_path": "diffusers/docs/source/en/api/pipelines/stable_diffusion/gligen.md", "repo_id": "diffusers", "token_count": 1049 }
90
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <Tip warning={true}> ๐Ÿงช This pipeline is for research purposes only. </Tip> # Text-to-video [ModelScope Text-to-Video Technical Report](https://arxiv.org/abs/2308.06571) is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: *This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary.* You can find additional information about Text-to-Video on the [project page](https://modelscope.cn/models/damo/text-to-video-synthesis/summary), [original codebase](https://github.com/modelscope/modelscope/), and try it out in a [demo](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis). Official checkpoints can be found at [damo-vilab](https://huggingface.co/damo-vilab) and [cerspense](https://huggingface.co/cerspense). ## Usage example ### `text-to-video-ms-1.7b` Let's start by generating a short video with the default length of 16 frames (2s at 8 fps): ```python import torch from diffusers import DiffusionPipeline from diffusers.utils import export_to_video pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") pipe = pipe.to("cuda") prompt = "Spiderman is surfing" video_frames = pipe(prompt).frames[0] video_path = export_to_video(video_frames) video_path ``` Diffusers supports different optimization techniques to improve the latency and memory footprint of a pipeline. Since videos are often more memory-heavy than images, we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. 
Let's generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.enable_model_cpu_offload()

# memory optimization
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=64).frames[0]
video_path = export_to_video(video_frames)
video_path
```

It just takes **7 GBs of GPU memory** to generate the 64 video frames using PyTorch 2.0, "fp16" precision and the techniques mentioned above.

We can also use a different scheduler easily, using the same method we'd use for Stable Diffusion:

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Here are some sample outputs:

<table>
    <tr>
        <td><center>
        An astronaut riding a horse.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astr.gif"
            alt="An astronaut riding a horse."
            style="width: 300px;" />
        </center></td>
        <td ><center>
        Darth vader surfing in waves.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vader.gif"
            alt="Darth vader surfing in waves."
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

### `cerspense/zeroscope_v2_576w` & `cerspense/zeroscope_v2_XL`

Zeroscope models are watermark-free and have been trained on specific sizes such as `576x320` and `1024x576`. One should first generate a video using the lower resolution checkpoint [`cerspense/zeroscope_v2_576w`](https://huggingface.co/cerspense/zeroscope_v2_576w) with [`TextToVideoSDPipeline`], which can then be upscaled using [`VideoToVideoSDPipeline`] and [`cerspense/zeroscope_v2_XL`](https://huggingface.co/cerspense/zeroscope_v2_XL).

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
from PIL import Image

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# memory optimization
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=24).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Now the video can be upscaled:

```py
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# memory optimization
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]

video_frames = pipe(prompt, video=video, strength=0.6).frames[0]
video_path = export_to_video(video_frames)
video_path
```

Here are some sample outputs:

<table>
    <tr>
        <td ><center>
        Darth vader surfing in waves.
<br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/darthvader_cerpense.gif" alt="Darth vader surfing in waves." style="width: 576px;" /> </center></td> </tr> </table> ## Tips Video generation is memory-intensive and one way to reduce your memory usage is to set `enable_forward_chunking` on the pipeline's UNet so you don't run the entire feedforward layer at once. Breaking it up into chunks in a loop is more efficient. Check out the [Text or image-to-video](text-img2vid) guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage. <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. </Tip> ## TextToVideoSDPipeline [[autodoc]] TextToVideoSDPipeline - all - __call__ ## VideoToVideoSDPipeline [[autodoc]] VideoToVideoSDPipeline - all - __call__ ## TextToVideoSDPipelineOutput [[autodoc]] pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput
diffusers/docs/source/en/api/pipelines/text_to_video.md/0
{ "file_path": "diffusers/docs/source/en/api/pipelines/text_to_video.md", "repo_id": "diffusers", "token_count": 2638 }
91
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # EDMDPMSolverMultistepScheduler `EDMDPMSolverMultistepScheduler` is a [Karras formulation](https://huggingface.co/papers/2206.00364) of `DPMSolverMultistep`, a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality samples, and it can generate quite good samples even in 10 steps. ## EDMDPMSolverMultistepScheduler [[autodoc]] EDMDPMSolverMultistepScheduler ## SchedulerOutput [[autodoc]] schedulers.scheduling_utils.SchedulerOutput
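As a rough usage sketch, the scheduler can be re-created from an existing pipeline's scheduler config with `from_config`, like other multistep schedulers. Note that the EDM formulation assumes the underlying model was trained with EDM-style preconditioning; the checkpoint below is only an illustrative assumption, so substitute the model you are actually working with.

```python
import torch
from diffusers import DiffusionPipeline, EDMDPMSolverMultistepScheduler

# Assumed EDM-trained checkpoint, used here only for illustration.
pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic", torch_dtype=torch.float16
).to("cuda")

# Re-create the scheduler from the pipeline's existing scheduler config.
pipe.scheduler = EDMDPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# The high-order solver usually produces good samples in roughly 20 steps.
image = pipe("an astronaut riding a horse on mars", num_inference_steps=20).images[0]
```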
diffusers/docs/source/en/api/schedulers/edm_multistep_dpm_solver.md/0
{ "file_path": "diffusers/docs/source/en/api/schedulers/edm_multistep_dpm_solver.md", "repo_id": "diffusers", "token_count": 447 }
92
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Reduce memory usage

A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage.

<Tip>

In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to [Speed up inference](fp16).

</Tip>

The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption.

|                            | latency | speed-up |
| -------------------------- | ------- | -------- |
| original                   | 9.50s   | x1       |
| fp16                       | 3.61s   | x2.63    |
| channels last              | 3.30s   | x2.88    |
| traced UNet                | 3.21s   | x2.96    |
| memory-efficient attention | 2.63s   | x3.61    |

## Sliced VAE

Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You'll likely want to couple this with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to reduce memory use further if you have xFormers installed.

To use sliced VAE, call [`~StableDiffusionPipeline.enable_vae_slicing`] on your pipeline before inference:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_vae_slicing()
#pipe.enable_xformers_memory_efficient_attention()
images = pipe([prompt] * 32).images
```

You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches.

## Tiled VAE

Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You can also use tiled VAE with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to reduce memory use further if you have xFormers installed.
To use tiled VAE processing, call [`~StableDiffusionPipeline.enable_vae_tiling`] on your pipeline before inference: ```python import torch from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler pipe = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "a beautiful landscape photograph" pipe.enable_vae_tiling() #pipe.enable_xformers_memory_efficient_attention() image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] ``` The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn't see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. ## CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call [`~StableDiffusionPipeline.enable_sequential_cpu_offload`]: ```Python import torch from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ) prompt = "a photo of an astronaut riding a horse on mars" pipe.enable_sequential_cpu_offload() image = pipe(prompt).images[0] ``` CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. <Tip> Consider using [model offloading](#model-offloading) if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won't be as large. </Tip> <Tip warning={true}> When using [`~StableDiffusionPipeline.enable_sequential_cpu_offload`], don't move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this [issue](https://github.com/huggingface/diffusers/issues/1934) for more information). [`~StableDiffusionPipeline.enable_sequential_cpu_offload`] is a stateful operation that installs hooks on the models. </Tip> ## Model offloading <Tip> Model offloading requires ๐Ÿค— Accelerate version 0.17.0 or higher. </Tip> [Sequential CPU offloading](#cpu-offloading) preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they're immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model's constituent *submodules*. There is a negligible impact on inference time (compared with moving the pipeline to `cuda`), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they're no longer needed. 
Enable model offloading by calling [`~StableDiffusionPipeline.enable_model_cpu_offload`] on the pipeline:

```Python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_model_cpu_offload()
image = pipe(prompt).images[0]
```

<Tip warning={true}>

In order to properly offload models after they're called, the entire pipeline must be run and models are called in the pipeline's expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module) for more information.

[`~StableDiffusionPipeline.enable_model_cpu_offload`] is a stateful operation that installs hooks on the models and state on the pipeline.

</Tip>

## Channels-last memory format

The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance, but you should still try it and see if it works for your model.

For example, to set the pipeline's UNet to use the channels-last format:

```python
print(pipe.unet.conv_out.state_dict()["weight"].stride())  # (2880, 9, 3, 1)
pipe.unet.to(memory_format=torch.channels_last)  # in-place operation
print(
    pipe.unet.conv_out.state_dict()["weight"].stride()
)  # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works
```

## Tracing

Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model's layers. The executable or `ScriptFunction` that is returned is optimized with just-in-time compilation.
To trace a UNet: ```python import time import torch from diffusers import StableDiffusionPipeline import functools # torch disable grad torch.set_grad_enabled(False) # set variables n_experiments = 2 unet_runs_per_experiment = 50 # load inputs def generate_inputs(): sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) return sample, timestep, encoder_hidden_states pipe = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ).to("cuda") unet = pipe.unet unet.eval() unet.to(memory_format=torch.channels_last) # use channels_last memory format unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default # warmup for _ in range(3): with torch.inference_mode(): inputs = generate_inputs() orig_output = unet(*inputs) # trace print("tracing..") unet_traced = torch.jit.trace(unet, inputs) unet_traced.eval() print("done tracing") # warmup and optimize graph for _ in range(5): with torch.inference_mode(): inputs = generate_inputs() orig_output = unet_traced(*inputs) # benchmarking with torch.inference_mode(): for _ in range(n_experiments): torch.cuda.synchronize() start_time = time.time() for _ in range(unet_runs_per_experiment): orig_output = unet_traced(*inputs) torch.cuda.synchronize() print(f"unet traced inference took {time.time() - start_time:.2f} seconds") for _ in range(n_experiments): torch.cuda.synchronize() start_time = time.time() for _ in range(unet_runs_per_experiment): orig_output = unet(*inputs) torch.cuda.synchronize() print(f"unet inference took {time.time() - start_time:.2f} seconds") # save the model unet_traced.save("unet_traced.pt") ``` Replace the `unet` attribute of the pipeline with the traced model: ```python from diffusers import StableDiffusionPipeline import torch from dataclasses import dataclass @dataclass class UNet2DConditionOutput: sample: torch.FloatTensor pipe = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ).to("cuda") # use jitted unet unet_traced = torch.jit.load("unet_traced.pt") # del pipe.unet class TracedUNet(torch.nn.Module): def __init__(self): super().__init__() self.in_channels = pipe.unet.config.in_channels self.device = pipe.unet.device def forward(self, latent_model_input, t, encoder_hidden_states): sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] return UNet2DConditionOutput(sample=sample) pipe.unet = TracedUNet() with torch.inference_mode(): image = pipe([prompt] * 1, num_inference_steps=50).images[0] ``` ## Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is [Flash Attention](https://arxiv.org/abs/2205.14135) (you can check out the original code at [HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)). <Tip> If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling `xformers`. 
</Tip> To use Flash Attention, install the following: - PyTorch > 1.12 - CUDA available - [xFormers](xformers) Then call [`~ModelMixin.enable_xformers_memory_efficient_attention`] on the pipeline: ```python from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ).to("cuda") pipe.enable_xformers_memory_efficient_attention() with torch.inference_mode(): sample = pipe("a small cat") # optional: You can disable it via # pipe.disable_xformers_memory_efficient_attention() ``` The iteration speed when using `xformers` should match the iteration speed of PyTorch 2.0 as described [here](torch2.0).
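On PyTorch 2.0 and newer, Diffusers dispatches to `torch.nn.functional.scaled_dot_product_attention` by default, so installing xFormers is usually unnecessary. As a minimal sketch (assuming a PyTorch 2.0+ environment), you can also select that processor explicitly, for example after a different attention processor was set earlier:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Explicitly use PyTorch 2.0 scaled dot-product attention (normally selected automatically).
pipe.unet.set_attn_processor(AttnProcessor2_0())

with torch.inference_mode():
    image = pipe("a small cat").images[0]
```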
diffusers/docs/source/en/optimization/memory.md/0
{ "file_path": "diffusers/docs/source/en/optimization/memory.md", "repo_id": "diffusers", "token_count": 4134 }
93
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # DreamBooth [DreamBooth](https://huggingface.co/papers/2208.12242) is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing` and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the [train_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Navigate to the example folder with the training script and install the required dependencies for the script you're using: <hfoptions id="installation"> <hfoption id="PyTorch"> ```bash cd examples/dreambooth pip install -r requirements.txt ``` </hfoption> <hfoption id="Flax"> ```bash cd examples/dreambooth pip install -r requirements_flax.txt ``` </hfoption> </hfoptions> <Tip> ๐Ÿค— Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the ๐Ÿค— Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an ๐Ÿค— Accelerate environment: ```bash accelerate config ``` To setup a default ๐Ÿค— Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```bash from accelerate.utils import write_basic_config write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) and let us know if you have any questions or concerns. </Tip> ## Script parameters <Tip warning={true}> DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. 
Read the [Training Stable Diffusion with Dreambooth using ๐Ÿงจ Diffusers](https://huggingface.co/blog/dreambooth) blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. </Tip> The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L228) function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you'd like. For example, to train in the bf16 format: ```bash accelerate launch train_dreambooth.py \ --mixed_precision="bf16" ``` Some basic and important parameters to know and specify are: - `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model - `--instance_data_dir`: path to a folder containing the training dataset (example images) - `--instance_prompt`: the text prompt that contains the special word for the example images - `--train_text_encoder`: whether to also train the text encoder - `--output_dir`: where to save the trained model - `--push_to_hub`: whether to push the trained model to the Hub - `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding `--resume_from_checkpoint` to your training command ### Min-SNR weighting The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash accelerate launch train_dreambooth.py \ --snr_gamma=5.0 ``` ### Prior preservation loss Prior preservation loss is a method that uses a model's own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. - `--with_prior_preservation`: whether to use prior preservation loss - `--prior_loss_weight`: controls the influence of the prior preservation loss on the model - `--class_data_dir`: path to a folder containing the generated class sample images - `--class_prompt`: the text prompt describing the class of the generated sample images ```bash accelerate launch train_dreambooth.py \ --with_prior_preservation \ --prior_loss_weight=1.0 \ --class_data_dir="path/to/class/images" \ --class_prompt="text prompt describing class" ``` ### Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you'll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. 
Enable this option by: ```bash accelerate launch train_dreambooth.py \ --train_text_encoder ``` ## Training script DreamBooth comes with its own dataset classes: - [`DreamBoothDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L604): preprocesses the images and class images, and tokenizes the prompts for training - [`PromptDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L738): generates the prompt embeddings to generate the class images If you enabled [prior preservation loss](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L842), the class images are generated here: ```py sample_dataset = PromptDataset(args.class_prompt, num_new_images) sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) sample_dataloader = accelerator.prepare(sample_dataloader) pipeline.to(accelerator.device) for example in tqdm( sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process ): images = pipeline(example["prompt"]).images ``` Next is the [`main()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L799) function which handles setting up the dataset for training and the training loop itself. The script loads the [tokenizer](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L898), [scheduler and models](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L912C1-L912C1): ```py # Load the tokenizer if args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) elif args.pretrained_model_name_or_path: tokenizer = AutoTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False, ) # Load scheduler and models noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") text_encoder = text_encoder_cls.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision ) if model_has_vae(args): vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision ) else: vae = None unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision ) ``` Then, it's time to [create the training dataset](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1073) and DataLoader from `DreamBoothDataset`: ```py train_dataset = DreamBoothDataset( instance_data_root=args.instance_data_dir, instance_prompt=args.instance_prompt, class_data_root=args.class_data_dir if args.with_prior_preservation else None, class_prompt=args.class_prompt, class_num=args.num_class_images, tokenizer=tokenizer, size=args.resolution, center_crop=args.center_crop, encoder_hidden_states=pre_computed_encoder_hidden_states, class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, tokenizer_max_length=args.tokenizer_max_length, ) train_dataloader = torch.utils.data.DataLoader( train_dataset, 
batch_size=args.train_batch_size, shuffle=True, collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), num_workers=args.dataloader_num_workers, ) ``` Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1151) takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. ## Launch the script You're now ready to launch the training script! ๐Ÿš€ For this guide, you'll download some images of a [dog](https://huggingface.co/datasets/diffusers/dog-example) and store them in a directory. But remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide). ```py from huggingface_hub import snapshot_download local_dir = "./dog" snapshot_download( "diffusers/dog-example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes", ) ``` Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, `INSTANCE_DIR` to the path where you just downloaded the dog images to, and `OUTPUT_DIR` to where you want to save the model. You'll use `sks` as the special word to tie the training to. If you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: ```bash --validation_prompt="a photo of a sks dog" --num_validation_images=4 --validation_steps=100 ``` One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. <hfoptions id="gpu-select"> <hfoption id="16GB"> On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: ```py pip install bitsandbytes ``` Then, add the following parameter to your training command: ```bash accelerate launch train_dreambooth.py \ --gradient_checkpointing \ --use_8bit_adam \ ``` </hfoption> <hfoption id="12GB"> On a 12GB GPU, you'll need bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and set the gradients to `None` instead of zero to reduce your memory-usage. ```bash accelerate launch train_dreambooth.py \ --use_8bit_adam \ --gradient_checkpointing \ --enable_xformers_memory_efficient_attention \ --set_grads_to_none \ ``` </hfoption> <hfoption id="8GB"> On a 8GB GPU, you'll need [DeepSpeed](https://www.deepspeed.ai/) to offload some of the tensors from the vRAM to either the CPU or NVME to allow training with less GPU memory. Run the following command to configure your ๐Ÿค— Accelerate environment: ```bash accelerate config ``` During configuration, confirm that you want to use DeepSpeed. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the [DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options. 
You should also change the default Adam optimizer to DeepSpeedโ€™s optimized version of Adam [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your systemโ€™s CUDA toolchain version to be the same as the one installed with PyTorch. bitsandbytes 8-bit optimizers donโ€™t seem to be compatible with DeepSpeed at the moment. That's it! You don't need to add any additional parameters to your training command. </hfoption> </hfoptions> <hfoptions id="training-inference"> <hfoption id="PyTorch"> ```bash export MODEL_NAME="runwayml/stable-diffusion-v1-5" export INSTANCE_DIR="./dog" export OUTPUT_DIR="path_to_saved_model" accelerate launch train_dreambooth.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --output_dir=$OUTPUT_DIR \ --instance_prompt="a photo of sks dog" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=1 \ --learning_rate=5e-6 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --max_train_steps=400 \ --push_to_hub ``` </hfoption> <hfoption id="Flax"> ```bash export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" export INSTANCE_DIR="./dog" export OUTPUT_DIR="path-to-save-model" python train_dreambooth_flax.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --instance_data_dir=$INSTANCE_DIR \ --output_dir=$OUTPUT_DIR \ --instance_prompt="a photo of sks dog" \ --resolution=512 \ --train_batch_size=1 \ --learning_rate=5e-6 \ --max_train_steps=400 \ --push_to_hub ``` </hfoption> </hfoptions> Once training is complete, you can use your newly trained model for inference! <Tip> Can't wait to try your model for inference before training is complete? ๐Ÿคญ Make sure you have the latest version of ๐Ÿค— Accelerate installed. 
```py
from diffusers import DiffusionPipeline, UNet2DConditionModel
from transformers import CLIPTextModel
import torch

unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet")

# if you have trained with `--train_text_encoder` make sure to also load the text encoder
text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/text_encoder")

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16,
).to("cuda")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```

</Tip>

<hfoptions id="training-inference">
<hfoption id="PyTorch">

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```

</hfoption>
<hfoption id="Flax">

```py
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path-to-your-trained-model", dtype=jax.numpy.bfloat16)

prompt = "A photo of sks dog in a bucket"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50

num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)

# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
images[0].save("dog-bucket.png")
```

</hfoption>
</hfoptions>

## LoRA

LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the [train_dreambooth_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py) script to train with LoRA.

The LoRA training script is discussed in more detail in the [LoRA training](lora) guide.

## Stable Diffusion XL

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [train_dreambooth_lora_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py) script to train an SDXL model with LoRA.

The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide.

## Next steps

Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful:

- Learn how to [load a DreamBooth](../using-diffusers/loading_adapters) model for inference if you trained your model with LoRA.
diffusers/docs/source/en/training/dreambooth.md/0
{ "file_path": "diffusers/docs/source/en/training/dreambooth.md", "repo_id": "diffusers", "token_count": 6164 }
94
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

[[open-in-colab]]

# Load LoRAs for inference

There are many adapter types (with [LoRAs](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images.

In this tutorial, you'll learn how to easily load and manage adapters for inference with the 🤗 [PEFT](https://huggingface.co/docs/peft/index) integration in 🤗 Diffusers. You'll use LoRA as the main adapter technique, so you'll see the terms LoRA and adapter used interchangeably.

Let's first install all the required libraries.

```bash
!pip install -q transformers accelerate peft diffusers
```

Now, load a pipeline with a [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) checkpoint:

```python
from diffusers import DiffusionPipeline
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")
```

Next, load a [CiroN2022/toy-face](https://huggingface.co/CiroN2022/toy-face) adapter with the [`~diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] method. With the 🤗 PEFT integration, you can assign a specific `adapter_name` to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let's call this adapter `"toy"`.

```python
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
```

Make sure to include the token `toy_face` in the prompt and then you can perform inference:

```python
prompt = "toy_face of a hacker with a hoodie"

lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image
```

![toy-face](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_8_1.png)

With the `adapter_name` parameter, it is really easy to use another adapter for inference! Load the [nerijs/pixel-art-xl](https://huggingface.co/nerijs/pixel-art-xl) adapter that has been fine-tuned to generate pixel art images and call it `"pixel"`.
The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter, but you can activate the `"pixel"` adapter with the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method:

```python
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipe.set_adapters("pixel")
```

Make sure you include the token `pixel art` in your prompt to generate a pixel art image:

```python
prompt = "a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image
```

![pixel-art](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_12_1.png)

## Merge adapters

You can also merge different adapter checkpoints for inference to blend their styles together.

Once again, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `pixel` and `toy` adapters and specify the weights for how they should be merged.

```python
pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])
```

<Tip>

LoRA checkpoints in the diffusion community are almost always obtained with [DreamBooth](https://huggingface.co/docs/diffusers/main/en/training/dreambooth). DreamBooth training often relies on "trigger" words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it's important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts.

</Tip>

Remember to use the trigger words for [CiroN2022/toy-face](https://hf.co/CiroN2022/toy-face) and [nerijs/pixel-art-xl](https://hf.co/nerijs/pixel-art-xl) (these are found in their repositories) in the prompt to generate an image.

```python
prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0)
).images[0]
image
```

![toy-face-pixel-art](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_16_1.png)

Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters.

> [!TIP]
> Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the [Merge LoRAs](../using-diffusers/merge_loras) guide!

To return to only using one adapter, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `"toy"` adapter:

```python
pipe.set_adapters("toy")

prompt = "toy_face of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image
```

Or to disable all adapters entirely, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora`] method to return the base model.
```python
pipe.disable_lora()

prompt = "toy_face of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

## Manage active adapters

You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, use the [`~diffusers.loaders.LoraLoaderMixin.get_active_adapters`] method to check the list of active adapters:

```py
active_adapters = pipe.get_active_adapters()
active_adapters
["toy", "pixel"]
```

You can also get the active adapters of each pipeline component with [`~diffusers.loaders.LoraLoaderMixin.get_list_adapters`]:

```py
list_adapters_component_wise = pipe.get_list_adapters()
list_adapters_component_wise
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}
```
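As a small, purely illustrative sketch (the helper below is our own, not part of the Diffusers API), you could wrap the two calls above to inspect a pipeline's adapter state before generating:

```py
def print_adapter_status(pipe):
    # uses the get_active_adapters / get_list_adapters methods shown above
    print("Active adapters:", pipe.get_active_adapters())
    for component, adapters in pipe.get_list_adapters().items():
        print(f"{component}: {adapters}")

print_adapter_status(pipe)
```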
diffusers/docs/source/en/tutorials/using_peft_for_inference.md/0
{ "file_path": "diffusers/docs/source/en/tutorials/using_peft_for_inference.md", "repo_id": "diffusers", "token_count": 2215 }
95
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

[[open-in-colab]]

# Trajectory Consistency Distillation-LoRA

Trajectory Consistency Distillation (TCD) enables a model to generate higher quality and more detailed images with fewer steps. Moreover, owing to the effective error mitigation during the distillation process, TCD demonstrates superior performance even with a large number of inference steps.

The major advantages of TCD are:

- Better than Teacher: TCD demonstrates superior generative quality at both small and large inference steps and exceeds the performance of [DPM-Solver++(2S)](../../api/schedulers/multistep_dpm_solver) with Stable Diffusion XL (SDXL). There is no additional discriminator or LPIPS supervision included during TCD training.

- Flexible Inference Steps: The inference steps for TCD sampling can be freely adjusted without adversely affecting the image quality.

- Freely change detail level: During inference, the level of detail in the image can be adjusted with a single hyperparameter, *gamma*.

> [!TIP]
> For more technical details of TCD, please refer to the [paper](https://arxiv.org/abs/2402.19159) or the official [project page](https://mhh0318.github.io/tcd/).

For large models like SDXL, TCD is trained with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) to reduce memory usage. This is also useful because you can reuse LoRAs between different finetuned models, as long as they share the same base model, without further training.

This guide will show you how to perform inference with TCD-LoRAs for a variety of tasks like text-to-image and inpainting, as well as how you can easily combine TCD-LoRAs with other adapters. Choose one of the supported base models and its corresponding TCD-LoRA checkpoint from the table below to get started.

| Base model                                                                                       | TCD-LoRA checkpoint                                            |
|--------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
| [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)                    | [TCD-SD15](https://huggingface.co/h1t/TCD-SD15-LoRA)           |
| [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)         | [TCD-SD21-base](https://huggingface.co/h1t/TCD-SD21-base-LoRA) |
| [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)   | [TCD-SDXL](https://huggingface.co/h1t/TCD-SDXL-LoRA)           |

Make sure you have [PEFT](https://github.com/huggingface/peft) installed for better LoRA support.

```bash
pip install -U peft
```

## General tasks

In this guide, let's use the [`StableDiffusionXLPipeline`] and the [`TCDScheduler`]. Use the [`~StableDiffusionPipeline.load_lora_weights`] method to load the SDXL-compatible TCD-LoRA weights.

A few tips to keep in mind for TCD-LoRA inference are to:

- Keep the `num_inference_steps` between 4 and 50
- Set `eta` (used to control stochasticity at each step) between 0 and 1.
You should use a higher `eta` when increasing the number of inference steps, but the downside is that a larger `eta` in [`TCDScheduler`] leads to blurrier images. A value of 0.3 is recommended to produce good results.

<hfoptions id="tasks">
<hfoption id="text-to-image">

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "Painting of the orange cat Otto von Garfield, Count of Bismarck-Schönhausen, Duke of Lauenburg, Minister-President of Prussia. Depicted wearing a Prussian Pickelhaube and eating his favorite meal - lasagna."

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/demo_image.png)

</hfoption>
<hfoption id="inpainting">

```python
import torch
from diffusers import AutoPipelineForInpainting, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
base_model_id = "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = AutoPipelineForInpainting.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))

prompt = "a tiger sitting on a park bench"

image = pipe(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=8,
    guidance_scale=0,
    eta=0.3,
    strength=0.99,  # make sure to use `strength` below 1.0
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/inpainting_tcd.png)

</hfoption>
</hfoptions>

## Community models

TCD-LoRA also works with many community finetuned models and plugins. For example, load the [animagine-xl-3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0) checkpoint which is a community finetuned version of SDXL for generating anime images.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "cagliostrolab/animagine-xl-3.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "A man, clad in a meticulously tailored military uniform, stands with unwavering resolve. The uniform boasts intricate details, and his eyes gleam with determination. Strands of vibrant, windswept hair peek out from beneath the brim of his cap."
image = pipe(
    prompt=prompt,
    num_inference_steps=8,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/animagine_xl.png)

TCD-LoRA also supports other LoRAs trained on different styles. For example, let's load the [TheLastBen/Papercut_SDXL](https://huggingface.co/TheLastBen/Papercut_SDXL) LoRA and fuse it with the TCD-LoRA with the [`~loaders.UNet2DConditionLoadersMixin.set_adapters`] method.

> [!TIP]
> Check out the [Merge LoRAs](merge_loras) guide to learn more about efficient merging methods.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"
styled_lora_id = "TheLastBen/Papercut_SDXL"

pipe = StableDiffusionXLPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id, adapter_name="tcd")
pipe.load_lora_weights(styled_lora_id, adapter_name="style")
pipe.set_adapters(["tcd", "style"], adapter_weights=[1.0, 1.0])

prompt = "papercut of a winter mountain, snow"

image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/styled_lora.png)

## Adapters

TCD-LoRA is very versatile, and it can be combined with other adapter types like ControlNets, IP-Adapter, and AnimateDiff.

<hfoptions id="adapters">
<hfoption id="ControlNet">

### Depth ControlNet

```python
import torch
import numpy as np
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForDepthEstimation
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to(device)
feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")


def get_depth_map(image):
    image = feature_extractor(images=image, return_tensors="pt").pixel_values.to(device)
    with torch.no_grad(), torch.autocast(device):
        depth_map = depth_estimator(image).predicted_depth

    depth_map = torch.nn.functional.interpolate(
        depth_map.unsqueeze(1),
        size=(1024, 1024),
        mode="bicubic",
        align_corners=False,
    )
    depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True)
    depth_map = (depth_map - depth_min) / (depth_max - depth_min)
    image = torch.cat([depth_map] * 3, dim=1)

    image = image.permute(0, 2, 3, 1).cpu().numpy()[0]
    image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8))
    return image


base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-depth-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

controlnet = ControlNetModel.from_pretrained(
    controlnet_id,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe.enable_model_cpu_offload()

pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "stormtrooper lecture, photorealistic"

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-depth/resolve/main/images/stormtrooper.png")
depth_image = get_depth_map(image)

controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
    prompt,
    image=depth_image,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([depth_image, image], rows=1, cols=2)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/controlnet_depth_tcd.png)

### Canny ControlNet

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

device = "cuda"
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet_id = "diffusers/controlnet-canny-sdxl-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

controlnet = ControlNetModel.from_pretrained(
    controlnet_id,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base_model_id,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    variant="fp16",
).to(device)
pipe.enable_model_cpu_offload()

pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "ultrarealistic shot of a furry blue bird"

canny_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/bird_canny.png")

controlnet_conditioning_scale = 0.5  # recommended for good generalization

image = pipe(
    prompt,
    image=canny_image,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]

grid_image = make_image_grid([canny_image, image], rows=1, cols=2)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/controlnet_canny_tcd.png)

<Tip>

The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choosing the best one.

</Tip>

</hfoption>
<hfoption id="IP-Adapter">

This example shows how to use the TCD-LoRA with the [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter/tree/main) and SDXL.
```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler
from diffusers.utils import load_image, make_image_grid

from ip_adapter import IPAdapterXL

device = "cuda"
base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
image_encoder_path = "sdxl_models/image_encoder"
ip_ckpt = "sdxl_models/ip-adapter_sdxl.bin"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    variant="fp16"
)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

ip_model = IPAdapterXL(pipe, image_encoder_path, ip_ckpt, device)

ref_image = load_image("https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/main/assets/images/woman.png").resize((512, 512))

prompt = "best quality, high quality, wearing sunglasses"

image = ip_model.generate(
    pil_image=ref_image,
    prompt=prompt,
    scale=0.5,
    num_samples=1,
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,
    seed=0,
)[0]

grid_image = make_image_grid([ref_image, image], rows=1, cols=2)
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/ip_adapter.png)

</hfoption>
<hfoption id="AnimateDiff">

[`AnimateDiff`] allows animating images using Stable Diffusion models. TCD-LoRA can substantially accelerate the process without degrading image quality, and animations produced with TCD-LoRA and AnimateDiff have a noticeably clearer outcome.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, TCDScheduler
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5")
pipe = AnimateDiffPipeline.from_pretrained(
    "frankjoshua/toonyou_beta6",
    motion_adapter=adapter,
).to("cuda")

# set TCDScheduler
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

# load TCD LoRA
pipe.load_lora_weights("h1t/TCD-SD15-LoRA", adapter_name="tcd")
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora")

pipe.set_adapters(["tcd", "motion-lora"], adapter_weights=[1.0, 1.2])

prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
generator = torch.manual_seed(0)
frames = pipe(
    prompt=prompt,
    num_inference_steps=5,
    guidance_scale=0,
    cross_attention_kwargs={"scale": 1},
    num_frames=24,
    eta=0.3,
    generator=generator
).frames[0]
export_to_gif(frames, "animation.gif")
```

![](https://github.com/jabir-zheng/TCD/raw/main/assets/animation_example.gif)

</hfoption>
</hfoptions>
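Since this guide recommends experimenting with `eta` and the other sampling parameters, a quick and purely illustrative sweep, reusing the SDXL `pipe` and `prompt` from the text-to-image example in the General tasks section above, could look like this:

```python
# the eta values below are illustrative; as noted at the top of this guide,
# a larger eta in TCDScheduler tends to produce blurrier images
for eta in [0.0, 0.3, 0.6]:
    image = pipe(
        prompt=prompt,
        num_inference_steps=4,
        guidance_scale=0,
        eta=eta,
        generator=torch.Generator(device="cuda").manual_seed(0),
    ).images[0]
    image.save(f"tcd_eta_{eta}.png")
```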
diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md/0
{ "file_path": "diffusers/docs/source/en/using-diffusers/inference_with_tcd_lora.md", "repo_id": "diffusers", "token_count": 6044 }
96
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Stable Diffusion XL Turbo

[[open-in-colab]]

SDXL Turbo is an adversarial time-distilled [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) (SDXL) model capable of running inference in as little as 1 step.

This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image.

Before you begin, make sure you have the following libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install -q diffusers transformers accelerate
```

## Load model checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the [`~StableDiffusionXLPipeline.from_pretrained`] method:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipeline = pipeline.to("cuda")
```

You can also use the [`~StableDiffusionXLPipeline.from_single_file`] method to load a model checkpoint stored in a single file format (`.ckpt` or `.safetensors`) from the Hub or locally. For this loading method, you need to set `timestep_spacing="trailing"` (feel free to experiment with the other scheduler config values to get better results):

```py
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler
import torch

pipeline = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors",
    torch_dtype=torch.float16, variant="fp16")
pipeline = pipeline.to("cuda")
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config, timestep_spacing="trailing")
```

## Text-to-image

For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the `height` and `width` parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so.

Make sure to set `guidance_scale` to 0.0 to disable it, as the model was trained without it. A single inference step is enough to generate high quality images, and increasing the number of steps to 2, 3 or 4 should improve image quality further.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipeline_text2image = pipeline_text2image.to("cuda")

prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."
image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0]
image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sdxl-turbo-text2img.png" alt="generated image of a racoon in a robe"/>
</div>

## Image-to-image

For image-to-image generation, make sure that `num_inference_steps * strength` is larger or equal to 1. The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, e.g. `0.5 * 2.0 = 1` step in our example below.

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

# use from_pipe to avoid consuming additional memory when loading a checkpoint
pipeline_image2image = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda")

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
init_image = init_image.resize((512, 512))

prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

image = pipeline_image2image(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/sdxl-turbo-img2img.png" alt="Image-to-image generation sample using SDXL Turbo"/>
</div>

## Speed-up SDXL Turbo even more

- Compile the UNet if you are using PyTorch version 2.0 or higher. The first inference run will be very slow, but subsequent ones will be much faster.

```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

- When using the default VAE, keep it in `float32` to avoid costly `dtype` conversions before and after each generation. You only need to do this once before your first generation:

```py
pipe.upcast_vae()
```

As an alternative, you can also use a [16-bit VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) created by community member [`@madebyollin`](https://huggingface.co/madebyollin) that does not need to be upcasted to `float32`.
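As a rough sketch of what that could look like (the repository name is the one linked above, and loading it directly in `float16` is the point of using this VAE):

```py
from diffusers import AutoPipelineForText2Image, AutoencoderKL
import torch

# load the community fp16-friendly VAE and pass it to the pipeline
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", vae=vae, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
```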
diffusers/docs/source/en/using-diffusers/sdxl_turbo.md/0
{ "file_path": "diffusers/docs/source/en/using-diffusers/sdxl_turbo.md", "repo_id": "diffusers", "token_count": 1714 }
97
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

[[open-in-colab]]

# Quicktour

Diffusion models are trained to remove random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion-generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models broadly accessible to everyone.

Whether you're a developer or an everyday user, this quicktour introduces 🧨 Diffusers and helps you start generating quickly! There are three main components of the library to know about:

* The [`DiffusionPipeline`] is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference.
* Popular pretrained [model](./api/models) architectures and modules that can be used as building blocks for creating diffusion systems.
* Many different [schedulers](./api/schedulers/overview) - algorithms that control how noise is added for training, and how denoised images are generated during inference.

The quicktour shows you how to use the [`DiffusionPipeline`] for inference, and then walks you through how to combine a model and scheduler to replicate what is happening inside the [`DiffusionPipeline`].

<Tip>

The quicktour is a simplified version of the introductory 🧨 Diffusers [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) to help you get started quickly. If you want to learn more about Diffusers' goals, design philosophy, and additional details about its core API, check out the notebook!

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```py
# uncomment to install the necessary libraries in Colab
#!pip install --upgrade diffusers accelerate transformers
```

- [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) speeds up model loading for inference and training.
- [🤗 Transformers](https://huggingface.co/docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview).

## DiffusionPipeline

The [`DiffusionPipeline`] is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the [`DiffusionPipeline`] out of the box for many tasks.

Take a look at some of the supported tasks in the table below, and you can find the complete list of supported tasks in the [🧨 Diffusers Summary](./api/pipelines/overview#diffusers-summary) table.

| **Task** | **Description** | **Pipeline** |
|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------|
| Unconditional Image Generation | generate an image from Gaussian noise | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) |
| Text-Guided Image Generation | generate an image given a text prompt | [conditional_image_generation](./using-diffusers/conditional_image_generation) |
| Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | [img2img](./using-diffusers/img2img) |
| Text-Guided Image-Inpainting | fill the masked part of an image given the image, the mask and a text prompt | [inpaint](./using-diffusers/inpaint) |
| Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | [depth2img](./using-diffusers/depth2img) |

Start by creating an instance of a [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download. You can use the [`DiffusionPipeline`] for any [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) stored on the Hugging Face Hub. In this quicktour, you'll load the [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint for text-to-image generation.

<Tip warning={true}>

For [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) models, please carefully read the [license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) first before running the model. 🧨 Diffusers implements a [`safety_checker`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) to prevent offensive or harmful content, but the model's improved image generation capabilities can still produce potentially harmful content.

</Tip>

Load the model with the [`~DiffusionPipeline.from_pretrained`] method:

```python
>>> from diffusers import DiffusionPipeline

>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```

The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components. You'll see that the Stable Diffusion pipeline is composed of the [`UNet2DConditionModel`] and [`PNDMScheduler`] among other things:

```py
>>> pipeline
StableDiffusionPipeline {
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.13.1",
  ...,
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  ...,
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
```

We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. You can move the generator object to a GPU, just like you would in PyTorch:

```python
>>> pipeline.to("cuda")
```

Now you can pass a text prompt to the `pipeline` to generate an image, and then access the denoised image. By default, the image output is wrapped in a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.

```python
>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
>>> image
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/image_of_squirrel_painting.png"/>
</div>

Save the image by calling `save`:

```python
>>> image.save("image_of_squirrel_painting.png")
```

### Local pipeline

You can also use the pipeline locally. The only difference is you need to download the weights first:

```bash
!git lfs install
!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```

Then load the saved weights into the pipeline:

```python
>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
```

Now you can run the pipeline as you did in the section above.

### Swapping schedulers

Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default [`PNDMScheduler`] with the [`EulerDiscreteScheduler`], load it with the [`~diffusers.ConfigMixin.from_config`] method:

```py
>>> from diffusers import EulerDiscreteScheduler

>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```

Try generating an image with the new scheduler and see if you notice a difference!

In the next section, you'll take a closer look at the components - the model and scheduler - that make up the [`DiffusionPipeline`] and learn how to use these components to generate an image of a cat.

## Models

Most models take a noisy sample, and at each timestep predict the *noise residual* (other models learn to predict the previous sample directly, or the velocity or [`v-prediction`](https://github.com/huggingface/diffusers/blob/5e5ce13e2f89ac45a0066cb3f369462a3cf1d9ef/src/diffusers/schedulers/scheduling_ddim.py#L110)), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems.

Models are initiated with the [`~ModelMixin.from_pretrained`] method, which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you'll load the [`UNet2DModel`], a basic unconditional image generation model with a checkpoint trained on cat images:

```py
>>> from diffusers import UNet2DModel

>>> repo_id = "google/ddpm-cat-256"
>>> model = UNet2DModel.from_pretrained(repo_id)
```

To access the model parameters, call `model.config`:

```py
>>> model.config
```

The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can't be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference.

Some of the most important parameters are:

* `sample_size`: the height and width dimension of the input sample.
* `in_channels`: the number of input channels of the input sample.
* `down_block_types` and `up_block_types`: the type of down- and upsampling blocks used to create the UNet architecture.
* `block_out_channels`: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks.
* `layers_per_block`: the number of ResNet blocks present in each UNet block.

To use the model for inference, create the image shape with random Gaussian noise. It should have a `batch` axis because the model can receive multiple random noises, a `channel` axis corresponding to the number of input channels, and a `sample_size` axis for the height and width of the image:

```py
>>> import torch

>>> torch.manual_seed(0)

>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)
>>> noisy_sample.shape
torch.Size([1, 3, 256, 256])
```

For inference, pass the noisy image and a `timestep` to the model. The `timestep` indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the `sample` method to get the model output:

```py
>>> with torch.no_grad():
...     noisy_residual = model(sample=noisy_sample, timestep=2).sample
```

To generate actual examples though, you'll need a scheduler to guide the denoising process. In the next section, you'll learn how to couple a model with a scheduler.

## Schedulers

Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, the `noisy_residual`.

<Tip>

🧨 Diffusers is a toolbox for building diffusion systems. While the [`DiffusionPipeline`] is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system.

</Tip>

For the quicktour, you'll instantiate the [`DDPMScheduler`] with its [`~diffusers.ConfigMixin.from_config`] method:

```py
>>> from diffusers import DDPMScheduler

>>> scheduler = DDPMScheduler.from_config(repo_id)
>>> scheduler
DDPMScheduler {
  "_class_name": "DDPMScheduler",
  "_diffusers_version": "0.13.1",
  "beta_end": 0.02,
  "beta_schedule": "linear",
  "beta_start": 0.0001,
  "clip_sample": true,
  "clip_sample_range": 1.0,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "trained_betas": null,
  "variance_type": "fixed_small"
}
```

<Tip>

💡 Notice how the scheduler is instantiated from a configuration. Unlike a model, a scheduler does not have trainable weights and is parameter-free!

</Tip>

Some of the most important parameters are:

* `num_train_timesteps`: the length of the denoising process, or in other words, the number of timesteps required to process random Gaussian noise into a data sample.
* `beta_schedule`: the type of noise schedule to use for inference and training.
* `beta_start` and `beta_end`: the start and end noise values for the noise schedule.

To predict a slightly less noisy image, pass the model output, `timestep`, and current `sample` to the scheduler's [`~diffusers.DDPMScheduler.step`] method:

```py
>>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample
>>> less_noisy_sample.shape
```

The `less_noisy_sample` can be passed to the next `timestep` where it'll get even less noisy! Let's bring it all together now and visualize the entire denoising process.

First, create a function that postprocesses the denoised image and displays it as a `PIL.Image`:

```py
>>> import PIL.Image
>>> import numpy as np


>>> def display_sample(sample, i):
...     image_processed = sample.cpu().permute(0, 2, 3, 1)
...     image_processed = (image_processed + 1.0) * 127.5
...     image_processed = image_processed.numpy().astype(np.uint8)

...     image_pil = PIL.Image.fromarray(image_processed[0])
...     display(f"Image at step {i}")
...     display(image_pil)
```

To speed up the denoising process, move the input and model to a GPU:

```py
>>> model.to("cuda")
>>> noisy_sample = noisy_sample.to("cuda")
```

Now create a denoising loop that predicts the residual of the less noisy sample and computes the less noisy sample with the scheduler:

```py
>>> import tqdm

>>> sample = noisy_sample

>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)):
...     # 1. predict noise residual
...     with torch.no_grad():
...         residual = model(sample, t).sample

...     # 2. compute less noisy image and set x_t -> x_t-1
...     sample = scheduler.step(residual, t, sample).prev_sample

...     # 3. optionally look at image
...     if (i + 1) % 50 == 0:
...         display_sample(sample, i + 1)
```

Sit back and watch as a cat is generated from nothing but noise! 😻

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/diffusion-quicktour.png"/>
</div>

## Next steps

Hopefully you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can:

* Train or finetune a model to generate your own images in the [training](./tutorials/basic_training) tutorial.
* See example official and community [training or finetuning scripts](https://github.com/huggingface/diffusers/tree/main/examples#-diffusers-examples) for a variety of use cases.
* Learn more about loading, accessing, changing, and comparing schedulers in the [Using different Schedulers](./using-diffusers/schedulers) guide.
* Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images in the [Stable Diffusion](./stable_diffusion) guide.
* Dive deeper into speeding up 🧨 Diffusers with the guide on [optimized PyTorch on a GPU](./optimization/fp16), and the inference guides for running [Stable Diffusion on Apple Silicon (M1/M2)](./optimization/mps) and [ONNX Runtime](./optimization/onnx).
diffusers/docs/source/ko/quicktour.md/0
{ "file_path": "diffusers/docs/source/ko/quicktour.md", "repo_id": "diffusers", "token_count": 11429 }
98
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Conditional image generation

[[open-in-colab]]

Conditional image generation lets you generate images from a text prompt. The text is converted into embeddings, which are used to condition the model to generate an image from noise.

The [`DiffusionPipeline`] is the easiest way to use a pretrained diffusion system for inference.

Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) you would like to download.

In this guide, you'll use the [`DiffusionPipeline`] for text-to-image generation with [Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256):

```python
>>> from diffusers import DiffusionPipeline

>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```

The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components. Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. You can move the generator object to a GPU, just like you would in PyTorch:

```python
>>> generator.to("cuda")
```

Now you can use the `generator` on your text prompt:

```python
>>> image = generator("An image of a squirrel in Picasso style").images[0]
```

The output is wrapped in a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object by default. You can save the image by calling:

```python
>>> image.save("image_of_squirrel_painting.png")
```

Try out the Space below, and feel free to play around with the guidance scale parameter to see how it affects the image quality!

<iframe
	src="https://stabilityai-stable-diffusion.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>
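If you prefer to experiment in code rather than in the Space, a rough sketch of the same idea is shown below; the value 7.5 is just an example, and whether and how the parameter is exposed depends on the pipeline you loaded, so check its documentation:

```python
>>> # guidance_scale is illustrative -- higher values typically follow the prompt more closely
>>> image = generator("An image of a squirrel in Picasso style", guidance_scale=7.5).images[0]
>>> image.save("image_of_squirrel_painting_guided.png")
```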
diffusers/docs/source/ko/using-diffusers/conditional_image_generation.md/0
{ "file_path": "diffusers/docs/source/ko/using-diffusers/conditional_image_generation.md", "repo_id": "diffusers", "token_count": 1551 }
99