---
library_name: transformers
license: other
base_model: nvidia/segformer-b2-finetuned-cityscapes-1024-1024
tags:
- generated_from_trainer
model-index:
- name: SegFormer_b2_10
  results: []
---

# SegFormer_b2_10

This model is a fine-tuned version of [nvidia/segformer-b2-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b2-finetuned-cityscapes-1024-1024) on an unknown dataset.
It achieves the following results on the evaluation set (epoch 14.5161, step 2700):

- eval_loss: 0.8030
- eval_mean_accuracy: 0.8693
- eval_mean_iou: 0.7792
- eval_overall_accuracy: 0.9609
- eval_runtime: 202.8122 s
- eval_samples_per_second: 2.465
- eval_steps_per_second: 0.616

Per-class evaluation results:

| Class         | Accuracy | IoU    |
|---------------|----------|--------|
| bicycle       | 0.8914   | 0.7541 |
| building      | 0.9612   | 0.9244 |
| bus           | 0.9483   | 0.8603 |
| car           | 0.9763   | 0.9482 |
| fence         | 0.7181   | 0.6075 |
| motorcycle    | 0.7986   | 0.6289 |
| person        | 0.9057   | 0.7921 |
| pole          | 0.7198   | 0.5893 |
| rider         | 0.7552   | 0.5955 |
| road          | 0.9902   | 0.9835 |
| sidewalk      | 0.9345   | 0.8649 |
| sky           | 0.9831   | 0.9465 |
| terrain       | 0.7525   | 0.6534 |
| traffic light | 0.8652   | 0.6718 |
| traffic sign  | 0.8838   | 0.7801 |
| train         | 0.8680   | 0.8124 |
| truck         | 0.8765   | 0.8174 |
| vegetation    | 0.9637   | 0.9245 |
| wall          | 0.7237   | 0.6499 |

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged inference sketch appears at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the framework versions below):

- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.48.1
- Pytorch 2.1.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
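
### Training configuration sketch

The hyperparameters listed above map onto `TrainingArguments` roughly as follows. This is a hedged reconstruction, not the original training script: `output_dir` is a placeholder, and the argument names follow the Transformers 4.48 API.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SegFormer_b2_10",   # placeholder, not the actual output path
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size: 4 x 4 = 16
    seed=42,
    optim="adamw_torch",            # AdamW, torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
    fp16=True,                      # Native AMP mixed precision
)
```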
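
## Inference example

No usage details were provided with this card, so the snippet below is a minimal sketch of standard SegFormer inference with Transformers. The repo id and image path are hypothetical placeholders; substitute the actual checkpoint location.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

model_id = "SegFormer_b2_10"  # hypothetical repo id or local checkpoint path
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample the low-resolution logits to the original image size,
# then take the per-pixel argmax to get Cityscapes class ids.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) map of class ids
```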