Fine-Tuned Qwen2-VL Model

This repository contains a fine-tuned version of the Qwen2-VL model, shared for exploration purposes.

Model Files

  • model.safetensors.index.json: The index that maps each weight tensor to the safetensors shard that stores it (see the sketch after this list).
  • model-00001-of-00003.safetensors: Part 1 of the model weights.
  • model-00002-of-00003.safetensors: Part 2 of the model weights.
  • model-00003-of-00003.safetensors: Part 3 of the model weights.
  • config.json: Configuration file for the model architecture.
  • tokenizer.json: Tokenizer file.
  • tokenizer_config.json: Tokenizer configuration.
  • special_tokens_map.json: Special tokens mapping.
  • vocab.json: Vocabulary file.
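
Because the weights are split across three shards, model.safetensors.index.json ties them together by mapping every tensor name to its shard file. As a minimal sketch (assuming only the repository layout listed above), the index can be downloaded on its own and inspected with huggingface_hub, without pulling the multi-gigabyte shards:

import json
from collections import Counter
from huggingface_hub import hf_hub_download

# Download only the small index file, not the weight shards
index_path = hf_hub_download("FatimaAziz/qwen2B_Finetunned4", "model.safetensors.index.json")

with open(index_path) as f:
    index = json.load(f)

# "weight_map" maps each tensor name to the shard file that contains it
shard_counts = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {n_tensors} tensors")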

Usage

To use this model, load it with the following code:

from transformers import AutoTokenizer, Qwen2VLForConditionalGeneration

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("FatimaAziz/qwen2B_Finetunned4")

# Load the model; from_pretrained reads the sharded safetensors weights
# directly, so no separate safetensors loader is needed
model = Qwen2VLForConditionalGeneration.from_pretrained("FatimaAziz/qwen2B_Finetunned4")
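
For image-text inference, Qwen2-VL models are normally driven through a processor (tokenizer plus image processor) rather than the tokenizer alone. The sketch below is illustrative: it assumes this repository (or the base Qwen2-VL checkpoint) provides the preprocessor configuration that AutoProcessor needs, and the image path example.jpg is a placeholder.

from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Hypothetical local image; replace with your own file
image = Image.open("example.jpg")

# The processor bundles the tokenizer and the image processor
processor = AutoProcessor.from_pretrained("FatimaAziz/qwen2B_Finetunned4")
model = Qwen2VLForConditionalGeneration.from_pretrained("FatimaAziz/qwen2B_Finetunned4")

# One user turn containing an image placeholder and a text prompt
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize the text and preprocess the image together
inputs = processor(text=[prompt], images=[image], return_tensors="pt")

# Generate and decode the answer
generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])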
Model Details

The fine-tuned model has about 2.21 billion parameters, stored as BF16 safetensors.