Model Description

This model is a compressed version of the OpenHathi-7B-Hi base model, optimized for chat-format text in the Hindi language. It was quantized with the AWQ technique using calibration data from the samvaad-hi-v1 dataset. The compression reduces model size while preserving performance on chat-oriented tasks.
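The quantization step described above can be sketched with the AutoAWQ library. This is a minimal sketch, assuming standard AutoAWQ 4-bit settings; the repository paths, calibration texts, and group size below are illustrative assumptions, not the exact recipe used for this model.

```python
# Hedged sketch: 4-bit AWQ quantization of a base model with AutoAWQ.
# Paths and calibration details are assumptions for illustration only.

# Common AutoAWQ 4-bit configuration (GEMM kernel, group size 128).
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

def quantize_model(base_path, out_path, calib_texts):
    # Heavy step: requires a GPU and the full-precision weights on disk,
    # so the imports live inside the function.
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model = AutoAWQForCausalLM.from_pretrained(base_path)
    tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)

    # calib_texts: a list of raw chat-style Hindi strings (e.g. drawn from
    # samvaad-hi-v1, per the description above) used to calibrate scales.
    model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_texts)

    model.save_quantized(out_path)
    tokenizer.save_pretrained(out_path)
```

Calibrating on chat-style text rather than generic corpora keeps the activation statistics close to the deployment distribution, which is the point of AWQ's activation-aware scaling.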

Model Usage:

The compressed model can be used for natural language processing tasks involving chat-format Hindi text. It is suited to conversational AI systems, chatbots, or any application that needs efficient processing of chat-style interactions.
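As a sketch, the quantized checkpoint can be loaded for inference with AutoAWQ and transformers. The local model path and the Llama-2-style prompt template below are assumptions for illustration; check the tokenizer's own chat template before relying on a specific format.

```python
# Hedged sketch: chat inference with an AWQ-compressed model.
# The prompt template and model path are illustrative assumptions.

def build_chat_prompt(system, user):
    # Llama-2-style chat template (assumed, since OpenHathi extends Llama 2;
    # prefer tokenizer.apply_chat_template when one is defined).
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def generate(prompt, model_path="openhathi-7b-hi-awq"):
    # Requires a GPU and the quantized weights on disk,
    # so the imports live inside the function.
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
    output = model.generate(input_ids, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)

prompt = build_chat_prompt("आप एक सहायक हैं।", "नमस्ते, आप कैसे हैं?")
```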

Performance Metrics:

  • Model Size: 4.15 GB
  • Compression Technique: AWQ
  • Calibration Data: samvaad-hi-v1 dataset
  • Tokenizer Model Size: 968 KB
  • Performance: Evaluated on chat-oriented Hindi tasks, the compressed model handles conversational text efficiently while staying close to the quality of the original base model.

Limitations: The size reduction may come with slight performance trade-offs relative to the full-sized base model, and the model may underperform on tasks outside chat-oriented Hindi text.
