---
language:
- en
- hi
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
- DIBT/10k_prompts_ranked
metrics:
- bleu
pipeline_tag: text-generation
model-index:
- name: JARVIS
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 32.08
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 56.86
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.15
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.33
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
tags:
- code
---
|
# Model Card for JARVIS
|
|
|
## Overview

This model is a conversational AI designed for natural language interaction with users. It is a causal language model (CLM) fine-tuned on conversational datasets to generate coherent, contextually relevant responses.
|
|
|
## Usage

To use this model, interact with it through the Hugging Face Inference API: provide a text prompt, and the model generates a response conditioned on that input.
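As a minimal local sketch, the model can also be loaded with the Transformers library. This assumes the repository id `VAIBHAV22334455/JARVIS` (taken from the leaderboard link at the end of this card) hosts a standard causal LM checkpoint compatible with `pipeline`; adjust generation parameters to taste.

```python
# Hedged sketch: assumes "VAIBHAV22334455/JARVIS" is a standard causal LM
# checkpoint loadable by transformers.pipeline; requires network access
# on first run to download the weights.
from transformers import pipeline

generator = pipeline("text-generation", model="VAIBHAV22334455/JARVIS")
reply = generator("Hello, who are you?", max_new_tokens=50, do_sample=True)
print(reply[0]["generated_text"])
```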
|
|
|
## Intended Use
|
This model is intended for various conversational applications, including chatbots, virtual assistants, and dialogue systems. It can be deployed in environments where human-like interactions are required, such as customer service, educational platforms, or entertainment applications. |
|
|
|
## Limitations and Ethical Considerations
|
While this model is capable of generating human-like responses, it may occasionally produce outputs that are inappropriate, offensive, or misleading. It is essential to monitor its responses and ensure responsible deployment to mitigate potential harms. |
|
|
|
## License
|
The model is released under the Apache License 2.0, which allows for both commercial and non-commercial use with proper attribution. |
|
|
|
## Acknowledgments
|
This model was trained using the Hugging Face Transformers library and fine-tuned on conversational datasets. We acknowledge the contributions of the open-source community and the developers of the Transformers library. |
|
|
|
## Contact Information
|
For inquiries or feedback regarding this model, please contact [your contact information]. |
|
|
|
## References
|
Provide any relevant references, citations, or links to resources used in training or developing this model. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
This model is a conversational AI system trained with a causal language modeling (CLM) objective and fine-tuned on large-scale conversational datasets to generate contextually relevant, coherent responses to user inputs. Built on transformer self-attention, it processes natural language inputs and can engage in human-like conversation across a wide range of topics and contexts.
|
|
|
### Architecture

The model consists of stacked transformer blocks, each combining self-attention with a feed-forward neural network. It uses positional encoding and layer normalization to capture sequential structure in text, and its parameters are optimized by gradient descent with backpropagation on conversational datasets.
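The scaled dot-product self-attention at the heart of each transformer block can be sketched as follows. This is an illustration only; the real model adds learned query/key/value projections, multiple heads, and causal masking.

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention, the core operation
# inside each transformer block described above (illustrative, not the
# model's actual implementation).
def self_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

tokens = np.random.rand(4, 8)  # 4 tokens, 8-dim embeddings
out = self_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8)
```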
|
|
|
### Fine-Tuning
|
During the fine-tuning process, the model is trained on conversational datasets, where it learns to generate appropriate responses based on input prompts. Fine-tuning involves adjusting the parameters of the pre-trained model to better suit the conversational task at hand, thereby improving its performance in generating contextually relevant and coherent responses. |
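The objective being minimized during this fine-tuning is next-token prediction: the average negative log-probability the model assigns to each correct next token. A toy illustration with hypothetical per-token probabilities:

```python
import math

# Toy illustration of the causal LM training objective: the loss is the
# mean negative log-probability of each correct next token. The input
# probabilities below are hypothetical, not model outputs.
def clm_loss(next_token_probs):
    return -sum(math.log(p) for p in next_token_probs) / len(next_token_probs)

print(round(clm_loss([0.5, 0.25, 0.125]), 4))  # 1.3863
```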
|
|
|
### Performance

The model is evaluated on standard benchmarks through the Open LLM Leaderboard (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8k); detailed scores appear at the end of this card. Qualitative aspects such as fluency, coherence, relevance, and engagement are assessed through conversational testing.
|
|
|
### Use Cases
|
This model can be deployed in a variety of conversational applications, including chatbots, virtual assistants, customer support systems, and interactive storytelling platforms. It can facilitate natural language interactions between users and systems, enhancing user experience and providing valuable assistance across different domains and industries. |
|
|
|
### Limitations and Ethical Considerations
|
While this model demonstrates advanced capabilities in generating human-like responses, it may occasionally produce outputs that are inappropriate, biased, or misleading. Careful monitoring and evaluation are necessary to ensure responsible deployment and mitigate potential risks, such as spreading misinformation or perpetuating harmful stereotypes. |
|
|
|
|
|
|
|
|
|
|
- **Developed by:** VAIBHAV VERMA
- **Model type:** Conversational AI (causal language model)
- **Language(s) (NLP):** English, Hindi
- **License:** Apache License 2.0
- **Inspired by:** OEvortex/vortex-3b
|
|
|
|
|
|
|
## Uses |
|
|
|
The model can be utilized in various conversational applications across different domains and industries. Some potential uses include: |
|
|
|
- **Chatbots:** Engage users in natural language conversations, providing assistance, answering questions, and offering recommendations.

- **Virtual assistants:** Help users with tasks such as scheduling appointments, setting reminders, and retrieving information from the web.

- **Customer support systems:** Handle customer inquiries, troubleshoot issues, and escalate complex queries to human agents when necessary.

- **Interactive storytelling:** Create immersive narrative experiences where users engage with virtual characters and influence the plot through their interactions.

- **Language learning:** Provide conversational practice and feedback to learners, helping them improve their language skills through realistic dialogue simulations.

- **Social media engagement:** Enable automated responses to comments, messages, and posts, along with personalized recommendations and conversational interactions.

- **Healthcare assistants:** Assist patients with medical inquiries, provide health-related information, and offer support for mental health and wellness.

- **Educational tools:** Power interactive tutoring systems, virtual classroom assistants, and language practice tools that engage students in conversational learning.
|
|
|
**Note:** This model marks my first deployment on the Hugging Face platform. I am grateful for the invaluable assistance of Vortex Bahi throughout development and deployment; their guidance and support were instrumental in bringing this project to fruition.
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAIBHAV22334455__JARVIS) |
|
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 35.78 |
| AI2 Reasoning Challenge (25-Shot) | 32.08 |
| HellaSwag (10-Shot)               | 56.86 |
| MMLU (5-Shot)                     | 27.15 |
| TruthfulQA (0-shot)               | 37.33 |
| Winogrande (5-shot)               | 60.14 |
| GSM8k (5-shot)                    |  1.14 |
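The reported average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Sanity check on the table above: the leaderboard average is the
# unweighted mean of the six benchmark scores.
scores = [32.08, 56.86, 27.15, 37.33, 60.14, 1.14]
avg = sum(scores) / len(scores)
print(round(avg, 2))  # 35.78
```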