---
license: mit
datasets:
- meta-llama/Meta-Llama-3.1-405B-Instruct-evals
language:
- en
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
tags:
- llama
- conversational
- text-generation
- emergency-response
- environmental-issues
---
This model card provides a detailed overview of the TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence model, which has been fine-tuned for use cases in emergency response and environmental issues. The model was developed as part of a hackathon and is designed to generate responses in these domains.
Model Details
Model Description

The TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence model is a fine-tuned version of TinyLlama-1.1B-Chat-v1.0, optimized for generating text in response to queries related to emergencies and environmental issues. The model was trained on synthetic data generated with the Meta-Llama-3.1-405B-Instruct-Turbo model, and the fine-tuning run was conducted on Kaggle using T4 x2 GPUs in approximately two hours.

    Developed by: Mixed Intelligence Team
        Umar Majeed (Team Lead) www.linkedin.com/in/umarmajeedofficial
        Moazzan Hassan https://www.linkedin.com/in/moazzan-hassan/
        Shahroz Butt https://www.linkedin.com/in/shahroz-butt-69a813211?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
        Sidra Hammed https://www.linkedin.com/in/sidra-hameed-8s122000?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
        Muskan Liaqat https://www.linkedin.com/in/muskan-liaquat-838880308?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
        Sana Qaisar https://www.linkedin.com/in/sana-qaisar-03b354316/

Model Sources

    Repository: https://huggingface.co/umarmajeedofficial/TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence
    

Uses
Direct Use

The TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence model is intended to be used directly in applications requiring text generation related to emergencies and environmental issues. It is suitable for chatbot implementations, emergency response systems, and educational tools focusing on environmental awareness.
Downstream Use

The model can be further fine-tuned or integrated into larger systems where specific domain knowledge or custom applications are required.
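As an illustration of further fine-tuning, the fragment below sketches a parameter-efficient (LoRA) configuration using the Hugging Face peft library. The hyperparameters shown (r, lora_alpha, target modules) are illustrative assumptions, not the settings used to produce this model.

```python
from peft import LoraConfig

# Illustrative LoRA configuration for adapting the model to a new domain.
# All values here are assumptions; tune them for your own dataset.
lora_config = LoraConfig(
    r=16,                                # low-rank adapter dimension
    lora_alpha=32,                       # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"], # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
# Apply with peft.get_peft_model(base_model, lora_config),
# then train with a standard Trainer/SFTTrainer loop on your data.
```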
Out-of-Scope Use

The model is not suitable for general-purpose text generation tasks unrelated to its fine-tuned domain. Using the model to generate harmful or misleading information is strongly discouraged.
Bias, Risks, and Limitations

The TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence model, while fine-tuned on specific data, may still exhibit biases present in the original TinyLlama model. Users should be aware of potential biases, especially in sensitive contexts such as emergency responses.
Recommendations

    Awareness: Users should be mindful of the model's limitations and biases.
    Testing: It is recommended to test the model thoroughly in the intended environment before deployment.

How to Get Started with the Model

To get started with the model, use the code snippet below:

    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="umarmajeedofficial/TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    messages = [
        {
            "role": "system",
            "content": "You are an emergency response assistant with expertise in environmental issues.",
        },
        {"role": "user", "content": "What should I do during a heat wave?"},
    ]
    prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    print(outputs[0]["generated_text"])
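Note that the text-generation pipeline returns the prompt together with the completion in generated_text. A small helper like the sketch below can isolate the model's reply (alternatively, pass return_full_text=False to the pipeline call):

```python
def extract_reply(generated_text: str, prompt: str) -> str:
    """Strip the prompt prefix so only the model's completion remains."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()

# Minimal example with a stand-in chat-formatted prompt:
prompt = "<|user|>\nWhat should I do during a heat wave?</s>\n<|assistant|>\n"
full = prompt + "Stay hydrated and avoid direct sun."
print(extract_reply(full, prompt))  # Stay hydrated and avoid direct sun.
```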





Training Details
Training Data

The model was fine-tuned on synthetic data generated with the Meta-Llama-3.1-405B-Instruct-Turbo model. This data was designed to cover a wide range of emergency and environmental scenarios and comprises approximately 2,002 question-answer pairs.
Training Procedure

    Preprocessing: The data was preprocessed to ensure relevance and quality for the fine-tuning task.
    Training Hyperparameters: The model was trained using a mixed precision training regime on Kaggle with T4 x2 GPUs.

Evaluation
Testing Data, Factors & Metrics
Testing Data

Testing was conducted on a subset of the synthetic data generated, with evaluations focusing on the model's ability to provide accurate and contextually appropriate responses in emergency and environmental scenarios.
Metrics

    Accuracy: The model's ability to generate correct information.
    Relevance: The relevance of the generated text to the input query.
    Bias Analysis: Evaluation of potential biases in the responses.
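As a simple illustration of the accuracy metric above, an exact-match score over question-answer pairs could be computed as follows (a minimal sketch; the actual evaluation harness used by the team is not published):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that match their reference answer exactly,
    ignoring case and surrounding whitespace."""
    if not references:
        return 0.0
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

preds = ["Stay hydrated and avoid direct sun.", "Call emergency services."]
refs = ["stay hydrated and avoid direct sun.", "Evacuate immediately."]
print(exact_match_accuracy(preds, refs))  # 0.5
```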

Results

The TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence model showed strong performance in generating accurate and relevant responses within its fine-tuned domain. Further details on evaluation metrics and results can be found in the repository.
Environmental Impact

The environmental impact of training the model was minimized by leveraging efficient hardware and cloud resources. The model was trained on Kaggle with T4 x2 GPUs, balancing performance and energy consumption.

    Hardware Type: T4 x2 GPUs
    Hours used: Approximately 2 hours
    Cloud Provider: Kaggle
   

Technical Specifications
Model Architecture and Objective

The TinyLlama-1.1B-Chat-v1.0-FineTuned-By-MixedIntelligence model is built on the TinyLlama architecture, with a focus on conversational text generation.
Compute Infrastructure

    Hardware: T4 x2 GPUs
    Software: The model was fine-tuned using the Hugging Face Transformers library.



Model Card Authors

    Umar Majeed (Team Lead) www.linkedin.com/in/umarmajeedofficial
    Mixed Intelligence Team Members:
        Moazzan Hassan https://www.linkedin.com/in/moazzan-hassan/
        Shahroz Butt https://www.linkedin.com/in/shahroz-butt-69a813211?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
        Sidra Hammed https://www.linkedin.com/in/sidra-hameed-8s122000?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
        Muskan Liaqat https://www.linkedin.com/in/muskan-liaquat-838880308?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app
        Sana Qaisar https://www.linkedin.com/in/sana-qaisar-03b354316/

Model Card Contact

For further information or inquiries, please contact Umar Majeed via Hugging Face.