Mudasir692 committed
Commit 8a7f1e1
1 Parent(s): cd8c20c

Create README.md

---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Mudasir692
- **Model type:** transformer
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** Pegasus

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Bias, Risks, and Limitations

The model may not always generate coherent summaries, particularly for long inputs.

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
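One concrete limitation worth guarding against: Pegasus-style encoders accept a fixed maximum input length (typically 512–1024 subword tokens, depending on the checkpoint; the exact limit for this model should be confirmed). A minimal sketch of a pre-check, using whitespace word count as a rough proxy and a hypothetical `likely_truncated` helper (not part of this model's API):

```python
def likely_truncated(text: str, max_tokens: int = 1024) -> bool:
    """Rough pre-check: whitespace word count as a proxy for subword tokens.

    Subword tokenizers usually emit at least one token per word, so
    exceeding the limit in words means the encoder will truncate; use
    the real tokenizer for an exact count.
    """
    return len(text.split()) > max_tokens
```

Callers can use this to warn users, or to split long dialogues, before the input is silently cut off by `truncation=True`.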
## How to Get Started with the Model

```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Load the saved model and tokenizer
model_path = "peguses_chat_sum"
device = torch.device("cpu")

# Load the model and tokenizer from the saved directory
model = PegasusForConditionalGeneration.from_pretrained(model_path)
tokenizer = PegasusTokenizer.from_pretrained(model_path)

# Move the model to the correct device
model = model.to(device)


# Define the inference function
def inference(input_text):
    # Tokenize input text
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True)
    input_ids = inputs["input_ids"].to(device)
    attention_mask = inputs["attention_mask"].to(device)

    # Generate summary
    model.eval()
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=150,
            num_beams=8,
            early_stopping=True,
        )

    # Decode the generated text
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return generated_text


# Test with a sample input
test_input = """
#Person1#: Hey Alice, congratulations on your promotion!
#Person2#: Thank you so much! It means a lot to me. I’m still processing it, honestly.
#Person1#: You totally deserve it. Your hard work finally paid off. Let’s celebrate this weekend.
#Person2#: That sounds amazing. Dinner on me, okay?
#Person1#: Sure! Just let me know where and when. Oh, by the way, did you tell your family?
#Person2#: Yes, they were so excited. Mom’s already planning to bake a cake.
#Person1#: That’s wonderful! I’ll bring a gift too. It’s such a big milestone for you.
#Person2#: You’re the best. Thanks for always being so supportive.
"""
print("Summary Text:", inference(test_input))
```
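The example above relies on `truncation=True`, so dialogues longer than the encoder limit are silently cut off. One way to handle long conversations is to split them into chunks of whole turns and summarize each chunk separately; a minimal sketch, assuming one speaker turn per line and using character count as a rough stand-in for the token limit (the `chunk_dialogue` helper is hypothetical, not part of this model):

```python
def chunk_dialogue(dialogue: str, max_chars: int = 1500):
    """Split a multi-turn dialogue into chunks of whole turns,
    each at most max_chars characters (a rough proxy for the
    encoder's token limit). Turns are never split mid-line."""
    turns = [t.strip() for t in dialogue.strip().splitlines() if t.strip()]
    chunks, current = [], ""
    for turn in turns:
        if current and len(current) + len(turn) + 1 > max_chars:
            chunks.append(current)
            current = turn
        else:
            current = f"{current}\n{turn}" if current else turn
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be passed to `inference` and the partial summaries joined, at the cost of losing some cross-chunk context.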