Update README.md
widget:
- text: "When was the second satellite for the BeiDou-2 system launched?"
  context: "In April 2007, the first satellite of BeiDou-2, namely Compass-M1 (to validate frequencies for the BeiDou-2 constellation) was successfully put into its working orbit. The second BeiDou-2 constellation satellite Compass-G2 was launched on 15 April 2009. On 15 January 2010, the official website of the BeiDou Navigation Satellite System went online, and the system's third satellite (Compass-G1) was carried into its orbit by a Long March 3C rocket on 17 January 2010. On 2 June 2010, the fourth satellite was launched successfully into orbit. The fifth orbiter was launched into space from Xichang Satellite Launch Center by an LM-3I carrier rocket on 1 August 2010. Three months later, on 1 November 2010, the sixth satellite was sent into orbit by LM-3C. Another satellite, the Beidou-2/Compass IGSO-5 (fifth inclined geosynchronous orbit) satellite, was launched from the Xichang Satellite Launch Center by a Long March-3A on 1 December 2011 (UTC)."
  example_title: "BeiDou_Navigation_Satellite_System"
---

# Model Card: Fine-tuned DistilBERT-base-uncased for Question Answering V2

## Model Description

## Overview

The model presented here is a fine-tuned version of DistilBERT-base-uncased, trained on an updated dataset. DistilBERT is a compact variant of BERT optimized for efficiency, and this model is tailored for natural language processing tasks with a primary focus on question answering. Training on a diverse, contemporary dataset helps it adapt to a wide range of linguistic nuances and semantic subtleties, and the fine-tuning process refines its understanding of context, making it a robust solution for question answering applications.

## Intended Use

This fine-tuned DistilBERT-base-uncased model is designed for versatile natural language processing applications. Its adaptability makes it well suited to a broad range of tasks, including but not limited to text classification, sentiment analysis, and named entity recognition. Because efficacy and robustness vary across applications, users are strongly advised to run a performance assessment on their own tasks and datasets before relying on the model.

In this specific instance, the model was trained with a focus on question answering: the training process was optimized to improve its understanding of contextual information and its ability to generate accurate, relevant responses. Users targeting similar applications should evaluate the model against question answering benchmarks to confirm it aligns with their intended use case, as in the sketch below.
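
As a minimal sketch of such an evaluation, the snippet below scores the model's pipeline output with the SQuAD metric from the Hugging Face `evaluate` library (assumed installed). The single example record is an illustrative stand-in, not part of the model's actual benchmark data.

```python
# Hypothetical evaluation sketch: exact-match and F1 on SQuAD-style records.
import evaluate
from transformers import pipeline

qa = pipeline("question-answering", model="Falconsai/question_answering")
squad_metric = evaluate.load("squad")

# Illustrative stand-in example; substitute your own benchmark data.
examples = [
    {
        "id": "ex1",
        "question": "When was the second satellite for the BeiDou-2 system launched?",
        "context": (
            "The second BeiDou-2 constellation satellite Compass-G2 "
            "was launched on 15 April 2009."
        ),
        "answers": {"text": ["15 April 2009"], "answer_start": [71]},
    },
]

predictions = [
    {
        "id": ex["id"],
        "prediction_text": qa(question=ex["question"], context=ex["context"])["answer"],
    }
    for ex in examples
]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]

# Returns a dict with "exact_match" and "f1", the standard SQuAD metrics.
print(squad_metric.compute(predictions=predictions, references=references))
```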

## Training Data

The model was fine-tuned on an updated dataset collected from diverse sources to enhance its performance on a broad range of natural language understanding tasks.

## Model Architecture

The underlying architecture is DistilBERT-base-uncased, a variant designed to be both smaller and computationally more efficient than its precursor, BERT. It retains a substantial portion of BERT's performance while demanding significantly fewer computational resources. DistilBERT achieves this efficiency through knowledge distillation, in which the smaller student model is trained to mimic the behavior and output distributions of the larger BERT teacher, resulting in a streamlined yet effective representation of language understanding. This reduction in complexity makes the model particularly well suited to scenarios where computational resources are constrained, without compromising the quality of its predictions.

Moreover, the choice of DistilBERT as the base architecture aligns with the broader trend toward models that balance performance and resource efficiency. Researchers and practitioners increasingly favor such distilled architectures for their pragmatic benefits in deployment, inference speed, and versatility across computational environments. A sketch of the distillation objective appears below.
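
To make the distillation idea concrete, here is a minimal, illustrative sketch of a soft-label distillation loss in PyTorch. It is not the exact recipe used to train DistilBERT (which also combines a language-modeling loss and other terms), and the temperature value is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F

def distillation_loss(
    student_logits: torch.Tensor,
    teacher_logits: torch.Tensor,
    temperature: float = 2.0,  # placeholder value; tuned in practice
) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2
```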

### How to Use

To use this model for question answering, you can follow these steps:

```python
from transformers import pipeline

question = "What would happen to the carmine pigment if not used diligently?"
context = "The painters of the early Renaissance used two traditional lake pigments, made from mixing dye with either chalk or alum, kermes lake, made from kermes insects, and madder lake, made from the rubia tinctorum plant. With the arrival of cochineal, they had a third, carmine, which made a very fine crimson, though it had a tendency to change color if not used carefully. It was used by almost all the great painters of the 15th and 16th centuries, including Rembrandt, Vermeer, Rubens, Anthony van Dyck, Diego Velázquez and Tintoretto. Later it was used by Thomas Gainsborough, Seurat and J.M.W. Turner."

# The pipeline handles tokenization, inference, and answer decoding in one call.
question_answerer = pipeline("question-answering", model="Falconsai/question_answering")
question_answerer(question=question, context=context)
```
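
The pipeline returns a dictionary containing the answer string, its character offsets within the context, and a confidence score, of the form `{'score': ..., 'start': ..., 'end': ..., 'answer': ...}`.

For finer control, you can also run the tokenizer and model directly: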

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

question = "On which date did Swansea City play its first Premier League game?"
context = "In 2011, a Welsh club participated in the Premier League for the first time after Swansea City gained promotion. The first Premier League match to be played outside England was Swansea City's home match at the Liberty Stadium against Wigan Athletic on 20 August 2011. In 2012–13, Swansea qualified for the Europa League by winning the League Cup. The number of Welsh clubs in the Premier League increased to two for the first time in 2013–14, as Cardiff City gained promotion, but Cardiff City was relegated after its maiden season."

tokenizer = AutoTokenizer.from_pretrained("Falconsai/question_answering")
inputs = tokenizer(question, context, return_tensors="pt")

model = AutoModelForQuestionAnswering.from_pretrained("Falconsai/question_answering")
with torch.no_grad():
    outputs = model(**inputs)

# The model scores every token as a candidate answer start and end;
# the highest-scoring indices delimit the predicted span.
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
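
Note that taking the argmax of the start and end logits independently is a greedy shortcut: it can occasionally select an end index that precedes the start index, yielding an empty span. More robust decoders search for the highest-scoring valid (start, end) pair instead.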

## Ethical Considerations

Care has been taken to minimize biases in the training data. However, biases may still be present, and users are encouraged to evaluate the model's predictions for potential bias and fairness concerns, especially when applied to different demographic groups.

## Limitations

While this model performs well on standard benchmarks, it may not generalize optimally to all datasets or tasks. Users are advised to conduct thorough evaluation and testing in their specific use case.

## Contact Information

For inquiries or issues related to this model, please visit [https://falcons.ai/](https://falcons.ai/).

---