sekarmulyani committed
Commit 66528ff
1 Parent(s): 7af98f4
Update README.md
README.md
CHANGED
@@ -25,9 +25,9 @@ metrics:

This project pertains to an additional endeavor conducted to test and evaluate the accuracy of the natural language model GPT-2 in generating beauty product reviews within the electronic commerce environment. The focus of this experiment lies in the outcomes of fine-tuning the GPT-2 model using a dataset comprising product reviews with pre-classified ratings in a question-answer format.

-Despite the perplexity analysis indicating reasonable improvement in results, a profound observation revealed that the resultant model has not yet attained the desired level of accuracy. Even with a fine-tuning process spanning 20 epochs, the outcomes still exhibit
+Despite the perplexity analysis indicating a reasonable improvement, closer observation revealed that the resulting model has not yet attained the desired level of accuracy. Even with a fine-tuning process spanning 20 epochs, the outputs still exhibit some degree of inaccuracy on multiple occasions. It should be emphasized that the dataset used in this experiment did not undergo additional supervision to mitigate potential biases inherent in the product reviews.

-From this perspective, it can be inferred that the model necessitates further adjustments through the utilization of more diverse and accurate classification variations. The implications stemming from these findings run deeper, indicating that subjective evaluations from reviewers continue to exert substantial influence on the
+From this perspective, it can be inferred that the model requires further adjustment using more diverse and more accurately classified data. The broader implication of these findings is that subjective evaluations from reviewers continue to exert substantial influence on the generated texts. Consequently, a more meticulous and measured approach, drawing on a wider variety of resources, is needed during fine-tuning to reduce bias and improve the model's accuracy in producing more objective and precise product reviews on e-commerce platforms.

> This project is oriented towards academic pursuits and is undertaken as a stipulated requirement for graduation within the Information Systems undergraduate program at the Computer Science Faculty, Amikom University of Purwokerto.
---
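For readers of this card who want to reproduce the perplexity check referred to above, the following is a minimal sketch, not the author's original evaluation script: the checkpoint path and the Indonesian question-answer sample are placeholders, and only the standard Hugging Face `transformers` API is assumed.

```python
# Minimal sketch of a perplexity check for a fine-tuned GPT-2 checkpoint
# (assumed setup, not the author's original script).
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

checkpoint = "path/to/fine-tuned-gpt2"  # hypothetical local fine-tuned checkpoint
tokenizer = GPT2TokenizerFast.from_pretrained(checkpoint)
model = GPT2LMHeadModel.from_pretrained(checkpoint)
model.eval()

# A held-out sample in the rating-conditioned question-answer style the card describes
# (placeholder text, not taken from the dataset).
sample = "Ulasan rating 5 untuk serum wajah? Teksturnya ringan dan cepat meresap di kulit."

enc = tokenizer(sample, return_tensors="pt")
with torch.no_grad():
    # With labels equal to input_ids, the model returns the mean cross-entropy
    # over next-token predictions; exp(loss) is the per-token perplexity.
    out = model(**enc, labels=enc["input_ids"])

print(f"perplexity: {math.exp(out.loss.item()):.2f}")
```

Lower perplexity on held-out reviews means the fine-tuned model assigns higher probability to the reference text, which is the sense in which the card reports a reasonable improvement despite the remaining inaccuracies.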