---
license: mit
datasets:
- stanfordnlp/imdb
- stanfordnlp/sst2
- Iliab/emotion_dataset
- fancyzhx/ag_news
- CogComp/trec
- microsoft/ms_marco
- CoIR-Retrieval/CodeSearchNet-go-queries-corpus
- CoIR-Retrieval/CodeSearchNet-ccr-javascript-queries-corpus
- KomeijiForce/CommonsenseQA-Explained-by-ChatGPT
- Skylion007/openwebtext
- takala/financial_phrasebank
language:
- fa
- en
- es
- ru
- de
metrics:
- accuracy
- precision
- f1
- recall
- roc_auc
- bleu
- rouge
- perplexity
- mse
library_name: transformers
---

# Model Card for Model ID

This model card is intended as a base template for new models. It was generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Example Code

Loading a dataset and computing evaluation metrics:

```python
from datasets import load_dataset
import evaluate  # replaces the deprecated datasets.load_metric

# Load the IMDB dataset
dataset = load_dataset("stanfordnlp/imdb")

# Load the evaluation metrics
accuracy_metric = evaluate.load("accuracy")
precision_metric = evaluate.load("precision")
recall_metric = evaluate.load("recall")
f1_metric = evaluate.load("f1")

# Example of how to use the metrics
predictions = [0, 1, 1, 0]  # model predictions
references = [0, 1, 0, 0]   # ground-truth labels

accuracy = accuracy_metric.compute(predictions=predictions, references=references)
precision = precision_metric.compute(predictions=predictions, references=references)
recall = recall_metric.compute(predictions=predictions, references=references)
f1 = f1_metric.compute(predictions=predictions, references=references)

print(f"Accuracy: {accuracy['accuracy']}")
print(f"Precision: {precision['precision']}")
print(f"Recall: {recall['recall']}")
print(f"F1 Score: {f1['f1']}")
```

Sentiment analysis:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Load the model and tokenizer (replace with the name of your fine-tuned model)
model_name = "your-fine-tuned-model-name"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create a pipeline for sentiment analysis
sentiment_analysis = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

# Analyze the sentiment of a text
text = "I love using Hugging Face transformers!"
result = sentiment_analysis(text)
print(result)
```

Text generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model and tokenizer (replace with the name of your fine-tuned model)
model_name = "your-fine-tuned-model-name"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create a pipeline for text generation
text_generation = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate text from a prompt
prompt = "Once upon a time"
generated_text = text_generation(prompt, max_length=50)
print(generated_text)
```

Translation:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

# Load the model and tokenizer (replace with the name of your fine-tuned model)
model_name = "your-fine-tuned-model-name"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create a pipeline for English-to-French translation
translation = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer)

# Translate a text
text = "How are you?"
translated_text = translation(text)
print(translated_text)
```
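Since `bleu` is listed among this card's metrics, translation outputs can also be scored programmatically. Below is a minimal sketch using the `evaluate` library; the prediction and reference strings are hypothetical placeholders, not outputs of any particular model.

```python
import evaluate

# Hypothetical translation output and reference translations (placeholders)
predictions = ["Comment allez-vous ?"]
references = [["Comment allez-vous ?", "Comment vas-tu ?"]]

# BLEU (one of the metrics declared in this card) via the evaluate library
bleu = evaluate.load("bleu")
score = bleu.compute(predictions=predictions, references=references)
print(score)
```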
Question answering:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

# Load the model and tokenizer (replace with the name of your fine-tuned model)
model_name = "your-fine-tuned-model-name"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create a pipeline for question answering
question_answering = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Answer a question given a context
context = "Hugging Face is creating a tool that democratizes AI."
question = "What is Hugging Face creating?"
answer = question_answering(question=question, context=context)
print(answer)
```

Serving sentiment analysis with Flask:

```python
from flask import Flask, request, jsonify
from transformers import pipeline, AutoModelForSequenceClassification, AutoTokenizer

app = Flask(__name__)

# Load the model and tokenizer (replace with the name of your fine-tuned model)
model_name = "your-fine-tuned-model-name"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Create a pipeline for sentiment analysis
sentiment_analysis = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

@app.route('/analyze', methods=['POST'])
def analyze():
    # Expect a JSON payload of the form {"text": "..."}
    data = request.json
    text = data['text']
    result = sentiment_analysis(text)
    return jsonify(result)

if __name__ == '__main__':
    # debug=True is for local development only
    app.run(debug=True)
```
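A minimal client sketch for the `/analyze` route above, assuming the Flask app is running locally on its default port (5000) and that the `requests` package is installed; the URL and payload text are illustrative.

```python
import requests

# Send a text to the hypothetical local endpoint exposed by the Flask app above
response = requests.post(
    "http://127.0.0.1:5000/analyze",
    json={"text": "I love using Hugging Face transformers!"},
)
print(response.json())
```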