Fan21 committed on
Commit 45728f6
1 Parent(s): 4b3e1d9

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -7,8 +7,7 @@ pipeline_tag: image-to-text
 # git_20
 
 <!-- Provide a quick summary of what the model is/does. -->
-
- This model is fine-tuned with LLaMA with 8 Nvidia A100-80G GPUs using 3,000,000 groups of conversations in the context of mathematics by students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and finetune the model to help construct math conversational AI that can effectively respond generation in a mathematical context.
+ This model is fine-tuned from Microsoft GIT with 1 Nvidia A100-80G GPU. We extracted 100,000 student assignments containing teacher evaluations from a pool of 3 million student assignments as training data. The training data is divided into an image part (the student assignments) and a text part (the teacher evaluations). git_20 consists of 18 layers and over 170 million parameters, consuming up to 0.7 gigabytes of disk space. The project aims to use multi-modal, multi-task deep learning models to create a machine learning pipeline that provides automatic diagnostic feedback on students' mathematical reasoning. Researchers can experiment with and fine-tune the model to help construct multimodal models that can effectively provide such diagnostic feedback.
 ### Here is how to use it with texts in HuggingFace
 ```python
 from transformers import AutoModelForCausalLM
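
The hunk above ends inside the README's code block, so here is a hedged, self-contained sketch of how a GIT-style image-to-text checkpoint like this is typically used with `transformers`. The repo id `Fan21/git_20` and the file name `student_assignment.png` are assumptions inferred from the commit author and model name, not confirmed by this commit.

```python
# Minimal usage sketch, not part of this commit. Assumes the fine-tuned
# checkpoint is published as "Fan21/git_20" (hypothetical repo id) and
# follows the standard GIT image-to-text API in transformers.
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

MODEL_ID = "Fan21/git_20"  # assumption; substitute the actual repo id

# GIT checkpoints ship an AutoProcessor (image preprocessing plus
# tokenizer) alongside the causal-LM weights.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Encode an image of a student assignment and generate feedback text.
image = Image.open("student_assignment.png").convert("RGB")  # hypothetical file
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_new_tokens=64)
feedback = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(feedback)
```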