---
base_model: unsloth/llama-3-8b-Instruct
license: llama3
datasets:
- LogCreative/latex-pgfplots-instruct
language:
- en
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
---
## Usage
This model is saved in [MLC LLM](https://llm.mlc.ai) format.
See the [MLC LLM installation guide](https://llm.mlc.ai/docs/install/mlc_llm) for how to install the library.
Then run the following command in the model directory to try the model:
```bash
mlc_llm chat .
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is finetuned from the Llama 3 LLM to generate more accurate LaTeX code for the `pgfplots` package. It is trained on the dataset [LogCreative/latex-pgfplots-instruct](https://huggingface.co/datasets/LogCreative/latex-pgfplots-instruct), which is extracted from the documentation of the [`pgfplots`](https://github.com/pgf-tikz/pgfplots) LaTeX package.
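For illustration, here is a minimal, compilable example of the kind of `pgfplots` code the model targets (the specific plot and axis options are illustrative, not drawn from the training data):

```latex
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.18}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[xlabel={$x$}, ylabel={$x^2$}]
    % A simple line plot from explicit coordinates
    \addplot[blue, mark=*] coordinates {(0,0) (1,1) (2,4) (3,9)};
  \end{axis}
\end{tikzpicture}
\end{document}
```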
- **Developed by:** [LogCreative](https://github.com/LogCreative)
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Llama 3
- **Finetuned from model:** [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [LogCreative/llama-pgfplots-finetune](https://github.com/LogCreative/llama-pgfplots-finetune)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is intended to generate `pgfplots` LaTeX code according to the user's prompt.
It is suitable for users who are not familiar with the interface of the `pgfplots` package
or who do not want to consult the documentation to achieve their goal.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[PGFPlotsEdt](https://github.com/LogCreative/PGFPlotsEdt): A PGFPlots Statistic Graph Interactive Editor.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
For any use outside the `pgfplots` package, the model can only be expected to perform at the level of the base Llama 3 model.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model cannot provide sufficient information on other LaTeX packages and cannot guarantee the correctness of the generated results.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
If you cannot get the correct result from this model, consult the original `pgfplots` documentation for more information.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[LogCreative/latex-pgfplots-instruct](https://huggingface.co/datasets/LogCreative/latex-pgfplots-instruct): a dataset containing instructions and corresponding outputs related to the `pgfplots` and `pgfplotstable` LaTeX packages.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
This model is finetuned on the dataset above using the [`unsloth`](https://github.com/unslothai/unsloth) library.
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The evaluation is based on the successful compilation rate of the output LaTeX code on the test dataset.
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[LogCreative/latex-pgfplots-instruct](https://huggingface.co/datasets/LogCreative/latex-pgfplots-instruct): the test split of this dataset contains instructions related only to the `pgfplots` package.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
During testing, a prompt prefix is added that tells the model its role and the requested response format: output only the code, without any explanation.
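As a sketch of this setup (the exact wording of the prefix used in the evaluation is not published in this card, so the phrasing below is an assumption):

```python
# Hypothetical prompt prefix constraining the role and response format;
# the actual prefix used in the evaluation may differ.
PROMPT_PREFIX = (
    "You are a LaTeX assistant specialized in the pgfplots package. "
    "Respond with only the LaTeX code, without any explanation."
)

def build_prompt(instruction: str) -> str:
    """Prepend the role/format prefix to a test instruction."""
    return f"{PROMPT_PREFIX}\n\n{instruction}"
```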
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Successful compilation rate:
$$\frac{\text{\#Success compilation}}{\text{\#Total compilation}}\times 100\%$$
An unsuccessful compilation is either a LaTeX failure or a timeout (compilation time > 20 s).
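The metric above can be computed as follows; the result type and field names here are illustrative, not taken from the evaluation harness:

```python
from dataclasses import dataclass

TIMEOUT_SECONDS = 20.0  # compilations longer than this count as failures

@dataclass
class CompilationResult:
    succeeded: bool  # LaTeX compiler exited with status 0
    seconds: float   # wall-clock compilation time

def success_rate(results: list[CompilationResult]) -> float:
    """Successful compilations / total compilations, as a percentage.

    A run counts as successful only if LaTeX succeeded AND it
    finished within the timeout."""
    ok = sum(1 for r in results
             if r.succeeded and r.seconds <= TIMEOUT_SECONDS)
    return 100.0 * ok / len(results)
```

For example, two successes out of four runs (one run succeeding but exceeding the timeout) yields 50.0.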
### Results
The test is based on the unquantized model in fp16 precision.
- Llama 3: 34%
- **This model: 52% (+18 percentage points)**
#### Summary
This model is expected to output LaTeX code related to the `pgfplots` package with fewer errors than the baseline Llama 3 model.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
- **Hardware Type:** Nvidia A100 80G
- **Hours used:** 1 h (10 min training + 50 min testing)
- **Cloud Provider:** Private infrastructure
- **Carbon Emitted:** 0.11 kg CO2 eq.
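The reported figure is consistent with, for example, an assumed average draw of about 400 W for the A100 and a grid intensity of roughly 0.275 kg CO2 eq. per kWh; both values are assumptions for illustration, not measurements from this run:

```python
# Assumed values for illustration only; not measured for this run.
hours = 1.0                  # 10 min training + 50 min testing
avg_power_kw = 0.4           # assumed average A100 80G draw (kW)
grid_kg_co2_per_kwh = 0.275  # assumed grid carbon intensity

energy_kwh = hours * avg_power_kw
emissions_kg = energy_kwh * grid_kg_co2_per_kwh  # about 0.11 kg CO2 eq.
```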
### Framework versions
- PEFT 0.11.1
- MLC LLM nightly_cu122-0.1.dev1404
- MLC AI nightly_cu122-0.15.dev404
- Unsloth 2024.6