---
license: llama3
---
See our paper at https://arxiv.org/abs/2405.17743
GitHub repo: https://github.com/Cardinal-Operations/ORLM
## Model Details
LLaMA-3-8B-ORLM is fully fine-tuned on the OR-Instruct data and built on the Meta [LLaMA-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model.
More training details can be found in our paper: https://arxiv.org/abs/2405.17743
## Model Usage
Prompting Template:
```text
Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.
# Question:
{Question}
# Response:
```
Please replace `{Question}` with any natural-language OR question.
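For reference, a minimal inference sketch using the Hugging Face `transformers` library is shown below. The model identifier, generation settings, and question text are illustrative assumptions, not prescribed values; replace `MODEL_ID` with the actual repository id or local path of this model.

```python
# Minimal sketch (assumptions: `transformers` and `torch` are installed;
# MODEL_ID is a hypothetical placeholder for this model's repo id or local path).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<path-or-repo-id-of-this-model>"  # placeholder, replace before running

PROMPT_TEMPLATE = (
    "Below is an operations research question. Build a mathematical model and "
    "corresponding python code using `coptpy` that appropriately addresses the question.\n"
    "# Question:\n"
    "{Question}\n"
    "# Response:"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

question = "A lab has 1000 units of medicinal ingredients ..."  # any natural-language OR question
inputs = tokenizer(PROMPT_TEMPLATE.format(Question=question), return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False)  # greedy decoding
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(completion)
```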
## Examples
```text
--------------------------OR question wrapped in the template--------------------------
Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.
# Question:
A lab has 1000 units of medicinal ingredients to make two pills, a large pill and a small pill. A large pill requires 3 units of medicinal ingredients and 2 units of filler. A small pill requires 2 units of medicinal ingredients and 1 unit of filler. The lab has to make at least 100 large pills. However, since small pills are more popular at least 60% of the total number of pills must be small. How many of each should be made to minimize the total number of filler material needed?
# Response:
--------------------------top-1 completion output by LLaMA-3-8B-ORLM under greedy decoding--------------------------
## Mathematical Model:
To solve the lab's pill production problem, we need to construct a linear programming model to minimize the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. Here is the model formulation:
### Decision Variables:
- \(x\): Number of large pills to be made.
- \(y\): Number of small pills to be made.
### Objective Function:
- Minimize the total number of filler material needed: \[Z = 2x + y\]
### Constraints:
1. Medicinal ingredient usage constraint: \[3x + 2y \leq 1000\]
2. Large pill production requirement: \[x \geq 100\]
3. Proportion constraint for small pills: \[y \geq 0.6(x + y)\]
4. Non-negativity constraint: \[x, y \geq 0\]
### Integer Constraints:
- \(x, y\) must be integers, as the number of pills cannot be fractional.
Using this model, we can apply linear programming techniques to find the optimal number of large and small pills to be made, minimizing the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. This will provide the lab with an economically efficient pill production strategy.
## Python Code Solution Using `coptpy`:
Here is a Python script using the `coptpy` library to solve the problem:
\`\`\`python
import coptpy as cp
from coptpy import COPT
# Create a COPT environment
env = cp.Envr()
# Create the model
model = env.createModel("PillProduction")
# Add decision variables
x = model.addVar(vtype=COPT.INTEGER, name="x") # Number of large pills
y = model.addVar(vtype=COPT.INTEGER, name="y") # Number of small pills
# Objective function: Minimize the total number of filler material needed
model.setObjective(2 * x + y, sense=COPT.MINIMIZE)
# Constraints
model.addConstr(3 * x + 2 * y <= 1000, name="MaterialLimit")
model.addConstr(x >= 100, name="LargePillRequirement")
model.addConstr(y >= 0.6 * (x + y), name="SmallPillProportion")
# Solve the model
model.solve()
# Output results
if model.status == COPT.OPTIMAL:
    print("Minimum total number of filler material needed: {:.0f} units".format(model.objval))
    print("Number of large pills to be made: {:.0f}".format(x.x))
    print("Number of small pills to be made: {:.0f}".format(y.x))
else:
    print("No optimal solution found.")
\`\`\`
In this script, we first create a `COPT` environment and model. Then, we add two integer decision variables `x` and `y`, representing the number of large and small pills to be made, respectively.
Next, we set the objective function to minimize the total number of filler material needed, which is `2 * x + y`.
Then, we add three constraints:
1. Medicinal ingredient usage constraint: `3 * x + 2 * y <= 1000`
2. Large pill production requirement: `x >= 100`
3. Proportion constraint for small pills: `y >= 0.6 * (x + y)`
Finally, we solve the model and output the results of the optimal solution, including the minimum total number of filler material needed and the number of large and small pills to be made.
This script provides a complete example of using the `coptpy` library to solve the lab's pill production problem, while satisfying all the constraints mentioned in the problem.
```
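For readers without a COPT license, the generated model can be cross-checked with any open-source MILP solver. The sketch below reformulates the same problem with `pulp` (an assumption of this card, not part of the ORLM pipeline or the model's output); it should report 100 large pills, 150 small pills, and 350 units of filler.

```python
# Cross-check of the example model with PuLP's bundled CBC solver
# (assumption: the `pulp` package is installed; not part of the original example).
import pulp

prob = pulp.LpProblem("PillProduction", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")  # large pills
y = pulp.LpVariable("y", lowBound=0, cat="Integer")  # small pills

prob += 2 * x + y                      # objective: filler used
prob += 3 * x + 2 * y <= 1000          # medicinal ingredient limit
prob += x >= 100                       # minimum number of large pills
prob += y >= 0.6 * (x + y)             # at least 60% of pills must be small

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective), x.value(), y.value())
```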
## Performance
Below is a comparison of performance on the NL4OPT, MAMO, and IndustryOR benchmarks. Values marked with a <sup>*</sup> are copied directly from the original papers; blanks indicate results that were not reported. The best results are highlighted in bold.
| **Method** | **NL4OPT** | **MAMO EasyLP** | **MAMO ComplexLP** | **IndustryOR** | **Micro Avg** | **Macro Avg** |
|------------------------------------------------|-------------------------|-----------------------|----------------------|-------------------|-----------------|-----------------|
| *Methods based on PLMs* | | | | | | |
| `tag-BART` | 47.9%<sup>*</sup> | - | - | - | - | - |
| *Methods based on GPT-3.5* | | | | | | |
| `Standard` | 42.4%<sup>*</sup> | - | - | - | - | - |
| `Reflexion` | 50.7%<sup>*</sup> | - | - | - | - | - |
| `Chain-of-Experts` | 58.9%<sup>*</sup> | - | - | - | - | - |
| *Methods based on GPT-4* | | | | | | |
| `Standard` | 47.3%<sup>*</sup> | 66.5%<sup>*</sup> | 14.6%<sup>*</sup> | 28.0% | 50.2% | 39.1% |
| `Reflexion` | 53.0%<sup>*</sup> | - | - | - | - | - |
| `Chain-of-Experts` | 64.2%<sup>*</sup> | - | - | - | - | - |
| `OptiMUS` | 78.8%<sup>*</sup> | - | - | - | - | - |
| *ORLMs based on open-source LLMs* | | | | | | |
| `ORLM-Mistral-7B` | 84.4% | 81.4% | 32.0% | 27.0% | 68.8% | 56.2% |
| `ORLM-Deepseek-Math-7B-Base` | **86.5%** | 82.2% | **37.9%** | 33.0% | 71.2% | 59.9% |
| `ORLM-LLaMA-3-8B` | 85.7% | **82.3%** | 37.4% | **38.0%** | **71.4%** | **60.8%** |
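As a reading aid, the macro average is the unweighted mean of the four benchmark accuracies, while the micro average weights each benchmark by its number of test instances. The sketch below illustrates the two formulas; the accuracies are taken from the ORLM-LLaMA-3-8B row above, but the instance counts `n` are hypothetical placeholders, not the real benchmark sizes.

```python
# Illustration of macro vs. micro averaging; the `n` values are placeholders.
results = {
    "NL4OPT":         {"acc": 0.857, "n": 250},
    "MAMO EasyLP":    {"acc": 0.823, "n": 600},
    "MAMO ComplexLP": {"acc": 0.374, "n": 200},
    "IndustryOR":     {"acc": 0.380, "n": 100},
}

macro_avg = sum(r["acc"] for r in results.values()) / len(results)
micro_avg = (sum(r["acc"] * r["n"] for r in results.values())
             / sum(r["n"] for r in results.values()))
print(f"macro = {macro_avg:.1%}, micro = {micro_avg:.1%}")  # macro ≈ 60.8%, as in the table
```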
## Citation
```bibtex
@article{tang2024orlm,
title={ORLM: Training Large Language Models for Optimization Modeling},
author={Tang, Zhengyang and Huang, Chenyu and Zheng, Xin and Hu, Shixi and Wang, Zizhuo and Ge, Dongdong and Wang, Benyou},
journal={arXiv preprint arXiv:2405.17743},
year={2024}
}
```
```bibtex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/).