
Quantization made by Richard Erkhov.

Github | Discord | Request more models

ORLM-LLaMA-3-8B - GGUF

Original model description:

license: llama3

See our paper at https://arxiv.org/abs/2405.17743

GitHub repo: https://github.com/Cardinal-Operations/ORLM

Model Details

ORLM-LLaMA-3-8B is fully fine-tuned on the OR-Instruct data and built on the Meta LLaMA-3-8B model. More training details can be found at https://arxiv.org/abs/2405.17743.

Model Usage

Prompting Template:

Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.

# Question:
{Question}

# Response:

Please replace {Question} with any natural-language OR question.
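Since this repository hosts GGUF files, one way to apply the template locally is with `llama-cpp-python`; below is a minimal sketch under that assumption. The quantization file name and generation settings are placeholders, not part of the original card.

```python
# Minimal sketch: run the ORLM prompt template against a local GGUF file
# with llama-cpp-python. The .gguf file name below is a placeholder; use
# whichever quantization variant you downloaded from this repo.
from llama_cpp import Llama

TEMPLATE = (
    "Below is an operations research question. Build a mathematical model and "
    "corresponding python code using `coptpy` that appropriately addresses the question.\n"
    "\n"
    "# Question:\n"
    "{Question}\n"
    "\n"
    "# Response:\n"
)

question = "A factory makes two products ..."  # any natural-language OR question

llm = Llama(model_path="ORLM-LLaMA-3-8B.Q4_K_M.gguf", n_ctx=4096)  # placeholder file name
out = llm(TEMPLATE.format(Question=question), max_tokens=1024, temperature=0.0)
print(out["choices"][0]["text"])
```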

Examples

--------------------------OR question wrapped in the template--------------------------
Below is an operations research question. Build a mathematical model and corresponding python code using `coptpy` that appropriately addresses the question.

# Question:
A lab has 1000 units of medicinal ingredients to make two pills, a large pill and a small pill. A large pill requires 3 units of medicinal ingredients and 2 units of filler. A small pill requires 2 units of medicinal ingredients and 1 unit of filler. The lab has to make at least 100 large pills. However, since small pills are more popular at least 60% of the total number of pills must be small. How many of each should be made to minimize the total number of filler material needed?

# Response:
--------------------------top-1 completion output by LLaMA-3-8B-ORLM under greedy decoding--------------------------
## Mathematical Model:
To solve the lab's pill production problem, we need to construct a linear programming model to minimize the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. Here is the model formulation:

### Decision Variables:
- \(x\): Number of large pills to be made.
- \(y\): Number of small pills to be made.

### Objective Function:
- Minimize the total number of filler material needed: \[Z = 2x + y\]

### Constraints:
1. Medicinal ingredient usage constraint: \[3x + 2y \leq 1000\]
2. Large pill production requirement: \[x \geq 100\]
3. Proportion constraint for small pills: \[y \geq 0.6(x + y)\]
4. Non-negativity constraint: \[x, y \geq 0\]

### Integer Constraints:
- \(x, y\) must be integers, as the number of pills cannot be fractional.

Using this model, we can apply linear programming techniques to find the optimal number of large and small pills to be made, minimizing the total number of filler material needed while satisfying the production requirements for the pills and the proportion constraint. This will provide the lab with an economically efficient pill production strategy.

## Python Code Solution Using `coptpy`:
Here is a Python script using the `coptpy` library to solve the problem:

```python
import coptpy as cp
from coptpy import COPT

# Create a COPT environment
env = cp.Envr()

# Create the model
model = env.createModel("PillProduction")

# Add decision variables
x = model.addVar(vtype=COPT.INTEGER, name="x")  # Number of large pills
y = model.addVar(vtype=COPT.INTEGER, name="y")  # Number of small pills

# Objective function: Minimize the total number of filler material needed
model.setObjective(2 * x + y, sense=COPT.MINIMIZE)

# Constraints
model.addConstr(3 * x + 2 * y <= 1000, name="MaterialLimit")
model.addConstr(x >= 100, name="LargePillRequirement")
model.addConstr(y >= 0.6 * (x + y), name="SmallPillProportion")

# Solve the model
model.solve()

# Output results
if model.status == COPT.OPTIMAL:
    print("Minimum total number of filler material needed: {:.0f} units".format(model.objval))
    print("Number of large pills to be made: {:.0f}".format(x.x))
    print("Number of small pills to be made: {:.0f}".format(y.x))
else:
    print("No optimal solution found.")
```

In this script, we first create a `COPT` environment and model. Then, we add two integer decision variables `x` and `y`, representing the number of large and small pills to be made, respectively.

Next, we set the objective function to minimize the total number of filler material needed, which is `2 * x + y`.

Then, we add three constraints:
1. Medicinal ingredient usage constraint: `3 * x + 2 * y <= 1000`
2. Large pill production requirement: `x >= 100`
3. Proportion constraint for small pills: `y >= 0.6 * (x + y)`

Finally, we solve the model and output the results of the optimal solution, including the minimum total number of filler material needed and the number of large and small pills to be made.

This script provides a complete example of using the `coptpy` library to solve the lab's pill production problem, while satisfying all the constraints mentioned in the problem.
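As a side note, the resulting mixed-integer program is small enough to cross-check independently. The sketch below re-solves it with SciPy's HiGHS backend purely as a sanity check; SciPy is an assumption here and is not part of the ORLM/`coptpy` workflow (it requires SciPy >= 1.9 for the `integrality` argument).

```python
# Independent cross-check of the same model with SciPy's HiGHS solver
# (illustrative only; not part of the original coptpy-based workflow).
from scipy.optimize import linprog

c = [2, 1]            # minimize 2x + y (units of filler)
A_ub = [
    [3, 2],           # 3x + 2y <= 1000   (medicinal ingredients)
    [-1, 0],          # x >= 100     ->  -x <= -100
    [0.6, -0.4],      # y >= 0.6(x+y) ->  0.6x - 0.4y <= 0
]
b_ub = [1000, -100, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              integrality=[1, 1], method="highs")
print(res.x, res.fun)  # expected: x = 100 large pills, y = 150 small pills, 350 filler units
```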

Performance

Below is a comparison of performance on the NL4OPT, MAMO, and IndustryOR benchmarks. Values marked with an asterisk (*) are copied directly from the original papers; a dash (-) indicates that no result was reported. The highest result in each column is highlighted in bold.

| Method | NL4OPT | MAMO EasyLP | MAMO ComplexLP | IndustryOR | Micro Avg | Macro Avg |
|---|---|---|---|---|---|---|
| **Methods based on PLMs** | | | | | | |
| tag-BART | 47.9%* | - | - | - | - | - |
| **Methods based on GPT-3.5** | | | | | | |
| Standard | 42.4%* | - | - | - | - | - |
| Reflexion | 50.7%* | - | - | - | - | - |
| Chain-of-Experts | 58.9%* | - | - | - | - | - |
| **Methods based on GPT-4** | | | | | | |
| Standard | 47.3%* | 66.5%* | 14.6%* | 28.0% | 50.2% | 39.1% |
| Reflexion | 53.0%* | - | - | - | - | - |
| Chain-of-Experts | 64.2%* | - | - | - | - | - |
| OptiMUS | 78.8%* | - | - | - | - | - |
| **ORLMs based on open-source LLMs** | | | | | | |
| ORLM-Mistral-7B | 84.4% | 81.4% | 32.0% | 27.0% | 68.8% | 56.2% |
| ORLM-Deepseek-Math-7B-Base | **86.5%** | 82.2% | **37.9%** | 33.0% | 71.2% | 59.9% |
| ORLM-LLaMA-3-8B | 85.7% | **82.3%** | 37.4% | **38.0%** | **71.4%** | **60.8%** |

Citation

@article{tang2024orlm,
  title={ORLM: Training Large Language Models for Optimization Modeling},
  author={Tang, Zhengyang and Huang, Chenyu and Zheng, Xin and Hu, Shixi and Wang, Zizhuo and Ge, Dongdong and Wang, Benyou},
  journal={arXiv preprint arXiv:2405.17743},
  year={2024}
}
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

License

The use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

GGUF quantizations are provided in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants (llama architecture, 8.03B parameters).