# MoR

This is the model card for our paper *Mixture of Structural-and-Textual Retrieval over Text-rich Graph Knowledge Bases*.
## Running the Evaluation and Reranking Script

### Installation
To set up the environment, install the dependencies using either Conda or pip.

**Using Conda**

```bash
conda env create -f mor_env.yml
conda activate your_env_name  # Replace with the actual environment name
```

**Using pip**

```bash
pip install -r requirements.txt
```
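As a quick sanity check after installation, you can confirm that the core deep-learning dependencies resolve inside the environment. The package names below (`torch`, `transformers`) are assumptions about typical requirements and may differ from what `requirements.txt` actually pins:

```bash
# Hypothetical post-install check: verify the environment activates and that
# commonly required packages (assumed, not confirmed by the repo) import cleanly.
conda activate your_env_name   # skip if you installed via pip into an existing env
python -c "import torch, transformers; print(torch.__version__, transformers.__version__)"
```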
### Inference

To run the inference script, execute the following command in the terminal:

```bash
bash eval_mor.sh
```

This script automatically processes all three datasets using the pre-trained planning graph generator and the pre-trained reranker.
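If you only want to evaluate one dataset rather than all three, the per-dataset loop can be unrolled manually. The sketch below is an assumption about the script's structure, not its actual contents; the entry point (`eval.py`), flag name, and dataset identifiers shown here are hypothetical and should be checked against `eval_mor.sh`:

```bash
# Minimal sketch (assumed structure, not the actual eval_mor.sh):
# run evaluation + reranking per dataset with the released checkpoints.
for dataset in dataset_a dataset_b dataset_c; do   # hypothetical dataset names
    python eval.py --dataset_name "$dataset"       # hypothetical entry point and flag
done
```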
## Training (Train MoR from Scratch)

**Step 1: Train the planning graph generator**

```bash
bash train_planner.sh
```

**Step 2: Run mixed traversal to collect candidates** (note: the reasoning stage itself requires no training)

```bash
bash run_reasoning.sh
```

**Step 3: Train the reranker**

```bash
bash train_reranker.sh
```
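For an end-to-end run, the three steps can simply be chained. The snippet below is not a script shipped with the repository; it is just the commands above executed in order, with `set -e` added so a failed step stops the pipeline:

```bash
#!/usr/bin/env bash
set -e                      # stop immediately if any step fails

bash train_planner.sh       # Step 1: train the planning graph generator
bash run_reasoning.sh       # Step 2: mixed traversal to collect candidates
bash train_reranker.sh      # Step 3: train the reranker
```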
## Generating training data for the Planner

We provide code to generate your own training data for fine-tuning the Planner with different LLMs.

**If you are using the Azure API**

```bash
python script.py --model "model_name" \
    --dataset_name "dataset_name" \
    --azure_api_key "your_azure_key" \
    --azure_endpoint "your_azure_endpoint" \
    --azure_api_version "your_azure_version"
```

**If you are using the OpenAI API**

```bash
python script.py --model "model_name" \
    --dataset_name "dataset_name" \
    --openai_api_key "your_openai_key" \
    --openai_endpoint "your_openai_endpoint"
```
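As a concrete usage example, the OpenAI variant might be invoked as follows. The model name, dataset name, and endpoint value are illustrative placeholders, not confirmed choices from the repository; the accepted values depend on the script's argument parsing:

```bash
# Illustrative invocation only: "gpt-4o" and "dataset_name" are placeholder values,
# and the endpoint shown is the standard OpenAI base URL, assumed but not confirmed.
python script.py --model "gpt-4o" \
    --dataset_name "dataset_name" \
    --openai_api_key "$OPENAI_API_KEY" \
    --openai_endpoint "https://api.openai.com/v1"
```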
**Base model:** meta-llama/Llama-3.2-3B-Instruct