---
license: mit
configs:
  - config_name: Formal Specification Translation
    data_files: step_1_spec_translation.jsonl
  - config_name: Translation Conflict Detection
    data_files: step_1_spec_conflict.jsonl
  - config_name: Routing Code Generation
    data_files: step_2_code_gen.jsonl
  - config_name: Configuration Generation
    data_files: step_3_low_level.jsonl
---

# NetConfEval: Can LLMs Facilitate Network Configuration?

## What is it?

We present a set of benchmarks (NetConfEval) to examine the effectiveness of different models at facilitating and automating network configuration, as described in our paper "NetConfEval: Can LLMs Facilitate Network Configuration?".

📜 Paper - GitHub Repository

This repository contains pre-generated datasets for each benchmark task, so that they can be used independently of our testing environment.

Generation scripts can be found here.

## Translating High-Level Requirements to a Formal Specification Format

This dataset evaluates LLMs' ability to translate network operators' requirements into a formal specification. For instance, the input information can be converted into a simple data structure specifying the reachability, waypoint, and load-balancing policies of a network.

The dataset `step_1_spec_translation.jsonl` contains five iterations of data extracted from a Config2Spec policy dataset.

### Dataset Format

Each line of the output .jsonl file contains the following fields:

- `iteration`: incremental index of the iteration
- `max_n_requirements`: total number of requirements in the dataset
- `chunk`: the batch identifier when chunking the total requirements
- `batch_size`: number of requirements in a batch
- `n_policy_types`: total number of policy types (e.g., 2 if reachability and waypoint are used)
- `description`: textual description of the supported requirements, which can be used as a system prompt
- `human_language`: the input specifications in human language
- `expected`: the expected JSON data structure translated from the `human_language`
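
As a minimal sketch of how to consume this file (assuming it has been downloaded locally), the records can be loaded with the standard `json` module:

```python
import json

# Load the spec-translation records from a local copy of the file.
with open("step_1_spec_translation.jsonl") as f:
    records = [json.loads(line) for line in f]

sample = records[0]
print(sample["batch_size"], sample["n_policy_types"])
print(sample["description"])     # candidate system prompt
print(sample["human_language"])  # requirements to translate
print(sample["expected"])        # ground-truth formal specification
```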

## Conflict Detection

In this dataset, we test LLMs' ability to detect a "simple conflict" during formal specification translation. A common case of a "simple conflict" is when two requirements explicitly include contradictory information. For instance, one requirement specifies that `s1` must reach `h2`, while another prevents `s1` from reaching `h2`.

The dataset `step_1_spec_conflict.jsonl` contains five iterations of data extracted from a Config2Spec policy dataset. A "simple conflict" is inserted in each even batch (0, 2, ...).

### Dataset Format

Each line of the output .jsonl file contains the following fields:

- `iteration`: incremental index of the iteration
- `max_n_requirements`: total number of requirements in the dataset
- `chunk`: the batch identifier when chunking the total requirements
- `batch_size`: number of requirements in a batch
- `n_policy_types`: total number of policy types (e.g., 2 if reachability and waypoint are used)
- `conflict_exists`: a boolean indicating whether a conflict is present in the requirements
- `description`: textual description of the supported requirements, which can be used as a system prompt
- `human_language`: the input specifications in human language
- `expected`: the expected JSON data structure translated from the `human_language`
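
Since the ground-truth label is stored in `conflict_exists`, a simple accuracy computation could look like the following sketch, where `detect_conflict` is a hypothetical wrapper around the model under test:

```python
import json

# `detect_conflict` is a placeholder for your own model wrapper: it
# should return True when the model flags a conflict in the batch.
def evaluate(detect_conflict) -> float:
    with open("step_1_spec_conflict.jsonl") as f:
        records = [json.loads(line) for line in f]

    correct = sum(
        detect_conflict(r["description"], r["human_language"]) == r["conflict_exists"]
        for r in records
    )
    return correct / len(records)
```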

## Developing Routing Algorithms

Traffic engineering is a critical yet complex problem in network management, particularly in large networks. Our dataset asks the models to create functions that compute routing paths based on specific network requirements (shortest path, reachability, waypoint, and load balancing).

The dataset contains both the input user prompt (without preliminary system prompts) in the `prompt` column and a series of test cases to run on the generated code in the `tests` column.

To run the tests, you need to JSON-decode the `tests` field. This gives you a dict with an incremental index as key and the test body as value. We recommend running the tests in order, following the index key. The `pytest` package is required to run the tests.

After extracting the test body:

- Replace the `# ~function_code~` placeholder with the code generated by the LLM;
- Save the resulting string into a `.py` file on your filesystem, for example `test_file.py`;
- Run `python3 -m pytest --lf --tb=short test_file.py -vv`.

The above procedure is implemented in NetConfEval in the `netconfeval/verifiers/step_2_verifier_detailed.py` module.
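
For reference, here is a minimal, self-contained sketch of that procedure; the function name and temporary-file handling are illustrative, not part of the dataset:

```python
import json
import os
import subprocess
import tempfile

def run_tests(tests_json: str, generated_code: str) -> bool:
    # Decode the `tests` field: a dict mapping an incremental index
    # (as a string) to a test body containing the placeholder.
    tests = json.loads(tests_json)
    for index in sorted(tests, key=int):  # run the tests in order
        test_body = tests[index].replace("# ~function_code~", generated_code)
        # pytest collects files named test_*.py, hence the prefix.
        with tempfile.NamedTemporaryFile(
            "w", suffix=".py", prefix="test_", delete=False
        ) as f:
            f.write(test_body)
            path = f.name
        result = subprocess.run(
            ["python3", "-m", "pytest", "--lf", "--tb=short", path, "-vv"]
        )
        os.unlink(path)
        if result.returncode != 0:
            return False  # stop at the first failing test case
    return True
```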

### Dataset Format

Each line of the output .jsonl file contains the following fields:

- `prompt_type`: the type of instruction given to the model to generate the code; can be `basic` or `no_detail`
- `policy`: the type of policy that the generated function should implement; can be `shortest_path`, `reachability`, `waypoint`, or `loadbalancing`
- `prompt`: the human textual instructions fed to the model to generate code
- `tests`: JSON-encoded test cases (to run using `pytest`) to verify code correctness

## Generating Low-level Configurations

This dataset explores the problem of transforming high-level requirements into detailed, low-level configurations suitable for installation on network devices. We handpicked five network scenarios publicly available in the Kathará Network Emulator repository. The selection encompasses the most widespread protocols and consists of two OSPF networks (one single-area network and one multi-area network), a RIP network, a BGP network featuring a basic peering between two routers, and a small fat-tree datacenter network running a made-up version of RIFT. All these scenarios (aside from RIFT) leverage FRRouting as the routing suite.

The dataset `step_3_low_level.jsonl` contains both the input user prompt (without preliminary system prompts) in the `prompt` column and the corresponding configuration for each device in the `result` column.

To compare the generated LLM configuration with the expected one, we suggest the following:

- JSON-decode the `result` column; this gives you a dict with the device name as key and the expected configuration (as a string) as value;
- Take the LLM output and, for each device, run the same formatting command in `vtysh` using the FRRouting container;
- Compare the two outputs using `difflib.SequenceMatcher`.

The above procedure is implemented in NetConfEval in the `netconfeval/step_3_low_level.py` script.
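
As a rough sketch of the final comparison step (assuming both sides have already been normalized through `vtysh`), the per-device similarity can be computed as follows; the function name is illustrative:

```python
import difflib
import json

# `result_json` is the raw `result` column; `llm_configs` maps each
# device name to its LLM-generated (and vtysh-formatted) configuration.
def config_similarity(result_json: str, llm_configs: dict) -> dict:
    expected = json.loads(result_json)
    return {
        device: difflib.SequenceMatcher(
            None, conf, llm_configs.get(device, "")
        ).ratio()
        for device, conf in expected.items()
    }
```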

### Dataset Format

Each line of the output .jsonl file contains the following fields:

- `scenario_name`: the name of the network scenario for which to generate configurations; can be `ospf_simple`, `ospf_multiarea`, `rip`, `bgp_simple`, or `rift`
- `prompt`: the human textual instructions fed to the model to generate low-level configurations
- `result`: JSON data structure with the expected configuration (JSON value) for each device (JSON key)