Dataset Preview

Only a preview of the rows is available. Each record has the following fields (some fields may be null for a given subset):

  • choices — sequence of strings: the lettered options of a multiple-choice question
  • question — string
  • qtype — int64
  • id — string
  • solution — string: a worked explanation, where available
  • answer — string
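Concretely, a dev-split record under this schema can be pictured as a plain Python dict. This is a sketch only: the values are copied from the "5G Communication-0" preview row below, and the field semantics are inferred from the preview rather than from an official schema document.

```python
# One multiple-choice dev record as a plain dict (values copied from the
# "5G Communication-0" preview row; field semantics inferred from the preview).
record = {
    "choices": ["GTP-U", "GTP-C", "NG-AP.", "S1AP"],  # lettered options
    "question": "5G AN 和 AMF 之间最上层协议是:() A: GTP-U B: GTP-C C: NG-AP. D: S1AP",
    "qtype": 0,
    "id": "5G Communication-0",
    "solution": None,  # worked explanations appear only in some subsets
    "answer": "C",
}

# The answer letter indexes into choices: "C" -> choices[2].
correct = record["choices"][ord(record["answer"]) - ord("A")]
print(correct)  # NG-AP.
```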
5G Communication (dev split):

  • 5G Communication-0 (qtype 0)
    choices: [ "GTP-U", "GTP-C", "NG-AP.", "S1AP" ]
    question: 5G AN 和 AMF 之间最上层协议是:() A: GTP-U B: GTP-C C: NG-AP. D: S1AP (EN: The topmost protocol layer between the 5G AN and the AMF is: ())
    answer: C

  • 5G Communication-1 (qtype 0)
    choices: [ "SMF", "AMF", "UPF", "sbc" ]
    question: PCF通知哪个网元创建语音专有QoS Flow A: SMF B: AMF C: UPF D: sbc (EN: Which network function does the PCF notify to create the voice-dedicated QoS Flow?)
    answer: A

  • 5G Communication-2 (qtype 0)
    choices: [ "N1", "N2", "N3", "N6" ]
    question: 和EPC接口Sgi功能类似的5GC接口是? A: N1 B: N2 C: N3 D: N6 (EN: Which 5GC interface is similar in function to the EPC SGi interface?)
    answer: D

  • 5G Communication-3 (qtype 0)
    choices: [ "正确", "错误" ] (EN: [ "True", "False" ])
    question: option3 的控制面是通过gnodeb传递的 A: 正确 B: 错误 (EN: In Option 3, the control plane is carried via the gNodeB. A: True B: False)
    answer: B

  • 5G Communication-4 (qtype 0)
    choices: [ "70", "80", "90", "100" ]
    question: 为了减少混乱的方位角带来的网络干扰不确定性,应尽量保证各扇区天线的夹角为120度,最低要求不能小于()度 A: 70 B: 80 C: 90 D: 100 (EN: To reduce interference uncertainty caused by disordered azimuths, the angle between sector antennas should be kept at 120 degrees, and must be no less than () degrees.)
    answer: C
Log Analysis (dev split):

  • Log Analysis-0
    question: 日志样例:<182>Dec 3 13:42:12 BH_GD1 info logger: [ssl_acc] 127.0.0.1 - - [03/Dec/2017:13:42:12 +0800] "/iControl/iControlPortal.cgi" 200 769,如何通过正则表达式解析出时间字段time”Dec 3 13:42:12“、访问状态字段status“200”? (EN: Given this sample log line, how can a regular expression extract the time field "Dec 3 13:42:12" and the access-status field "200"?)
    answer: <\d+>(?<time>\w+\s+\d+\s+\d+:\d+:\d+).*\s+(?<status>\d+)\s+\d+

  • Log Analysis-1
    question: 常见的日志等级有哪些? (EN: What are the common log levels?)
    answer: EMERG(紧急)、ALERT(警告)、CRIT(严重)、ERR(错误)、WARNING(提醒)、NOTICE(注意)、INFO(信息)、DEBUG(调试)

  • Log Analysis-2
    question: 用正则提取这条日志的字段 (EN: Use a regular expression to extract the fields of this log:) 2023-11-25 13:52:33,493 DEBUG yotta-frontend-actor-system-akka.actor.default-dispatcher-12 dbcp2.PoolableConnectionFactory: Failed to validate a poolable connection. com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 248,702,880 milliseconds ago. The last packet sent successfully to the server was 248,702,880 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
    answer: (?<timestamp>\d+-\d+-\d+ \d+:\d+:\d+,\d+)\s+(?<loglevel>\S+)\s+(?<thread>\S+)\s+(?<class>[^:]+)[\s\S]*?\s+(?<exception_class>\S+Exception)

  • Log Analysis-3
    question: 如何开始业务系统日志分析 (EN: How do you start log analysis for a business system?)
    answer: 首先需要了解该业务系统主要功能,以及相应的业务运行逻辑架构,其次从业务运维人员处获得运维知识库,找到常见问题,以及梳理告警关键字,并建立错误关键字告警和反应业务健康度的黄金指标如饱和度(满没满),延时(耗时)和并发负载等内容。最后将该系统涉及的数据库、中间件、主机或容器等应用一并形成监控指标体系。 (EN: First, understand the system's main functions and its business logic architecture. Next, obtain the operations knowledge base from the O&M staff, identify common problems, compile alert keywords, and build error-keyword alerts together with golden signals of service health such as saturation, latency, and concurrent load. Finally, bring the system's databases, middleware, hosts, or containers into a unified monitoring metric system.)

  • Log Analysis-4
    question: 在日志解析中,多行日志是什么?如何处理? (EN: In log parsing, what are multi-line logs, and how are they handled?)
    answer: 多行日志指的是跨越多行的日志条目,解析时需要将其组合成单个条目以便于处理和分析。 (EN: Multi-line logs are log entries that span multiple lines; when parsing, they must be combined into a single entry for processing and analysis.)
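The two reference regexes from the Log Analysis examples above can be checked directly. One caveat: the dataset's answers use PCRE-style (?&lt;name&gt;...) named groups, while Python's re module spells them (?P&lt;name&gt;...); that is the only change made in this sketch.

```python
import re

# Log line and regex from the "Log Analysis-0" dev example (named-group
# syntax adapted to Python's (?P<name>...) form).
log = ('<182>Dec 3 13:42:12 BH_GD1 info logger: [ssl_acc] 127.0.0.1 - - '
       '[03/Dec/2017:13:42:12 +0800] "/iControl/iControlPortal.cgi" 200 769')
m = re.search(r'<\d+>(?P<time>\w+\s+\d+\s+\d+:\d+:\d+).*\s+(?P<status>\d+)\s+\d+', log)
print(m.group('time'), '|', m.group('status'))  # Dec 3 13:42:12 | 200

# Same idea for the "Log Analysis-2" example: extract timestamp, log level,
# thread, logging class, and the exception class name (log line truncated).
log2 = ('2023-11-25 13:52:33,493 DEBUG '
        'yotta-frontend-actor-system-akka.actor.default-dispatcher-12 '
        'dbcp2.PoolableConnectionFactory: Failed to validate a poolable '
        'connection. com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: '
        'The last packet successfully received from the server was '
        '248,702,880 milliseconds ago.')
m2 = re.search(r'(?P<timestamp>\d+-\d+-\d+ \d+:\d+:\d+,\d+)\s+(?P<loglevel>\S+)'
               r'\s+(?P<thread>\S+)\s+(?P<class>[^:]+)[\s\S]*?\s+'
               r'(?P<exception_class>\S+Exception)', log2)
print(m2.group('loglevel'), m2.group('exception_class'))
```

The greedy `.*` in the first pattern backtracks so that `status` captures the rightmost "digits, whitespace, digits" pair, i.e. the HTTP status rather than the byte count.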
Oracle Database (dev split, question text reproduced verbatim):

  • Oracle Database-0
    question: Evalute these conmands which execate sucestully CREATE SEQUENCE ord_seq INCREMENT BY 1 START WITH 1 MAXVALUE 100000 CYCLE CACHE 5000; Which two statements are true atout the ORD_ITEMS table and the ORD_SEQ sequence? A. Any user inserting rows into table ORD_ITEMS must have been granted access to sequence ORD_SEQ. B. Colunn ORD_NO gets the next number from squence ORD_SEQ whenever a row is inserted into ORD_ITEMS and no explict value is given for ORD_NO. C. Sepuence ORD_SEQ cycles back to 1 after every 5000 numbers and can cycle 20 times D. IF sequence ORD_SEQ is dropped then the default value for column ORD_NO will be NULL for rows inserted into ORD_ITEMS. E. Sequence ORD_SEQ is guaranteed not to genenate duplicate numbers.
    answer: A,B

  • Oracle Database-1
    question: Which three statements are true about the Oracle join and ANSI Join syntax? A. The SQL:1999 compliant ANSI Join syntax supports creation of a Cartesian product of two tables. B. The Orade join syntax performs less well than the SQL:1999 compliant ANST join syntax. C. The SQL:1999 compliant ANSI join syntax supports natural Joins. D. The Orade join syntax perfoms better than the SQL:1999 compliant ANSI join syntax. E. The Orade join syntax supports creation of a Cartesian product of two tables. F. The Oracle join syntax suports natural joins. G. The Orade join syntax only supports right outer joins.
    answer: C,D,E

  • Oracle Database-2
    question: 52、Which three are true about a whole database backup? A. It can be created only by using RMAN B. It is the only possible backup type for a database in NOARCHIVELOG mode C. It can be consistent. D. It can consist of either backup sets or image copies. E. It can be inconsistent F. It always includes all data files, the current control file, the server parameter file, and archived redo logs.
    answer: C,D,E

  • Oracle Database-3
    question: 5、You plan to perform cross-platform PDB transport using XTTS. Which two are true? A. A backup of the PDB must exist, taken using the BACKUP command with the ro PLATFORM clause B. The source PDB can be in MOUNT or OPEN state C. The source PDB must be in MOUNT statE. D. The source PDB must not be an application root. E. Automatic conversion of endianess occurs. F. The source and target platforms must have the same endianess
    answer: A,F

  • Oracle Database-4
    question: Which three statements are true about an ORDER BY clause? A. By default an ORDER BY clause sorts rows in descending order. B. An ORDER BY clause can perform a linguistic sort. C. An ORDER BY clause can perform a binary sort. D. By default an ORDER BY clause sorts rows in ascending order. E. An ORDER BY clause will always precede a HAVING clause if both are used in the same top-level query. F. An ORDER BY clause always sorts NULL values last.
    answer: B,C,D
Wired Network (dev split):

  • Wired Network-0
    choices: [ "Plug the device into a USB port.", "Install drivers.", "Put the device in pairing mode and open Bluetooth settings on the tablet, then tap the trackpad.", "Go to Settings to configure speed and scrolling features." ]
    question: One of your coworkers has purchased an external Bluetooth trackpad to use with their tablet. They've turned to you, the company IT person, to install and configure it for them. What actions will you need to take? (Choose two.)
    solution: Analyzing each choice: A: Plug the device into a USB port - This is incorrect. Bluetooth devices do not need to be plugged into a USB port as they connect wirelessly. B: Install drivers - This may or may not be necessary. Some devices will require drivers to be installed, but many modern devices and operating systems will automatically handle this. C: Put the device in pairing mode and open Bluetooth settings on the tablet, then tap the trackpad - This is correct. To connect a Bluetooth device, you typically need to put it in pairing mode and then connect to it from the device you want to use it with. D: Go to Settings to configure speed and scrolling features - This is also correct. Once the device is connected, you may need to adjust settings like speed and scrolling to suit the user's preferences.
    answer: C,D

  • Wired Network-1
    choices: [ "Bluetooth", "IrDA", "RJ-45", "Wi-Fi" ]
    question: What types of networking will smart cameras often have built into them? (Choose two.)
    solution: A: Bluetooth - This is a possible answer. Bluetooth is a wireless technology that enables data exchange over short distances. It is commonly used in many devices, including smart cameras, for sharing data. B: IrDA - This is not a likely answer. IrDA stands for Infrared Data Association, a group of device manufacturers that developed a standard for transmitting data via infrared light waves. While it was popular in the past, it has largely been replaced by Wi-Fi and Bluetooth in most modern devices, including smart cameras. C: RJ-45 - This is not a likely answer. RJ-45 is a type of connector commonly used for Ethernet networking. While it's possible for a smart camera to have an RJ-45 port for wired networking, it's not as common as wireless options like Wi-Fi and Bluetooth, especially considering the flexibility and convenience of wireless connections. D: Wi-Fi - This is a possible answer. Wi-Fi is a wireless networking technology that uses radio waves to provide wireless high-speed Internet and network connections. It is commonly built into smart cameras to allow them to connect to home networks and the internet.
    answer: A,D

  • Wired Network-2
    choices: [ "Single-mode cables use an LED light source, whereas multimode cables use a laser.", "Single-mode cables can span longer distances than multimode cables.", "Single-mode cables have a smaller core filament than multimode cables.", "Single-mode cables have a smaller bend radius than multimode, making them easier to install.", "Single-mode fiber-optic cables require a ground, whereas multimode cables do not." ]
    question: Which of the following statements about single-mode fiber-optic cable are true? (Choose all that apply.)
    solution: Single-mode cables have a smaller core filament and can span longer distances than multimode cables. Single-mode cables also use a laser light source, have a larger bend radius, and do not require a ground.
    answer: B,C

  • Wired Network-3
    choices: [ "Speaker", "SGT-Reflector", "Listener", "SGT-Sender" ]
    question: Which of the following are modes of SXP peers? (Choose two.)
    solution: Every SXP peer session has a speaker and a listener. A speaker sends the mappings of IP addresses to SGTs. The listener receives those updates and records them. A peer can be configured to be both a speaker and a listener for the same peer if both support it. It may have numerous peers as well.
    answer: A,C

  • Wired Network-4
    choices: [ "Enable IPv6 on each interface using an [ipv6 address](vol1_gloss.xhtml#gloss_243) interface subcommand.", "Enable support for both versions with the ip versions 4 6 global command.", "Additionally enable IPv6 routing using the ipv6 unicast-routing global command.", "Migrate to dual-stack routing using the ip routing dual-stack global command." ]
    question: Router R1 currently supports IPv4, routing packets in and out all its interfaces. R1's configuration needs to be migrated to support dual-stack operation, routing both IPv4 and IPv6. Which of the following tasks must be performed before the router can also support routing IPv6 packets? (Choose two answers.)
    solution: Of the four answers, the two correct answers show the minimal required configuration to support IPv6 on a Cisco router: enabling IPv6 routing (ipv6 unicast-routing) and enabling IPv6 on each interface, typically by adding a unicast address to each interface (ipv6 address…). The two incorrect answers list nonexistent commands.
    answer: A,C
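Records like these are typically rendered into a textual prompt before being sent to a model for evaluation. The helper below is a hypothetical sketch of such a formatter, using the choices field to supply the lettered options; it is not the prompt template actually used by the OpsEval paper.

```python
def format_mc_prompt(record):
    """Render a multiple-choice record as a lettered prompt.

    Hypothetical formatter for illustration only; the actual OpsEval
    prompt templates may differ.
    """
    lines = [record["question"]]
    for letter, choice in zip("ABCDEFG", record["choices"]):
        lines.append(f"{letter}. {choice}")
    return "\n".join(lines)

# Values copied from the "Wired Network-1" preview row above.
example = {
    "question": ("What types of networking will smart cameras often have "
                 "built into them? (Choose two.)"),
    "choices": ["Bluetooth", "IrDA", "RJ-45", "Wi-Fi"],
    "answer": "A,D",
}
print(format_mc_prompt(example))
```

A model's reply can then be scored against the comma-separated letters in the answer field (here "A,D").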

OpsEval Dataset

Website | Reporting Issues

Introduction

The OpsEval dataset represents a pioneering effort in the evaluation of Artificial Intelligence for IT Operations (AIOps), focusing on the application of Large Language Models (LLMs) within this domain. In an era where IT operations are increasingly reliant on AI technologies for automation and efficiency, understanding the performance of LLMs in operational tasks becomes crucial. OpsEval offers a comprehensive task-oriented benchmark specifically designed for assessing LLMs in various crucial IT Ops scenarios.

This dataset is motivated by the emerging trend of utilizing AI in automated IT operations, as predicted by Gartner, and the remarkable capabilities exhibited by LLMs in NLP-related tasks. OpsEval aims to bridge the gap in evaluating these models' performance in AIOps tasks, including root cause analysis of failures, generation of operations and maintenance scripts, and summarizing alert information.

Highlights

  • Comprehensive Evaluation: OpsEval includes 7,184 multiple-choice questions and 1,736 question-answering (QA) items, available in both English and Chinese, making it one of the most extensive benchmarks in the AIOps domain.
  • Task-Oriented Design: The benchmark is tailored to assess LLMs' proficiency across different crucial scenarios and ability levels, offering a nuanced view of model performance in operational contexts.
  • Expert-Reviewed: To ensure the reliability of our evaluation, dozens of domain experts have manually reviewed our questions, providing a solid foundation for the benchmark's credibility.
  • Open-Sourced and Dynamic Leaderboard: We have open-sourced 20% of the test QA to facilitate preliminary evaluations by researchers. An online leaderboard, updated in real-time, captures the performance of emerging LLMs, ensuring the benchmark remains current and relevant.

Dataset Structure

Here is a brief overview of the dataset structure:

  • /dev/ - Examples for few-shot in-context learning.
  • /test/ - Test sets of OpsEval.

Dataset Information

Dataset Name        Open-Sourced Size
Wired Network       1563
Oracle Database     395
5G Communication    349
Log Analysis        310

Website

For evaluation results on the full OpsEval dataset, please check out the OpsEval Leaderboard on our official website.

Paper

For a detailed description of the dataset, its structure, and its applications, please refer to our paper available at: OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models

Citation

Please use the following citation when referencing the OpsEval dataset in your research:

@misc{liu2024opseval,
      title={OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models}, 
      author={Yuhe Liu and Changhua Pei and Longlong Xu and Bohan Chen and Mingze Sun and Zhirui Zhang and Yongqian Sun and Shenglin Zhang and Kun Wang and Haiming Zhang and Jianhui Li and Gaogang Xie and Xidao Wen and Xiaohui Nie and Minghua Ma and Dan Pei},
      year={2024},
      eprint={2310.07637},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}