Data Preview

Only a preview of the rows is shown below. Each record has six fields: question (string), answer (string), solution (string), id (string), qtype (int64), and choices (list of strings). Fields that are empty in the preview (for example, solution on most multiple-choice rows, or qtype and choices on the open-ended QA rows) are null.
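As a rough sketch, a single multiple-choice record (the first sample row below) maps onto a Python dict like the following; the values are copied from the preview, and representing the empty solution cell as None is an assumption:

```python
# Hypothetical in-memory view of one OpsEval record. Values are taken from the
# "5G Communication-0" preview row below; None for the empty solution cell is assumed.
record = {
    "question": "5G AN 和 AMF 之间最上层协议是:()",
    "choices": ["GTP-U", "GTP-C", "NG-AP.", "S1AP"],
    "answer": "C",
    "solution": None,
    "id": "5G Communication-0",
    "qtype": 0,
}

# Map the letter answer back to the choice text.
print(record["choices"][ord(record["answer"]) - ord("A")])  # NG-AP.
```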
Sample rows:

5G Communication-0 (qtype 0)
Question: 5G AN 和 AMF 之间最上层协议是:()
Choices: A. GTP-U; B. GTP-C; C. NG-AP.; D. S1AP
Answer: C

5G Communication-1 (qtype 0)
Question: PCF通知哪个网元创建语音专有QoS Flow
Choices: A. SMF; B. AMF; C. UPF; D. sbc
Answer: A

5G Communication-2 (qtype 0)
Question: 和EPC接口Sgi功能类似的5GC接口是?
Choices: A. N1; B. N2; C. N3; D. N6
Answer: D

5G Communication-3 (qtype 0)
Question: option3 的控制面是通过gnodeb传递的
Choices: A. 正确; B. 错误
Answer: B

5G Communication-4 (qtype 0)
Question: 为了减少混乱的方位角带来的网络干扰不确定性,应尽量保证各扇区天线的夹角为120度,最低要求不能小于()度
Choices: A. 70; B. 80; C. 90; D. 100
Answer: C
zjyd运维能力-知识召回02
Question: 当系统出现性能瓶颈时,运维人员首先应该考虑的是什么?
Choices: A. 增加服务器硬件资源; B. 优化程序代码; C. 升级操作系统; D. 重新设计系统架构
Answer: B

zjyd运维能力-知识召回03
Question: 在运维工作中,以下哪项不是“监控和日志分析”的主要目的?
Choices: A. 实时检测系统运行状态; B. 分析用户行为; C. 预测系统故障; D. 提高系统安全性
Answer: B

zjyd运维能力-知识召回04
Question: 在运维工作中,以下哪项措施对于提高系统稳定性最有效?
Choices: A. 频繁更新软件版本; B. 定期进行系统备份; C. 忽视安全漏洞; D. 随意更改服务器配置
Answer: B
pufa-0
Question: 主要服务于基于Java平台的项目构建、依赖管理和项目信息管理的工具是( )。
Choices: A. eclipse; B. ant; C. make; D. maven
Answer: D

pufa-1
Question: 使用gcc对文件进行编译时,参数( )表示对生成的代码进行优化。
Choices: A. -o; B. -O; C. -c; D. -w
Answer: B

pufa-2
Question: 在shell中变量的赋值有四种方法,其中,采用name=12的方法称()。
Choices: A. 直接赋值; B. 使用read命令; C. 使用命令行参数; D. 使用命令的输出
Answer: B
lenovo-aiops-cmdb-itsm-1
Question: 下列哪一项不是问题管理的真正目标:
Choices: A. 防止问题及相关故障; B. 在问题生命周期内管理问题; C. 为用户恢复服务; D. 最小化重复发生故障的影响
Answer: C

lenovo-aiops-cmdb-itsm-3
Question: 在一个大型电子商务平台上线新功能后,用户开始报告无法完成购物交易的问题。一开始,这只是零星的用户反馈,但随着时间的推移,越来越多的用户遇到相同的问题。用户支持团队陆续收到了大量关于购物交易失败的事故报告。这引起了公司的关注,因为购物交易是该平台的核心功能之一。请问,以下哪个描述更符合问题管理的角度而不是事故管理?
Choices:
A. 团队应该深入分析为什么购物交易会失败,采取措施确保问题不再发生。
B. 公司应立即采取措施解决交易失败的问题,以尽快恢复受影响的服务。
C. 用户支持团队应该尽快处理事故报告,以减轻用户的不满和损失。
D. 公司应该在购物交易失败的情况下提供赔偿,以平息用户的不满。
Answer: A

lenovo-aiops-cmdb-itsm-4
Question: 在ITIL配置管理中,以下哪一项是配置管理的目标之一?
Choices: A. 提高事件管理的效率; B. 提供准确的配置信息来支持所有服务管理流程; C. 最大程度减小反复出现事故的发生; D. 加速系统变更的速度
Answer: B
Log Analysis-0
Question: 日志样例:<182>Dec 3 13:42:12 BH_GD1 info logger: [ssl_acc] 127.0.0.1 - - [03/Dec/2017:13:42:12 +0800] "/iControl/iControlPortal.cgi" 200 769,如何通过正则表达式解析出时间字段time“Dec 3 13:42:12”、访问状态字段status“200”?
Answer: <\d+>(?<time>\w+\s+\d+\s+\d+:\d+:\d+).*\s+(?<status>\d+)\s+\d+

Log Analysis-1
Question: 常见的日志等级有哪些?
Answer: EMERG(紧急)、ALERT(警告)、CRIT(严重)、ERR(错误)、WARNING(提醒)、NOTICE(注意)、INFO(信息)、DEBUG(调试)

Log Analysis-2
Question: 用正则提取这条日志的字段
2023-11-25 13:52:33,493 DEBUG yotta-frontend-actor-system-akka.actor.default-dispatcher-12 dbcp2.PoolableConnectionFactory: Failed to validate a poolable connection.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 248,702,880 milliseconds ago. The last packet sent successfully to the server was 248,702,880 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
Answer: (?<timestamp>\d+-\d+-\d+ \d+:\d+:\d+,\d+)\s+(?<loglevel>\S+)\s+(?<thread>\S+)\s+(?<class>[^:]+)[\s\S]*?\s+(?<exception_class>\S+Exception)
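As a quick sanity check (a minimal sketch, not part of the dataset itself), the pattern from Log Analysis-0 can be exercised in Python; note that Python's re module spells named groups as (?P<name>...) rather than the (?<name>...) form used in the answers above:

```python
import re

# Sample log line and pattern from the "Log Analysis-0" record above,
# with the named groups rewritten in Python's (?P<name>...) syntax.
log = ('<182>Dec 3 13:42:12 BH_GD1 info logger: [ssl_acc] 127.0.0.1 - - '
       '[03/Dec/2017:13:42:12 +0800] "/iControl/iControlPortal.cgi" 200 769')
pattern = r'<\d+>(?P<time>\w+\s+\d+\s+\d+:\d+:\d+).*\s+(?P<status>\d+)\s+\d+'

match = re.search(pattern, log)
print(match.group('time'))    # Dec 3 13:42:12
print(match.group('status'))  # 200
```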
Log Analysis-3
Question: 如何开始业务系统日志分析
Answer: 首先需要了解该业务系统主要功能,以及相应的业务运行逻辑架构;其次从业务运维人员处获得运维知识库,找到常见问题,以及梳理告警关键字,并建立错误关键字告警和反映业务健康度的黄金指标,如饱和度(满没满)、延时(耗时)和并发负载等内容;最后将该系统涉及的数据库、中间件、主机或容器等应用一并形成监控指标体系。

Log Analysis-4
Question: 在日志解析中,多行日志是什么?如何处理?
Answer: 多行日志指的是跨越多行的日志条目,解析时需要将其组合成单个条目以便于处理和分析。
zabbix-1
Question: 什么原因能导致Zabbix agent (active)类型监控项无法采集到任何数据?
Choices: A. 主机上没有创建Zabbix Agent接口; B. Zabbix Agent没有监听默认端口; C. Zabbix Agent配置文件的Server参数中的Zabbix Server IP不正确; D. Zabbix Agent配置文件中定义的主机名,与主机配置中的主机名不匹配
Answer: D

zabbix-2
Question: Zabbix Server服务启动失败,可以从哪里查找启动失败的详细信息?
Choices: A. Web Server log file; B. Zabbix Agent log file; C. Windows Event Log; D. Zabbix Server log file; E. Zabbix Server configuration file
Answer: D

zabbix-3
Question: Zabbix Server服务无法启动,日志中有“out of memory”报错,如何处理?
Choices: A. 增加Zabbix Server所在操作系统物理内存的值; B. 增加Zabbix Agent配置文件中的“BufferSize”参数的值; C. 增加Zabbix Proxy配置文件中的“CacheSize”参数的值; D. 增加Zabbix Server配置文件中的“CacheSize”参数的值
Answer: D
Oracle Database-0
Question: Evaluate these commands, which execute successfully:
CREATE SEQUENCE ord_seq
INCREMENT BY 1
START WITH 1
MAXVALUE 100000
CYCLE
CACHE 5000;
Which two statements are true about the ORD_ITEMS table and the ORD_SEQ sequence?
A. Any user inserting rows into table ORD_ITEMS must have been granted access to sequence ORD_SEQ.
B. Column ORD_NO gets the next number from sequence ORD_SEQ whenever a row is inserted into ORD_ITEMS and no explicit value is given for ORD_NO.
C. Sequence ORD_SEQ cycles back to 1 after every 5000 numbers and can cycle 20 times.
D. If sequence ORD_SEQ is dropped then the default value for column ORD_NO will be NULL for rows inserted into ORD_ITEMS.
E. Sequence ORD_SEQ is guaranteed not to generate duplicate numbers.
Answer: A,B
Oracle Database-1
Question: Which three statements are true about the Oracle join and ANSI join syntax?
A. The SQL:1999 compliant ANSI join syntax supports creation of a Cartesian product of two tables.
B. The Oracle join syntax performs less well than the SQL:1999 compliant ANSI join syntax.
C. The SQL:1999 compliant ANSI join syntax supports natural joins.
D. The Oracle join syntax performs better than the SQL:1999 compliant ANSI join syntax.
E. The Oracle join syntax supports creation of a Cartesian product of two tables.
F. The Oracle join syntax supports natural joins.
G. The Oracle join syntax only supports right outer joins.
Answer: C,D,E
Oracle Database-2
Question: Which three are true about a whole database backup?
A. It can be created only by using RMAN.
B. It is the only possible backup type for a database in NOARCHIVELOG mode.
C. It can be consistent.
D. It can consist of either backup sets or image copies.
E. It can be inconsistent.
F. It always includes all data files, the current control file, the server parameter file, and archived redo logs.
Answer: C,D,E
Oracle Database-3
Question: You plan to perform cross-platform PDB transport using XTTS. Which two are true?
A. A backup of the PDB must exist, taken using the BACKUP command with the TO PLATFORM clause.
B. The source PDB can be in MOUNT or OPEN state.
C. The source PDB must be in MOUNT state.
D. The source PDB must not be an application root.
E. Automatic conversion of endianness occurs.
F. The source and target platforms must have the same endianness.
Answer: A,F
Oracle Database-4
Question: Which three statements are true about an ORDER BY clause?
A. By default an ORDER BY clause sorts rows in descending order.
B. An ORDER BY clause can perform a linguistic sort.
C. An ORDER BY clause can perform a binary sort.
D. By default an ORDER BY clause sorts rows in ascending order.
E. An ORDER BY clause will always precede a HAVING clause if both are used in the same top-level query.
F. An ORDER BY clause always sorts NULL values last.
Answer: B,C,D
gtja-security-0
Question: 下面对国家秘密定级和范围的描述中,哪项不符合《保守国家秘密法》要求?
Choices:
A. 国家秘密及其密级的具体范围,由国家保密工作部门分别会同外交、公安、国家安全和其他中央有关机关规定。
B. 各级国家机关、单位对所产生的国家秘密事项,应当按照国家秘密及其密级具体范围的规定确定密级。
C. 对是否属于国家机密和属于何种密级不明确的事项,可由各单位自行参考国家要求确定和定级,然后报国家保密工作部门确定。
D. 对是否属于国家秘密和属于何种密级不明确的事项,由国家保密工作部门,省、自治区、直辖市的保密工作部门,省、自治区政府所在地的市和经国务院批准的较大的市的保密工作部门审定,或者由国家保密工作部门审定的机关确定。
Answer: C
Solution: 《保守国家秘密法》第二十条:机关、单位对是否属于国家秘密或者属于何种密级不明确或者有争议的,由国家保密行政管理部门或者省、自治区、直辖市保密行政管理部门确定。

gtja-security-1
Question: 为防范网络欺诈确保交易安全,网银系统首先要求用户安全登录,然后使用“智能卡短信认证”模式进行网上转账等交易。在此场景中用到下列哪些鉴别方法?
Choices:
A. 实体“所知”以及实体“所有”的鉴别方法。
B. 实体“所有”以及实体“特征”的鉴别方法。
C. 实体“所知”以及实体“特征”的鉴别方法。
D. 实体“所有”以及实体“行为”的鉴别方法。
Answer: A
Solution: 基于你所知道的,如知识、口令、密码;基于你所拥有的,如身份证、信用卡、钥匙、智能卡、令牌;基于你的个人特征(你是什么),如指纹、笔迹、声音、手型、脸型、视网膜、虹膜。

gtja-security-2
Question: 关于数据库恢复技术,下列说法不正确的是?
Choices:
A. 数据库恢复技术的实现主要依靠各种数据的冗余和恢复机制技术来解决,当数据库中数据被破坏时,可以利用冗余数据来进行修复。
B. 数据库管理员定期地将整个数据库或部分数据库文件备份到磁带或另一个磁盘上保存起来,是数据库恢复中采用的基本技术。
C. 日志文件在数据库恢复中起着非常重要的作用,可以用来进行事务故障恢复和系统故障恢复,并协助后备副本进行介质故障恢复。
D. 计算机系统发生故障导致数据未存储到固定存储器上,利用日志文件中故障发生前数据的值,将数据库恢复到故障发生前的完整状态,这对事务的操作称为提交。
Answer: D
Solution: D选项中这对事务的操作称为回滚。
Wired Network-0
Question: One of your coworkers has purchased an external Bluetooth trackpad to use with their tablet. They've turned to you, the company IT person, to install and configure it for them. What actions will you need to take? (Choose two.)
Choices:
A. Plug the device into a USB port.
B. Install drivers.
C. Put the device in pairing mode and open Bluetooth settings on the tablet, then tap the trackpad.
D. Go to Settings to configure speed and scrolling features.
Answer: C,D
Solution: Analyzing each choice:
A: Plug the device into a USB port - This is incorrect. Bluetooth devices do not need to be plugged into a USB port as they connect wirelessly.
B: Install drivers - This may or may not be necessary. Some devices will require drivers to be installed, but many modern devices and operating systems will automatically handle this.
C: Put the device in pairing mode and open Bluetooth settings on the tablet, then tap the trackpad - This is correct. To connect a Bluetooth device, you typically need to put it in pairing mode and then connect to it from the device you want to use it with.
D: Go to Settings to configure speed and scrolling features - This is also correct. Once the device is connected, you may need to adjust settings like speed and scrolling to suit the user's preferences.
Wired Network-1
Question: What types of networking will smart cameras often have built into them? (Choose two.)
Choices:
A. Bluetooth
B. IrDA
C. RJ-45
D. Wi-Fi
Answer: A,D
Solution:
A: Bluetooth - This is a possible answer. Bluetooth is a wireless technology that enables data exchange over short distances. It is commonly used in many devices, including smart cameras, for sharing data.
B: IrDA - This is not a likely answer. IrDA stands for Infrared Data Association, a group of device manufacturers that developed a standard for transmitting data via infrared light waves. While it was popular in the past, it has largely been replaced by Wi-Fi and Bluetooth in most modern devices, including smart cameras.
C: RJ-45 - This is not a likely answer. RJ-45 is a type of connector commonly used for Ethernet networking. While it's possible for a smart camera to have an RJ-45 port for wired networking, it's not as common as wireless options like Wi-Fi and Bluetooth, especially considering the flexibility and convenience of wireless connections.
D: Wi-Fi - This is a possible answer. Wi-Fi is a wireless networking technology that uses radio waves to provide wireless high-speed Internet and network connections. It is commonly built into smart cameras to allow them to connect to home networks and the internet.
Wired Network-2
Question: Which of the following statements about single-mode fiber-optic cable are true? (Choose all that apply.)
Choices:
A. Single-mode cables use an LED light source, whereas multimode cables use a laser.
B. Single-mode cables can span longer distances than multimode cables.
C. Single-mode cables have a smaller core filament than multimode cables.
D. Single-mode cables have a smaller bend radius than multimode, making them easier to install.
E. Single-mode fiber-optic cables require a ground, whereas multimode cables do not.
Answer: B,C
Solution: Single-mode cables have a smaller core filament and can span longer distances than multimode cables. Single-mode cables also use a laser light source, have a larger bend radius, and do not require a ground.
Wired Network-3
Question: Which of the following are modes of SXP peers? (Choose two.)
Choices:
A. Speaker
B. SGT-Reflector
C. Listener
D. SGT-Sender
Answer: A,C
Solution: Every SXP peer session has a speaker and a listener. A speaker sends the mappings of IP addresses to SGTs. The listener receives those updates and records them. A peer can be configured to be both a speaker and a listener for the same peer if both support it. It may have numerous peers as well.
Wired Network-4
Question: Router R1 currently supports IPv4, routing packets in and out all its interfaces. R1's configuration needs to be migrated to support dual-stack operation, routing both IPv4 and IPv6. Which of the following tasks must be performed before the router can also support routing IPv6 packets? (Choose two answers.)
Choices:
A. Enable IPv6 on each interface using an ipv6 address interface subcommand.
B. Enable support for both versions with the ip versions 4 6 global command.
C. Additionally enable IPv6 routing using the ipv6 unicast-routing global command.
D. Migrate to dual-stack routing using the ip routing dual-stack global command.
Answer: A,C
Solution: Of the four answers, the two correct answers show the minimal required configuration to support IPv6 on a Cisco router: enabling IPv6 routing (ipv6 unicast-routing) and enabling IPv6 on each interface, typically by adding a unicast address to each interface (ipv6 address ...). The two incorrect answers list nonexistent commands.
OpsEval Dataset
Introduction
The OpsEval dataset represents a pioneering effort in the evaluation of Artificial Intelligence for IT Operations (AIOps), focusing on the application of Large Language Models (LLMs) within this domain. In an era where IT operations are increasingly reliant on AI technologies for automation and efficiency, understanding the performance of LLMs in operational tasks becomes crucial. OpsEval offers a comprehensive task-oriented benchmark specifically designed for assessing LLMs in various crucial IT Ops scenarios.
This dataset is motivated by the emerging trend of utilizing AI in automated IT operations, as predicted by Gartner, and the remarkable capabilities exhibited by LLMs in NLP-related tasks. OpsEval aims to bridge the gap in evaluating these models' performance in AIOps tasks, including root cause analysis of failures, generation of operations and maintenance scripts, and summarizing alert information.
Highlights
- Comprehensive Evaluation: OpsEval includes 7,184 multiple-choice questions and 1,736 question-answering (QA) items, available in both English and Chinese, making it one of the most extensive benchmarks in the AIOps domain.
- Task-Oriented Design: The benchmark is tailored to assess LLMs' proficiency across different crucial scenarios and ability levels, offering a nuanced view of model performance in operational contexts.
- Expert-Reviewed: To ensure the reliability of our evaluation, dozens of domain experts have manually reviewed our questions, providing a solid foundation for the benchmark's credibility.
- Open-Sourced and Dynamic Leaderboard: We have open-sourced 20% of the test QA to facilitate preliminary evaluations by researchers. An online leaderboard, updated in real-time, captures the performance of emerging LLMs, ensuring the benchmark remains current and relevant.
Dataset Structure
Here is a brief overview of the dataset structure:
- /dev/: Examples for few-shot in-context learning.
- /test/: Test sets of OpsEval.
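For a quick look at the raw files, a minimal sketch (assuming the per-task JSON files under data/dev/ and data/test/ are JSON arrays of records; check the actual layout before relying on this):

```python
import json
from huggingface_hub import hf_hub_download

# Minimal sketch: download one dev file from the Hub and inspect a record.
# "data/dev/China Mobile Zhejiang.json" is one example file in this repository;
# other task files under data/dev/ and data/test/ follow the same pattern.
path = hf_hub_download(
    repo_id="Junetheriver/OpsEval",
    repo_type="dataset",
    filename="data/dev/China Mobile Zhejiang.json",
)
with open(path, encoding="utf-8") as f:
    records = json.load(f)  # assumption: each file is a single JSON array

print(records[0]["question"])
print(records[0]["answer"])
```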
Dataset Information
Dataset Name | Open-Sourced Size |
---|---|
Wired Network | 1563 |
Oracle Database | 395 |
5G Communication | 349 |
Log Analysis | 310 |
Website
For evaluation results on the full OpsEval dataset, please check out the OpsEval Leaderboard on our official website.
Paper
For a detailed description of the dataset, its structure, and its applications, please refer to our paper available at: OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models
Citation
Please use the following citation when referencing the OpsEval dataset in your research:
@misc{liu2024opseval,
title={OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models},
author={Yuhe Liu and Changhua Pei and Longlong Xu and Bohan Chen and Mingze Sun and Zhirui Zhang and Yongqian Sun and Shenglin Zhang and Kun Wang and Haiming Zhang and Jianhui Li and Gaogang Xie and Xidao Wen and Xiaohui Nie and Minghua Ma and Dan Pei},
year={2024},
eprint={2310.07637},
archivePrefix={arXiv},
primaryClass={cs.AI}
}