Can you explain the critical issue EX83 for Oracle Exadata?
The EX83 issue applies to storage servers upgraded to certain Exadata 23.1 versions while using the Database In-Memory option and Exadata In-Memory Columnar Caching. Bug 35938811, Bug 36073771, Bug 36073739 - After storage servers are upgraded to certain Exadata 23.1 versions, wrong results may occur when the Exadata Smart Scan feature In-Memory Columnar Caching is enabled and In-Memory Columnar Caching or Storage Indexes are used to produce query results. Exadata In-Memory Columnar Caching is used when the Database In-Memory option is enabled. Fixed in Exadata 23.1.9. See Document 2986385.1 for details. Revised 2023-Dec-09.
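Most of these advisories state applicability as inclusive version ranges such as "23.1.0 through 23.1.8". Comparing dotted Exadata version strings lexically gives wrong answers (for example, "21.2.10" sorts before "21.2.9" as a string), so a numeric tuple comparison is needed. A minimal sketch; the version strings below are illustrative:

    # Hedged sketch: numeric comparison of dotted Exadata version strings.
    def vtuple(v):
        # "21.2.10" -> (21, 2, 10); Python compares tuples element by element.
        return tuple(int(part) for part in v.split("."))

    def in_affected_range(version, low, high):
        # Inclusive range test, matching the advisories' "X through Y" wording.
        return vtuple(low) <= vtuple(version) <= vtuple(high)

    # Example: EX84 applies to storage servers on 23.1.0 through 23.1.8.
    print(in_affected_range("23.1.7", "23.1.0", "23.1.8"))   # True
    print(in_affected_range("23.1.9", "23.1.0", "23.1.8"))   # False (fixed release)
    print(in_affected_range("21.2.10", "21.2.0", "21.2.9"))  # False; a string compare would wrongly say True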
Can you explain the critical issue EX82 for Oracle Exadata?
The EX82 issue applies to database servers (VM guest only) running Exadata 23.1.0 through 23.1.6, or lower than 22.1.15, on systems with fewer than five storage servers. Bug 35909280, Bug 35909317 - High CPU utilization in a VM guest after the number of CPUs has been reduced online causes slowness in the systemd-udevd service, which can lead to missing quorum disk device symbolic links on Exadata systems with fewer than five storage servers. Missing quorum disk device symbolic links prevent ASM disks from coming back online. Fixed in Exadata 23.1.7 and 22.1.16. See Document 2984504.1 for details. Updated 2023-Oct-31.
Can you explain the critical issue EX81 for Oracle Exadata?
The EX81 issue applies to database servers (KVM guest) on RoCE-based systems with irqbalance running and KVMHOST running Exadata 19.3.0 through 22.1.14. Bug 35719844, Bug 35703260 - KVM guests incorrectly have the irqbalance service enabled and running, which causes excessive interrupt remapping and conflicts with the proper interrupt remapping performed by the cellirqbalance service. Excessive interrupt remapping in the KVM guest can introduce a race condition in qemu in the KVM host that can result in an RDS connection hang across the entire cluster. Fixed in Exadata 23.1.6 and 22.1.15 for KVM HOST. See Document 2974522.1 for details. Updated 2023-Oct-02.
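A hedged way to check for the EX81 condition from inside a KVM guest is to ask systemd whether irqbalance is enabled and running. This is a status check only; the actual remediation steps are in Document 2974522.1:

    # Hedged sketch: report irqbalance service state inside an Exadata KVM guest.
    # Per EX81, irqbalance should not be running where cellirqbalance handles
    # interrupt placement.
    import subprocess

    def service_state(name):
        # systemctl exits non-zero for disabled/inactive, so read stdout
        # instead of relying on the return code.
        enabled = subprocess.run(["systemctl", "is-enabled", name],
                                 capture_output=True, text=True).stdout.strip()
        active = subprocess.run(["systemctl", "is-active", name],
                                capture_output=True, text=True).stdout.strip()
        return enabled, active

    enabled, active = service_state("irqbalance")
    print(f"irqbalance: enabled={enabled} active={active}")
    if active == "active":
        print("irqbalance is running; review Document 2974522.1 before changing anything.")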
Can you explain the critical issue EX80 for Oracle Exadata?
The EX80 issue applies to X8/X8M High Capacity (HC) storage servers, and older generation storage servers using 14TB disk drives as replacement drives. Bug 35241309 - Exadata X8/X8M High Capacity (HC) storage servers may experience high rates of disk drive failure of certain 14TB disk drives. Fixed in Exadata 23.1.6 and 22.1.15. See Document 2974254.1 for details. Updated 2023-Sep-16.
Can you explain the critical issue EX79 for Oracle Exadata?
The EX79 issue applies to X7-2 and X8-2/X8M-2 database servers updated to 23.1.0 through 23.1.2, or 22.1.8 through 22.1.11, or 21.2.21 through 21.2.24. Bug 35392787 - After Exadata System Software is updated on Exadata X7-2 or X8-2/X8M-2 database servers, Ethernet interfaces eth1 and eth2 are not configured during operating system startup due to a network card firmware update, making the client network unavailable. Fixed in Exadata 23.1.3 and 22.1.12. See Document 2957113.1 for details. Updated 2023-Jun-22.
Can you explain the critical issue EX78 for Oracle Exadata?
The EX78 issue applies to database servers (non-virtual or KVMHOST) on RoCE-based systems. Bug 34425639 - A race condition during terminated database process cleanup on RoCE-based (i.e. X8M and X9M) Exadata Database Service and on-premises Exadata systems may cause excessively high global cache (GC) waits, resulting in a database-wide hang. Fixed in Exadata 22.1.2 and 21.2.15. See Document 2910156.1 for details. Updated 2022-Nov-18.
Can you explain the critical issue EX77 for Oracle Exadata?
The EX77 issue applies to storage servers running Exadata 21.2.0 through 21.2.14, or 22.1.0 through 22.1.1, with the Persistent Storage Index feature enabled. Bug 34356665 - The Exadata Persistent Storage Index feature may cause wrong results, or repeated Exadata Offload Server process crashes leading to disabled smart scans. This issue can occur when using Transparent Data Encryption (TDE) and performing a tablespace or table rekey operation, or when transitioning to begin using TDE. Fixed in Exadata 22.1.2 and 21.2.15. See Document 2892410.1 for details. Updated 2022-Aug-29.
Can you explain the critical issue EX25 for Oracle Exadata?
The EX25 issue applies to Exadata Storage Server X4 with certain 1.2TB High Performance disk drives running Exadata Storage Server 12.1.2.2.1 or lower, or 12.1.2.3.0. Bug 22139731 - Older High Performance disk drives in certain Exadata storage servers may experience high rates of failure, unrecoverable media errors, confinement, or higher read latencies. Fixed in Exadata 12.1.2.3.1 and 12.1.2.2.2. See Document 2073916.1 for additional details. Updated 2015-Dec-10.
Can you explain the critical issue EX24 for Oracle Exadata?
The EX24 issue applies to Exadata Storage Server 12.1.2.1.0 and 12.1.2.1.1. After replacing a failed system disk (disk 0 or disk 1), the new disk is not correctly configured, leaving the system vulnerable to failure of the other system disk. Fixed in Exadata 12.1.2.1.2. See Document 2032402.1 for additional details. Updated 2015-Jul-17.
Can you explain the critical issue EX23 for Oracle Exadata?
The EX23 issue applies to Exadata Storage Server 12.1.2.1.0 and 12.1.2.1.1. Bug 21174310 - Wrong results, ORA-1438 errors, or other internal errors are possible from smart scan offloaded queries against HCC or OLTP compressed tables stored on Exadata storage for databases upgraded from Oracle Database 11.2 to 12.1. Fixed in Exadata 12.1.2.1.2. See Document 2032464.1 for additional details. Updated 2015-Jul-17.
Can you explain the critical issue EX22 for Oracle Exadata?
The EX22 issue applies to Exadata Storage Server and database server 12.1.2.1.0. Bug 20509822 - "ntpd -x" does not slew leap seconds. Fixed in Exadata 12.1.2.1.1. See Document 1986986.1 for additional details. Updated 2015-Jun-25.
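EX22 concerns systems that rely on "ntpd -x" (slewing mode). Whether a running ntpd was started with -x can be read from its command line in the process table. A minimal sketch, assuming a standard Linux procps environment:

    # Hedged sketch: check whether ntpd is running with the -x (slew) option.
    import subprocess

    out = subprocess.run(["ps", "-eo", "args"], capture_output=True, text=True).stdout
    ntpd_lines = [l for l in out.splitlines() if l.lstrip().startswith("ntpd")]
    for line in ntpd_lines:
        # A simple substring test; good enough for ntpd's usual short flag list.
        slewing = " -x" in line
        print(f"{line.strip()}  -> slewing={'yes' if slewing else 'no'}")
    if not ntpd_lines:
        print("ntpd not found in process table")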
Can you explain the critical issue EX21 for Oracle Exadata?
The EX21 issue applies to Exadata Storage Server 12.1.1.1.2 and 12.1.2.1.1. Bug 20830449 - In rare circumstances, when a disk media error occurs while synchronous I/O is performed, on-disk data may become corrupt. Fixed in Exadata 12.1.2.1.2. See Document 2009841.1 for additional details. Updated 2015-May-19.
Can you explain the critical issue DB53 for Oracle Exadata?
The DB53 issue applies to Exadata systems with fewer than 5 storage servers and high redundancy disk groups with no quorum disks running Grid Infrastructure 19.23. Bug 36427106 - Rolling storage server maintenance, including storage server software update, cannot be performed because grid disks cannot be safely deactivated when the Grid Infrastructure version is 19.23 (i.e., the Grid Infrastructure home is running the April 2024 Release Update), a high redundancy ASM disk group exists, and there are fewer than 5 storage servers (i.e., the disk group has fewer than 5 failure groups). See Document 3023891.1 for details. Updated 2024-May-23.
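The standard safety check before rolling storage maintenance is whether every grid disk reports asmDeactivationOutcome=Yes, which is exactly the check DB53 says cannot succeed in the affected configuration. A hedged sketch that shells out to CellCLI on one storage server (a dcli fan-out across cells would be site-specific):

    # Hedged sketch: list grid disks whose deactivation would be unsafe.
    # asmDeactivationOutcome=Yes means the grid disk can be taken offline
    # without dropping disk group redundancy.
    import subprocess

    cmd = ["cellcli", "-e",
           "list griddisk attributes name,asmModeStatus,asmDeactivationOutcome"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    unsafe = [line for line in out.splitlines()
              if line.strip() and not line.strip().endswith("Yes")]
    print("grid disks not safe to deactivate:" if unsafe
          else "all grid disks safe to deactivate")
    for line in unsafe:
        print(" ", line.strip())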
Can you explain the critical issue DB52 for Oracle Exadata?
The DB52 issue applies to Grid Infrastructure and Database 19, 18, and 12.2.0.1. Bug 35784008 - A database process stuck in an uninterruptible state (D state) when instance eviction and reconfiguration occurs can resume and issue database I/O after the instance is evicted, which can cause database corruption. This issue only affects Exadata systems. Fixed in 19.21.0.0.231017, 18.24.0.0.231017, and 12.2.0.1.231017. See Document 2984495.1 for details. Updated 2023-Oct-31.
Can you explain the critical issue DB51 for Oracle Exadata?
The DB51 issue applies to Autonomous Health Framework (AHF) / EXAchk 23.4.0. Bug 35411465 - EXAchk may delete the ORACLE_HOME directory on remote nodes in the cluster when the Oracle operating system user uses a login banner that lists ORACLE_HOME. Fixed in AHF 23.4.2. See Document 2951642.1 for details. Updated 2023-Jun-08.
Can you explain the critical issue DB50 for Oracle Exadata?
The DB50 issue applies to Grid Infrastructure and Database 19 and 21. Issue #1 - Bug 33610957, Bug 34534868 - During operating system startup, Oracle Clusterware may not start because the CSS daemon (OCSSD process ocssd.bin) cannot be set to run with real-time scheduler priority. Issue #1 is fixed in 19.17.0.0.221018 and 21.9.0.0.230117; Issue #2 requires Patch 34672698. See Document 2903663.1 for additional details. Updated 2022-Oct-21.
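DB50's symptom is ocssd.bin failing to obtain real-time priority. Whether it is currently running in a real-time scheduling class (RR or FF rather than the normal TS) can be read from the process table. A minimal sketch, assuming standard procps column names:

    # Hedged sketch: report the scheduling class of ocssd.bin.
    # cls is RR or FF for real-time scheduling, TS for normal time-sharing.
    import subprocess

    out = subprocess.run(["ps", "-eo", "cls,comm"],
                         capture_output=True, text=True).stdout
    rows = [l.split(None, 1) for l in out.splitlines()[1:] if l.strip()]
    css = [(cls, comm) for cls, comm in rows if comm.strip() == "ocssd.bin"]
    if not css:
        print("ocssd.bin not running")
    for cls, comm in css:
        print(f"{comm.strip()}: class={cls}",
              "(real-time)" if cls in ("RR", "FF") else "(NOT real-time - see DB50)")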
Can you explain the critical issue DB49 for Oracle Exadata?
The DB49 issue applies to Grid Infrastructure 19.10.0.0.210119, 18.13.0.0.210119, and 12.2.0.1.210119. Bug 30118419 - When a fabric switch (RoCE or InfiniBand) restarts, one or more Clusterware node evictions may occur. A fabric switch restart can be due to a manual reboot, or part of a switch software update. Fixed in 19.11.0.0.210420 and 18.14.0.0.210420. See Document 2772815.1 for additional details. Updated 2021-May-03.
Can you explain the critical issue DB48 for Oracle Exadata?
The DB48 issue applies to Grid Infrastructure 19.10.0.0.210119. Bug 30227028 - Clusterware reconfiguration may hang on one or more nodes in a cluster with three or more nodes when one node is shut down abruptly or encounters a node eviction, which can cause a cluster-wide outage. Fixed in 19.11.0.0.210420. See Document 2770035.1 for additional details. Updated 2021-Apr-27.
Can you explain the critical issue DB47 for Oracle Exadata?
The DB47 issue applies to Grid Infrastructure and Database 19, 18, 12.2.0.1, 12.1.0.2, and 11.2.0.4. Bug 31228670 - A lost write may occur if, while a database instance remains up, a disk is taken offline (due to planned maintenance or an unplanned event), brought back online, and then again taken offline due to a write error. Fixed in the October 2020 Release Updates (RUs) for 19, 18, 12.2.0.1, and 12.1.0.2. See Document 2680856.1 for additional details. Updated 2020-Jun-15.
Can you explain the critical issue DB46 for Oracle Exadata?
The DB46 issue applies to Grid Infrastructure and Database 18, 12.2.0.1, and 12.1.0.2. Bug 26950644 - A race condition exists where the status of an ASM disk is OFFLINE in a database instance but ONLINE in the ASM instance. This situation can lead to database file or online redo log corruption caused by a lost write. Fixed in 18.10.0.0.200414. See Document 2670069.1 for additional details. Updated 2020-May-13.
Can you explain the critical issue DB45 for Oracle Exadata?
The DB45 issue applies to Database 12.1.0.2 with Grid Infrastructure 12.2.0.1 or higher. Bug 23481669 - When using Grid Infrastructure 12.2.0.1 or higher, during rolling storage server software updates or other storage server maintenance that takes cell services offline, 12.1.0.2 databases may experience ASM disk online operation failure due to ORA-15333, or smart scan failure due to ORA-600 [kcfis_translation_use_blockio:volume lookup] or ORA-27607. Fixed in 12.1.0.2.200714. See Document 2654124.1 for additional details. Updated 2020-Apr-16.
Can you explain the critical issue DB44 for Oracle Exadata?
The DB44 issue applies to Grid Infrastructure 19, 18.4.0.0.181016 or later 18, and 12.2.0.1.181016 or later 12.2.0.1. Bug 30655657 - Corruption may occur due to lost writes when files are stored in an ACFS file system that contains ACFS snapshots (read-only (RO) or read-write (RW)). This issue causes database corruption when database files (data files, archived log files) are stored in ACFS. This issue does not affect database control files or redo log files stored in ACFS. Fixed in the January 2020 Release Updates (RUs) and Release Update Revisions (RURs). See Document 2622600.1 for additional details. Updated 2020-Jan-09.
Can you explain the critical issue DB43 for Oracle Exadata?
The DB43 issue applies to Grid Infrastructure 19. Bug 30363621 - A kernel panic may occur on a database server running Grid Infrastructure 19c when ACFS file systems are used. Fixed in 19.6.0.0.200114. See Document 2608246.1 for additional details. Updated 2019-Nov-07.
Can you explain the critical issue EX76 for Oracle Exadata?
The EX76 issue applies to X8-8 InfiniBand-based database servers running Exadata 21.2.0 through 21.2.12. Bug 34146716 - Exadata X8-8 InfiniBand-based database servers will experience node eviction when an InfiniBand switch is rebooted, including during InfiniBand switch software update. Fixed in Exadata 21.2.13. See Document 2875747.1 for details. Updated 2022-Jun-10.
Can you explain the critical issue EX75 for Oracle Exadata?
The EX75 issue applies to database servers (non-virtual or VM guest) running Exadata 21.2.10, 21.2.9, 20.1.20, or 20.1.19. Bug 34034299 - On Exadata Database Service and on-premises Exadata systems, database instances may fail to start with error ORA-600 [pesldl03_MMap: errno 1 errmsg Operation not permitted] when PL/SQL compiled in NATIVE mode attempts execution during startup. This issue typically occurs when a database has all PL/SQL compiled in NATIVE mode and the Exadata software on the database server is updated. Fixed in Exadata 21.2.11 and 20.1.21. See Document 2875248.1 for details. Updated 2022-Jun-08.
Can you explain the critical issue EX74 for Oracle Exadata?
The EX74 issue applies to database server KVMHOST running Exadata 21.2.0 through 21.2.4, or 20.1.0 through 20.1.12. Bug 33089236 (and related tracking bugs 33089240 and 33089241) - On virtual KVM-based Exadata systems, including on-premises Exadata and Exadata Database Service X8M and X9M, a VM guest may reboot when the number of online CPUs is altered. Fixed in Exadata 21.2.5 and 20.1.13. See Document 2864281.1 for details. Updated 2022-May-11.
Can you explain the critical issue EX73 for Oracle Exadata?
The EX73 issue applies to database servers and storage servers on X7 and later hardware running ILOM version 5. Bug 33580624 - On X7 and later hardware running ILOM version 5, when a patchmgr rolling update is performed and the ILOM firmware is upgraded to a higher version during the update, pre-staging of the new ILOM software may fail on multiple servers, causing unexpected simultaneous server reboots. Fixed in Exadata 21.2.8. See Document 2849592.1 for details. Updated 2022-Feb-17.
Can you explain the critical issue EX72 for Oracle Exadata?
The EX72 issue applies to database servers on RoCE-based systems running Exadata 21.2.2 through 21.2.7. Bug 33703438 - A database node (a physical server or a virtual KVM guest) on a RoCE-based (i.e. X8M or X9M) Exadata system does not automatically restart if it is evicted by Oracle Clusterware Cluster Synchronization Services (CSS). The evicted node will remain unavailable until manual action is taken to reset the node. Fixed in Exadata 21.2.8. See Document 2833252.1 for details. Updated 2022-Jan-14.
Can you explain the critical issue EX71 for Oracle Exadata?
The EX71 issue applies to 8-socket database servers running Exadata 21.2.0 through 21.2.6, 20.1.0 through 20.1.15, or 19.3.0 through 19.3.20. Bug 33532389 - Process CPU assignment on 8-socket database servers is unevenly distributed across NUMA nodes, which causes CPU utilization imbalance. On systems with moderate to heavy load, a subset of CPUs run at or close to 100% utilization, causing system instability and limiting overall system performance and scalability. Fixed in Exadata 21.2.7 and 20.1.16. See Document 2828878.1 for details. Updated 2021-Dec-20.
Can you explain the critical issue EX70 for Oracle Exadata?
The EX70 issue applies to storage servers running Exadata 21.2.0 through 21.2.4 with flashCacheMode=WriteThrough. Bug 33380819 - Storage servers running Exadata 21.2.0 through 21.2.4 with flash cache configured in WriteThrough mode do not use the flash cache, which results in hard disks servicing all database I/O operations, causing severe performance degradation and possible instance crashes. Fixed in Exadata 21.2.5. See Document 2812622.1 for details. Updated 2021-Oct-06.
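EX70 only applies when flashCacheMode is WriteThrough, which is a cell attribute readable with CellCLI. A hedged sketch run locally on one storage server:

    # Hedged sketch: read this storage server's flash cache mode.
    import subprocess

    out = subprocess.run(
        ["cellcli", "-e", "list cell attributes name,flashCacheMode"],
        capture_output=True, text=True).stdout.strip()
    print(out)
    if "WriteThrough" in out:
        print("WriteThrough mode: check the Exadata version against EX70 "
              "(21.2.0 through 21.2.4).")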
Can you explain the critical issue EX69 for Oracle Exadata?
The EX69 issue applies to database servers (non-virtual or KVMHOST) and storage servers running Exadata 21.2.0 through 21.2.2, or 20.1.0 through 20.1.12, or 19.3.0 through 19.3.20. Bug 33042327 - Soft-offlined memory pages may be reused while marked poisoned when DIMM correctable errors (CE) occur. Exadata systems affected by this issue may experience node crashes, corruption, instance crashes, process kills, or internal errors. Fixed in Exadata 21.2.3 and 20.1.13. See Document 2803421.1 for details. Updated 2021-Sep-21.
Can you explain the critical issue EX68 for Oracle Exadata?
The EX68 issue applies to High Capacity (HC) storage servers being updated from Exadata <= 19.2 to Exadata >= 19.3.6. Bug 32843838 - During or immediately after a software update from Exadata 19.2 or lower to Exadata 19.3.6 or higher on High Capacity (HC) storage servers, the database may experience high write I/O latency, which may cause severe database performance degradation, a database process crash, or a database instance crash when a process like LGWR is affected. Fixed in Exadata 19.3.20, 20.1.12, and 21.2.2. See Document 2799270.1 for details. Updated 2021-Aug-13.
Can you explain the critical issue EX67 for Oracle Exadata?
The EX67 issue applies to database servers on RoCE-based systems running Exadata 21.2.0 through 21.2.2, 20.1.0 through 20.1.12, or 19.3.10 through 19.3.20. Bug 32404239 - Database instances may crash or database processes may fail due to an I/O buffer memory corruption. Fixed in Exadata 21.2.3 and 20.1.13. See Document 2748735.1 for details. Updated 2021-Feb-01.
Can you explain the critical issue EX66 for Oracle Exadata?
The EX66 issue applies to X8M, X8, X7, X6, or X5 database servers updated to the 2020-Sep, 2020-Oct, or 2020-Nov Exadata releases. Bug 32218558 - In rare circumstances, the disk controller on X8M, X8, X7, X6, or X5 database servers may fail during the disk controller firmware upgrade when Exadata software is updated. Fixed in Exadata 20.1.5, 19.3.15, 19.2.21, and 18.1.32. See Document 2737023.1 for details. Updated 2020-Dec-15.
Can you explain the critical issue EX65 for Oracle Exadata?
The EX65 issue applies to X7 and X8 storage servers running Exadata 19.2.0 through 19.2.4. Bug 31898529 - Use of the diagnostic collection script sundiag.sh may result in flash card failure on Exadata X7 and X8 storage servers, which can cause disk group dismount and data loss, particularly when invoked using dcli across multiple storage servers. Fixed in Exadata 19.2.5. See Document 2720523.1 for details. Updated 2020-Oct-16.
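Given EX65, it is worth confirming a storage server is not on 19.2.0 through 19.2.4 before collecting sundiag output there; imageinfo -ver prints the active Exadata version on a storage server. A sketch reusing the range-test idea from the first example:

    # Hedged sketch: warn before running sundiag.sh on an EX65-affected cell version.
    import subprocess

    def vtuple(v):
        # Keep numeric dot-separated fields, e.g. "19.2.4.0.0.190709" -> (19, 2, 4, ...).
        return tuple(int(p) for p in v.split(".") if p.isdigit())

    ver = subprocess.run(["imageinfo", "-ver"],
                         capture_output=True, text=True).stdout.strip()
    release = vtuple(ver)[:3]                 # e.g. (19, 2, 4)
    if vtuple("19.2.0") <= release <= vtuple("19.2.4"):
        print(f"{ver}: in the EX65-affected range; do not run sundiag.sh via dcli "
              "(see Document 2720523.1)")
    else:
        print(f"{ver}: outside the EX65-affected range")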
Can you explain the critical issue EX64 for Oracle Exadata?
The EX64 issue applies to database servers and storage servers running Exadata 20.1.0 through 20.1.2, 19.3.0 through 19.3.12, 19.2.10 through 19.2.18, or 18.1.24 through 18.1.30. Bug 31784659, Bug 31648140 - RDS connection establishment may be slow or hang, resulting in node eviction, slow or hung Clusterware/ASM startup, offline ASM disks, and slow or hung databases. RDS connection establishment occurs during normal activity in many circumstances, including after restart of a fabric switch, database server, or storage server. Fixed in Exadata 20.1.3, 19.3.13, 19.2.19, and 18.1.31. See Document 2719295.1 for details. Updated 2020-Oct-12.
Can you explain the critical issue EX63 for Oracle Exadata?
The EX63 issue applies to database server dom0 running Exadata 19.3.0 through 19.3.6, 19.2.0 through 19.2.12, 19.1.0 through 19.1.2, 18.1.5 through 18.1.26, or 12.2.1.1.7 through 12.2.1.1.8, or domU running Exadata 18.1.31 or lower. Bug 30131722, Bug 31003920, Bug 31477035 - On virtual Xen-based Exadata systems, IRQ migration from one CPU to another in a domU can result in an RDS traffic stall in one or more domUs on the same database server, which leads to an ASM or database instance outage, or node eviction. Fixed in Exadata 19.3.7, 19.2.13, and 18.1.27 for dom0. Fixed in uptrack-updates 20201030 for domU. See Document 2680290.1 for details. Updated 2020-Jun-15.
Can you explain the critical issue EX62 for Oracle Exadata?
The EX62 issue applies to database servers running Trace File Analyzer (TFA) 18.3.0 through 19.3.2. Bug 30767693 - Trace File Analyzer (TFA), part of Autonomous Health Framework (AHF), uses excessive kernel memory on database servers, which leads to severe performance degradation or database server instability. Fixed in TFA/AHF 20.1. See Document 2672539.1 for details. Updated 2020-May-20.
Can you explain the critical issue EX61 for Oracle Exadata?
The EX61 issue applies to Exadata X8M storage servers running 19.3.0 through 19.3.6. Bug 31139162 - Exadata X8M storage servers experience low-memory or out-of-memory conditions, which leads to severe performance degradation or cluster-wide instability. Fixed in Exadata 19.3.7. See Document 2668461.1 for details. Updated 2020-May-08.
Can you explain the critical issue EX60 for Oracle Exadata?
The EX60 issue applies to database server dom0 running Exadata 18.1.13 through 18.1.23, 19.1.0 through 19.2.9, or 19.3.0 through 19.3.3. Bug 30580836 - On a virtualized Xen-based Exadata system, an RDS connection reset in domU results in a dom0 kernel memory leak, causing poor dom0 performance, dom0 out-of-memory conditions, or domU node eviction. Fixed in Exadata 19.3.4, 19.2.10, and 18.1.24. See Document 2636194.1 for details. Updated 2020-Feb-05.
Can you explain the critical issue EX59 for Oracle Exadata?
The EX59 issue applies to database server dom0 running Exadata 18.1.11 or lower. Bug 25571450 - A virtualized Exadata environment, including Exadata Cloud Service, may experience performance slowness or lost InfiniBand network connectivity resulting in Clusterware node eviction. Fixed in Exadata 18.1.12. See Document 2631700.1 for details. Updated 2020-Jan-22.
Can you explain the critical issue EX58 for Oracle Exadata?
The EX58 issue applies to database servers and storage servers running Exadata 18.1.19 through 18.1.22, or 19.2.5 through 19.2.8. Bug 30391350 - After a database server reboot, databases open slowly due to excessive LGWR waits caused by slow RDS connection formation. Excessive LGWR waits may cause slowness on instances on other nodes in the cluster. Slowness lasting up to 20 minutes has been observed on Exadata quarter rack systems experiencing this issue. Fixed in Exadata 19.2.9 and 18.1.23. See Document 2622049.1 for details. Updated 2019-Dec-19.
Can you explain the critical issue EX57 for Oracle Exadata?
The EX57 issue applies to database servers and storage servers running Exadata 19.3.0. Bug 30388717 - Heavy RDMA traffic on the interconnect network (on either RoCE-based or InfiniBand-based systems) may cause a database server or storage server kernel to crash. Fixed in Exadata 19.3.1. See Document 2604830.1 for details. Updated 2019-Oct-28.
Can you explain the critical issue EX56 for Oracle Exadata?
The EX56 issue applies to database servers running Exadata 18.1.16, 18.1.17, 18.1.18, 19.2.2, 19.2.3, or 19.2.4. Bug 29797007 - Database servers (non-OVM, domU, or dom0) running an Exadata release with kernel version 4.1.12-124.26.12 (i.e. Exadata 18.1.16, 18.1.17, 18.1.18, 19.2.2, 19.2.3, and 19.2.4) will incorrectly detect corruption and fail to mount an ext4 file system that has the meta_bg feature set. Fixed in Exadata 18.1.19 and 19.2.5. See Document 2570339.1 for details. Updated 2019-Aug-26.
Can you explain the critical issue EX55 for Oracle Exadata?
The EX55 issue applies to database servers running Exadata 18.1.16 or 19.2.2 using InfiniBand active/active bonding. Bug 29871722 - A daily cron job scheduled for approximately 3 AM on database servers, which runs the /opt/oracle.cellos/genCellRoute.sh script, causes RDS connection resets to storage servers. An RDS connection reset can cause a diskmon split-brain with regard to storage server connectivity, which can lead to database server eviction and reboot to resolve the diskmon split-brain condition. Fixed in Exadata 18.1.17 and 19.2.3. See Document 2560543.1 for details. Updated 2019-Jul-01.
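EX55's trigger is a cron entry invoking /opt/oracle.cellos/genCellRoute.sh, so scanning the usual cron locations is a cheap way to see whether a database server carries the job. A minimal sketch, assuming standard cron file locations:

    # Hedged sketch: look for the genCellRoute.sh cron job described in EX55.
    import glob

    candidates = (["/etc/crontab"] + glob.glob("/etc/cron.d/*")
                  + glob.glob("/var/spool/cron/*"))
    hits = []
    for path in candidates:
        try:
            with open(path) as f:
                hits += [f"{path}: {line.strip()}" for line in f
                         if "genCellRoute.sh" in line]
        except OSError:
            pass  # unreadable or vanished file; skip
    print("\n".join(hits) if hits else "no genCellRoute.sh cron entry found")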
Can you explain the critical issue EX54 for Oracle Exadata?
The EX54 issue applies to High Capacity storage servers with 8 TB or 10 TB disk drives updated to Exadata 12.2.1.1.8, or to version >= 18.1.6. Bug 29645783 - Shortly after a software upgrade, hard disks may fail on High Capacity storage servers with 8 TB or 10 TB disk drives. If this issue occurs on partner disk drives on multiple storage servers when performing a non-rolling upgrade, then one or more ASM disk groups may be forcibly dismounted and lost, requiring restore from backup. Fixed in Exadata 19.2.2.0.0.190524 and 18.1.16.0.0.190524. See Document 2546014.1 for details. Updated 2019-May-28.
Can you explain the critical issue EX53 for Oracle Exadata?
The EX53 issue applies to High Capacity storage servers running Exadata 18.1.15 or 19.2.1. Bug 29750186 - A High Capacity storage server may incorrectly confine disks when disk utilization exceeds 90%, which is likely to occur during automatic hard disk scrub. When this issue occurs on system disks on multiple partner storage servers at about the same time, one or more ASM disk groups may dismount, causing database or Clusterware shutdown. Fixed in Exadata 19.2.1.0.0.190510 and 18.1.15.0.0.190510. See Document 2541851.1 for details. Updated 2019-May-13.
Can you explain the critical issue EX52 for Oracle Exadata?
The EX52 issue applies to storage servers running Exadata <= 12.2 rolling-updated to >= 18 while Grid Infrastructure is >= 18. Bug 29504682 - During or shortly after performing a rolling storage server update, ASM or the database crashes when a process performing I/O aborts with error ORA-7445 [skgxpvrpc]. Fixed in Exadata 18.1 and Grid Infrastructure 18. See Document 2527592.1 for details. Updated 2019-Apr-05.
Can you explain the critical issue EX51 for Oracle Exadata?
The EX51 issue applies to storage servers running Exadata 18.1.10, 18.1.11, or 18.1.12 using IORM to manage flash cache. Bug 29288067 - When I/O Resource Management (IORM) is configured to manage flash cache on storage servers, the cellsrv process may crash with error ORA-600 [FCGroupDesc::decLocalCnt_underflow]. If the cellsrv process crashes on multiple partner storage servers at the same time, then one or more ASM disk groups may dismount, causing database or Clusterware shutdown. Fixed in Exadata 18.1.13. See Document 2511918.1 for details. Updated 2019-Mar-05.
Can you explain the critical issue EX50 for Oracle Exadata?
The EX50 issue applies to database servers (non-OVM and OVM domU) running Exadata 19.1. Bug 29164963 - The kernel service systemd-tmpfiles-clean.service may remove required socket files in /var/tmp/.oracle, which may cause database startup or connections to fail, or Clusterware connections to fail, on Exadata database servers running Oracle Linux 7 (i.e. Exadata 19.1). Fixed in systems updated using patchmgr/dbserver update utility 19.190104 and in systems deployed with OEDA Jan 2019 v190116. See Document 2498572.1 for details. Updated 2019-Jan-31.
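On Oracle Linux 7, systemd-tmpfiles cleans /var/tmp according to tmpfiles.d configuration, and EX50 is avoided when /var/tmp/.oracle is excluded from that cleanup. A hedged sketch that only reports whether any tmpfiles.d file mentions the directory; the authoritative remediation is in Document 2498572.1:

    # Hedged sketch: check tmpfiles.d configuration for a /var/tmp/.oracle exclusion.
    import glob

    conf_files = (glob.glob("/etc/tmpfiles.d/*.conf")
                  + glob.glob("/run/tmpfiles.d/*.conf")
                  + glob.glob("/usr/lib/tmpfiles.d/*.conf"))
    mentions = []
    for path in conf_files:
        try:
            with open(path) as f:
                mentions += [f"{path}: {l.strip()}" for l in f
                             if "/var/tmp/.oracle" in l]
        except OSError:
            pass
    print("\n".join(mentions) if mentions else
          "/var/tmp/.oracle not mentioned in tmpfiles.d - sockets may be cleaned (EX50)")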
Can you explain the critical issue EX48 for Oracle Exadata?
The EX48 issue applies to database servers and storage servers running Exadata >= 12.2 and <= 18.1.7 using InfiniBand active/active bonding. Bug 28020097 - In certain InfiniBand switch upgrade scenarios, an InfiniBand switch upgrade may cause one or more database servers or storage servers to be evicted, which can lead to a system-wide outage. Fixed in Exadata 18.1.8. See Document 2452724.1 for details. Updated 2018-Oct-11.
Can you explain the critical issue EX47 for Oracle Exadata?
The EX47 issue applies to storage servers with hard disk drives using IORM objective auto, low_latency, or balanced, and running one of the following versions: 12.1.2.3.7, or 12.2.1.1.3 through 12.2.1.1.6, or 18.1.0 through 18.1.4. Bug 27754625 - IORM incorrectly calculates hard disk utilization, causing I/Os to queue and limiting the number of I/Os issued to hard disks, which can lead to poor database performance or voting disk write timeouts. Fixed in Exadata 18.1.5 and 12.2.1.1.7. See Document 2418926.1 for details. Updated 2018-Jul-08.
Can you explain the critical issue EX46 for Oracle Exadata?
The EX46 issue applies to X6 and X7 storage servers running 12.1.2.3.1 through 12.2.1.1.7, or 18.1.0 through 18.1.5, with flash cache configured in WriteBack mode or grid disks created directly on flash disks. Bug 28078035 - A spike in I/O requests to storage servers may lead to incorrect disk confinement/failure, cell restart, and/or cellsrv restart across multiple storage servers within a short time period. If the combination of storage servers affected causes all ASM disk partners to be offline, then ASM dismounts the affected disk groups. Fixed in Exadata 18.1.6 and 12.2.1.1.8. See Document 2413997.1 for details. Updated 2018-Jul-08.
Can you explain the critical issue EX45 for Oracle Exadata?
The EX45 issue applies to storage servers running Exadata versions between 12.1.2.1.0 and 12.1.2.3.3, inclusive. Bug 25128043 - When a flash disk fails, a flash cache read miss does not result in populating the flash cache, causing reads to continually be satisfied by hard disk instead of flash cache. As a result, database read performance may suffer significantly. Fixed in Exadata 12.1.2.3.4. See Document 2400187.1 for details. Updated 2018-Jun-20.
Can you explain the critical issue EX44 for Oracle Exadata?
The EX44 issue applies to database servers running Oracle Linux UEK2, which is supplied with Exadata versions 12.1 and earlier. Bug 26424268 - Database servers running UEK2, which is supplied with Exadata 12.1 and earlier, may experience a system hang after long uptime or heavy file system use. Fixed in Exadata 12.2. See Document 2397314.1 for details. Updated 2018-Jun-26.
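EX44 combines two conditions, a UEK2 kernel and long uptime, and both are readable from /proc. A minimal sketch, treating a 2.6.39 kernel as UEK2 (which is how Oracle shipped UEK2):

    # Hedged sketch: flag the EX44 risk profile (UEK2 kernel plus long uptime).
    with open("/proc/version") as f:
        kernel = f.read().split()[2]          # e.g. "2.6.39-400.128.17.el6uek"
    with open("/proc/uptime") as f:
        up_days = float(f.read().split()[0]) / 86400

    is_uek2 = kernel.startswith("2.6.39")     # UEK2 ships as 2.6.39-400.x
    print(f"kernel={kernel} uptime={up_days:.0f} days")
    if is_uek2:
        print("UEK2 kernel: EX44 applies; hang risk grows with uptime "
              "(Document 2397314.1)")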
Can you explain the critical issue EX43 for Oracle Exadata?
The EX43 issue applies to database servers running Exadata 12.2.1.1.0, 12.2.1.1.1, or 12.2.1.1.2 containing databases with Exafusion configured. Bug 25730857 - When Exafusion is used, foreground processes may hang when performing cache fusion operations. Fixed in Exadata 12.2.1.1.3. See Document 2396944.1 for details. Updated 2018-May-09.
Can you explain the critical issue EX42 for Oracle Exadata?
The EX42 issue applies to storage servers running 12.2.1.1.x lower than 12.2.1.1.6, or 18.1.x lower than 18.1.4, with flash cache configured in WriteBack mode using Grid Infrastructure 11.2.0.4 or 12.1. Bug 27372426 - When flash cache is configured in WriteBack mode, a rare race condition may allow a flash disk failure to introduce corruption to ASM metadata primary and secondary extents. The corruption causes rebalance to hang or continually fail, or prevents the disk group from mounting. Fixed in Exadata 18.1.4.0.0 and 12.2.1.1.6. See Document 2356460.1 for details. Updated 2018-Feb-21.
Can you explain the critical issue EX41 for Oracle Exadata?
The EX41 issue applies to X3, X4, and X5 generation storage servers with flash cache configured in WriteThrough mode updated from 12.x, or lower, to 18.1.3.0.0, or a lower 18.1 release. Bug 27412236 - Flash cache configured in WriteThrough mode will enter state "critical - degraded" after a storage server update from 12.x, or lower, to 18.1.3.0.0, or a lower 18.1 release, on X3, X4, and X5 generation hardware. Fixed in Exadata 18.1.4.0.0. See Document 2354457.1 for details. Updated 2018-Feb-01.
Can you explain the critical issue EX39 for Oracle Exadata?
The EX39 issue applies to NUMA-enabled, non-OVM 2-socket or 8-socket database servers running Exadata 12.2.1.1.0, 12.2.1.1.1, 12.2.1.1.2, or 18.1.0.0.0. Bug 26798697 - The automatic NUMA memory balancing feature in UEK4 may cause the system to experience continuously high load or reduced performance. Updates from Exadata <= 12.1 currently require a manual workaround. Updates from Exadata 12.2 or 18.1 fix this issue when the target version is >= 12.2.1.1.3 or >= 18.1.1.0.0. Fixed in new >= 12.2.1.1.3 and >= 18.1.1.0.0 deployments. See Document 2319324.1 for details. Updated 2017-Oct-31.
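Whether automatic NUMA balancing is active is visible in /proc/sys/kernel/numa_balancing (1 = enabled). A hedged, read-only sketch; the actual EX39 workaround steps are in Document 2319324.1:

    # Hedged sketch: report the automatic NUMA balancing state relevant to EX39.
    try:
        with open("/proc/sys/kernel/numa_balancing") as f:
            state = f.read().strip()
        print("automatic NUMA balancing:",
              "enabled" if state == "1" else f"disabled ({state})")
    except FileNotFoundError:
        print("kernel has no numa_balancing tunable")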
Can you explain the critical issue EX38 for Oracle Exadata?
The EX38 issue applies to Exadata systems with InfiniBand network partitioning configured running Exadata 12.2.1.1.1, 12.2.1.1.0, or 12.1.2.3.5 or lower on database servers, or Grid Infrastructure 12.2.0.1, or 12.1.0.2.161018 or higher. Bug 23644359 and Bug 26362821 - Excessive logging can occur on an InfiniBand switch and on a database server when InfiniBand network partitioning is configured with limited membership. Excessive logging may cause loss of important diagnostic information, and may cause premature failure of the solid-state drive on an InfiniBand switch. Fixed in Exadata 12.1.2.3.6 and 12.2.1.1.2; fixed in Grid Infrastructure 12.1.0.2.171017 and 12.2.0.1.170919. See Document 2290724.1 for details. Updated 2017-Aug-18.
Can you explain the critical issue EX37 for Oracle Exadata?
The EX37 issue applies to Exadata X6 storage servers with write-back flash cache enabled using the default flash firmware supplied with Exadata versions lower than 12.1.2.3.4. Bug 25595250 - A flash predictive failure on an Exadata X6 storage server with write-back flash cache enabled may lead to corruption in primary and/or secondary ASM mirror copies, and may propagate to other storage servers during certain ASM rebalance operations. Fixed in Exadata 12.1.2.3.4. See Document 2242320.1 for details. Updated 2017-Apr-11.
Can you explain the critical issue EX36 for Oracle Exadata?
The EX36 issue applies to Exadata X5 and X6 servers using the default disk controller firmware supplied with Exadata versions lower than 18.1.0.0.0. Exadata X5 and X6 storage servers and database servers may unexpectedly power cycle and report the critical alert "Disk controller was hung. Cell was power cycled to stop the hang." Fixed in Exadata 18.1.0.0.0.170915.1. See Document 2242282.1 for details. Updated 2017-Mar-10.
Can you explain the critical issue EX35 for Oracle Exadata?
The EX35 issue applies to database servers running Exadata <= 12.1.2.3.4 combined with InfiniBand switch version 2.2.4 or 2.2.2. Bug 25527184 - Fast node death detection (FNDD) may improperly cause node eviction when a database server (physical or domU) running Exadata 12.1.2.3.4, or lower, is used with InfiniBand switch version 2.2.4 or 2.2.2. Fixed in Exadata 12.1.2.3.5. See Document 2234640.1 for details. Updated 2017-Feb-23.
Can you explain the critical issue EX33 for Oracle Exadata?
The EX33 issue applies to Exadata systems running 12.1.2.3.1, 12.1.2.3.0, 12.1.2.2.2, or lower. Bug 22118109 - Under heavy InfiniBand network load, performance may be severely reduced, instance eviction may occur, or the database may hang due to RDS congestion. Fixed in Exadata 12.1.2.2.3, 12.1.2.3.2, and higher releases. See Document 2209863.1 for details. Updated 2016-Dec-21.
Can you explain the critical issue EX31 for Oracle Exadata?
The EX31 issue applies to Exadata Storage Servers running 12.1.2.2.0, or higher, that were updated from 12.1.2.1.3, or lower. Bug 24899249 - After a storage server update from 12.1.2.1.3, or lower, to 12.1.2.2.0, or higher, a latent inconsistency in cell disk metadata may cause failure of a CREATE GRIDDISK command to create a new grid disk, or an ALTER GRIDDISK command to expand an existing grid disk. The inconsistency may cause unrecoverable cell disk metadata corruption, which can result in the loss of cell disk content (i.e. grid disk, ASM disk, ASM disk group), requiring files to be restored from backup. Fixed in Exadata 12.1.2.3.3.161109. See Document 2195523.1 for details. Updated 2016-Oct-26.
Can you explain the critical issue EX30 for Oracle Exadata?
The EX30 issue applies to Exadata database servers running 12.1.2.3.2 or lower. Bug 22521735 - The InfiniBand (IB) Subnet Manager (SM) can incorrectly remove a database server (or one of its HCAs) from the IB fabric because the host's response to an SM query is stalled. Fixed in Exadata 12.1.2.3.3. See Document 2176154.1 for additional details. Updated 2016-Oct-20.
Can you explain the critical issue EX27 for Oracle Exadata?
The EX27 issue applies to Exadata Storage Server 12.1.1.1.0 through 12.1.2.1.2. Bug 21299486 - In rare circumstances, running the CellCLI command "list activerequest" can cause an I/O hang, which may lead to a database hang. Fixed in Exadata 12.1.2.1.3. See Document 2095255.1 for additional details. Updated 2016-Mar-09.
Can you explain the critical issue EX26 for Oracle Exadata?
The EX26 issue applies to Exadata Storage Servers running 12.1.2.1.3 combined with Exadata database servers running Oracle VM. Bug 21668901 - Network messages over the InfiniBand network may fail to send between Exadata Storage Servers running 12.1.2.1.3 and Exadata database servers running Oracle VM, causing I/O failures and hangs, resulting in ASM hanging during disk group mount, Clusterware failing to start, or new OEDA VM deployments failing in step "Initialize Cluster Software". Fixed in Exadata 12.1.2.2.0. See Document 2105407.1 for additional details. Updated 2016-Feb-18.
Can you explain the critical issue DB42 for Oracle Exadata?
The DB42 issue applies to Database 11.2.0.4, 12.1.0.2, 12.2.0.1, and 18. Bug 28305362 - Logical data corruption may occur with the following sequence: 1) a backup set level 0 backup is restored; 2) one or more incremental level 1 backups are created without creating a new level 0; 3) one of those incremental level 1 backups is used for restore or recovery. Fixed in 11.2.0.4.190115, 12.1.0.2.190115, 12.2.0.1.190115, and 18.5.0.0.190115. See Document 2426886.1 for additional details. Updated 2018-Aug-23.
Can you explain the critical issue DB41 for Oracle Exadata?
The DB41 issue applies to Grid Infrastructure 12.1.0.2.0, 12.2.0.1, and 18. Bug 27068526, Bug 25785073 - diagsnap and other components may attempt to get stack traces of critical Clusterware processes (e.g. ocssd.bin), causing those processes to hang, which leads to node eviction. See Document 2342114.1 for additional details. Updated 2018-Jan-11.
Can you explain the critical issue DB40 for Oracle Exadata?
The DB40 issue applies to X4-8 and later generation 8-socket database servers running Exadata. Bug 26762083, Bug 26626720 - Certain operations appear to hang or progress slowly, such as ASM rebalance or instance startup. Slow instance startup may have a secondary effect causing cluster-wide performance impact on sessions connected to other instances. This can occur when the fix for bug 25256170 is installed in the Database or Grid Infrastructure home. Fixed in 12.1.0.2.171017. See Document 2308117.1 for additional details. Updated 2017-Sep-20.
Can you explain the critical issue DB39 for Oracle Exadata?
The DB39 issue applies to systems with Database 12.1.0.2.170718 or 12.1.0.2.170814 combined with earlier 12.1.0.2 or 12.2.0.1 Grid Infrastructure. Bug 26555609 - All instances of a database may crash with error ORA-600 [kszprocskgxpmap3] after a storage server goes offline, including offline conditions due to planned maintenance. This can occur when the fix for bug 25256170 is installed in the Database home but not in the Grid Infrastructure home. Fixed in 12.1.0.2.171017. See Document 2305283.1 for additional details. Updated 2017-Sep-20.
Can you explain the critical issue DB38 for Oracle Exadata?
The DB38 issue applies to Grid Infrastructure 18, 12.2.0.1, or 12.1.0.2 with the fix for bug 25967985 and/or bug 26526726 and/or bug 27502420 installed. Bug 27502420 - An ASM process may fail with ORA-600 [17114]. Fixed in 12.1.0.2.181016, 12.2.0.1.181016, and 18.4.0.0.181016. See Document 2296023.1 for additional details. Updated 2017-Aug-18.
Can you explain the critical issue DB37 for Oracle Exadata?
The DB37 issue applies to systems using ACFS Encryption. Bug 25375360 - Database memory corruption and/or block corruption on disk may occur intermittently if ASM Cluster File System (ACFS) encryption is used. Note that this issue occurs whether or not database files are stored in ACFS. Fixed in 12.2.0.1.171017 and 12.1.0.2.170418. See Document 2232665.1 for additional details. Updated 2017-Feb-13.
Can you explain the critical issue DB36 for Oracle Exadata?
The DB36 issue applies to Oracle RAC databases using parallel direct load insert running Oracle Database 12.1.0.2, 12.1.0.1, 11.2.0.4, or 11.2.0.3. Bug 20509482 - In a RAC database performing parallel direct load insert, some contiguous data blocks may be overwritten by parallel query worker processes, which can lead to ORA-600 [3020], ORA-752 and wrong results, or RMAN ORA-600 [krcrfr_nohist] when performing fast incremental backup, which can lead to database hang. Fixed in 12.1.0.2.9 and 11.2.0.4.20. See Document 2139374.1 for additional details. Updated 2016-May-20.
Can you explain the critical issue DB35 for Oracle Exadata?
The DB35 issue applies to Grid Infrastructure 12.1.0.2.11 through 12.1.0.2.13 combined with storage servers running Exadata lower than 12.1.2.2.0. Bug 21495198, Bug 21864236 - The cellsrv process on storage servers running a version lower than 12.1.2.2.0 may crash when performing a cell-to-cell offload operation when the fix for bug 21218243 is installed in the Grid Infrastructure home. Fixed in 12.1.0.2.160119. See Document 2074393.1 for additional details. Updated 2015-Nov-09.
Can you explain the critical issue DB34 for Oracle Exadata?
The DB34 issue applies to Oracle Database 12.1.0.2.7 through 12.1.0.2.12. Bug 21620471 - During apply of the Database Patch for Engineered Systems and DB In-Memory (i.e. DBBP), the datapatch utility skips running necessary SQL files, leaving some fixes unapplied or partially applied. Fixed in 12.1.0.2.13. However, previous application of one of the affected releases requires action. See Document 2069046.1 for additional details. Updated 2015-Nov-02.
Can you explain the critical issue DB33 for Oracle Exadata?
The DB33 issue applies to Oracle Database 12.1 upgraded from 11g using compressed tables. Bug 21923026 - 12c databases that have been upgraded from 11g can be exposed to corruption on compressed tables during media recovery while applying redo generated for those tables when the source database was running on the 11g software version. This issue applies to any media recovery operation, including physical standby database redo apply, recovery operations that occur during RMAN recovery or user-managed recovery, and flashback database, where redo generated with 11g software is applied to a database running 12.1 software. See Document 2058461.1 for additional details. Updated 2015-Sep-29.
Can you explain the critical issue DB32 for Oracle Exadata?
The DB32 issue applies to ASM 12.1.0.2 with normal redundancy disk group(s). Bug 20217875 - When an ASM disk group is configured with normal redundancy and Oracle Clusterware is started while an Exadata Storage Server is offline, Oracle is unable to perform I/O to an online ASM mirror extent when the primary extent resides in the offline Exadata Storage Server. Fixed in 12.1.0.2.5. See Document 2057731.1 for additional details. Updated 2015-Sep-29.
Can you explain the critical issue DB31 for Oracle Exadata?
The DB31 issue applies to ASM 12.1.0.2. Bug 21281532 - ASM rebalance is interrupted with errors ORA-600 [kfdAtbUpdate_11_02] and ORA-600 [kfdAtUnlock00]. Fixed in 12.1.0.2.11. See Document 2031709.1 for additional details. Updated 2015-Jul-24.
Can you explain the critical issue DB30 for Oracle Exadata?
The DB30 issue applies to Oracle Grid Infrastructure and Database 12.1.0.2. Bug 20904530 - During disk resync, ORA-600 [kfdsBlk_verCb] is reported due to corruption in the ASM staleness registry. See Document 2028222.1 for additional details. Updated 2015-Jul-24.
Can you explain the critical issue DB29 for Oracle Exadata?
The DB29 issue applies to ACFS with Grid Infrastructure 12.1.0.2. Bug 20688221 - Data corruption may occur because the ASM file extent map for the ACFS volume is not properly locked during initialization I/O. Fixed in 12.1.0.2.13. See Document 2022172.1 for additional details. Updated 2015-Jun-25.
Can you explain the critical issue DB27 for Oracle Exadata?
The DB27 issue applies to 11.2.0.4 databases running with Grid Infrastructure 12.1 (12.1.0.2 or 12.1.0.1). Bug 20361671 - Databases running 11.2.0.4 with Grid Infrastructure 12.1 (12.1.0.2 or 12.1.0.1) will crash with the ASMB process reporting ORA-15064 and ORA-3115 if a disk health update is received, such as when a cell disk is marked predictive failure. Fixed in 12.1.0.2.7. See Document 2004572.1 for additional details. Updated 2015-May-06.
Can you explain the critical issue DB26 for Oracle Exadata?
The DB26 issue applies to 11.2.0.4.7 (Database Patch for Exadata 18552960 May2014). Bug 19464000 - ASM resilvering processes continually crash with error ORA-7445 [kfdxWork()+5283] or error ORA-7445 [__intel_new_memset()+1467], preventing successful ASM resilver and filling up the dump destination with trace and core files. Fixed in 11.2.0.4.8. Updated 2014-Sep-06.
Describe the Oracle Exadata critical issue labeled IB9
The IB9 issue applies to Sun Datacenter InfiniBand Switch 36 software 2.2.15-1 and lower. Bug 30303098 - Excessive network traffic on the management network, such as that caused by a broadcast storm, can lead to simultaneous reset of all InfiniBand switches, causing an InfiniBand network outage and/or database node evictions. Fixed in 2.2.16. See Document 2773989.1 for details. Updated 2021-May-04.
Describe the Oracle Exadata critical issue labeled IB8
The IB8 issue applies to Sun Datacenter InfiniBand Switch 36 software 2.2.6, 2.2.7, and 2.2.8 with network partitioning configured. Bug 27611791 - When InfiniBand network partitioning is in use and the Primary subnet manager (SM) receives a constant flood of P_Key violation traps, the current partition policy may be reset to the default partition policy, which causes connection failure among servers communicating in the non-default partition(s). This can lead to dismounted ASM disk groups and node evictions. Fixed in 2.2.9. See Document 2413649.1 for details. Updated 2018-Jun-29.
Describe the Oracle Exadata critical issue labeled IB7
The IB7 issue applies to Sun Datacenter InfiniBand Switch 36 software 2.1.5 and earlier. Bug 17482244 - If the Primary subnet manager (SM) on an InfiniBand switch becomes overloaded, one of the standby SMs will become a second Primary SM. After the original Primary SM becomes active again and the switches negotiate to return to having a single Primary SM, some database servers and/or storage servers may no longer be able to communicate on the InfiniBand network, which can lead to node eviction or a cluster outage. Fixed in 2.1.6. See Document 2400204.1 for details. Updated 2018-Jun-14.
Describe the Oracle Exadata critical issue labeled IB6
The IB6 issue applies to Sun Datacenter InfiniBand Switch 36 software 2.1.x through 2.2.7. Bug 26678971 - The real time clock (RTC) on an InfiniBand switch may become corrupt, which may cause the switch to become unbootable when attempting to upgrade. Fixed in 2.2.8. See Document 2309926.1 for details. Updated 2017-Sep-27.
Describe the Oracle Exadata critical issue labeled EX87
The EX87 issue applies to storage servers running Exadata 23.1.0 through 23.1.11 with Database 19c. Bug 36309367 - Queries using Exadata Smart Scan with Exadata 23.1.0 through 23.1.11 may produce duplicate rows (wrong results) when the table contains chained rows. Fixed in Exadata 23.1.12. See Document 3012268.1 for details. Updated 2024-Mar-22.
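A generic way to test whether a wrong-result query involves Smart Scan (the EX87 symptom) is to rerun it with offload disabled for the session via the documented CELL_OFFLOAD_PROCESSING parameter. A hedged sketch using the python-oracledb driver; the connection details and table name are placeholders, and this is a diagnostic comparison only, not the fix (the fix is updating to Exadata 23.1.12):

    # Hedged sketch: compare query results with and without Exadata Smart Scan.
    import oracledb

    # Placeholder credentials and DSN; replace with real connection details.
    conn = oracledb.connect(user="appuser", password="secret", dsn="dbhost/pdb1")
    cur = conn.cursor()

    # Illustrative query against a hypothetical table known to contain chained rows.
    query = "SELECT /*+ FULL(t) */ COUNT(*) FROM wide_table t"

    cur.execute("ALTER SESSION SET cell_offload_processing = TRUE")
    offloaded = cur.execute(query).fetchone()[0]

    cur.execute("ALTER SESSION SET cell_offload_processing = FALSE")
    not_offloaded = cur.execute(query).fetchone()[0]

    print(f"with offload: {offloaded}  without offload: {not_offloaded}")
    if offloaded != not_offloaded:
        print("Counts differ: consistent with EX87; see Document 3012268.1.")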
Describe the Oracle Exadata critical issue labeled EX86
The EX86 issue applies to Exadata systems deployed using OEDA released in December 2023. Bug 36217335 - Flash cache on Exadata systems deployed using Oracle Exadata Deployment Assistant (OEDA) released in December 2023 is substantially smaller than expected (approximately 5% of the expected size), which can lead to database I/O performance far below the capabilities of the system, especially for OLTP workloads. Fixed in OEDA Jan2024. See Document 3012053.1 for details. Updated 2024-Mar-22.
Describe the Oracle Exadata critical issue labeled EX85
The EX85 issue applies to X7/X8/X8M storage servers running 23.1.11.0.0.240201. Bug 36140446 - Flash disk(s) on Exadata X7 and X8/X8M storage servers running Exadata 23.1.10 or 23.1.11 are not properly configured after flash disk replacement or fresh imaging due to a timing issue between cell services and the multipathd service. Flash disks affected by this issue do not have a CELLDISK object configured, so no flash cache, flash log, and/or grid disks are configured on the flash disk, resulting in the storage server missing or having smaller flash cache and flash log than expected, or missing grid disks. Fixed in Exadata 23.1.11.0.0.240208 and 23.1.10.0.0.240208. See Document 3003728.1 for details. Updated 2024-Feb-23.
Describe the Oracle Exadata critical issue labeled EX84
The EX84 issue applies to storage servers upgraded to Exadata 22.1.15 through 22.1.17, or 23.1.0 through 23.1.8. Bug 36015805 - After storage servers are upgraded to Exadata 22.1.15 through 22.1.17, or 23.1.0 through 23.1.8, the database may experience I/O performance degradation when the flash cache is under space pressure, which may cause significant database performance degradation. Fixed in Exadata 23.1.9 and 22.1.18. See Document 2992165.1 for details. Updated 2023-Dec-09.