diff --git a/test/dashboard.err b/test/dashboard.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard.log b/test/dashboard.log
new file mode 100644
index 0000000000000000000000000000000000000000..b5b3bf52e566d4aeba994f9bcf49f5253b0d541a
--- /dev/null
+++ b/test/dashboard.log
@@ -0,0 +1,88 @@
+2026-02-27 00:03:44,994 INFO utils.py:307 -- Get all modules by type: DashboardHeadModule
+2026-02-27 00:03:45,535 INFO utils.py:340 -- Available modules: []
+2026-02-27 00:03:45,535 INFO head.py:235 -- DashboardHeadModules to load: None.
+2026-02-27 00:03:45,536 INFO head.py:238 -- Loading DashboardHeadModule: .
+2026-02-27 00:03:45,536 INFO head.py:242 -- Loaded 1 dashboard head modules: [].
+2026-02-27 00:03:45,536 INFO utils.py:307 -- Get all modules by type: SubprocessModule
+2026-02-27 00:03:45,538 INFO utils.py:340 -- Available modules: [, , , , , , , , ]
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:292 -- Loading SubprocessModule: .
+2026-02-27 00:03:45,539 INFO head.py:296 -- Loaded 9 subprocess modules: [, , , , , , , , ].
+2026-02-27 00:03:47,104 INFO head.py:311 -- Starting dashboard metrics server on port 44227
+2026-02-27 00:03:47,110 INFO head.py:435 -- Initialize the http server.
+2026-02-27 00:03:47,111 INFO http_server_head.py:111 -- Setup static dir for dashboard: /usr/local/lib/python3.12/dist-packages/ray/dashboard/client/build
+2026-02-27 00:03:47,114 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:47,140 INFO http_server_head.py:440 -- Dashboard head http address: 127.0.0.1:8265
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- ->
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- PosixPath('/usr/local/lib/python3.12/dist-packages/ray/dashboard/client/build/static')> -> PosixPath('/usr/local/lib/python3.12/dist-packages/ray/dashboard/client/build/static')>>
+2026-02-27 00:03:47,140 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713781eefc0>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713781ef100>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713781ef4c0>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753c4400>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753c45e0>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753c4860>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eac00>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eade0>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb060>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb240>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb380>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb4c0>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb600>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb740>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb880>
+2026-02-27 00:03:47,141 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753eb9c0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713753ebba0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713749b8a40>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713749b8c20>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713749b9300>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713749b94e0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x7713749b96c0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b4f880>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b4f9c0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b4fc40>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b4fd80>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b4fec0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b74040>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b74180>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b742c0>
+2026-02-27 00:03:47,142 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b74400>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b74540>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b74680>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b75260>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b75300>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b75580>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b75760>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b75a80>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b763e0>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b765c0>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b767a0>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b76980>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b76b60>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b76d40>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b76f20>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b77100>
+2026-02-27 00:03:47,143 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b772e0>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b774c0>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b776a0>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b77880>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b77a60>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b77c40>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b77d80>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b845e0>
+2026-02-27 00:03:47,144 INFO http_server_head.py:447 -- -> ._wrapper..parent_side_handler at 0x771373b84a40>
+2026-02-27 00:03:47,144 INFO http_server_head.py:448 -- Registered 63 routes.
+2026-02-27 00:03:47,144 INFO head.py:440 -- http server initialized at 127.0.0.1:8265
+2026-02-27 00:03:47,151 INFO usage_stats_head.py:200 -- Usage reporting is disabled.
diff --git a/test/dashboard.out b/test/dashboard.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_DataHead.err b/test/dashboard_DataHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_DataHead.log b/test/dashboard_DataHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..e874ae6b4a11e46f0645eaad3cbdd32125d3ceb6
--- /dev/null
+++ b/test/dashboard_DataHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:46,553 INFO module.py:210 -- Starting module DataHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,553 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,565 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_DataHead.
+2026-02-27 00:03:46,565 INFO module.py:225 -- Module DataHead initialized, receiving messages...
+2026-02-27 00:04:03,585 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_DataHead.out b/test/dashboard_DataHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_EventHead.err b/test/dashboard_EventHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_EventHead.log b/test/dashboard_EventHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..0d071f7247e7143670b7b3f72a2a67f69f7601ae
--- /dev/null
+++ b/test/dashboard_EventHead.log
@@ -0,0 +1,6 @@
+2026-02-27 00:03:47,093 INFO module.py:210 -- Starting module EventHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:47,096 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:47,099 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_EventHead.
+2026-02-27 00:03:47,100 INFO event_utils.py:130 -- Monitor events logs modified after 1772148826.8917453 on /tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs/events, the source types are all.
+2026-02-27 00:03:47,100 INFO module.py:225 -- Module EventHead initialized, receiving messages...
+2026-02-27 00:04:03,129 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_EventHead.out b/test/dashboard_EventHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_JobHead.err b/test/dashboard_JobHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_JobHead.log b/test/dashboard_JobHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..607976fca39266e17494c2c3fd80312497b8eb47
--- /dev/null
+++ b/test/dashboard_JobHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:46,845 INFO module.py:210 -- Starting module JobHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,848 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,853 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_JobHead.
+2026-02-27 00:03:46,853 INFO module.py:225 -- Module JobHead initialized, receiving messages...
+2026-02-27 00:04:02,879 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_JobHead.out b/test/dashboard_JobHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_MetricsHead.err b/test/dashboard_MetricsHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_MetricsHead.log b/test/dashboard_MetricsHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..05db035d1677667f8cf5fe01f5a7d4b07bfbc497
--- /dev/null
+++ b/test/dashboard_MetricsHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:46,752 INFO module.py:210 -- Starting module MetricsHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,752 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,760 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_MetricsHead.
+2026-02-27 00:03:46,838 INFO module.py:225 -- Module MetricsHead initialized, receiving messages...
+2026-02-27 00:04:02,781 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_MetricsHead.out b/test/dashboard_MetricsHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_NodeHead.err b/test/dashboard_NodeHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_NodeHead.log b/test/dashboard_NodeHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..148e99e0bccaebfe795730fdc5792eb09da1580e
--- /dev/null
+++ b/test/dashboard_NodeHead.log
@@ -0,0 +1,7 @@
+2026-02-27 00:03:46,975 INFO module.py:210 -- Starting module NodeHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,975 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,980 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_NodeHead.
+2026-02-27 00:03:46,981 INFO module.py:225 -- Module NodeHead initialized, receiving messages...
+2026-02-27 00:03:46,987 INFO node_head.py:570 -- Getting all actor info from GCS.
+2026-02-27 00:03:46,988 INFO node_head.py:587 -- Received 0 actor info from GCS.
+2026-02-27 00:04:03,007 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_NodeHead.out b/test/dashboard_NodeHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_ReportHead.err b/test/dashboard_ReportHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_ReportHead.log b/test/dashboard_ReportHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..92eb8eae0d68689facb565fb9469df3dd258308a
--- /dev/null
+++ b/test/dashboard_ReportHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:47,027 INFO module.py:210 -- Starting module ReportHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:47,030 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:47,033 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_ReportHead.
+2026-02-27 00:03:47,036 INFO module.py:225 -- Module ReportHead initialized, receiving messages...
+2026-02-27 00:04:03,061 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_ReportHead.out b/test/dashboard_ReportHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_ServeHead.err b/test/dashboard_ServeHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_ServeHead.log b/test/dashboard_ServeHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..60cdb7ac1a1d15c51a610eb1cf284b999a1d1760
--- /dev/null
+++ b/test/dashboard_ServeHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:46,925 INFO module.py:210 -- Starting module ServeHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,930 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,935 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_ServeHead.
+2026-02-27 00:03:46,935 INFO module.py:225 -- Module ServeHead initialized, receiving messages...
+2026-02-27 00:04:02,959 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_ServeHead.out b/test/dashboard_ServeHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_StateHead.err b/test/dashboard_StateHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_StateHead.log b/test/dashboard_StateHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..188caf620d5f324a53d92d31a6612b742b4f922e
--- /dev/null
+++ b/test/dashboard_StateHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:46,954 INFO module.py:210 -- Starting module StateHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,958 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,963 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_StateHead.
+2026-02-27 00:03:46,964 INFO module.py:225 -- Module StateHead initialized, receiving messages...
+2026-02-27 00:04:02,990 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_StateHead.out b/test/dashboard_StateHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_TrainHead.err b/test/dashboard_TrainHead.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_TrainHead.log b/test/dashboard_TrainHead.log
new file mode 100644
index 0000000000000000000000000000000000000000..c55d764b61eecd89120454ea79db0f20992eb354
--- /dev/null
+++ b/test/dashboard_TrainHead.log
@@ -0,0 +1,5 @@
+2026-02-27 00:03:46,680 INFO module.py:210 -- Starting module TrainHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439', gcs_address='10.128.0.163:57355', session_name='session_2026-02-27_00-03-44_103874_1384', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets')
+2026-02-27 00:03:46,680 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:03:46,690 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-03-44_103874_1384/sockets/dash_TrainHead.
+2026-02-27 00:03:46,691 INFO module.py:225 -- Module TrainHead initialized, receiving messages...
+2026-02-27 00:04:02,715 WARNING module.py:82 -- Parent process 1506 died. Exiting...
diff --git a/test/dashboard_TrainHead.out b/test/dashboard_TrainHead.out
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_agent.err b/test/dashboard_agent.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/dashboard_agent.log b/test/dashboard_agent.log
new file mode 100644
index 0000000000000000000000000000000000000000..6d8988e32a067984c79f4cfd24e1af94c673271b
--- /dev/null
+++ b/test/dashboard_agent.log
@@ -0,0 +1,77 @@
+2026-02-27 00:03:48,646 INFO agent.py:141 -- Dashboard agent grpc address: 10.128.0.163:51882
+2026-02-27 00:03:48,647 INFO utils.py:307 -- Get all modules by type: DashboardAgentModule
+2026-02-27 00:03:48,982 INFO utils.py:340 -- Available modules: [, , , , , , ]
+2026-02-27 00:03:48,982 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,983 WARNING __init__.py:864 -- Overriding of current MeterProvider is not allowed
+2026-02-27 00:03:48,984 INFO aggregator_agent.py:139 -- Event HTTP target is not enabled or publishing events to external HTTP service is disabled. Skipping sending events to external HTTP service. events_export_addr:
+2026-02-27 00:03:48,985 WARNING __init__.py:864 -- Overriding of current MeterProvider is not allowed
+2026-02-27 00:03:48,985 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,985 INFO event_agent.py:48 -- Event agent cache buffer size: 10240
+2026-02-27 00:03:48,986 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,986 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,986 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,986 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,986 INFO agent.py:160 -- Loading DashboardAgentModule:
+2026-02-27 00:03:48,993 WARNING __init__.py:864 -- Overriding of current MeterProvider is not allowed
+2026-02-27 00:03:49,141 WARNING gpu_profile_manager.py:82 -- [GpuProfilingManager] `dynolog` is not installed, GPU profiling will not be available.
+2026-02-27 00:03:49,141 WARNING gpu_profile_manager.py:125 -- [GpuProfilingManager] GPU profiling is disabled, skipping daemon setup.
+2026-02-27 00:03:49,142 INFO agent.py:165 -- Loaded 7 modules.
+2026-02-27 00:03:49,145 WARNING http_server_agent.py:70 -- Failed to bind to port 52365 (attempt 1/6). Retrying in 0.19s. Error: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:49,341 WARNING http_server_agent.py:70 -- Failed to bind to port 52365 (attempt 2/6). Retrying in 0.22s. Error: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:49,562 WARNING http_server_agent.py:70 -- Failed to bind to port 52365 (attempt 3/6). Retrying in 0.44s. Error: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:50,003 WARNING http_server_agent.py:70 -- Failed to bind to port 52365 (attempt 4/6). Retrying in 0.85s. Error: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:50,856 WARNING http_server_agent.py:70 -- Failed to bind to port 52365 (attempt 5/6). Retrying in 1.67s. Error: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:52,524 ERROR http_server_agent.py:76 -- Agent port #52365 failed to bind after 6 attempts.
+Traceback (most recent call last):
+  File "/usr/local/lib/python3.12/dist-packages/ray/dashboard/http_server_agent.py", line 50, in _start_site_with_retry
+    await site.start()
+  File "/usr/local/lib/python3.12/dist-packages/aiohttp/web_runner.py", line 121, in start
+    self._server = await loop.create_server(
+                   ^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/usr/lib/python3.12/asyncio/base_events.py", line 1584, in create_server
+    raise OSError(err.errno, msg) from None
+OSError: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:52,526 ERROR agent.py:195 -- Failed to start HTTP server with exception: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use. The agent will stay alive but the HTTP service will be disabled.
+Traceback (most recent call last):
+  File "/usr/local/lib/python3.12/dist-packages/ray/dashboard/agent.py", line 188, in run
+    await self.http_server.start(modules)
+  File "/usr/local/lib/python3.12/dist-packages/ray/dashboard/http_server_agent.py", line 120, in start
+    site = await self._start_site_with_retry()
+           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/ray/dashboard/http_server_agent.py", line 83, in _start_site_with_retry
+    raise last_exception
+  File "/usr/local/lib/python3.12/dist-packages/ray/dashboard/http_server_agent.py", line 50, in _start_site_with_retry
+    await site.start()
+  File "/usr/local/lib/python3.12/dist-packages/aiohttp/web_runner.py", line 121, in start
+    self._server = await loop.create_server(
+                   ^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/usr/lib/python3.12/asyncio/base_events.py", line 1584, in create_server
+    raise OSError(err.errno, msg) from None
+OSError: [Errno 98] error while attempting to bind on address ('10.128.0.163', 52365): [errno 98] address already in use
+2026-02-27 00:03:52,527 INFO process_watcher.py:45 -- raylet pid is 1875
+2026-02-27 00:03:52,528 INFO process_watcher.py:65 -- check_parent_via_pipe
+2026-02-27 00:03:52,528 INFO event_utils.py:130 -- Monitor events logs modified after 1772148828.733918 on /tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs/events, the source types are all.
+2026-02-27 00:03:52,554 INFO gpu_providers.py:500 -- Using GPU Provider: NvidiaGpuProvider
+2026-02-27 00:04:02,600 INFO agent.py:228 -- Terminated Raylet: ip=10.128.0.163, node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7. _check_parent_via_pipe: The parent is dead.
+2026-02-27 00:04:02,600 ERROR process_watcher.py:115 -- Raylet is terminated. Termination is unexpected. Possible reasons include: (1) SIGKILL by the user or system OOM killer, (2) Invalid memory access from Raylet causing SIGSEGV or SIGBUS, (3) Other termination signals. Last 20 lines of the Raylet logs:
+ [2026-02-27 00:03:47,363 I 1875 1875] (raylet) accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+ [2026-02-27 00:03:47,439 I 1875 1875] (raylet) worker_pool.cc:750: [Eagerly] Start install runtime environment for job 01000000.
+ [2026-02-27 00:03:47,442 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1922, the token is 0
+ [2026-02-27 00:03:47,445 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1923, the token is 1
+ [2026-02-27 00:03:47,448 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1924, the token is 2
+ [2026-02-27 00:03:47,451 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1925, the token is 3
+ [2026-02-27 00:03:47,454 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1926, the token is 4
+ [2026-02-27 00:03:47,457 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1927, the token is 5
+ [2026-02-27 00:03:47,461 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1928, the token is 6
+ [2026-02-27 00:03:47,466 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1929, the token is 7
+ [2026-02-27 00:03:47,469 I 1875 1875] (raylet) runtime_env_agent_client.cc:350: Runtime Env Agent network error: NotFound: on_connect Connection refused, the server may be still starting or is already failed. Scheduling a retry in 1000ms...
+ [2026-02-27 00:03:48,199 I 1875 1890] (raylet) object_store.cc:37: Object store current usage 8e-09 / 9.52932 GB.
+ [2026-02-27 00:03:48,474 I 1875 1875] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
+ [2026-02-27 00:03:48,474 I 1875 1875] (raylet) worker_pool.cc:761: [Eagerly] Create runtime env successful for job 01000000.
+ [2026-02-27 00:03:48,572 I 1875 1875] (raylet) worker_pool.cc:740: Job 01000000 already started in worker pool. + [2026-02-27 00:03:49,568 I 1875 1875] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111 job_id=NIL_ID + [2026-02-27 00:03:49,644 W 1875 1890] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file + [2026-02-27 00:03:51,471 I 1875 1875] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000 + [2026-02-27 00:03:51,474 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 2372, the token is 8 + [2026-02-27 00:03:53,363 I 1875 1875] (raylet) metrics_agent_client.cc:54: Exporter initialized. + diff --git a/test/dashboard_agent.out b/test/dashboard_agent.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/debug_state.txt b/test/debug_state.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e6ebaaa6811b5df4ac2e30474a9ded5e82a6b3b --- /dev/null +++ b/test/debug_state.txt @@ -0,0 +1,199 @@ +NodeManager: +Node ID: cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +Node name: 10.128.0.163 +InitialConfigResources: {GPU: 1, node:__internal_head__: 1, object_store_memory: 9.52932e+09, memory: 2.22351e+10, CPU: 8, node:10.128.0.163: 1, accelerator_type:L4: 1} +ClusterLeaseManager: +========== Node: cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 ================= +Infeasible queue length: 0 +Schedule queue length: 0 +Grant queue length: 0 +num_waiting_for_resource: 0 +num_waiting_for_plasma_memory: 0 +num_waiting_for_remote_node_resources: 0 +num_worker_not_started_by_job_config_not_exist: 0 +num_worker_not_started_by_registration_timeout: 0 +num_tasks_waiting_for_workers: 0 +num_cancelled_leases: 0 +cluster_resource_scheduler state: 
+Local id: -7930779791598977017 Local resources: {"total":{GPU: [10000], node:10.128.0.163: [10000], node:__internal_head__: [10000], CPU: [80000], memory: [222350872580000], object_store_memory: [95293231100000], accelerator_type:L4: [10000]}}, "available": {GPU: [10000], node:10.128.0.163: [10000], node:__internal_head__: [10000], CPU: [70000], memory: [222350872580000], object_store_memory: [95293231100000], accelerator_type:L4: [10000]}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7",} is_draining: 0 is_idle: 0 Cluster resources (at most 20 nodes are shown): node id: -7930779791598977017{"total":{accelerator_type:L4: 10000, node:__internal_head__: 10000, node:10.128.0.163: 10000, object_store_memory: 95293231100000, CPU: 80000, GPU: 10000, memory: 222350872580000}}, "available": {CPU: 70000, node:10.128.0.163: 10000, node:__internal_head__: 10000, object_store_memory: 95293231100000, accelerator_type:L4: 10000, GPU: 10000, memory: 222350872580000}}, "labels":{"ray.io/node-id":"cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7","ray.io/accelerator-type":"L4",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placement group locations": [], "node to bundles": []} +Waiting leases size: 0 +Number of granted lease arguments: 1 +Number of pinned lease arguments: 0 +Number of total spilled leases: 0 +Number of spilled waiting leases: 0 +Number of spilled unschedulable leases: 0 +Resource usage { + - (language=PYTHON actor_or_taskTaskRunner.__init__ pid=2410 worker_id=a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508): {CPU: 1} +} +Backlog Size per scheduling descriptor :{workerId: num backlogs}: + +Granted leases by scheduling class: + - {depth=1 function_descriptor={type=PythonFunctionDescriptor, module_name=main_ppo, class_name=TaskRunner, function_name=__init__, function_hash=bd0c197dfe784848a2857f58f0c85f47} scheduling_strategy=default_scheduling_strategy { +} + 
resource_set={CPU : 1, }label_selector={}}fallback_strategy=[]: 1/8 +================================================== + +ClusterResources: +LocalObjectManager: +- num pinned objects: 0 +- pinned objects size: 0 +- num objects pending restore: 0 +- num objects pending spill: 0 +- num bytes pending spill: 0 +- num bytes currently spilled: 0 +- cumulative spill requests: 0 +- cumulative restore requests: 0 +- spilled objects pending delete: 0 + +ObjectManager: +- num local objects: 0 +- num unfulfilled push requests: 0 +- num object pull requests: 0 +- num chunks received total: 0 +- num chunks received failed (all): 0 +- num chunks received failed / cancelled: 0 +- num chunks received failed / plasma error: 0 +Event stats: +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: +PushManager: +- num pushes remaining: 0 +- num chunks in flight: 0 +- num chunks remaining: 0 +- max chunks allowed: 409 +OwnershipBasedObjectDirectory: +- num listeners: 0 +- cumulative location updates: 0 +- num location updates per second: 0.000 +- num location lookups per second: 0.000 +- num locations added per second: 0.000 +- num locations removed per second: 0.000 +BufferPool: +- create buffer state map size: 0 +PullManager: +- num bytes available for pulled objects: 9529323110 +- num bytes being pulled (all): 0 +- num bytes being pulled / pinned: 0 +- get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable} +- wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable} +- task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable} +- first get request bundle: N/A +- first wait request bundle: N/A +- first task request bundle: N/A +- num objects queued: 0 +- num objects actively pulled (all): 0 +- num objects actively pulled / pinned: 0 +- num bundles being pulled: 0 
+- num pull retries: 0 +- max timeout seconds: 0 +- max timeout request is already processed. No entry. + +WorkerPool: +- registered jobs: 1 +- process_failed_job_config_missing: 0 +- process_failed_rate_limited: 0 +- process_failed_pending_registration: 0 +- process_failed_runtime_env_setup_failed: 0 +- num PYTHON workers: 8 +- num PYTHON drivers: 1 +- num PYTHON pending start requests: 0 +- num PYTHON pending registration requests: 0 +- num object spill callbacks queued: 0 +- num object restore queued: 0 +- num util functions queued: 0 +- num idle workers: 7 +LeaseDependencyManager: +- lease deps map size: 0 +- get req map size: 0 +- wait req map size: 0 +- local objects map size: 0 +WaitManager: +- num active wait requests: 0 +Subscriber: +Channel WORKER_REF_REMOVED_CHANNEL +- cumulative subscribe requests: 0 +- cumulative unsubscribe requests: 0 +- active subscribed publishers: 0 +- cumulative published messages: 0 +- cumulative processed messages: 0 +Channel WORKER_OBJECT_LOCATIONS_CHANNEL +- cumulative subscribe requests: 0 +- cumulative unsubscribe requests: 0 +- active subscribed publishers: 0 +- cumulative published messages: 0 +- cumulative processed messages: 0 +Channel WORKER_OBJECT_EVICTION +- cumulative subscribe requests: 0 +- cumulative unsubscribe requests: 0 +- active subscribed publishers: 0 +- cumulative published messages: 0 +- cumulative processed messages: 0 +num async plasma notifications: 0 +Event stats: +Global stats: 826 total (24 active) +Queueing time: mean = 10.27ms, max = 1099.92ms, min = 0.00ms, total = 8481.77ms +Execution time: mean = 8.39ms, total = 6929.63ms +Event stats: + RaySyncer.OnDemandBroadcasting - 100 total (1 active), Execution time: mean = 0.02ms, total = 1.77ms, Queueing time: mean = 0.14ms, max = 6.15ms, min = 0.02ms, total = 14.11ms + ObjectManager.UpdateAvailableMemory - 100 total (0 active), Execution time: mean = 0.00ms, total = 0.37ms, Queueing time: mean = 0.02ms, max = 0.21ms, min = 0.01ms, total = 2.41ms + 
NodeManager.CheckGC - 100 total (1 active), Execution time: mean = 0.00ms, total = 0.25ms, Queueing time: mean = 0.16ms, max = 6.17ms, min = 0.02ms, total = 15.50ms + NodeManagerService.grpc_server.ReportWorkerBacklog.HandleRequestImpl - 81 total (0 active), Execution time: mean = 0.07ms, total = 5.86ms, Queueing time: mean = 0.04ms, max = 0.47ms, min = 0.00ms, total = 2.88ms + NodeManagerService.grpc_server.ReportWorkerBacklog - 81 total (0 active), Execution time: mean = 0.31ms, total = 25.34ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + RayletWorkerPool.deadline_timer.kill_idle_workers - 50 total (1 active), Execution time: mean = 0.02ms, total = 0.95ms, Queueing time: mean = 0.12ms, max = 2.61ms, min = 0.02ms, total = 6.00ms + ClientConnection.async_read.ProcessMessageHeader - 41 total (9 active), Execution time: mean = 0.01ms, total = 0.21ms, Queueing time: mean = 56.87ms, max = 1099.92ms, min = 0.02ms, total = 2331.78ms + MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 40 total (1 active), Execution time: mean = 0.23ms, total = 9.26ms, Queueing time: mean = 0.11ms, max = 3.20ms, min = 0.02ms, total = 4.44ms + ClientConnection.async_read.ProcessMessage - 32 total (0 active), Execution time: mean = 1.12ms, total = 35.92ms, Queueing time: mean = 0.01ms, max = 0.13ms, min = 0.00ms, total = 0.47ms + PeriodicalRunner.RunFnPeriodically - 14 total (0 active), Execution time: mean = 0.22ms, total = 3.15ms, Queueing time: mean = 6.03ms, max = 17.68ms, min = 0.05ms, total = 84.36ms + NodeManager.ScheduleAndGrantLeases - 11 total (1 active), Execution time: mean = 0.01ms, total = 0.16ms, Queueing time: mean = 0.06ms, max = 0.17ms, min = 0.02ms, total = 0.61ms + NodeManager.CheckForUnexpectedWorkerDisconnects - 11 total (1 active), Execution time: mean = 0.02ms, total = 0.18ms, Queueing time: mean = 0.05ms, max = 0.17ms, min = 0.01ms, total = 0.59ms + ClientConnection.async_write.DoAsyncWrites - 11 total (0 active), Execution 
time: mean = 0.00ms, total = 0.01ms, Queueing time: mean = 0.05ms, max = 0.31ms, min = 0.02ms, total = 0.53ms + NodeManagerService.grpc_server.GetSystemConfig.HandleRequestImpl - 10 total (0 active), Execution time: mean = 0.17ms, total = 1.74ms, Queueing time: mean = 0.08ms, max = 0.40ms, min = 0.01ms, total = 0.75ms + NodeManager.deadline_timer.flush_free_objects - 10 total (1 active), Execution time: mean = 0.00ms, total = 0.04ms, Queueing time: mean = 0.18ms, max = 1.58ms, min = 0.02ms, total = 1.81ms + ObjectManager.ObjectAdded - 10 total (0 active), Execution time: mean = 0.02ms, total = 0.19ms, Queueing time: mean = 0.13ms, max = 0.57ms, min = 0.02ms, total = 1.31ms + NodeManagerService.grpc_server.GetResourceLoad - 10 total (0 active), Execution time: mean = 0.44ms, total = 4.40ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + NodeManagerService.grpc_server.GetSystemConfig - 10 total (0 active), Execution time: mean = 0.65ms, total = 6.45ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ObjectManager.ObjectDeleted - 10 total (0 active), Execution time: mean = 0.02ms, total = 0.16ms, Queueing time: mean = 0.08ms, max = 0.38ms, min = 0.02ms, total = 0.78ms + NodeManager.deadline_timer.spill_objects_when_over_threshold - 10 total (1 active), Execution time: mean = 0.00ms, total = 0.02ms, Queueing time: mean = 0.18ms, max = 1.58ms, min = 0.02ms, total = 1.82ms + NodeManagerService.grpc_server.GetResourceLoad.HandleRequestImpl - 10 total (0 active), Execution time: mean = 0.14ms, total = 1.44ms, Queueing time: mean = 0.03ms, max = 0.05ms, min = 0.02ms, total = 0.34ms + ReporterService.grpc_client.HealthCheck - 7 total (0 active), Execution time: mean = 0.79ms, total = 5.53ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ReporterService.grpc_client.HealthCheck.OnReplyReceived - 7 total (0 active), Execution time: mean = 0.09ms, total = 
0.65ms, Queueing time: mean = 0.04ms, max = 0.06ms, min = 0.02ms, total = 0.27ms + MetricsAgentClient.WaitForServerReadyWithRetry - 6 total (0 active), Execution time: mean = 0.20ms, total = 1.21ms, Queueing time: mean = 1000.03ms, max = 1000.07ms, min = 1000.01ms, total = 6000.21ms + - 4 total (0 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.05ms, max = 0.13ms, min = 0.02ms, total = 0.22ms + RaySyncer.BroadcastMessage - 4 total (0 active), Execution time: mean = 0.12ms, total = 0.49ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms + ClusterResourceManager.ResetRemoteNodeView - 4 total (1 active), Execution time: mean = 0.01ms, total = 0.03ms, Queueing time: mean = 0.03ms, max = 0.04ms, min = 0.03ms, total = 0.10ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 3 total (1 active), Execution time: mean = 737.29ms, total = 2211.87ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + event_loop_lag_probe - 3 total (0 active), Execution time: mean = 0.01ms, total = 0.03ms, Queueing time: mean = 1.95ms, max = 5.83ms, min = 0.00ms, total = 5.84ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 2 total (0 active), Execution time: mean = 0.20ms, total = 0.41ms, Queueing time: mean = 1.19ms, max = 2.36ms, min = 0.01ms, total = 2.37ms + RaySyncerRegister - 2 total (0 active), Execution time: mean = 0.02ms, total = 0.04ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll.OnReplyReceived - 2 total (0 active), Execution time: mean = 0.33ms, total = 0.67ms, Queueing time: mean = 0.28ms, max = 0.42ms, min = 0.14ms, total = 0.56ms + NodeManager.deadline_timer.record_metrics - 2 total (1 active), Execution time: mean = 0.15ms, total = 0.29ms, Queueing time: mean = 0.10ms, max = 0.21ms, min = 0.21ms, total = 0.21ms + 
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 2 total (0 active), Execution time: mean = 0.98ms, total = 1.96ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.ReportWorkerFailure.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.57ms, max = 0.57ms, min = 0.57ms, total = 0.57ms + CoreWorkerService.grpc_client.Exit - 1 total (0 active), Execution time: mean = 1.28ms, total = 1.28ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::NodeInfoGcsService.grpc_client.CheckAlive.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + Subscriber.HandlePublishedMessage_GCS_WORKER_DELTA_CHANNEL - 1 total (0 active), Execution time: mean = 0.01ms, total = 0.01ms, Queueing time: mean = 0.21ms, max = 0.21ms, min = 0.21ms, total = 0.21ms + Subscriber.HandlePublishedMessage_GCS_JOB_CHANNEL - 1 total (0 active), Execution time: mean = 0.09ms, total = 0.09ms, Queueing time: mean = 0.47ms, max = 0.47ms, min = 0.47ms, total = 0.47ms + NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::NodeInfoGcsService.grpc_client.CheckAlive - 1 total (0 active), Execution time: mean = 0.73ms, total = 0.73ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::JobInfoGcsService.grpc_client.GetAllJobInfo - 1 total (0 active), Execution time: mean = 0.77ms, total = 0.77ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::JobInfoGcsService.grpc_client.AddJob - 1 total (0 active), Execution time: mean = 0.92ms, total = 0.92ms, Queueing time: mean 
= 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 1.14ms, total = 1.14ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + NodeManagerService.grpc_server.RequestWorkerLease.HandleRequestImpl - 1 total (0 active), Execution time: mean = 0.25ms, total = 0.25ms, Queueing time: mean = 0.04ms, max = 0.04ms, min = 0.04ms, total = 0.04ms + ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.23ms, total = 0.23ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms + ray::rpc::JobInfoGcsService.grpc_client.GetAllJobInfo.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms + NodeManagerService.grpc_server.GetWorkerPIDs.HandleRequestImpl - 1 total (0 active), Execution time: mean = 0.12ms, total = 0.12ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 0.79ms, total = 0.79ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + NodeManagerService.grpc_server.GetWorkerPIDs - 1 total (0 active), Execution time: mean = 0.40ms, total = 0.40ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.ReportWorkerFailure - 1 total (0 active), Execution time: mean = 2.88ms, total = 2.88ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.40ms, total = 0.40ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + 
NodeManager.GcsCheckAlive - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 22.86ms, total = 22.86ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness - 1 total (0 active), Execution time: mean = 0.82ms, total = 0.82ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + NodeManager.deadline_timer.print_event_loop_stats - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + NodeManager.deadline_timer.debug_state_dump - 1 total (1 active, 1 running), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorkerService.grpc_client.Exit.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.03ms, total = 0.03ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms + NodeManagerService.grpc_server.RequestWorkerLease - 1 total (0 active), Execution time: mean = 4574.75ms, total = 4574.75ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::JobInfoGcsService.grpc_client.AddJob.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.05ms, total = 0.05ms, Queueing time: mean = 0.04ms, max = 0.04ms, min = 0.04ms, total = 0.04ms +DebugString() time ms: 1 \ No newline at end of file diff --git a/test/events/event_AUTOSCALER.log b/test/events/event_AUTOSCALER.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1384.log 
b/test/events/event_CORE_WORKER_1384.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1922.log b/test/events/event_CORE_WORKER_1922.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1923.log b/test/events/event_CORE_WORKER_1923.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1924.log b/test/events/event_CORE_WORKER_1924.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1925.log b/test/events/event_CORE_WORKER_1925.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1926.log b/test/events/event_CORE_WORKER_1926.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1927.log b/test/events/event_CORE_WORKER_1927.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1928.log b/test/events/event_CORE_WORKER_1928.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_1929.log b/test/events/event_CORE_WORKER_1929.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_CORE_WORKER_2410.log b/test/events/event_CORE_WORKER_2410.log new file mode 100644 index 0000000000000000000000000000000000000000..431178a6afd0ddfed67d8a599be437e205d3075d --- /dev/null +++ 
b/test/events/event_CORE_WORKER_2410.log @@ -0,0 +1 @@ +{"custom_fields":{"worker_id":"a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508"},"event_id":"7927057f1a3508400200893de0c9b7ed1726","host_name":"cs-01kje4289qf3k6pv20jzcef9t8","label":"RAY_FATAL_CHECK_FAILED","message":"src/ray/core_worker/task_execution/task_receiver.cc:132 (PID: 2410, TID: 2410, errno: 32 (Broken pipe)): An unexpected system state has occurred. You have likely discovered a bug in Ray. Please report this issue at https://github.com/ray-project/ray/issues and we'll work with you to fix it. Check failed: actor_creation_task_done_() Status not OK: IOError: Broken pipe \\n*** StackTrace Information ***\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x16bc06a) [0x7cd486fab06a] ray::operator<<()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray6RayLogD1Ev+0x488) [0x7cd486fad6b8] ray::RayLog::~RayLog()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb5fc1a) [0x7cd48644ec1a] ray::core::TaskReceiver::HandleTask()::{lambda()#1}::operator()()::{lambda()#1}::operator()()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb7b5b2) [0x7cd48646a5b2] ray::core::InboundRequest::Accept()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb6bc8b) [0x7cd48645ac8b] ray::core::NormalSchedulingQueue::ScheduleRequests()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1012fa8) [0x7cd486901fa8] EventTracker::RecordExecution()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1009f57) [0x7cd4868f8f57] std::_Function_handler<>::_M_invoke()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb8f31b) [0x7cd48647e31b] boost::asio::detail::executor_op<>::do_complete()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x16873db) [0x7cd486f763db] boost::asio::detail::scheduler::do_run_one()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1688d79) [0x7cd486f77d79] 
boost::asio::detail::scheduler::run()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1689482) [0x7cd486f78482] boost::asio::io_context::run()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core10CoreWorker20RunTaskExecutionLoopEv+0x127) [0x7cd4862fbaf7] ray::core::CoreWorker::RunTaskExecutionLoop()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core21CoreWorkerProcessImpl26RunWorkerTaskExecutionLoopEv+0x41) [0x7cd486351461] ray::core::CoreWorkerProcessImpl::RunWorkerTaskExecutionLoop()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core17CoreWorkerProcess20RunTaskExecutionLoopEv+0x1d) [0x7cd48635167d] ray::core::CoreWorkerProcess::RunTaskExecutionLoop()\\n/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x881e81) [0x7cd486170e81] __pyx_pw_3ray_7_raylet_10CoreWorker_5run_task_loop()\\nray::TaskRunner(PyObject_Vectorcall+0x36) [0x5627f6] PyObject_Vectorcall\\nray::TaskRunner(_PyEval_EvalFrameDefault+0x701) [0x54a2e1] _PyEval_EvalFrameDefault\\nray::TaskRunner(PyEval_EvalCode+0x99) [0x620799] PyEval_EvalCode\\nray::TaskRunner() [0x65c44b]\\nray::TaskRunner() [0x6574d6]\\nray::TaskRunner() [0x654145]\\nray::TaskRunner(_PyRun_SimpleFileObject+0x1a5) [0x653e15] _PyRun_SimpleFileObject\\nray::TaskRunner(_PyRun_AnyFileObject+0x47) [0x653927] _PyRun_AnyFileObject\\nray::TaskRunner(Py_RunMain+0x375) [0x650605] Py_RunMain\\nray::TaskRunner(Py_BytesMain+0x2d) [0x60962d] Py_BytesMain\\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7cd48eca3d90]\\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7cd48eca3e40] __libc_start_main\\nray::TaskRunner(_start+0x25) [0x6094a5] _start\\n","pid":"2410","severity":"FATAL","source_type":"CORE_WORKER","timestamp":1772150674} diff --git a/test/events/event_GCS.log b/test/events/event_GCS.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/events/event_RAYLET.log 
b/test/events/event_RAYLET.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_ACTOR.log b/test/export_events/event_EXPORT_ACTOR.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_DRIVER_JOB.log b/test/export_events/event_EXPORT_DRIVER_JOB.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_NODE.log b/test/export_events/event_EXPORT_NODE.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1384.log b/test/export_events/event_EXPORT_TASK_1384.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1922.log b/test/export_events/event_EXPORT_TASK_1922.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1923.log b/test/export_events/event_EXPORT_TASK_1923.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1924.log b/test/export_events/event_EXPORT_TASK_1924.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1925.log b/test/export_events/event_EXPORT_TASK_1925.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1926.log b/test/export_events/event_EXPORT_TASK_1926.log new file mode 100644 index 
0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1927.log b/test/export_events/event_EXPORT_TASK_1927.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1928.log b/test/export_events/event_EXPORT_TASK_1928.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_1929.log b/test/export_events/event_EXPORT_TASK_1929.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/export_events/event_EXPORT_TASK_2410.log b/test/export_events/event_EXPORT_TASK_2410.log new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/gcs_server.err b/test/gcs_server.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/gcs_server.out b/test/gcs_server.out new file mode 100644 index 0000000000000000000000000000000000000000..5a2bade1246d1cbf8a3e834a0b612c16707e5045 --- /dev/null +++ b/test/gcs_server.out @@ -0,0 +1,168 @@ +[2026-02-27 00:03:44,162 I 1457 1457] (gcs_server) gcs_server_main.cc:89: Ray cluster metadata ray_version=2.52.1 ray_commit=4ebdc0abe5e5a551625fe7f87053c7e668a6ff74 +[2026-02-27 00:03:44,164 I 1457 1457] (gcs_server) event.cc:499: Ray Event initialized for GCS +[2026-02-27 00:03:44,164 I 1457 1457] (gcs_server) event.cc:499: Ray Event initialized for EXPORT_NODE +[2026-02-27 00:03:44,164 I 1457 1457] (gcs_server) event.cc:499: Ray Event initialized for EXPORT_ACTOR +[2026-02-27 00:03:44,164 I 1457 1457] (gcs_server) event.cc:499: Ray Event initialized for EXPORT_DRIVER_JOB +[2026-02-27 00:03:44,164 I 1457 1457] (gcs_server) event.cc:332: Set ray 
event level to warning +[2026-02-27 00:03:44,166 I 1457 1457] (gcs_server) event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:44,170 I 1457 1457] (gcs_server) gcs_server.cc:150: GCS storage type is StorageType::IN_MEMORY +[2026-02-27 00:03:44,170 I 1457 1457] (gcs_server) gcs_init_data.cc:46: Loading job table data. +[2026-02-27 00:03:44,170 I 1457 1457] (gcs_server) gcs_init_data.cc:56: Loading node table data. +[2026-02-27 00:03:44,170 I 1457 1457] (gcs_server) gcs_init_data.cc:76: Loading actor table data. +[2026-02-27 00:03:44,170 I 1457 1457] (gcs_server) gcs_init_data.cc:87: Loading actor task spec table data. +[2026-02-27 00:03:44,170 I 1457 1457] (gcs_server) gcs_init_data.cc:66: Loading placement group table data. +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_init_data.cc:50: Finished loading job table data, size = 0 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_init_data.cc:60: Finished loading node table data, size = 0 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_init_data.cc:81: Finished loading actor table data, size = 0 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_init_data.cc:91: Finished loading actor task spec table data, size = 0 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_init_data.cc:70: Finished loading placement group table data, size = 0 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_server.cc:241: Generated new cluster ID. cluster_id=53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) gcs_server.cc:753: Autoscaler V2 enabled: 0 +[2026-02-27 00:03:44,171 I 1457 1457] (gcs_server) metrics_agent_client.cc:42: Initializing exporter ... +[2026-02-27 00:03:44,172 I 1457 1457] (gcs_server) grpc_server.cc:143: GcsServer server started, listening on port 57355. 
+[2026-02-27 00:03:44,308 I 1457 1457] (gcs_server) gcs_server.cc:926: Gcs Debug state: + +GcsNodeManager: +- RegisterNode request count: 0 +- DrainNode request count: 0 +- GetAllNodeInfo request count: 0 + +GcsActorManager: +- RegisterActor request count: 0 +- CreateActor request count: 0 +- GetActorInfo request count: 0 +- GetNamedActorInfo request count: 0 +- GetAllActorInfo request count: 0 +- KillActor request count: 0 +- ListNamedActors request count: 0 +- Registered actors count: 0 +- Destroyed actors count: 0 +- Named actors count: 0 +- Unresolved actors count: 0 +- Pending actors count: 0 +- Created actors count: 0 +- owners_: 0 +- actor_to_register_callbacks_: 0 +- actor_to_restart_for_lineage_reconstruction_callbacks_: 0 +- actor_to_create_callbacks_: 0 +- sorted_destroyed_actor_list_: 0 + +GcsResourceManager: +- GetAllAvailableResources request count: 0 +- GetAllTotalResources request count: 0 +- GetAllResourceUsage request count: 0 + +GcsPlacementGroupManager: +- CreatePlacementGroup request count: 0 +- RemovePlacementGroup request count: 0 +- GetPlacementGroup request count: 0 +- GetAllPlacementGroup request count: 0 +- WaitPlacementGroupUntilReady request count: 0 +- GetNamedPlacementGroup request count: 0 +- Scheduling pending placement group count: 0 +- Registered placement groups count: 0 +- Named placement group count: 0 +- Pending placement groups count: 0 +- Infeasible placement groups count: 0 + +Publisher: + +[runtime env manager] ID to URIs table: +[runtime env manager] URIs reference table: + +GcsTaskManager: +-Total num task events reported: 0 +-Total num status task events dropped: 0 +-Total num profile events dropped: 0 +-Current num of task events stored: 0 +-Total num of actor creation tasks: 0 +-Total num of actor tasks: 0 +-Total num of normal tasks: 0 +-Total num of driver tasks: 0 + +GcsAutoscalerStateManager: +- last_seen_autoscaler_state_version_: 0 +- last_cluster_resource_state_version_: 0 +- pending demands: + + + +[2026-02-27 
00:03:44,308 I 1457 1457] (gcs_server) gcs_server.cc:940: Main service Event stats: + + +Global stats: 32 total (14 active) +Queueing time: mean = 17.31ms, max = 136.80ms, min = 0.00ms, total = 553.90ms +Execution time: mean = 8.59ms, total = 274.88ms +Event stats: + GcsInMemoryStore.Put - 9 total (6 active), Execution time: mean = 15.19ms, total = 136.75ms, Queueing time: mean = 15.18ms, max = 136.46ms, min = 0.01ms, total = 136.58ms + PeriodicalRunner.RunFnPeriodically - 5 total (2 active, 1 running), Execution time: mean = 0.05ms, total = 0.25ms, Queueing time: mean = 54.73ms, max = 136.80ms, min = 0.17ms, total = 273.66ms + GcsInMemoryStore.GetAll - 5 total (0 active), Execution time: mean = 0.01ms, total = 0.07ms, Queueing time: mean = 0.08ms, max = 0.09ms, min = 0.07ms, total = 0.42ms + event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.01ms, total = 0.01ms, Queueing time: mean = 3.94ms, max = 7.76ms, min = 0.12ms, total = 7.88ms + ReporterService.grpc_client.HealthCheck - 1 total (0 active), Execution time: mean = 0.99ms, total = 0.99ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ReporterService.grpc_client.HealthCheck.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.06ms, total = 0.06ms, Queueing time: mean = 135.34ms, max = 135.34ms, min = 135.34ms, total = 135.34ms + NodeInfoGcsService.grpc_server.GetClusterId - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + MetricsAgentClient.WaitForServerReadyWithRetry - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + GcsInitData::AsyncLoad - 1 total (0 active), Execution time: mean = 0.01ms, total = 0.01ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms + ClusterResourceManager.ResetRemoteNodeView - 1 total 
(1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + GcsInMemoryStore.Get - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms + NodeInfoGcsService.grpc_server.GetClusterId.HandleRequestImpl - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + GcsServer.GetOrGenerateClusterId.continuation - 1 total (0 active), Execution time: mean = 136.71ms, total = 136.71ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms + RayletLoadPulled - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + GCSServer.deadline_timer.metrics_report - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + + +[2026-02-27 00:03:44,308 I 1457 1457] (gcs_server) gcs_server.cc:944: task_io_context Event stats: + + +Global stats: 4 total (1 active) +Queueing time: mean = 0.07ms, max = 0.25ms, min = 0.02ms, total = 0.30ms +Execution time: mean = 0.04ms, total = 0.18ms +Event stats: + event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.08ms, total = 0.17ms, Queueing time: mean = 0.14ms, max = 0.25ms, min = 0.03ms, total = 0.28ms + GcsTaskManager.GcJobSummary - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.01ms, total = 0.01ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + + +[2026-02-27 00:03:44,308 I 1457 1457] (gcs_server) gcs_server.cc:944: pubsub_io_context Event 
stats: + + +Global stats: 4 total (1 active) +Queueing time: mean = 0.08ms, max = 0.26ms, min = 0.02ms, total = 0.31ms +Execution time: mean = 0.04ms, total = 0.16ms +Event stats: + event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.08ms, total = 0.16ms, Queueing time: mean = 0.14ms, max = 0.26ms, min = 0.02ms, total = 0.28ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.01ms, total = 0.01ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms + + +[2026-02-27 00:03:44,308 I 1457 1457] (gcs_server) gcs_server.cc:944: ray_syncer_io_context Event stats: + + +Global stats: 4 total (0 active) +Queueing time: mean = 0.11ms, max = 0.33ms, min = 0.02ms, total = 0.43ms +Execution time: mean = 0.05ms, total = 0.18ms +Event stats: + event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.09ms, total = 0.18ms, Queueing time: mean = 0.18ms, max = 0.33ms, min = 0.02ms, total = 0.35ms + RaySyncerRegister - 2 total (0 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.04ms, max = 0.04ms, min = 0.04ms, total = 0.08ms + + +[2026-02-27 00:03:44,308 I 1457 1457] (gcs_server) gcs_server.cc:944: ray_event_io_context Event stats: + + +Global stats: 2 total (0 active) +Queueing time: mean = 0.21ms, max = 0.41ms, min = 0.01ms, total = 0.43ms +Execution time: mean = 0.06ms, total = 0.13ms +Event stats: + event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.06ms, total = 0.13ms, Queueing time: mean = 0.21ms, max = 0.41ms, min = 0.01ms, total = 0.43ms + + +[2026-02-27 00:03:47,358 I 1457 1457] (gcs_server) gcs_node_manager.cc:106: Registering new node. 
node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 node_name=10.128.0.163 node_address=10.128.0.163 +[2026-02-27 00:03:48,570 I 1457 1457] (gcs_server) gcs_job_manager.cc:116: Registering job. job_id=01000000 driver_pid=1384 +[2026-02-27 00:03:48,676 I 1457 1457] (gcs_server) gcs_actor_manager.cc:314: Registering actor job_id=01000000 actor_id=16034777c72931e7b5f9f46401000000 +[2026-02-27 00:03:48,677 I 1457 1457] (gcs_server) gcs_actor_manager.cc:318: Registered actor job_id=01000000 actor_id=16034777c72931e7b5f9f46401000000 +[2026-02-27 00:03:48,678 I 1457 1457] (gcs_server) gcs_actor_manager.cc:432: Creating actor job_id=01000000 actor_id=16034777c72931e7b5f9f46401000000 +[2026-02-27 00:03:48,678 I 1457 1457] (gcs_server) gcs_actor_scheduler.cc:298: Leasing worker for actor. actor_id=16034777c72931e7b5f9f46401000000 job_id=01000000 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:53,254 I 1457 1457] (gcs_server) gcs_actor_scheduler.cc:642: Finished leasing worker from cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 for actor 16034777c72931e7b5f9f46401000000, job id = 01000000 +[2026-02-27 00:03:53,254 I 1457 1457] (gcs_server) gcs_actor_scheduler.cc:444: Submitting actor creation task to worker. actor_id=16034777c72931e7b5f9f46401000000 worker_id=a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 job_id=01000000 +[2026-02-27 00:03:53,313 I 1457 1457] (gcs_server) ray_event_recorder.cc:40: Ray event recording is disabled. Skipping start exporting events. +[2026-02-27 00:03:53,313 I 1457 1457] (gcs_server) metrics_agent_client.cc:54: Exporter initialized. 
diff --git a/test/log_monitor.err b/test/log_monitor.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/log_monitor.log b/test/log_monitor.log new file mode 100644 index 0000000000000000000000000000000000000000..440f23993b495488fd3f2148ab96981365cd9885 --- /dev/null +++ b/test/log_monitor.log @@ -0,0 +1,22 @@ +2026-02-27 00:03:48,091 INFO log_monitor.py:169 -- Starting log monitor with [max open files=200], [is_autoscaler_v2=False] +2026-02-27 00:03:48,093 INFO log_monitor.py:291 -- Beginning to track file raylet.err +2026-02-27 00:03:48,096 INFO log_monitor.py:291 -- Beginning to track file monitor.log +2026-02-27 00:03:48,096 INFO log_monitor.py:291 -- Beginning to track file gcs_server.err +2026-02-27 00:03:48,207 INFO log_monitor.py:291 -- Beginning to track file worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94-ffffffff-1924.out +2026-02-27 00:03:48,207 INFO log_monitor.py:291 -- Beginning to track file worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb-ffffffff-1923.err +2026-02-27 00:03:48,207 INFO log_monitor.py:291 -- Beginning to track file worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb-ffffffff-1923.out +2026-02-27 00:03:48,207 INFO log_monitor.py:291 -- Beginning to track file worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94-ffffffff-1924.err +2026-02-27 00:03:48,413 INFO log_monitor.py:291 -- Beginning to track file worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8-ffffffff-1922.out +2026-02-27 00:03:48,414 INFO log_monitor.py:291 -- Beginning to track file worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8-ffffffff-1922.err +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157-ffffffff-1927.err +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file 
worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550-ffffffff-1926.err +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550-ffffffff-1926.out +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2-ffffffff-1928.err +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157-ffffffff-1927.out +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e-ffffffff-1925.err +2026-02-27 00:03:48,516 INFO log_monitor.py:291 -- Beginning to track file worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2-ffffffff-1928.out +2026-02-27 00:03:48,517 INFO log_monitor.py:291 -- Beginning to track file worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e-ffffffff-1925.out +2026-02-27 00:03:48,618 INFO log_monitor.py:291 -- Beginning to track file worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111-ffffffff-1929.out +2026-02-27 00:03:48,618 INFO log_monitor.py:291 -- Beginning to track file worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111-ffffffff-1929.err +2026-02-27 00:03:53,262 INFO log_monitor.py:291 -- Beginning to track file worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.err +2026-02-27 00:03:53,262 INFO log_monitor.py:291 -- Beginning to track file worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.out diff --git a/test/log_monitor.out b/test/log_monitor.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/monitor.err b/test/monitor.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git 
a/test/monitor.log b/test/monitor.log new file mode 100644 index 0000000000000000000000000000000000000000..2455b980c1793f25573451d1567d7dd9fa29dfa5 --- /dev/null +++ b/test/monitor.log @@ -0,0 +1,88 @@ +2026-02-27 00:03:44,911 INFO monitor.py:729 -- Starting monitor using ray installation: /usr/local/lib/python3.12/dist-packages/ray/__init__.py +2026-02-27 00:03:44,911 INFO monitor.py:730 -- Ray version: 2.52.1 +2026-02-27 00:03:44,911 INFO monitor.py:731 -- Ray commit: 4ebdc0abe5e5a551625fe7f87053c7e668a6ff74 +2026-02-27 00:03:44,911 INFO monitor.py:732 -- Monitor started with command: ['/usr/local/lib/python3.12/dist-packages/ray/autoscaler/_private/monitor.py', '--logs-dir=/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs', '--logging-rotate-bytes=536870912', '--logging-rotate-backup-count=5', '--gcs-address=10.128.0.163:57355', '--stdout-filepath=/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs/monitor.out', '--stderr-filepath=/tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs/monitor.err', '--monitor-ip=10.128.0.163'] +2026-02-27 00:03:44,918 INFO monitor.py:161 -- session_name: session_2026-02-27_00-03-44_103874_1384 +2026-02-27 00:03:44,920 INFO monitor.py:193 -- Starting autoscaler metrics server on port 44217 +2026-02-27 00:03:44,929 INFO monitor.py:218 -- Monitor: Started +2026-02-27 00:03:44,939 INFO autoscaler.py:280 -- disable_node_updaters:False +2026-02-27 00:03:44,939 INFO autoscaler.py:289 -- disable_launch_config_check:True +2026-02-27 00:03:44,939 INFO autoscaler.py:301 -- foreground_node_launch:False +2026-02-27 00:03:44,939 INFO autoscaler.py:311 -- worker_liveness_check:True +2026-02-27 00:03:44,940 INFO autoscaler.py:361 -- StandardAutoscaler: {'cluster_name': 'default', 'max_workers': 0, 'upscaling_speed': 1.0, 'docker': {}, 'idle_timeout_minutes': 0, 'provider': {'type': 'readonly', 'use_node_id_as_ip': True, 'disable_launch_config_check': True}, 'auth': {}, 'available_node_types': {'ray.head.default': {'resources': {}, 
'node_config': {}, 'max_workers': 0}}, 'head_node_type': 'ray.head.default', 'file_mounts': {}, 'cluster_synced_files': [], 'file_mounts_sync_continuously': False, 'rsync_exclude': [], 'rsync_filter': [], 'initialization_commands': [], 'setup_commands': [], 'head_setup_commands': [], 'worker_setup_commands': [], 'head_start_ray_commands': [], 'worker_start_ray_commands': []} +2026-02-27 00:03:44,942 INFO monitor.py:407 -- Autoscaler has not yet received load metrics. Waiting. +2026-02-27 00:03:49,946 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes. +2026-02-27 00:03:49,946 INFO autoscaler.py:408 -- +======== Autoscaler status: 2026-02-27 00:03:49.946481 ======== +Node status +--------------------------------------------------------------- +Active: + 1 node_cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +Pending: + (no pending nodes) +Recent failures: + (no failures) + +Resources +--------------------------------------------------------------- +Total Usage: + 1.0/8.0 CPU + 0.0/1.0 GPU + 0B/20.71GiB memory + 0B/8.87GiB object_store_memory + +From request_resources: + (none) +Pending Demands: + (no resource demands) +2026-02-27 00:03:49,947 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration. +2026-02-27 00:03:54,951 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes. 
+2026-02-27 00:03:54,951 INFO autoscaler.py:408 -- +======== Autoscaler status: 2026-02-27 00:03:54.951431 ======== +Node status +--------------------------------------------------------------- +Active: + 1 node_cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +Pending: + (no pending nodes) +Recent failures: + (no failures) + +Resources +--------------------------------------------------------------- +Total Usage: + 1.0/8.0 CPU + 0.0/1.0 GPU + 0B/20.71GiB memory + 0B/8.87GiB object_store_memory + +From request_resources: + (none) +Pending Demands: + (no resource demands) +2026-02-27 00:03:54,952 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration. +2026-02-27 00:03:59,955 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes. +2026-02-27 00:03:59,956 INFO autoscaler.py:408 -- +======== Autoscaler status: 2026-02-27 00:03:59.956153 ======== +Node status +--------------------------------------------------------------- +Active: + 1 node_cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +Pending: + (no pending nodes) +Recent failures: + (no failures) + +Resources +--------------------------------------------------------------- +Total Usage: + 1.0/8.0 CPU + 0.0/1.0 GPU + 0B/20.71GiB memory + 0B/8.87GiB object_store_memory + +From request_resources: + (none) +Pending Demands: + (no resource demands) +2026-02-27 00:03:59,957 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration. 
diff --git a/test/monitor.out b/test/monitor.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/nsight/worker_process_2410.nsys-rep b/test/nsight/worker_process_2410.nsys-rep new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/python-core-driver-01000000ffffffffffffffffffffffffffffffffffffffffffffffff_1384.log b/test/python-core-driver-01000000ffffffffffffffffffffffffffffffffffffffffffffffff_1384.log new file mode 100644 index 0000000000000000000000000000000000000000..83c5790515120ecf833710eb5bb68d0273071e7f --- /dev/null +++ b/test/python-core-driver-01000000ffffffffffffffffffffffffffffffffffffffffffffffff_1384.log @@ -0,0 +1,66 @@ +[2026-02-27 00:03:47,433 I 1384 1384] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1384 +[2026-02-27 00:03:47,438 I 1384 1384] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:47,438 I 1384 1384] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:47,438 I 1384 1384] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:47,438 I 1384 1384] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,557 I 1384 1384] grpc_server.cc:143: driver server started, listening on port 50413. +[2026-02-27 00:03:48,568 I 1384 1384] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50413 worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,568 I 1384 1384] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. 
+[2026-02-27 00:03:48,570 I 1384 1920] core_worker.cc:455: Event stats: + + +Global stats: 11 total (9 active) +Queueing time: mean = 0.00ms, max = 0.03ms, min = 0.03ms, total = 0.03ms +Execution time: mean = 0.05ms, total = 0.57ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 6 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.02ms, Queueing time: mean = 0.00ms, max = 0.03ms, min = 0.03ms, total = 0.03ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (0 active), Execution time: mean = 0.55ms, total = 0.55ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 4 total (1 active) +Queueing time: mean = 0.01ms, max = 0.02ms, min = 0.01ms, total = 0.03ms +Execution time: mean = 0.21ms, total = 0.82ms +Event stats: + 
CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.61ms, total = 0.61ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.19ms, total = 0.19ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms +Other Stats: + gcs_grpc_in_progress:0 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 1 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,571 I 1384 1384] metrics_agent_client.cc:42: Initializing exporter ... 
+[2026-02-27 00:03:48,571 I 1384 1920] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,571 I 1384 1920] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:48,675 I 1384 1384] actor_task_submitter.cc:74: Set actor max pending calls to -1 actor_id=16034777c72931e7b5f9f46401000000 +[2026-02-27 00:03:48,690 I 1384 1920] actor_manager.cc:236: received notification on actor, state: PENDING_CREATION, ip address: , port: 0, num_restarts: 0, death context type=CONTEXT_NOT_SET actor_id=16034777c72931e7b5f9f46401000000 worker_id=NIL_ID node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:52,575 I 1384 1920] metrics_agent_client.cc:54: Exporter initialized. diff --git a/test/python-core-worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2_1928.log b/test/python-core-worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2_1928.log new file mode 100644 index 0000000000000000000000000000000000000000..532e9db2f5f95fbe47418f24e8b8c314c6bdaa1a --- /dev/null +++ b/test/python-core-worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2_1928.log @@ -0,0 +1,101 @@ +[2026-02-27 00:03:48,439 I 1928 1928] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1928 +[2026-02-27 00:03:48,445 I 1928 1928] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,446 I 1928 1928] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,446 I 1928 1928] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,446 I 1928 1928] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,450 I 1928 1928] grpc_server.cc:143: worker server started, listening on port 50227. 
+[2026-02-27 00:03:48,465 I 1928 1928] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50227 worker_id=23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,466 I 1928 1928] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. +[2026-02-27 00:03:48,470 I 1928 2123] core_worker.cc:455: Event stats: + + +Global stats: 8 total (6 active) +Queueing time: mean = 0.08ms, max = 0.56ms, min = 0.07ms, total = 0.63ms +Execution time: mean = 0.00ms, total = 0.03ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 3 total (1 active, 1 running), Execution time: mean = 0.01ms, total = 0.03ms, Queueing time: mean = 0.21ms, max = 0.56ms, min = 0.07ms, total = 0.63ms + CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, 
total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 4 total (1 active) +Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.03ms +Execution time: mean = 0.28ms, total = 1.11ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.44ms, total = 0.44ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.65ms, total = 0.65ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms +Other Stats: + gcs_grpc_in_progress:0 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,472 I 1928 1928] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:48,473 I 1928 1928] metrics_agent_client.cc:42: Initializing exporter ... 
+[2026-02-27 00:03:48,475 I 1928 2123] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,475 I 1928 2123] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:53,478 I 1928 2123] metrics_agent_client.cc:54: Exporter initialized. +[2026-02-27 00:04:03,474 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,470 W 1928 2161] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0] +[2026-02-27 00:04:04,474 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:05,474 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:06,474 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:07,474 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: 
Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:08,475 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:09,475 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:10,475 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:11,475 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:12,475 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:13,476 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:14,476 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:15,476 I 
1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:16,476 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:17,476 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:18,476 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:19,477 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:20,477 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:21,477 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:22,477 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: 
failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:23,477 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:24,477 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:25,478 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:26,478 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:27,478 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:28,478 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:29,478 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to 
connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:30,479 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:31,479 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:32,479 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:33,479 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:34,479 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:35,479 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:36,480 I 1928 2123] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:37,480 I 1928 2123] 
raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 diff --git a/test/python-core-worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111_1929.log b/test/python-core-worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111_1929.log new file mode 100644 index 0000000000000000000000000000000000000000..763086b68be4baa4701ca1f2efbd4a1d868da990 --- /dev/null +++ b/test/python-core-worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111_1929.log @@ -0,0 +1,84 @@ +[2026-02-27 00:03:48,537 I 1929 1929] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1929 +[2026-02-27 00:03:48,540 I 1929 1929] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,540 I 1929 1929] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,540 I 1929 1929] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,540 I 1929 1929] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,541 I 1929 1929] grpc_server.cc:143: worker server started, listening on port 50197. +[2026-02-27 00:03:48,553 I 1929 1929] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50197 worker_id=29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,553 I 1929 1929] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. +[2026-02-27 00:03:48,555 I 1929 1929] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:48,555 I 1929 1929] metrics_agent_client.cc:42: Initializing exporter ... 
+[2026-02-27 00:03:48,555 I 1929 2259] core_worker.cc:455: Event stats: + + +Global stats: 12 total (10 active) +Queueing time: mean = 0.01ms, max = 0.07ms, min = 0.03ms, total = 0.10ms +Execution time: mean = 0.00ms, total = 0.03ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.01ms, max = 0.07ms, min = 0.03ms, total = 0.10ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 4 total (1 active) +Queueing time: mean = 0.01ms, max = 0.02ms, min = 0.01ms, total = 0.03ms +Execution time: mean = 0.21ms, total = 0.86ms +Event stats: + CoreWorker.deadline_timer.flush_task_events - 1 
total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.03ms, total = 0.03ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.23ms, total = 0.23ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.59ms, total = 0.59ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Other Stats: + gcs_grpc_in_progress:0 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,557 I 1929 2259] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,557 I 1929 2259] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:49,564 I 1929 2259] core_worker_shutdown_executor.cc:184: Executing handle exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: -1ms) +[2026-02-27 00:03:49,564 I 1929 2259] core_worker_shutdown_executor.cc:94: Executing worker exit: 
INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: 10000ms) +[2026-02-27 00:03:49,564 I 1929 1929] core_worker_shutdown_executor.cc:128: Wait for currently executing tasks in the underlying thread pools to finish. +[2026-02-27 00:03:49,564 I 1929 1929] core_worker_shutdown_executor.cc:162: Releasing local references, then draining reference counter. +[2026-02-27 00:03:49,566 I 1929 1929] core_worker_shutdown_executor.cc:217: Try killing all child processes of this worker as it exits. Child process pids: +[2026-02-27 00:03:49,566 I 1929 1929] core_worker_shutdown_executor.cc:262: Sending disconnect message to the local raylet. +[2026-02-27 00:03:49,568 I 1929 1929] raylet_ipc_client.cc:135: RayletIpcClient::Disconnect, exit_type=INTENDED_SYSTEM_EXIT, exit_detail=Worker exited because it was idle for a long time, has creation_task_exception_pb_bytes=0 +[2026-02-27 00:03:49,568 I 1929 1929] core_worker_shutdown_executor.cc:279: Disconnected from the local raylet. +[2026-02-27 00:03:49,568 I 1929 1929] task_event_buffer.cc:491: Shutting down TaskEventBuffer. +[2026-02-27 00:03:49,569 I 1929 2267] task_event_buffer.cc:459: Task event buffer io service stopped. +[2026-02-27 00:03:49,569 I 1929 1929] core_worker_shutdown_executor.cc:54: Waiting for joining a core worker io thread. If it hangs here, there might be deadlock or a high load in the core worker io service. +[2026-02-27 00:03:49,569 I 1929 2259] core_worker_process.cc:194: Core worker main io service stopped. +[2026-02-27 00:03:49,644 I 1929 1929] core_worker_shutdown_executor.cc:72: Disconnecting a GCS client. +[2026-02-27 00:03:49,644 I 1929 1929] core_worker_shutdown_executor.cc:79: Core worker ready to be deallocated. +[2026-02-27 00:03:49,644 I 1929 1929] core_worker_process.cc:950: Task execution loop terminated. Removing the global worker. 
+[2026-02-27 00:03:49,644 I 1929 1929] core_worker.cc:539: Core worker is destructed +[2026-02-27 00:03:49,644 I 1929 1929] task_event_buffer.cc:491: Shutting down TaskEventBuffer. +[2026-02-27 00:03:49,645 I 1929 1929] core_worker_process.cc:846: Destructing CoreWorkerProcessImpl. pid: 1929 +[2026-02-27 00:03:49,645 I 1929 1929] stats.h:149: Stats module has shutdown. +[2026-02-27 00:03:49,666 W 1929 1929] core_worker_process.cc:860: The core worker process is not initialized yet or already shutdown. diff --git a/test/python-core-worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94_1924.log b/test/python-core-worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94_1924.log new file mode 100644 index 0000000000000000000000000000000000000000..3c89721b39b25d83129c105f0ffcb9f7f75e0b33 --- /dev/null +++ b/test/python-core-worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94_1924.log @@ -0,0 +1,100 @@ +[2026-02-27 00:03:48,150 I 1924 1924] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1924 +[2026-02-27 00:03:48,162 I 1924 1924] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,163 I 1924 1924] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,163 I 1924 1924] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,163 I 1924 1924] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,164 I 1924 1924] grpc_server.cc:143: worker server started, listening on port 50061. +[2026-02-27 00:03:48,195 I 1924 1924] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50061 worker_id=516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,196 I 1924 1924] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. 
+[2026-02-27 00:03:48,200 I 1924 1995] core_worker.cc:455: Event stats: + + +Global stats: 8 total (6 active) +Queueing time: mean = 0.28ms, max = 2.01ms, min = 0.25ms, total = 2.26ms +Execution time: mean = 0.00ms, total = 0.03ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 3 total (1 active, 1 running), Execution time: mean = 0.01ms, total = 0.03ms, Queueing time: mean = 0.75ms, max = 2.01ms, min = 0.25ms, total = 2.26ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 3 total (2 active) +Queueing time: mean = 0.58ms, max = 1.73ms, min = 1.73ms, total = 1.73ms +Execution time: mean = 0.08ms, total = 0.23ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 1 total (0 
active), Execution time: mean = 0.23ms, total = 0.23ms, Queueing time: mean = 1.73ms, max = 1.73ms, min = 1.73ms, total = 1.73ms + CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Other Stats: + gcs_grpc_in_progress:1 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,202 I 1924 1924] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:48,203 I 1924 1924] metrics_agent_client.cc:42: Initializing exporter ... +[2026-02-27 00:03:48,207 I 1924 1995] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,207 I 1924 1995] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:53,217 I 1924 1995] metrics_agent_client.cc:54: Exporter initialized. 
+[2026-02-27 00:04:03,205 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,202 W 1924 2008] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0] +[2026-02-27 00:04:04,205 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:05,205 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:06,205 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:07,206 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:08,206 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:09,206 I 1924 
1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:10,206 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:11,206 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:12,206 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:13,207 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:14,207 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:15,207 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:16,207 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: 
failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:17,207 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:18,207 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:19,208 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:20,208 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:21,208 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:22,208 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:23,208 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to 
connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:24,208 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:25,209 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:26,209 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:27,209 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:28,209 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:29,209 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:30,210 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:31,210 I 1924 1995] 
raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:32,210 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:33,210 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:34,210 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:35,211 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:36,211 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:37,211 I 1924 1995] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 diff --git a/test/python-core-worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157_1927.log 
b/test/python-core-worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157_1927.log new file mode 100644 index 0000000000000000000000000000000000000000..119db956d83c8255b99589a2363e92ac1be70fb8 --- /dev/null +++ b/test/python-core-worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157_1927.log @@ -0,0 +1,102 @@ +[2026-02-27 00:03:48,511 I 1927 1927] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1927 +[2026-02-27 00:03:48,514 I 1927 1927] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,514 I 1927 1927] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,514 I 1927 1927] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,514 I 1927 1927] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,516 I 1927 1927] grpc_server.cc:143: worker server started, listening on port 50005. +[2026-02-27 00:03:48,530 I 1927 1927] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50005 worker_id=619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,531 I 1927 1927] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. +[2026-02-27 00:03:48,533 I 1927 1927] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:48,533 I 1927 1927] metrics_agent_client.cc:42: Initializing exporter ... 
+[2026-02-27 00:03:48,533 I 1927 2227] core_worker.cc:455: Event stats: + + +Global stats: 13 total (11 active) +Queueing time: mean = 0.04ms, max = 0.40ms, min = 0.06ms, total = 0.46ms +Execution time: mean = 0.00ms, total = 0.03ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.07ms, max = 0.40ms, min = 0.06ms, total = 0.46ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ReporterService.grpc_client.HealthCheck - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 4 total 
(1 active) +Queueing time: mean = 0.02ms, max = 0.05ms, min = 0.02ms, total = 0.07ms +Execution time: mean = 0.23ms, total = 0.92ms +Event stats: + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.03ms, total = 0.03ms, Queueing time: mean = 0.05ms, max = 0.05ms, min = 0.05ms, total = 0.05ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.23ms, total = 0.23ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.66ms, total = 0.66ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Other Stats: + gcs_grpc_in_progress:0 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,535 I 1927 2227] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,535 I 1927 2227] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:52,558 I 1927 2227] metrics_agent_client.cc:54: Exporter initialized. 
+[2026-02-27 00:04:03,536 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,534 W 1927 2252] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0] +[2026-02-27 00:04:04,536 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:05,536 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:06,537 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:07,537 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:08,537 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:09,537 I 1927 
2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:10,537 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:11,537 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:12,537 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:13,538 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:14,538 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:15,538 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:16,538 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: 
failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:17,538 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:18,538 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:19,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:20,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:21,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:22,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:23,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to 
connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:24,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:25,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:26,539 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:27,540 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:28,540 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:29,540 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:30,540 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:31,540 I 1927 2227] 
raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:32,540 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:33,540 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:34,541 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:35,541 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:36,541 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:37,541 I 1927 2227] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 diff --git a/test/python-core-worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508_2410.log 
b/test/python-core-worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508_2410.log new file mode 100644 index 0000000000000000000000000000000000000000..a99510307da29b2930e807ae42a8edb6428a6546 --- /dev/null +++ b/test/python-core-worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508_2410.log @@ -0,0 +1,130 @@ +[2026-02-27 00:03:53,236 I 2410 2410] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 2410 +[2026-02-27 00:03:53,239 I 2410 2410] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:53,239 I 2410 2410] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:53,239 I 2410 2410] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:53,239 I 2410 2410] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:53,241 I 2410 2410] grpc_server.cc:143: worker server started, listening on port 50337. +[2026-02-27 00:03:53,251 I 2410 2410] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50337 worker_id=a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:53,251 I 2410 2410] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. +[2026-02-27 00:03:53,253 I 2410 2410] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:53,253 I 2410 2410] metrics_agent_client.cc:42: Initializing exporter ... 
+[2026-02-27 00:03:53,253 I 2410 2471] core_worker.cc:455: Event stats: + + +Global stats: 12 total (10 active) +Queueing time: mean = 0.01ms, max = 0.06ms, min = 0.03ms, total = 0.09ms +Execution time: mean = 0.00ms, total = 0.03ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.01ms, max = 0.06ms, min = 0.03ms, total = 0.09ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 4 total (1 active) +Queueing time: mean = 0.01ms, max = 0.02ms, min = 0.02ms, total = 0.04ms +Execution time: mean = 0.19ms, total = 0.77ms +Event stats: + CoreWorker.deadline_timer.flush_task_events - 1 
total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.56ms, total = 0.56ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.19ms, total = 0.19ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms +Other Stats: + gcs_grpc_in_progress:0 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:53,254 I 2410 2471] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:53,254 I 2410 2471] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:53,255 I 2410 2471] metrics_agent_client.cc:54: Exporter initialized. 
+[2026-02-27 00:03:53,256 I 2410 2410] actor_task_submitter.cc:74: Set actor max pending calls to -1 actor_id=16034777c72931e7b5f9f46401000000 +[2026-02-27 00:03:53,256 I 2410 2410] core_worker.cc:2903: Creating actor actor_id=16034777c72931e7b5f9f46401000000 +[2026-02-27 00:04:03,256 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,254 W 2410 2479] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0] +[2026-02-27 00:04:04,256 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:05,256 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:06,256 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:07,256 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:08,256 I 2410 2471] 
raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:09,257 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:10,257 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:11,257 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:12,257 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:13,257 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:14,258 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:15,258 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to 
connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:16,258 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:17,258 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:18,258 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:19,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:20,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:21,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:22,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to 
remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:23,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:24,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:25,259 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:26,260 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:27,260 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:28,260 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:29,260 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:30,260 I 2410 2471] 
raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:31,261 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:32,261 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:33,261 I 2410 2471] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:34,102 C 2410 2410] task_receiver.cc:132: An unexpected system state has occurred. You have likely discovered a bug in Ray. Please report this issue at https://github.com/ray-project/ray/issues and we'll work with you to fix it. 
Check failed: actor_creation_task_done_() Status not OK: IOError: Broken pipe +*** StackTrace Information *** +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x16bc06a) [0x7cd486fab06a] ray::operator<<() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray6RayLogD1Ev+0x469) [0x7cd486fad699] ray::RayLog::~RayLog() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb5fc1a) [0x7cd48644ec1a] ray::core::TaskReceiver::HandleTask()::{lambda()#1}::operator()()::{lambda()#1}::operator()() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb7b5b2) [0x7cd48646a5b2] ray::core::InboundRequest::Accept() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb6bc8b) [0x7cd48645ac8b] ray::core::NormalSchedulingQueue::ScheduleRequests() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1012fa8) [0x7cd486901fa8] EventTracker::RecordExecution() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1009f57) [0x7cd4868f8f57] std::_Function_handler<>::_M_invoke() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb8f31b) [0x7cd48647e31b] boost::asio::detail::executor_op<>::do_complete() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x16873db) [0x7cd486f763db] boost::asio::detail::scheduler::do_run_one() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1688d79) [0x7cd486f77d79] boost::asio::detail::scheduler::run() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1689482) [0x7cd486f78482] boost::asio::io_context::run() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core10CoreWorker20RunTaskExecutionLoopEv+0x127) [0x7cd4862fbaf7] ray::core::CoreWorker::RunTaskExecutionLoop() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core21CoreWorkerProcessImpl26RunWorkerTaskExecutionLoopEv+0x41) [0x7cd486351461] ray::core::CoreWorkerProcessImpl::RunWorkerTaskExecutionLoop() 
+/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core17CoreWorkerProcess20RunTaskExecutionLoopEv+0x1d) [0x7cd48635167d] ray::core::CoreWorkerProcess::RunTaskExecutionLoop() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x881e81) [0x7cd486170e81] __pyx_pw_3ray_7_raylet_10CoreWorker_5run_task_loop() +ray::TaskRunner(PyObject_Vectorcall+0x36) [0x5627f6] PyObject_Vectorcall +ray::TaskRunner(_PyEval_EvalFrameDefault+0x701) [0x54a2e1] _PyEval_EvalFrameDefault +ray::TaskRunner(PyEval_EvalCode+0x99) [0x620799] PyEval_EvalCode +ray::TaskRunner() [0x65c44b] +ray::TaskRunner() [0x6574d6] +ray::TaskRunner() [0x654145] +ray::TaskRunner(_PyRun_SimpleFileObject+0x1a5) [0x653e15] _PyRun_SimpleFileObject +ray::TaskRunner(_PyRun_AnyFileObject+0x47) [0x653927] _PyRun_AnyFileObject +ray::TaskRunner(Py_RunMain+0x375) [0x650605] Py_RunMain +ray::TaskRunner(Py_BytesMain+0x2d) [0x60962d] Py_BytesMain +/lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7cd48eca3d90] +/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7cd48eca3e40] __libc_start_main +ray::TaskRunner(_start+0x25) [0x6094a5] _start + diff --git a/test/python-core-worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8_1922.log b/test/python-core-worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8_1922.log new file mode 100644 index 0000000000000000000000000000000000000000..ca06db1e51355387604ce05e717815d4bdb099cc --- /dev/null +++ b/test/python-core-worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8_1922.log @@ -0,0 +1,104 @@ +[2026-02-27 00:03:48,411 I 1922 1922] core_worker_process.cc:773: Constructing CoreWorkerProcess. 
pid: 1922 +[2026-02-27 00:03:48,420 I 1922 1922] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,420 I 1922 1922] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,420 I 1922 1922] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,420 I 1922 1922] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,422 I 1922 1922] grpc_server.cc:143: worker server started, listening on port 50419. +[2026-02-27 00:03:48,437 I 1922 1922] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50419 worker_id=b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,438 I 1922 1922] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. +[2026-02-27 00:03:48,443 I 1922 2089] core_worker.cc:455: Event stats: + + +Global stats: 11 total (5 active) +Queueing time: mean = 0.02ms, max = 0.13ms, min = 0.02ms, total = 0.22ms +Execution time: mean = 0.33ms, total = 3.66ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 3 total (1 active, 1 running), Execution time: mean = 0.01ms, total = 0.03ms, Queueing time: mean = 0.03ms, max = 0.06ms, min = 0.02ms, total = 0.08ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (0 active), Execution time: mean = 1.04ms, total = 1.04ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.04ms, total = 0.04ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms + ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + 
CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (0 active), Execution time: mean = 1.21ms, total = 1.21ms, Queueing time: mean = 0.13ms, max = 0.13ms, min = 0.13ms, total = 0.13ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 1.33ms, total = 1.33ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) +Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 4 total (1 active) +Queueing time: mean = 0.73ms, max = 2.89ms, min = 0.02ms, total = 2.91ms +Execution time: mean = 0.42ms, total = 1.68ms +Event stats: + CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.15ms, total = 0.15ms, Queueing time: mean = 2.89ms, max = 2.89ms, min = 2.89ms, total = 2.89ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution 
time: mean = 1.49ms, total = 1.49ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.03ms, total = 0.03ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms +Other Stats: + gcs_grpc_in_progress:0 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,444 I 1922 1922] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:48,445 I 1922 1922] metrics_agent_client.cc:42: Initializing exporter ... +[2026-02-27 00:03:48,445 I 1922 2089] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,445 I 1922 2089] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:53,452 I 1922 2089] metrics_agent_client.cc:54: Exporter initialized. +[2026-02-27 00:04:03,447 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,445 W 1922 2114] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). 
Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0] +[2026-02-27 00:04:04,447 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:05,447 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:06,447 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:07,447 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:08,447 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:09,448 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:10,448 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: 
ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:11,448 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:12,448 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:13,448 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:14,448 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:15,449 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:16,449 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:17,449 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 
+[2026-02-27 00:04:18,449 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:19,449 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:20,449 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:21,450 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:22,450 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:23,450 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:24,450 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:25,450 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog 
information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:26,450 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:27,451 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:28,451 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:29,451 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:30,451 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:31,451 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:32,451 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: 
ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:33,452 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:34,452 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:35,452 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:36,452 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:37,452 I 1922 2089] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 diff --git a/test/python-core-worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550_1926.log b/test/python-core-worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550_1926.log new file mode 100644 index 0000000000000000000000000000000000000000..17c07bd8c24c03f0bfa2c644572e1e3128f6b8a6 --- /dev/null +++ b/test/python-core-worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550_1926.log @@ -0,0 +1,101 @@ +[2026-02-27 00:03:48,459 I 1926 1926] core_worker_process.cc:773: Constructing CoreWorkerProcess. 
pid: 1926 +[2026-02-27 00:03:48,474 I 1926 1926] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,474 I 1926 1926] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,474 I 1926 1926] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,474 I 1926 1926] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,477 I 1926 1926] grpc_server.cc:143: worker server started, listening on port 50083. +[2026-02-27 00:03:48,493 I 1926 1926] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50083 worker_id=cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,494 I 1926 1926] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. +[2026-02-27 00:03:48,495 I 1926 1926] core_worker.cc:515: Adjusted worker niceness to 15 +[2026-02-27 00:03:48,495 I 1926 1926] metrics_agent_client.cc:42: Initializing exporter ... 
+[2026-02-27 00:03:48,496 I 1926 2173] core_worker.cc:455: Event stats: + + +Global stats: 14 total (11 active) +Queueing time: mean = 0.07ms, max = 0.76ms, min = 0.15ms, total = 0.91ms +Execution time: mean = 0.07ms, total = 0.95ms +Event stats: + PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.01ms, total = 0.04ms, Queueing time: mean = 0.13ms, max = 0.76ms, min = 0.15ms, total = 0.91ms + Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ReporterService.grpc_client.HealthCheck - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (0 active), Execution time: mean = 0.92ms, total = 0.92ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + +----------------- +Task execution event stats: + +Global stats: 0 total (0 active) 
+Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = -nanms, total = 0.00ms +Event stats: + +----------------- +Task Event stats: + +IO Service Stats: + +Global stats: 2 total (2 active) +Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Execution time: mean = 0.00ms, total = 0.00ms +Event stats: + ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms + PeriodicalRunner.RunFnPeriodically - 1 total (1 active, 1 running), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +Other Stats: + gcs_grpc_in_progress:1 + event_aggregator_grpc_in_progress:0 + current number of task status events in buffer: 0 + current number of profile events in buffer: 0 + current number of dropped task attempts tracked: 0 + total task events sent: 0 MiB + total number of task attempts sent: 0 + total number of task attempts dropped reported: 0 + total number of sent failure: 0 + num status task events dropped: 0 + num profile task events dropped: 0 + num ray task events reported to aggregator: 0 + num ray task events failed to report to aggregator: 0 + num of task attempts dropped reported to aggregator: 0 + num of failed requests to aggregator: 0 + +[2026-02-27 00:03:48,498 I 1926 2173] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,498 I 1926 2173] normal_task_submitter.cc:824: Number of alive nodes:1 +[2026-02-27 00:03:53,503 I 1926 2173] metrics_agent_client.cc:54: Exporter initialized. 
+[2026-02-27 00:04:03,499 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,499 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:04,502 W 1926 2204] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0] +[2026-02-27 00:04:05,499 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:06,499 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:07,500 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:08,500 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:09,500 I 1926 
2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:10,500 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:11,500 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:12,500 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:13,501 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:14,501 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:15,501 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:16,501 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: 
failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:17,501 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:18,501 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:19,502 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:20,502 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:21,502 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:22,502 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:23,502 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to 
connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:24,502 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:25,503 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:26,503 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:27,503 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:28,503 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:29,503 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:30,503 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:31,504 I 1926 2173] 
raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:32,504 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:33,504 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:34,504 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:35,504 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:36,504 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 +[2026-02-27 00:04:37,505 I 1926 2173] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14 diff --git a/test/python-core-worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e_1925.log 
b/test/python-core-worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e_1925.log new file mode 100644 index 0000000000000000000000000000000000000000..1106df88c658e7d8baa07913c62f2a245bfe462a --- /dev/null +++ b/test/python-core-worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e_1925.log @@ -0,0 +1,100 @@ +[2026-02-27 00:03:48,491 I 1925 1925] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1925 +[2026-02-27 00:03:48,496 I 1925 1925] event.cc:499: Ray Event initialized for CORE_WORKER +[2026-02-27 00:03:48,496 I 1925 1925] event.cc:499: Ray Event initialized for EXPORT_TASK +[2026-02-27 00:03:48,496 I 1925 1925] event.cc:332: Set ray event level to warning +[2026-02-27 00:03:48,496 I 1925 1925] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882 +[2026-02-27 00:03:48,498 I 1925 1925] grpc_server.cc:143: worker server started, listening on port 50469. +[2026-02-27 00:03:48,513 I 1925 1925] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50469 worker_id=d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:48,514 I 1925 1925] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms. 
+[2026-02-27 00:03:48,516 I 1925 1925] core_worker.cc:515: Adjusted worker niceness to 15
+[2026-02-27 00:03:48,516 I 1925 2208] core_worker.cc:455: Event stats:
+
+
+Global stats: 12 total (10 active)
+Queueing time: mean = 0.02ms, max = 0.22ms, min = 0.07ms, total = 0.30ms
+Execution time: mean = 0.00ms, total = 0.03ms
+Event stats:
+ PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.04ms, max = 0.22ms, min = 0.07ms, total = 0.30ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+
+-----------------
+Task execution event stats:
+
+Global stats: 0 total (0 active)
+Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+Execution time: mean = -nanms, total = 0.00ms
+Event stats:
+
+-----------------
+Task Event stats:
+
+IO Service Stats:
+
+Global stats: 3 total (2 active)
+Queueing time: mean = 0.00ms, max = 0.01ms, min = 0.01ms, total = 0.01ms
+Execution time: mean = 0.09ms, total = 0.28ms
+Event stats:
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.28ms, total = 0.28ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms
+ CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+Other Stats:
+ gcs_grpc_in_progress:1
+ event_aggregator_grpc_in_progress:0
+ current number of task status events in buffer: 0
+ current number of profile events in buffer: 0
+ current number of dropped task attempts tracked: 0
+ total task events sent: 0 MiB
+ total number of task attempts sent: 0
+ total number of task attempts dropped reported: 0
+ total number of sent failure: 0
+ num status task events dropped: 0
+ num profile task events dropped: 0
+ num ray task events reported to aggregator: 0
+ num ray task events failed to report to aggregator: 0
+ num of task attempts dropped reported to aggregator: 0
+ num of failed requests to aggregator: 0
+
+[2026-02-27 00:03:48,517 I 1925 1925] metrics_agent_client.cc:42: Initializing exporter ...
+[2026-02-27 00:03:48,518 I 1925 2208] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+[2026-02-27 00:03:48,518 I 1925 2208] normal_task_submitter.cc:824: Number of alive nodes:1
+[2026-02-27 00:03:53,522 I 1925 2208] metrics_agent_client.cc:54: Exporter initialized.
+[2026-02-27 00:04:03,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:04,518 W 1925 2225] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0]
+[2026-02-27 00:04:04,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:05,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:06,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:07,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:08,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:09,519 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:10,520 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:11,520 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:12,520 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:13,520 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:14,520 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:15,520 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:16,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:17,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:18,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:19,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:20,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:21,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:22,521 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:23,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:24,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:25,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:26,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:27,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:28,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:29,522 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:30,523 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:31,523 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:32,523 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:33,523 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:34,523 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:35,523 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:36,524 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:37,524 I 1925 2208] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
diff --git a/test/python-core-worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb_1923.log b/test/python-core-worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb_1923.log
new file mode 100644
index 0000000000000000000000000000000000000000..e6b73d7af96a0e686dfb7b636dc4a7d264040223
--- /dev/null
+++ b/test/python-core-worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb_1923.log
@@ -0,0 +1,101 @@
+[2026-02-27 00:03:48,180 I 1923 1923] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 1923
+[2026-02-27 00:03:48,190 I 1923 1923] event.cc:499: Ray Event initialized for CORE_WORKER
+[2026-02-27 00:03:48,193 I 1923 1923] event.cc:499: Ray Event initialized for EXPORT_TASK
+[2026-02-27 00:03:48,194 I 1923 1923] event.cc:332: Set ray event level to warning
+[2026-02-27 00:03:48,194 I 1923 1923] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 51882
+[2026-02-27 00:03:48,203 I 1923 1923] grpc_server.cc:143: worker server started, listening on port 50037.
+[2026-02-27 00:03:48,234 I 1923 1923] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50037 worker_id=e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+[2026-02-27 00:03:48,235 I 1923 1923] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
+[2026-02-27 00:03:48,239 I 1923 1923] core_worker.cc:515: Adjusted worker niceness to 15
+[2026-02-27 00:03:48,239 I 1923 2005] core_worker.cc:455: Event stats:
+
+
+Global stats: 12 total (10 active)
+Queueing time: mean = 0.01ms, max = 0.09ms, min = 0.02ms, total = 0.11ms
+Execution time: mean = 0.00ms, total = 0.03ms
+Event stats:
+ PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.02ms, max = 0.09ms, min = 0.02ms, total = 0.11ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+
+-----------------
+Task execution event stats:
+
+Global stats: 0 total (0 active)
+Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+Execution time: mean = -nanms, total = 0.00ms
+Event stats:
+
+-----------------
+Task Event stats:
+
+IO Service Stats:
+
+Global stats: 4 total (1 active)
+Queueing time: mean = 0.01ms, max = 0.03ms, min = 0.01ms, total = 0.04ms
+Execution time: mean = 0.88ms, total = 3.54ms
+Event stats:
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 1.79ms, total = 1.79ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 1.73ms, total = 1.73ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms
+Other Stats:
+ gcs_grpc_in_progress:0
+ event_aggregator_grpc_in_progress:0
+ current number of task status events in buffer: 0
+ current number of profile events in buffer: 0
+ current number of dropped task attempts tracked: 0
+ total task events sent: 0 MiB
+ total number of task attempts sent: 0
+ total number of task attempts dropped reported: 0
+ total number of sent failure: 0
+ num status task events dropped: 0
+ num profile task events dropped: 0
+ num ray task events reported to aggregator: 0
+ num ray task events failed to report to aggregator: 0
+ num of task attempts dropped reported to aggregator: 0
+ num of failed requests to aggregator: 0
+
+[2026-02-27 00:03:48,239 I 1923 1923] metrics_agent_client.cc:42: Initializing exporter ...
+[2026-02-27 00:03:48,241 I 1923 2005] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+[2026-02-27 00:03:48,241 I 1923 2005] normal_task_submitter.cc:824: Number of alive nodes:1
+[2026-02-27 00:03:53,253 I 1923 2005] metrics_agent_client.cc:54: Exporter initialized.
+[2026-02-27 00:04:03,243 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:04,240 W 1923 2018] task_event_buffer.cc:838: [1] GCS or the event aggregator hasn't replied to the previous flush events call (likely overloaded). Skipping reporting task state events and retry later.[gcs_grpc_in_progress=1][event_aggregator_grpc_in_progress=0][cur_status_events_size=0][cur_profile_events_size=0]
+[2026-02-27 00:04:04,243 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:05,243 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:06,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:07,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:08,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:09,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:10,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:11,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:12,244 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:13,245 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:14,245 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:15,245 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:16,245 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:17,245 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:18,245 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:19,246 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:20,246 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:21,246 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:22,246 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:23,246 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:24,246 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:25,247 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:26,247 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:27,247 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:28,247 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:29,247 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:30,247 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:31,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:32,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:33,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:34,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:35,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:36,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
+[2026-02-27 00:04:37,248 I 1923 2005] raylet_client.cc:99: Error reporting lease backlog information: RpcError: RPC error: failed to connect to all addresses; last error: UNKNOWN: ipv4:10.128.0.163:50317: Failed to connect to remote host: Connection refused rpc_code: 14
diff --git a/test/raylet.err b/test/raylet.err
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/test/raylet.out b/test/raylet.out
new file mode 100644
index 0000000000000000000000000000000000000000..25c03823d6018e0b9adf931cb6618e818f1045e4
--- /dev/null
+++ b/test/raylet.out
@@ -0,0 +1,221 @@
+[2026-02-27 00:03:47,328 I 1875 1875] (raylet) main.cc:271: Setting cluster ID to: 53ef51bb0bb70a80ae057770eba1177484524b98986050e67bb3e439
+[2026-02-27 00:03:47,335 I 1875 1875] (raylet) main.cc:461: Per-worker process group cleanup is DISABLED, subreaper is DISABLED
+[2026-02-27 00:03:47,335 I 1875 1875] (raylet) main.cc:595: Setting node ID node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+[2026-02-27 00:03:47,335 I 1875 1875] (raylet) store_runner.cc:50: Allowing the Plasma store to use up to 9.52932GB of memory.
+[2026-02-27 00:03:47,335 I 1875 1875] (raylet) store_runner.cc:66: Starting object store with directory /dev/shm, fallback /tmp/ray/session_2026-02-27_00-03-44_103874_1384, and huge page support disabled
+[2026-02-27 00:03:47,335 I 1875 1890] (raylet) dlmalloc.cc:324: Setting dlmalloc config: plasma_directory=/dev/shm, fallback_directory=/tmp/ray/session_2026-02-27_00-03-44_103874_1384, hugepage_enabled=0, fallback_enabled=1
+[2026-02-27 00:03:47,336 I 1875 1890] (raylet) dlmalloc.cc:153: create_and_mmap_buffer(9529327624, /dev/shm/plasmaXXXXXX)
+[2026-02-27 00:03:47,336 I 1875 1890] (raylet) store.cc:572: Plasma store debug dump:
+Current usage: 0 / 9.52932 GB
+- num bytes created total: 0
+0 pending objects of total size 0MB
+- objects spillable: 0
+- bytes spillable: 0
+- objects unsealed: 0
+- bytes unsealed: 0
+- objects in use: 0
+- bytes in use: 0
+- objects evictable: 0
+- bytes evictable: 0
+
+- objects created by worker: 0
+- bytes created by worker: 0
+- objects restored: 0
+- bytes restored: 0
+- objects received: 0
+- bytes received: 0
+- objects errored: 0
+- bytes errored: 0
+
+[2026-02-27 00:03:47,337 I 1875 1875] (raylet) grpc_server.cc:143: ObjectManager server started, listening on port 50297.
+[2026-02-27 00:03:47,341 I 1875 1875] (raylet) memory_monitor.cc:47: MemoryMonitor initialized with usage threshold at 31969628160 bytes (0.95 system memory), total system memory bytes: 33652240384
+[2026-02-27 00:03:47,341 I 1875 1875] (raylet) node_manager.cc:241: Initializing NodeManager node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+[2026-02-27 00:03:47,341 I 1875 1875] (raylet) grpc_server.cc:143: NodeManager server started, listening on port 50317.
+[2026-02-27 00:03:47,355 I 1875 1906] (raylet) agent_manager.cc:81: Monitor agent process with name dashboard_agent
+[2026-02-27 00:03:47,356 I 1875 1908] (raylet) agent_manager.cc:81: Monitor agent process with name runtime_env_agent
+[2026-02-27 00:03:47,357 I 1875 1875] (raylet) metrics_agent_client.cc:42: Initializing exporter ...
+[2026-02-27 00:03:47,357 I 1875 1875] (raylet) event.cc:499: Ray Event initialized for RAYLET
+[2026-02-27 00:03:47,357 I 1875 1875] (raylet) event.cc:332: Set ray event level to warning
+[2026-02-27 00:03:47,359 I 1875 1875] (raylet) node_manager.cc:292: Raylet of id, cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 started. Raylet consists of node_manager and object_manager. node_manager address: 10.128.0.163:0 object_manager address: 10.128.0.163:50297 hostname: cs-01kje4289qf3k6pv20jzcef9t8
+[2026-02-27 00:03:47,362 I 1875 1875] (raylet) node_manager.cc:440: [state-dump] NodeManager:
+[state-dump] Node ID: cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7
+[state-dump] Node name: 10.128.0.163
+[state-dump] InitialConfigResources: {GPU: 1, node:__internal_head__: 1, object_store_memory: 9.52932e+09, memory: 2.22351e+10, CPU: 8, node:10.128.0.163: 1, accelerator_type:L4: 1}
+[state-dump] ClusterLeaseManager:
+[state-dump] ========== Node: cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 =================
+[state-dump] Infeasible queue length: 0
+[state-dump] Schedule queue length: 0
+[state-dump] Grant queue length: 0
+[state-dump] num_waiting_for_resource: 0
+[state-dump] num_waiting_for_plasma_memory: 0
+[state-dump] num_waiting_for_remote_node_resources: 0
+[state-dump] num_worker_not_started_by_job_config_not_exist: 0
+[state-dump] num_worker_not_started_by_registration_timeout: 0
+[state-dump] num_tasks_waiting_for_workers: 0
+[state-dump] num_cancelled_leases: 0
+[state-dump] cluster_resource_scheduler state:
+[state-dump] Local id: -7930779791598977017 Local resources: {"total":{GPU: [10000], node:10.128.0.163: [10000], node:__internal_head__: [10000], CPU: [80000], memory: [222350872580000], object_store_memory: [95293231100000], accelerator_type:L4: [10000]}}, "available": {GPU: [10000], node:10.128.0.163: [10000], node:__internal_head__: [10000], CPU: [80000], memory: [222350872580000], object_store_memory: [95293231100000], accelerator_type:L4: [10000]}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7",} is_draining: 0 is_idle: 1 Cluster resources (at most 20 nodes are shown): node id: -7930779791598977017{"total":{GPU: 10000, node:__internal_head__: 10000, object_store_memory: 95293231100000, memory: 222350872580000, CPU: 80000, node:10.128.0.163: 10000, accelerator_type:L4: 10000}}, "available": {object_store_memory: 95293231100000, GPU: 10000, memory: 222350872580000, CPU: 80000, node:10.128.0.163: 10000, accelerator_type:L4: 10000, node:__internal_head__: 10000}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placement group locations": [], "node to bundles": []}
+[state-dump] Waiting leases size: 0
+[state-dump] Number of granted lease arguments: 0
+[state-dump] Number of pinned lease arguments: 0
+[state-dump] Number of total spilled leases: 0
+[state-dump] Number of spilled waiting leases: 0
+[state-dump] Number of spilled unschedulable leases: 0
+[state-dump] Resource usage {
+[state-dump] }
+[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
+[state-dump]
+[state-dump] Granted leases by scheduling class:
+[state-dump] ==================================================
+[state-dump]
+[state-dump] ClusterResources:
+[state-dump] LocalObjectManager:
+[state-dump] - num pinned objects: 0
+[state-dump] - pinned objects size: 0
+[state-dump] - num objects pending restore: 0
+[state-dump] - num objects pending spill: 0
+[state-dump] - num bytes pending spill: 0
+[state-dump] - num bytes currently spilled: 0
+[state-dump] - cumulative spill requests: 0
+[state-dump] - cumulative restore requests: 0
+[state-dump] - spilled objects pending delete: 0
+[state-dump]
+[state-dump] ObjectManager:
+[state-dump] - num local objects: 0
+[state-dump] - num unfulfilled push requests: 0
+[state-dump] - num object pull requests: 0
+[state-dump] - num chunks received total: 0
+[state-dump] - num chunks received failed (all): 0
+[state-dump] - num chunks received failed / cancelled: 0
+[state-dump] - num chunks received failed / plasma error: 0
+[state-dump] Event stats:
+[state-dump] Global stats: 0 total (0 active)
+[state-dump] Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+[state-dump] Execution time: mean = -nanms, total = 0.00ms
+[state-dump] Event stats:
+[state-dump] PushManager:
+[state-dump] - num pushes remaining: 0
+[state-dump] - num chunks in flight: 0
+[state-dump] - num chunks remaining: 0
+[state-dump] - max chunks allowed: 409
+[state-dump] OwnershipBasedObjectDirectory:
+[state-dump] - num listeners: 0
+[state-dump] - cumulative location updates: 0
+[state-dump] - num location updates per second: 0.000
+[state-dump] - num location lookups per second: 0.000
+[state-dump] - num locations added per second: 0.000
+[state-dump] - num locations removed per second: 0.000
+[state-dump] BufferPool:
+[state-dump] - create buffer state map size: 0
+[state-dump] PullManager:
+[state-dump] - num bytes available for pulled objects: 9529323110
+[state-dump] - num bytes being pulled (all): 0
+[state-dump] - num bytes being pulled / pinned: 0
+[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
+[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
+[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
+[state-dump] - first get request bundle: N/A
+[state-dump] - first wait request bundle: N/A
+[state-dump] - first task request bundle: N/A
+[state-dump] - num objects queued: 0
+[state-dump] - num objects actively pulled (all): 0
+[state-dump] - num objects actively pulled / pinned: 0
+[state-dump] - num bundles being pulled: 0
+[state-dump] - num pull retries: 0
+[state-dump] - max timeout seconds: 0
+[state-dump] - max timeout request is already processed. No entry.
+[state-dump] +[state-dump] WorkerPool: +[state-dump] - registered jobs: 0 +[state-dump] - process_failed_job_config_missing: 0 +[state-dump] - process_failed_rate_limited: 0 +[state-dump] - process_failed_pending_registration: 0 +[state-dump] - process_failed_runtime_env_setup_failed: 0 +[state-dump] - num PYTHON workers: 0 +[state-dump] - num PYTHON drivers: 0 +[state-dump] - num PYTHON pending start requests: 0 +[state-dump] - num PYTHON pending registration requests: 0 +[state-dump] - num object spill callbacks queued: 0 +[state-dump] - num object restore queued: 0 +[state-dump] - num util functions queued: 0 +[state-dump] - num idle workers: 0 +[state-dump] LeaseDependencyManager: +[state-dump] - lease deps map size: 0 +[state-dump] - get req map size: 0 +[state-dump] - wait req map size: 0 +[state-dump] - local objects map size: 0 +[state-dump] WaitManager: +[state-dump] - num active wait requests: 0 +[state-dump] Subscriber: +[state-dump] Channel WORKER_REF_REMOVED_CHANNEL +[state-dump] - cumulative subscribe requests: 0 +[state-dump] - cumulative unsubscribe requests: 0 +[state-dump] - active subscribed publishers: 0 +[state-dump] - cumulative published messages: 0 +[state-dump] - cumulative processed messages: 0 +[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL +[state-dump] - cumulative subscribe requests: 0 +[state-dump] - cumulative unsubscribe requests: 0 +[state-dump] - active subscribed publishers: 0 +[state-dump] - cumulative published messages: 0 +[state-dump] - cumulative processed messages: 0 +[state-dump] Channel WORKER_OBJECT_EVICTION +[state-dump] - cumulative subscribe requests: 0 +[state-dump] - cumulative unsubscribe requests: 0 +[state-dump] - active subscribed publishers: 0 +[state-dump] - cumulative published messages: 0 +[state-dump] - cumulative processed messages: 0 +[state-dump] num async plasma notifications: 0 +[state-dump] Event stats: +[state-dump] Global stats: 34 total (15 active) +[state-dump] Queueing time: mean = 2.51ms, 
max = 17.68ms, min = 0.00ms, total = 85.24ms +[state-dump] Execution time: mean = 0.86ms, total = 29.20ms +[state-dump] Event stats: +[state-dump] PeriodicalRunner.RunFnPeriodically - 12 total (2 active, 1 running), Execution time: mean = 0.18ms, total = 2.15ms, Queueing time: mean = 6.61ms, max = 17.68ms, min = 0.05ms, total = 79.32ms +[state-dump] event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.01ms, total = 0.02ms, Queueing time: mean = 2.92ms, max = 5.83ms, min = 0.00ms, total = 5.83ms +[state-dump] NodeManager.deadline_timer.debug_state_dump - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] NodeManager.deadline_timer.flush_free_objects - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 0.79ms, total = 0.79ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 0.83ms, total = 0.83ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] NodeManager.deadline_timer.spill_objects_when_over_threshold - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] NodeManager.CheckForUnexpectedWorkerDisconnects - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, 
Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] MetricsAgentClient.WaitForServerReadyWithRetry - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.40ms, total = 0.40ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms +[state-dump] NodeManager.deadline_timer.record_metrics - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 22.86ms, total = 22.86ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms +[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 1.14ms, total = 1.14ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ReporterService.grpc_client.HealthCheck.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.05ms, total = 0.05ms, Queueing time: 
mean = 0.05ms, max = 0.05ms, min = 0.05ms, total = 0.05ms +[state-dump] ReporterService.grpc_client.HealthCheck - 1 total (0 active), Execution time: mean = 0.98ms, total = 0.98ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] NodeManager.ScheduleAndGrantLeases - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms +[state-dump] DebugString() time ms: 0 +[state-dump] +[state-dump] +[2026-02-27 00:03:47,363 I 1875 1875] (raylet) accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 +[2026-02-27 00:03:47,439 I 1875 1875] (raylet) worker_pool.cc:750: [Eagerly] Start install runtime environment for job 01000000. 
+[2026-02-27 00:03:47,442 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1922, the token is 0 +[2026-02-27 00:03:47,445 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1923, the token is 1 +[2026-02-27 00:03:47,448 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1924, the token is 2 +[2026-02-27 00:03:47,451 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1925, the token is 3 +[2026-02-27 00:03:47,454 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1926, the token is 4 +[2026-02-27 00:03:47,457 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1927, the token is 5 +[2026-02-27 00:03:47,461 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1928, the token is 6 +[2026-02-27 00:03:47,466 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1929, the token is 7 +[2026-02-27 00:03:47,469 I 1875 1875] (raylet) runtime_env_agent_client.cc:350: Runtime Env Agent network error: NotFound: on_connect Connection refused, the server may be still starting or is already failed. Scheduling a retry in 1000ms... +[2026-02-27 00:03:48,199 I 1875 1890] (raylet) object_store.cc:37: Object store current usage 8e-09 / 9.52932 GB. +[2026-02-27 00:03:48,474 I 1875 1875] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000 +[2026-02-27 00:03:48,474 I 1875 1875] (raylet) worker_pool.cc:761: [Eagerly] Create runtime env successful for job 01000000. +[2026-02-27 00:03:48,572 I 1875 1875] (raylet) worker_pool.cc:740: Job 01000000 already started in worker pool. 
+[2026-02-27 00:03:49,568 I 1875 1875] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111 job_id=NIL_ID +[2026-02-27 00:03:49,644 W 1875 1890] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file +[2026-02-27 00:03:51,471 I 1875 1875] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000 +[2026-02-27 00:03:51,474 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 2372, the token is 8 +[2026-02-27 00:03:53,363 I 1875 1875] (raylet) metrics_agent_client.cc:54: Exporter initialized. +Collecting data... +Generating '/tmp/nsys-report-db8e.qdstrm' + [1/1] [===========53% ] worker_process_2410.nsys-rep \ No newline at end of file diff --git a/test/runtime_env_agent.err b/test/runtime_env_agent.err new file mode 100644 index 0000000000000000000000000000000000000000..45c4c2d171f4b3540836cff6daf7971f2ad0c31c --- /dev/null +++ b/test/runtime_env_agent.err @@ -0,0 +1,22 @@ +Raylet is terminated. Termination is unexpected. Possible reasons include: (1) SIGKILL by the user or system OOM killer, (2) Invalid memory access from Raylet causing SIGSEGV or SIGBUS, (3) Other termination signals. Last 20 lines of the Raylet logs: + [2026-02-27 00:03:47,363 I 1875 1875] (raylet) accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=cf562760d44bbe7c695ad3e8c246c3a8d992e4ef7594a5654c4c19c7 + [2026-02-27 00:03:47,439 I 1875 1875] (raylet) worker_pool.cc:750: [Eagerly] Start install runtime environment for job 01000000. 
+ [2026-02-27 00:03:47,442 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1922, the token is 0 + [2026-02-27 00:03:47,445 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1923, the token is 1 + [2026-02-27 00:03:47,448 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1924, the token is 2 + [2026-02-27 00:03:47,451 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1925, the token is 3 + [2026-02-27 00:03:47,454 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1926, the token is 4 + [2026-02-27 00:03:47,457 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1927, the token is 5 + [2026-02-27 00:03:47,461 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1928, the token is 6 + [2026-02-27 00:03:47,466 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 1929, the token is 7 + [2026-02-27 00:03:47,469 I 1875 1875] (raylet) runtime_env_agent_client.cc:350: Runtime Env Agent network error: NotFound: on_connect Connection refused, the server may be still starting or is already failed. Scheduling a retry in 1000ms... + [2026-02-27 00:03:48,199 I 1875 1890] (raylet) object_store.cc:37: Object store current usage 8e-09 / 9.52932 GB. + [2026-02-27 00:03:48,474 I 1875 1875] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000 + [2026-02-27 00:03:48,474 I 1875 1875] (raylet) worker_pool.cc:761: [Eagerly] Create runtime env successful for job 01000000. + [2026-02-27 00:03:48,572 I 1875 1875] (raylet) worker_pool.cc:740: Job 01000000 already started in worker pool. 
+ [2026-02-27 00:03:49,568 I 1875 1875] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111 job_id=NIL_ID + [2026-02-27 00:03:49,644 W 1875 1890] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file + [2026-02-27 00:03:51,471 I 1875 1875] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000 + [2026-02-27 00:03:51,474 I 1875 1875] (raylet) worker_pool.cc:531: Started worker process with pid 2372, the token is 8 + [2026-02-27 00:03:53,363 I 1875 1875] (raylet) metrics_agent_client.cc:54: Exporter initialized. + diff --git a/test/runtime_env_agent.log b/test/runtime_env_agent.log new file mode 100644 index 0000000000000000000000000000000000000000..95514f8521b4e742cfd3585c4391857bd9033c07 --- /dev/null +++ b/test/runtime_env_agent.log @@ -0,0 +1,9 @@ +2026-02-27 00:03:48,454 INFO runtime_env_agent.py:193 -- Starting runtime env agent at pid 1907 +2026-02-27 00:03:48,455 INFO runtime_env_agent.py:194 -- Parent raylet pid is 1875 +2026-02-27 00:03:48,456 INFO runtime_env_agent.py:250 -- Listening to address 10.128.0.163, port 65317 +2026-02-27 00:03:48,472 INFO runtime_env_agent.py:378 -- Creating runtime env: {"env_vars": {"CUDA_DEVICE_MAX_CONNECTIONS": "1", "HCCL_HOST_SOCKET_PORT_RANGE": "auto", "HCCL_NPU_SOCKET_PORT_RANGE": "auto", "NCCL_CUMEM_ENABLE": "0", "NCCL_DEBUG": "WARN", "TOKENIZERS_PARALLELISM": "true", "VLLM_ALLOW_RUNTIME_LORA_UPDATING": "true", "VLLM_DISABLE_COMPILE_CACHE": "1", "VLLM_LOGGING_LEVEL": "WARN"}} with timeout 600 seconds. 
+2026-02-27 00:03:48,473 INFO runtime_env_agent.py:428 -- Successfully created runtime env: {"env_vars": {"CUDA_DEVICE_MAX_CONNECTIONS": "1", "HCCL_HOST_SOCKET_PORT_RANGE": "auto", "HCCL_NPU_SOCKET_PORT_RANGE": "auto", "NCCL_CUMEM_ENABLE": "0", "NCCL_DEBUG": "WARN", "TOKENIZERS_PARALLELISM": "true", "VLLM_ALLOW_RUNTIME_LORA_UPDATING": "true", "VLLM_DISABLE_COMPILE_CACHE": "1", "VLLM_LOGGING_LEVEL": "WARN"}}, context: {"command_prefix": [], "env_vars": {"CUDA_DEVICE_MAX_CONNECTIONS": "1", "HCCL_HOST_SOCKET_PORT_RANGE": "auto", "HCCL_NPU_SOCKET_PORT_RANGE": "auto", "NCCL_CUMEM_ENABLE": "0", "NCCL_DEBUG": "WARN", "TOKENIZERS_PARALLELISM": "true", "VLLM_ALLOW_RUNTIME_LORA_UPDATING": "true", "VLLM_DISABLE_COMPILE_CACHE": "1", "VLLM_LOGGING_LEVEL": "WARN"}, "py_executable": "/usr/bin/python3", "override_worker_entrypoint": null, "java_jars": []} +2026-02-27 00:03:48,680 INFO runtime_env_agent.py:378 -- Creating runtime env: {"_nsight":{"cuda-graph-trace":"graph","cuda-memory-usage":"true","trace":"cuda,nvtx,cublas,ucx"},"env_vars":{"CUDA_DEVICE_MAX_CONNECTIONS":"1","HCCL_HOST_SOCKET_PORT_RANGE":"auto","HCCL_NPU_SOCKET_PORT_RANGE":"auto","NCCL_CUMEM_ENABLE":"0","NCCL_DEBUG":"WARN","TOKENIZERS_PARALLELISM":"true","VLLM_ALLOW_RUNTIME_LORA_UPDATING":"true","VLLM_DISABLE_COMPILE_CACHE":"1","VLLM_LOGGING_LEVEL":"WARN"}} with timeout 600 seconds. 
+2026-02-27 00:03:51,470 INFO runtime_env_agent.py:428 -- Successfully created runtime env: {"_nsight":{"cuda-graph-trace":"graph","cuda-memory-usage":"true","trace":"cuda,nvtx,cublas,ucx"},"env_vars":{"CUDA_DEVICE_MAX_CONNECTIONS":"1","HCCL_HOST_SOCKET_PORT_RANGE":"auto","HCCL_NPU_SOCKET_PORT_RANGE":"auto","NCCL_CUMEM_ENABLE":"0","NCCL_DEBUG":"WARN","TOKENIZERS_PARALLELISM":"true","VLLM_ALLOW_RUNTIME_LORA_UPDATING":"true","VLLM_DISABLE_COMPILE_CACHE":"1","VLLM_LOGGING_LEVEL":"WARN"}}, context: {"command_prefix": [], "env_vars": {"CUDA_DEVICE_MAX_CONNECTIONS": "1", "HCCL_HOST_SOCKET_PORT_RANGE": "auto", "HCCL_NPU_SOCKET_PORT_RANGE": "auto", "NCCL_CUMEM_ENABLE": "0", "NCCL_DEBUG": "WARN", "TOKENIZERS_PARALLELISM": "true", "VLLM_ALLOW_RUNTIME_LORA_UPDATING": "true", "VLLM_DISABLE_COMPILE_CACHE": "1", "VLLM_LOGGING_LEVEL": "WARN"}, "py_executable": "nsys profile --cuda-graph-trace=graph --cuda-memory-usage=true --trace=cuda,nvtx,cublas,ucx -o /tmp/ray/session_2026-02-27_00-03-44_103874_1384/logs/nsight/'worker_process_%p' python", "override_worker_entrypoint": null, "java_jars": []} +2026-02-27 00:04:02,600 INFO main.py:214 -- Raylet is dead! Exiting Runtime Env Agent. addr: 10.128.0.163, port: 65317 +_check_parent_via_pipe: The parent is dead. 
diff --git a/test/runtime_env_agent.out b/test/runtime_env_agent.out new file mode 100644 index 0000000000000000000000000000000000000000..6614ace30a9a66ce98efdcc691911760330944a2 --- /dev/null +++ b/test/runtime_env_agent.out @@ -0,0 +1,2 @@ +======== Running on http://10.128.0.163:65317 ======== +(Press CTRL+C to quit) diff --git a/test/runtime_env_setup-01000000.log b/test/runtime_env_setup-01000000.log new file mode 100644 index 0000000000000000000000000000000000000000..3a9956b2d6e86c4235055b88a33682b3cdbcb877 --- /dev/null +++ b/test/runtime_env_setup-01000000.log @@ -0,0 +1 @@ +2026-02-27 00:03:51,470 INFO nsight.py:148 -- Running nsight profiler diff --git a/test/worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2-ffffffff-1928.err b/test/worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2-ffffffff-1928.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2-ffffffff-1928.out b/test/worker-23108ae3fe97ac4122983de9e3923572ea790562aebd4b71fc4accd2-ffffffff-1928.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111-ffffffff-1929.err b/test/worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111-ffffffff-1929.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111-ffffffff-1929.out b/test/worker-29b15f3dea7d69a07782de663871cc2d93a35120d41e29d152021111-ffffffff-1929.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94-ffffffff-1924.err 
b/test/worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94-ffffffff-1924.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94-ffffffff-1924.out b/test/worker-516ef7f6a0e041d45bbe172463ced38587189b2d71f6f946a6209c94-ffffffff-1924.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157-ffffffff-1927.err b/test/worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157-ffffffff-1927.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157-ffffffff-1927.out b/test/worker-619263e4e6c37bf7cd040a19519e22767492ee9fb7eec61b73be6157-ffffffff-1927.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.err b/test/worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.err new file mode 100644 index 0000000000000000000000000000000000000000..e97afacf04ed313c31c4d4758478a2ed4fecbedf --- /dev/null +++ b/test/worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.err @@ -0,0 +1,35 @@ +:job_id:01000000 +W0227 00:04:02.244000 2410 torch/utils/cpp_extension.py:117] No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda' +WARNING:2026-02-27 00:04:03,065:fused_indices_to_multihot has reached end of life. Please migrate to a non-experimental function. +:actor_name:TaskRunner +[2026-02-27 00:04:34,102 C 2410 2410] task_receiver.cc:132: An unexpected system state has occurred. You have likely discovered a bug in Ray. 
Please report this issue at https://github.com/ray-project/ray/issues and we'll work with you to fix it. Check failed: actor_creation_task_done_() Status not OK: IOError: Broken pipe +*** StackTrace Information *** +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x16bc06a) [0x7cd486fab06a] ray::operator<<() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray6RayLogD1Ev+0x469) [0x7cd486fad699] ray::RayLog::~RayLog() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb5fc1a) [0x7cd48644ec1a] ray::core::TaskReceiver::HandleTask()::{lambda()#1}::operator()()::{lambda()#1}::operator()() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb7b5b2) [0x7cd48646a5b2] ray::core::InboundRequest::Accept() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb6bc8b) [0x7cd48645ac8b] ray::core::NormalSchedulingQueue::ScheduleRequests() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1012fa8) [0x7cd486901fa8] EventTracker::RecordExecution() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1009f57) [0x7cd4868f8f57] std::_Function_handler<>::_M_invoke() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0xb8f31b) [0x7cd48647e31b] boost::asio::detail::executor_op<>::do_complete() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x16873db) [0x7cd486f763db] boost::asio::detail::scheduler::do_run_one() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1688d79) [0x7cd486f77d79] boost::asio::detail::scheduler::run() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x1689482) [0x7cd486f78482] boost::asio::io_context::run() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core10CoreWorker20RunTaskExecutionLoopEv+0x127) [0x7cd4862fbaf7] ray::core::CoreWorker::RunTaskExecutionLoop() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core21CoreWorkerProcessImpl26RunWorkerTaskExecutionLoopEv+0x41) [0x7cd486351461] 
ray::core::CoreWorkerProcessImpl::RunWorkerTaskExecutionLoop() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(_ZN3ray4core17CoreWorkerProcess20RunTaskExecutionLoopEv+0x1d) [0x7cd48635167d] ray::core::CoreWorkerProcess::RunTaskExecutionLoop() +/usr/local/lib/python3.12/dist-packages/ray/_raylet.so(+0x881e81) [0x7cd486170e81] __pyx_pw_3ray_7_raylet_10CoreWorker_5run_task_loop() +ray::TaskRunner(PyObject_Vectorcall+0x36) [0x5627f6] PyObject_Vectorcall +ray::TaskRunner(_PyEval_EvalFrameDefault+0x701) [0x54a2e1] _PyEval_EvalFrameDefault +ray::TaskRunner(PyEval_EvalCode+0x99) [0x620799] PyEval_EvalCode +ray::TaskRunner() [0x65c44b] +ray::TaskRunner() [0x6574d6] +ray::TaskRunner() [0x654145] +ray::TaskRunner(_PyRun_SimpleFileObject+0x1a5) [0x653e15] _PyRun_SimpleFileObject +ray::TaskRunner(_PyRun_AnyFileObject+0x47) [0x653927] _PyRun_AnyFileObject +ray::TaskRunner(Py_RunMain+0x375) [0x650605] Py_RunMain +ray::TaskRunner(Py_BytesMain+0x2d) [0x60962d] Py_BytesMain +/lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x7cd48eca3d90] +/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x7cd48eca3e40] __libc_start_main +ray::TaskRunner(_start+0x25) [0x6094a5] _start + diff --git a/test/worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.out b/test/worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.out new file mode 100644 index 0000000000000000000000000000000000000000..aff2f4e4f978f69973e5e06f4f9375327c3c997f --- /dev/null +++ b/test/worker-a90c7d025be10c8b52ddb0c367136d57136cfc58ba507147879e9508-01000000-2410.out @@ -0,0 +1,2 @@ +:job_id:01000000 +:actor_name:TaskRunner diff --git a/test/worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8-ffffffff-1922.err b/test/worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8-ffffffff-1922.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git 
a/test/worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8-ffffffff-1922.out b/test/worker-b3afa43aba26e08f32ca5353f9e473275e92cc25650459921012dbd8-ffffffff-1922.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550-ffffffff-1926.err b/test/worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550-ffffffff-1926.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550-ffffffff-1926.out b/test/worker-cd2aaefc98dd93f1e1139da87bceb128f98c8a4c4e33a7ad5a7e5550-ffffffff-1926.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e-ffffffff-1925.err b/test/worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e-ffffffff-1925.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e-ffffffff-1925.out b/test/worker-d4b0d18f4fb7e6704589db155523933ef7b28ca4f0517743d5a2d52e-ffffffff-1925.out new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb-ffffffff-1923.err b/test/worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb-ffffffff-1923.err new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/test/worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb-ffffffff-1923.out b/test/worker-e733c08422794fa1f48c127b914da8cccfd148485edc467f6b29e3fb-ffffffff-1923.out new file mode 100644 index 
0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391