Add files using upload-large-folder tool
- dashboard_DataHead.out +0 -0
- dashboard_EventHead.log +6 -0
- dashboard_JobHead.out +0 -0
- dashboard_ReportHead.out +0 -0
- dashboard_TrainHead.log +5 -0
- dashboard_agent.err +0 -0
- monitor.log +538 -0
- python-core-worker-1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8_11376.log +87 -0
- python-core-worker-b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5_12477.log +73 -0
- raylet.out +579 -0
- worker-15c410d5d6a75625cb50c80927d18090e899b8edc49402fe08e50ee6-01000000-12567.out +2 -0
- worker-2cda7ffb1fdfeaaf98e6be62760ae2627c565d43cb10409f83c0a748-ffffffff-11372.out +0 -0
- worker-33b9d0a21a51ca22dda2aa2142cb264d1ee4f9d53a55dc567b49496c-01000000-12500.out +2 -0
- worker-389d4ca43c5eadc5290ba2907f911210cffe11839a5cfe9496d636c1-01000000-12110.out +3 -0
- worker-5ad3b871f9c47a0419d1c26aa73c88d3ae2d40ede3aeceeef3079ef2-ffffffff-11379.out +0 -0
- worker-a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940-01000000-13106.out +52 -0
- worker-af6e4d2eae80c226c783dd6717832e015ec8fc0144d801649c12abfe-01000000-12563.out +2 -0
- worker-bb0f50c5405699ae07f957ec3f7c03f2bdf40be03f6e39b39232dc16-ffffffff-11378.out +0 -0
- worker-d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf-ffffffff-11373.out +0 -0
- worker-fc14e0d4e4b6acb4ecead813c2d960587eefa7859aac6d8e19aeec98-ffffffff-11374.err +0 -0
dashboard_DataHead.out
ADDED
File without changes
dashboard_EventHead.log
ADDED
@@ -0,0 +1,6 @@
+2026-02-27 00:30:29,851 INFO module.py:210 -- Starting module EventHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='1055e483f4dc49122b1241989fd976e87d4b63a2cfd37b9f5e0a28de', gcs_address='10.128.0.163:54299', session_name='session_2026-02-27_00-30-26_175126_10593', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets')
+2026-02-27 00:30:29,855 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:30:29,859 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets/dash_EventHead.
+2026-02-27 00:30:29,859 INFO event_utils.py:130 -- Monitor events logs modified after 1772150429.5048454 on /tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/events, the source types are all.
+2026-02-27 00:30:29,860 INFO module.py:225 -- Module EventHead initialized, receiving messages...
+2026-02-27 00:32:14,976 WARNING module.py:82 -- Parent process 10931 died. Exiting...
dashboard_JobHead.out
ADDED
File without changes
dashboard_ReportHead.out
ADDED
File without changes
dashboard_TrainHead.log
ADDED
@@ -0,0 +1,5 @@
+2026-02-27 00:30:29,551 INFO module.py:210 -- Starting module TrainHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='1055e483f4dc49122b1241989fd976e87d4b63a2cfd37b9f5e0a28de', gcs_address='10.128.0.163:54299', session_name='session_2026-02-27_00-30-26_175126_10593', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets')
+2026-02-27 00:30:29,551 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+2026-02-27 00:30:29,565 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets/dash_TrainHead.
+2026-02-27 00:30:29,570 INFO module.py:225 -- Module TrainHead initialized, receiving messages...
+2026-02-27 00:32:14,696 WARNING module.py:82 -- Parent process 10931 died. Exiting...
dashboard_agent.err
ADDED
File without changes
monitor.log
ADDED
@@ -0,0 +1,538 @@
| 1 |
+
2026-02-27 00:30:27,024 INFO monitor.py:729 -- Starting monitor using ray installation: /usr/local/lib/python3.12/dist-packages/ray/__init__.py
|
| 2 |
+
2026-02-27 00:30:27,025 INFO monitor.py:730 -- Ray version: 2.52.1
|
| 3 |
+
2026-02-27 00:30:27,025 INFO monitor.py:731 -- Ray commit: 4ebdc0abe5e5a551625fe7f87053c7e668a6ff74
|
| 4 |
+
2026-02-27 00:30:27,025 INFO monitor.py:732 -- Monitor started with command: ['/usr/local/lib/python3.12/dist-packages/ray/autoscaler/_private/monitor.py', '--logs-dir=/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs', '--logging-rotate-bytes=536870912', '--logging-rotate-backup-count=5', '--gcs-address=10.128.0.163:54299', '--stdout-filepath=/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/monitor.out', '--stderr-filepath=/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/monitor.err', '--monitor-ip=10.128.0.163']
|
| 5 |
+
2026-02-27 00:30:27,031 INFO monitor.py:161 -- session_name: session_2026-02-27_00-30-26_175126_10593
|
| 6 |
+
2026-02-27 00:30:27,033 INFO monitor.py:193 -- Starting autoscaler metrics server on port 44217
|
| 7 |
+
2026-02-27 00:30:27,049 INFO monitor.py:218 -- Monitor: Started
|
| 8 |
+
2026-02-27 00:30:27,059 INFO autoscaler.py:280 -- disable_node_updaters:False
|
| 9 |
+
2026-02-27 00:30:27,059 INFO autoscaler.py:289 -- disable_launch_config_check:True
|
| 10 |
+
2026-02-27 00:30:27,059 INFO autoscaler.py:301 -- foreground_node_launch:False
|
| 11 |
+
2026-02-27 00:30:27,059 INFO autoscaler.py:311 -- worker_liveness_check:True
|
| 12 |
+
2026-02-27 00:30:27,059 INFO autoscaler.py:361 -- StandardAutoscaler: {'cluster_name': 'default', 'max_workers': 0, 'upscaling_speed': 1.0, 'docker': {}, 'idle_timeout_minutes': 0, 'provider': {'type': 'readonly', 'use_node_id_as_ip': True, 'disable_launch_config_check': True}, 'auth': {}, 'available_node_types': {'ray.head.default': {'resources': {}, 'node_config': {}, 'max_workers': 0}}, 'head_node_type': 'ray.head.default', 'file_mounts': {}, 'cluster_synced_files': [], 'file_mounts_sync_continuously': False, 'rsync_exclude': [], 'rsync_filter': [], 'initialization_commands': [], 'setup_commands': [], 'head_setup_commands': [], 'worker_setup_commands': [], 'head_start_ray_commands': [], 'worker_start_ray_commands': []}
|
| 13 |
+
2026-02-27 00:30:27,062 INFO monitor.py:407 -- Autoscaler has not yet received load metrics. Waiting.
|
| 14 |
+
2026-02-27 00:30:32,072 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 15 |
+
2026-02-27 00:30:32,073 INFO autoscaler.py:408 --
|
| 16 |
+
======== Autoscaler status: 2026-02-27 00:30:32.073106 ========
|
| 17 |
+
Node status
|
| 18 |
+
---------------------------------------------------------------
|
| 19 |
+
Active:
|
| 20 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 21 |
+
Pending:
|
| 22 |
+
(no pending nodes)
|
| 23 |
+
Recent failures:
|
| 24 |
+
(no failures)
|
| 25 |
+
|
| 26 |
+
Resources
|
| 27 |
+
---------------------------------------------------------------
|
| 28 |
+
Total Usage:
|
| 29 |
+
0.0/8.0 CPU
|
| 30 |
+
0.0/1.0 GPU
|
| 31 |
+
0B/19.37GiB memory
|
| 32 |
+
0B/8.30GiB object_store_memory
|
| 33 |
+
|
| 34 |
+
From request_resources:
|
| 35 |
+
(none)
|
| 36 |
+
Pending Demands:
|
| 37 |
+
(no resource demands)
|
| 38 |
+
2026-02-27 00:30:32,075 INFO autoscaler.py:463 -- The autoscaler took 0.003 seconds to complete the update iteration.
|
| 39 |
+
2026-02-27 00:30:37,082 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 40 |
+
2026-02-27 00:30:37,082 INFO autoscaler.py:408 --
|
| 41 |
+
======== Autoscaler status: 2026-02-27 00:30:37.082814 ========
|
| 42 |
+
Node status
|
| 43 |
+
---------------------------------------------------------------
|
| 44 |
+
Active:
|
| 45 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 46 |
+
Pending:
|
| 47 |
+
(no pending nodes)
|
| 48 |
+
Recent failures:
|
| 49 |
+
(no failures)
|
| 50 |
+
|
| 51 |
+
Resources
|
| 52 |
+
---------------------------------------------------------------
|
| 53 |
+
Total Usage:
|
| 54 |
+
1.0/8.0 CPU
|
| 55 |
+
0.0/1.0 GPU
|
| 56 |
+
0B/19.37GiB memory
|
| 57 |
+
0B/8.30GiB object_store_memory
|
| 58 |
+
|
| 59 |
+
From request_resources:
|
| 60 |
+
(none)
|
| 61 |
+
Pending Demands:
|
| 62 |
+
(no resource demands)
|
| 63 |
+
2026-02-27 00:30:37,084 INFO autoscaler.py:463 -- The autoscaler took 0.002 seconds to complete the update iteration.
|
| 64 |
+
2026-02-27 00:30:42,088 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 65 |
+
2026-02-27 00:30:42,088 INFO autoscaler.py:408 --
|
| 66 |
+
======== Autoscaler status: 2026-02-27 00:30:42.088471 ========
|
| 67 |
+
Node status
|
| 68 |
+
---------------------------------------------------------------
|
| 69 |
+
Active:
|
| 70 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 71 |
+
Pending:
|
| 72 |
+
(no pending nodes)
|
| 73 |
+
Recent failures:
|
| 74 |
+
(no failures)
|
| 75 |
+
|
| 76 |
+
Resources
|
| 77 |
+
---------------------------------------------------------------
|
| 78 |
+
Total Usage:
|
| 79 |
+
1.0/8.0 CPU
|
| 80 |
+
0.0/1.0 GPU
|
| 81 |
+
0B/19.37GiB memory
|
| 82 |
+
0B/8.30GiB object_store_memory
|
| 83 |
+
|
| 84 |
+
From request_resources:
|
| 85 |
+
(none)
|
| 86 |
+
Pending Demands:
|
| 87 |
+
(no resource demands)
|
| 88 |
+
2026-02-27 00:30:42,089 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 89 |
+
2026-02-27 00:30:47,093 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 90 |
+
2026-02-27 00:30:47,093 INFO autoscaler.py:408 --
|
| 91 |
+
======== Autoscaler status: 2026-02-27 00:30:47.093598 ========
|
| 92 |
+
Node status
|
| 93 |
+
---------------------------------------------------------------
|
| 94 |
+
Active:
|
| 95 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 96 |
+
Pending:
|
| 97 |
+
(no pending nodes)
|
| 98 |
+
Recent failures:
|
| 99 |
+
(no failures)
|
| 100 |
+
|
| 101 |
+
Resources
|
| 102 |
+
---------------------------------------------------------------
|
| 103 |
+
Total Usage:
|
| 104 |
+
1.0/8.0 CPU
|
| 105 |
+
0.0/1.0 GPU
|
| 106 |
+
0B/19.37GiB memory
|
| 107 |
+
0B/8.30GiB object_store_memory
|
| 108 |
+
|
| 109 |
+
From request_resources:
|
| 110 |
+
(none)
|
| 111 |
+
Pending Demands:
|
| 112 |
+
(no resource demands)
|
| 113 |
+
2026-02-27 00:30:47,094 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 114 |
+
2026-02-27 00:30:52,098 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 115 |
+
2026-02-27 00:30:52,098 INFO autoscaler.py:408 --
|
| 116 |
+
======== Autoscaler status: 2026-02-27 00:30:52.098555 ========
|
| 117 |
+
Node status
|
| 118 |
+
---------------------------------------------------------------
|
| 119 |
+
Active:
|
| 120 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 121 |
+
Pending:
|
| 122 |
+
(no pending nodes)
|
| 123 |
+
Recent failures:
|
| 124 |
+
(no failures)
|
| 125 |
+
|
| 126 |
+
Resources
|
| 127 |
+
---------------------------------------------------------------
|
| 128 |
+
Total Usage:
|
| 129 |
+
1.0/8.0 CPU
|
| 130 |
+
0.0/1.0 GPU
|
| 131 |
+
0B/19.37GiB memory
|
| 132 |
+
0B/8.30GiB object_store_memory
|
| 133 |
+
|
| 134 |
+
From request_resources:
|
| 135 |
+
(none)
|
| 136 |
+
Pending Demands:
|
| 137 |
+
(no resource demands)
|
| 138 |
+
2026-02-27 00:30:52,099 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 139 |
+
2026-02-27 00:30:57,103 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 140 |
+
2026-02-27 00:30:57,103 INFO autoscaler.py:408 --
|
| 141 |
+
======== Autoscaler status: 2026-02-27 00:30:57.103821 ========
|
| 142 |
+
Node status
|
| 143 |
+
---------------------------------------------------------------
|
| 144 |
+
Active:
|
| 145 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 146 |
+
Pending:
|
| 147 |
+
(no pending nodes)
|
| 148 |
+
Recent failures:
|
| 149 |
+
(no failures)
|
| 150 |
+
|
| 151 |
+
Resources
|
| 152 |
+
---------------------------------------------------------------
|
| 153 |
+
Total Usage:
|
| 154 |
+
1.0/8.0 CPU
|
| 155 |
+
0.0/1.0 GPU
|
| 156 |
+
0B/19.37GiB memory
|
| 157 |
+
0B/8.30GiB object_store_memory
|
| 158 |
+
|
| 159 |
+
From request_resources:
|
| 160 |
+
(none)
|
| 161 |
+
Pending Demands:
|
| 162 |
+
(no resource demands)
|
| 163 |
+
2026-02-27 00:30:57,105 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 164 |
+
2026-02-27 00:31:02,113 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 165 |
+
2026-02-27 00:31:02,114 INFO autoscaler.py:408 --
|
| 166 |
+
======== Autoscaler status: 2026-02-27 00:31:02.113946 ========
|
| 167 |
+
Node status
|
| 168 |
+
---------------------------------------------------------------
|
| 169 |
+
Active:
|
| 170 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 171 |
+
Pending:
|
| 172 |
+
(no pending nodes)
|
| 173 |
+
Recent failures:
|
| 174 |
+
(no failures)
|
| 175 |
+
|
| 176 |
+
Resources
|
| 177 |
+
---------------------------------------------------------------
|
| 178 |
+
Total Usage:
|
| 179 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 180 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 181 |
+
0B/19.37GiB memory
|
| 182 |
+
0B/8.30GiB object_store_memory
|
| 183 |
+
|
| 184 |
+
From request_resources:
|
| 185 |
+
(none)
|
| 186 |
+
Pending Demands:
|
| 187 |
+
(no resource demands)
|
| 188 |
+
2026-02-27 00:31:02,115 INFO autoscaler.py:463 -- The autoscaler took 0.002 seconds to complete the update iteration.
|
| 189 |
+
2026-02-27 00:31:07,119 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 190 |
+
2026-02-27 00:31:07,120 INFO autoscaler.py:408 --
|
| 191 |
+
======== Autoscaler status: 2026-02-27 00:31:07.119980 ========
|
| 192 |
+
Node status
|
| 193 |
+
---------------------------------------------------------------
|
| 194 |
+
Active:
|
| 195 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 196 |
+
Pending:
|
| 197 |
+
(no pending nodes)
|
| 198 |
+
Recent failures:
|
| 199 |
+
(no failures)
|
| 200 |
+
|
| 201 |
+
Resources
|
| 202 |
+
---------------------------------------------------------------
|
| 203 |
+
Total Usage:
|
| 204 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 205 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 206 |
+
0B/19.37GiB memory
|
| 207 |
+
0B/8.30GiB object_store_memory
|
| 208 |
+
|
| 209 |
+
From request_resources:
|
| 210 |
+
(none)
|
| 211 |
+
Pending Demands:
|
| 212 |
+
(no resource demands)
|
| 213 |
+
2026-02-27 00:31:07,121 INFO autoscaler.py:463 -- The autoscaler took 0.002 seconds to complete the update iteration.
|
| 214 |
+
2026-02-27 00:31:12,125 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 215 |
+
2026-02-27 00:31:12,125 INFO autoscaler.py:408 --
|
| 216 |
+
======== Autoscaler status: 2026-02-27 00:31:12.125468 ========
|
| 217 |
+
Node status
|
| 218 |
+
---------------------------------------------------------------
|
| 219 |
+
Active:
|
| 220 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 221 |
+
Pending:
|
| 222 |
+
(no pending nodes)
|
| 223 |
+
Recent failures:
|
| 224 |
+
(no failures)
|
| 225 |
+
|
| 226 |
+
Resources
|
| 227 |
+
---------------------------------------------------------------
|
| 228 |
+
Total Usage:
|
| 229 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 230 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 231 |
+
0B/19.37GiB memory
|
| 232 |
+
0B/8.30GiB object_store_memory
|
| 233 |
+
|
| 234 |
+
From request_resources:
|
| 235 |
+
(none)
|
| 236 |
+
Pending Demands:
|
| 237 |
+
(no resource demands)
|
| 238 |
+
2026-02-27 00:31:12,126 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 239 |
+
2026-02-27 00:31:17,129 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 240 |
+
2026-02-27 00:31:17,130 INFO autoscaler.py:408 --
|
| 241 |
+
======== Autoscaler status: 2026-02-27 00:31:17.130194 ========
|
| 242 |
+
Node status
|
| 243 |
+
---------------------------------------------------------------
|
| 244 |
+
Active:
|
| 245 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 246 |
+
Pending:
|
| 247 |
+
(no pending nodes)
|
| 248 |
+
Recent failures:
|
| 249 |
+
(no failures)
|
| 250 |
+
|
| 251 |
+
Resources
|
| 252 |
+
---------------------------------------------------------------
|
| 253 |
+
Total Usage:
|
| 254 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 255 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 256 |
+
0B/19.37GiB memory
|
| 257 |
+
0B/8.30GiB object_store_memory
|
| 258 |
+
|
| 259 |
+
From request_resources:
|
| 260 |
+
(none)
|
| 261 |
+
Pending Demands:
|
| 262 |
+
(no resource demands)
|
| 263 |
+
2026-02-27 00:31:17,131 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 264 |
+
2026-02-27 00:31:22,134 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 265 |
+
2026-02-27 00:31:22,135 INFO autoscaler.py:408 --
|
| 266 |
+
======== Autoscaler status: 2026-02-27 00:31:22.135012 ========
|
| 267 |
+
Node status
|
| 268 |
+
---------------------------------------------------------------
|
| 269 |
+
Active:
|
| 270 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 271 |
+
Pending:
|
| 272 |
+
(no pending nodes)
|
| 273 |
+
Recent failures:
|
| 274 |
+
(no failures)
|
| 275 |
+
|
| 276 |
+
Resources
|
| 277 |
+
---------------------------------------------------------------
|
| 278 |
+
Total Usage:
|
| 279 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 280 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 281 |
+
0B/19.37GiB memory
|
| 282 |
+
0B/8.30GiB object_store_memory
|
| 283 |
+
|
| 284 |
+
From request_resources:
|
| 285 |
+
(none)
|
| 286 |
+
Pending Demands:
|
| 287 |
+
(no resource demands)
|
| 288 |
+
2026-02-27 00:31:22,136 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 289 |
+
2026-02-27 00:31:27,139 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 290 |
+
2026-02-27 00:31:27,139 INFO autoscaler.py:408 --
|
| 291 |
+
======== Autoscaler status: 2026-02-27 00:31:27.139778 ========
|
| 292 |
+
Node status
|
| 293 |
+
---------------------------------------------------------------
|
| 294 |
+
Active:
|
| 295 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 296 |
+
Pending:
|
| 297 |
+
(no pending nodes)
|
| 298 |
+
Recent failures:
|
| 299 |
+
(no failures)
|
| 300 |
+
|
| 301 |
+
Resources
|
| 302 |
+
---------------------------------------------------------------
|
| 303 |
+
Total Usage:
|
| 304 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 305 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 306 |
+
0B/19.37GiB memory
|
| 307 |
+
0B/8.30GiB object_store_memory
|
| 308 |
+
|
| 309 |
+
From request_resources:
|
| 310 |
+
(none)
|
| 311 |
+
Pending Demands:
|
| 312 |
+
(no resource demands)
|
| 313 |
+
2026-02-27 00:31:27,140 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 314 |
+
2026-02-27 00:31:32,151 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 315 |
+
2026-02-27 00:31:32,152 INFO autoscaler.py:408 --
|
| 316 |
+
======== Autoscaler status: 2026-02-27 00:31:32.152090 ========
|
| 317 |
+
Node status
|
| 318 |
+
---------------------------------------------------------------
|
| 319 |
+
Active:
|
| 320 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 321 |
+
Pending:
|
| 322 |
+
(no pending nodes)
|
| 323 |
+
Recent failures:
|
| 324 |
+
(no failures)
|
| 325 |
+
|
| 326 |
+
Resources
|
| 327 |
+
---------------------------------------------------------------
|
| 328 |
+
Total Usage:
|
| 329 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 330 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 331 |
+
0B/19.37GiB memory
|
| 332 |
+
0B/8.30GiB object_store_memory
|
| 333 |
+
|
| 334 |
+
From request_resources:
|
| 335 |
+
(none)
|
| 336 |
+
Pending Demands:
|
| 337 |
+
(no resource demands)
|
| 338 |
+
2026-02-27 00:31:32,156 INFO autoscaler.py:463 -- The autoscaler took 0.005 seconds to complete the update iteration.
|
| 339 |
+
2026-02-27 00:31:37,164 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 340 |
+
2026-02-27 00:31:37,164 INFO autoscaler.py:408 --
|
| 341 |
+
======== Autoscaler status: 2026-02-27 00:31:37.164450 ========
|
| 342 |
+
Node status
|
| 343 |
+
---------------------------------------------------------------
|
| 344 |
+
Active:
|
| 345 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 346 |
+
Pending:
|
| 347 |
+
(no pending nodes)
|
| 348 |
+
Recent failures:
|
| 349 |
+
(no failures)
|
| 350 |
+
|
| 351 |
+
Resources
|
| 352 |
+
---------------------------------------------------------------
|
| 353 |
+
Total Usage:
|
| 354 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 355 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 356 |
+
0B/19.37GiB memory
|
| 357 |
+
0B/8.30GiB object_store_memory
|
| 358 |
+
|
| 359 |
+
From request_resources:
|
| 360 |
+
(none)
|
| 361 |
+
Pending Demands:
|
| 362 |
+
(no resource demands)
|
| 363 |
+
2026-02-27 00:31:37,165 INFO autoscaler.py:463 -- The autoscaler took 0.002 seconds to complete the update iteration.
|
| 364 |
+
2026-02-27 00:31:42,174 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 365 |
+
2026-02-27 00:31:42,174 INFO autoscaler.py:408 --
|
| 366 |
+
======== Autoscaler status: 2026-02-27 00:31:42.174629 ========
|
| 367 |
+
Node status
|
| 368 |
+
---------------------------------------------------------------
|
| 369 |
+
Active:
|
| 370 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 371 |
+
Pending:
|
| 372 |
+
(no pending nodes)
|
| 373 |
+
Recent failures:
|
| 374 |
+
(no failures)
|
| 375 |
+
|
| 376 |
+
Resources
|
| 377 |
+
---------------------------------------------------------------
|
| 378 |
+
Total Usage:
|
| 379 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 380 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 381 |
+
0B/19.37GiB memory
|
| 382 |
+
0B/8.30GiB object_store_memory
|
| 383 |
+
|
| 384 |
+
From request_resources:
|
| 385 |
+
(none)
|
| 386 |
+
Pending Demands:
|
| 387 |
+
(no resource demands)
|
| 388 |
+
2026-02-27 00:31:42,176 INFO autoscaler.py:463 -- The autoscaler took 0.002 seconds to complete the update iteration.
|
| 389 |
+
2026-02-27 00:31:47,179 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 390 |
+
2026-02-27 00:31:47,180 INFO autoscaler.py:408 --
|
| 391 |
+
======== Autoscaler status: 2026-02-27 00:31:47.179922 ========
|
| 392 |
+
Node status
|
| 393 |
+
---------------------------------------------------------------
|
| 394 |
+
Active:
|
| 395 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 396 |
+
Pending:
|
| 397 |
+
(no pending nodes)
|
| 398 |
+
Recent failures:
|
| 399 |
+
(no failures)
|
| 400 |
+
|
| 401 |
+
Resources
|
| 402 |
+
---------------------------------------------------------------
|
| 403 |
+
Total Usage:
|
| 404 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 405 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 406 |
+
0B/19.37GiB memory
|
| 407 |
+
0B/8.30GiB object_store_memory
|
| 408 |
+
|
| 409 |
+
From request_resources:
|
| 410 |
+
(none)
|
| 411 |
+
Pending Demands:
|
| 412 |
+
(no resource demands)
|
| 413 |
+
2026-02-27 00:31:47,181 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 414 |
+
2026-02-27 00:31:52,184 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 415 |
+
2026-02-27 00:31:52,184 INFO autoscaler.py:408 --
|
| 416 |
+
======== Autoscaler status: 2026-02-27 00:31:52.184734 ========
|
| 417 |
+
Node status
|
| 418 |
+
---------------------------------------------------------------
|
| 419 |
+
Active:
|
| 420 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 421 |
+
Pending:
|
| 422 |
+
(no pending nodes)
|
| 423 |
+
Recent failures:
|
| 424 |
+
(no failures)
|
| 425 |
+
|
| 426 |
+
Resources
|
| 427 |
+
---------------------------------------------------------------
|
| 428 |
+
Total Usage:
|
| 429 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 430 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 431 |
+
0B/19.37GiB memory
|
| 432 |
+
0B/8.30GiB object_store_memory
|
| 433 |
+
|
| 434 |
+
From request_resources:
|
| 435 |
+
(none)
|
| 436 |
+
Pending Demands:
|
| 437 |
+
(no resource demands)
|
| 438 |
+
2026-02-27 00:31:52,185 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 439 |
+
2026-02-27 00:31:57,189 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 440 |
+
2026-02-27 00:31:57,189 INFO autoscaler.py:408 --
|
| 441 |
+
======== Autoscaler status: 2026-02-27 00:31:57.189806 ========
|
| 442 |
+
Node status
|
| 443 |
+
---------------------------------------------------------------
|
| 444 |
+
Active:
|
| 445 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 446 |
+
Pending:
|
| 447 |
+
(no pending nodes)
|
| 448 |
+
Recent failures:
|
| 449 |
+
(no failures)
|
| 450 |
+
|
| 451 |
+
Resources
|
| 452 |
+
---------------------------------------------------------------
|
| 453 |
+
Total Usage:
|
| 454 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 455 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 456 |
+
0B/19.37GiB memory
|
| 457 |
+
0B/8.30GiB object_store_memory
|
| 458 |
+
|
| 459 |
+
From request_resources:
|
| 460 |
+
(none)
|
| 461 |
+
Pending Demands:
|
| 462 |
+
(no resource demands)
|
| 463 |
+
2026-02-27 00:31:57,190 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 464 |
+
2026-02-27 00:32:02,194 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 465 |
+
2026-02-27 00:32:02,194 INFO autoscaler.py:408 --
|
| 466 |
+
======== Autoscaler status: 2026-02-27 00:32:02.194442 ========
|
| 467 |
+
Node status
|
| 468 |
+
---------------------------------------------------------------
|
| 469 |
+
Active:
|
| 470 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 471 |
+
Pending:
|
| 472 |
+
(no pending nodes)
|
| 473 |
+
Recent failures:
|
| 474 |
+
(no failures)
|
| 475 |
+
|
| 476 |
+
Resources
|
| 477 |
+
---------------------------------------------------------------
|
| 478 |
+
Total Usage:
|
| 479 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 480 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 481 |
+
0B/19.37GiB memory
|
| 482 |
+
0B/8.30GiB object_store_memory
|
| 483 |
+
|
| 484 |
+
From request_resources:
|
| 485 |
+
(none)
|
| 486 |
+
Pending Demands:
|
| 487 |
+
(no resource demands)
|
| 488 |
+
2026-02-27 00:32:02,195 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
| 489 |
+
2026-02-27 00:32:07,199 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 490 |
+
2026-02-27 00:32:07,199 INFO autoscaler.py:408 --
|
| 491 |
+
======== Autoscaler status: 2026-02-27 00:32:07.199365 ========
|
| 492 |
+
Node status
|
| 493 |
+
---------------------------------------------------------------
|
| 494 |
+
Active:
|
| 495 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 496 |
+
Pending:
|
| 497 |
+
(no pending nodes)
|
| 498 |
+
Recent failures:
|
| 499 |
+
(no failures)
|
| 500 |
+
|
| 501 |
+
Resources
|
| 502 |
+
---------------------------------------------------------------
|
| 503 |
+
Total Usage:
|
| 504 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 505 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 506 |
+
0B/19.37GiB memory
|
| 507 |
+
0B/8.30GiB object_store_memory
|
| 508 |
+
|
| 509 |
+
From request_resources:
|
| 510 |
+
(none)
|
| 511 |
+
Pending Demands:
|
| 512 |
+
(no resource demands)
|
| 513 |
+
2026-02-27 00:32:07,200 INFO autoscaler.py:463 -- The autoscaler took 0.002 seconds to complete the update iteration.
|
| 514 |
+
2026-02-27 00:32:12,204 INFO autoscaler.py:147 -- The autoscaler took 0.0 seconds to fetch the list of non-terminated nodes.
|
| 515 |
+
2026-02-27 00:32:12,204 INFO autoscaler.py:408 --
|
| 516 |
+
======== Autoscaler status: 2026-02-27 00:32:12.204461 ========
|
| 517 |
+
Node status
|
| 518 |
+
---------------------------------------------------------------
|
| 519 |
+
Active:
|
| 520 |
+
1 node_d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 521 |
+
Pending:
|
| 522 |
+
(no pending nodes)
|
| 523 |
+
Recent failures:
|
| 524 |
+
(no failures)
|
| 525 |
+
|
| 526 |
+
Resources
|
| 527 |
+
---------------------------------------------------------------
|
| 528 |
+
Total Usage:
|
| 529 |
+
2.0/8.0 CPU (1.0 used of 3.0 reserved in placement groups)
|
| 530 |
+
0.33330000000000004/1.0 GPU (0.33330000000000004 used of 1.0 reserved in placement groups)
|
| 531 |
+
0B/19.37GiB memory
|
| 532 |
+
0B/8.30GiB object_store_memory
|
| 533 |
+
|
| 534 |
+
From request_resources:
|
| 535 |
+
(none)
|
| 536 |
+
Pending Demands:
|
| 537 |
+
(no resource demands)
|
| 538 |
+
2026-02-27 00:32:12,205 INFO autoscaler.py:463 -- The autoscaler took 0.001 seconds to complete the update iteration.
|
python-core-worker-1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8_11376.log
ADDED
@@ -0,0 +1,87 @@
| 1 |
+
[2026-02-27 00:30:32,517 I 11376 11376] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 11376
|
| 2 |
+
[2026-02-27 00:30:32,526 I 11376 11376] event.cc:499: Ray Event initialized for CORE_WORKER
|
| 3 |
+
[2026-02-27 00:30:32,526 I 11376 11376] event.cc:499: Ray Event initialized for EXPORT_TASK
|
| 4 |
+
[2026-02-27 00:30:32,526 I 11376 11376] event.cc:332: Set ray event level to warning
|
| 5 |
+
[2026-02-27 00:30:32,526 I 11376 11376] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 55678
|
| 6 |
+
[2026-02-27 00:30:32,528 I 11376 11376] grpc_server.cc:143: worker server started, listening on port 50485.
|
| 7 |
+
[2026-02-27 00:30:32,575 I 11376 11376] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50485 worker_id=1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 8 |
+
[2026-02-27 00:30:32,579 I 11376 11376] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
|
| 9 |
+
[2026-02-27 00:30:32,596 I 11376 11376] core_worker.cc:515: Adjusted worker niceness to 15
|
| 10 |
+
[2026-02-27 00:30:32,596 I 11376 11518] core_worker.cc:455: Event stats:
|
| 11 |
+
|
| 12 |
+
|
| 13 |
+
Global stats: 14 total (10 active)
|
| 14 |
+
Queueing time: mean = 0.17ms, max = 2.25ms, min = 0.08ms, total = 2.33ms
|
| 15 |
+
Execution time: mean = 0.42ms, total = 5.94ms
|
| 16 |
+
Event stats:
|
| 17 |
+
PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.33ms, max = 2.25ms, min = 0.08ms, total = 2.33ms
|
| 18 |
+
ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 19 |
+
ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (0 active), Execution time: mean = 3.48ms, total = 3.48ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 20 |
+
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 21 |
+
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 2.43ms, total = 2.43ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 22 |
+
Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 23 |
+
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 24 |
+
CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 25 |
+
|
| 26 |
+
-----------------
|
| 27 |
+
Task execution event stats:
|
| 28 |
+
|
| 29 |
+
Global stats: 0 total (0 active)
|
| 30 |
+
Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 31 |
+
Execution time: mean = -nanms, total = 0.00ms
|
| 32 |
+
Event stats:
|
| 33 |
+
|
| 34 |
+
-----------------
|
| 35 |
+
Task Event stats:
|
| 36 |
+
|
| 37 |
+
IO Service Stats:
|
| 38 |
+
|
| 39 |
+
Global stats: 4 total (1 active)
|
| 40 |
+
Queueing time: mean = 0.47ms, max = 1.85ms, min = 0.02ms, total = 1.86ms
|
| 41 |
+
Execution time: mean = 0.42ms, total = 1.69ms
|
| 42 |
+
Event stats:
|
| 43 |
+
PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.21ms, total = 0.21ms, Queueing time: mean = 1.85ms, max = 1.85ms, min = 1.85ms, total = 1.85ms
|
| 44 |
+
CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 45 |
+
ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 1.38ms, total = 1.38ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 46 |
+
ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.10ms, total = 0.10ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
|
| 47 |
+
Other Stats:
|
| 48 |
+
gcs_grpc_in_progress:0
|
| 49 |
+
event_aggregator_grpc_in_progress:0
|
| 50 |
+
current number of task status events in buffer: 0
|
| 51 |
+
current number of profile events in buffer: 0
|
| 52 |
+
current number of dropped task attempts tracked: 0
|
| 53 |
+
total task events sent: 0 MiB
|
| 54 |
+
total number of task attempts sent: 0
|
| 55 |
+
total number of task attempts dropped reported: 0
|
| 56 |
+
total number of sent failure: 0
|
| 57 |
+
num status task events dropped: 0
|
| 58 |
+
num profile task events dropped: 0
|
| 59 |
+
num ray task events reported to aggregator: 0
|
| 60 |
+
num ray task events failed to report to aggregator: 0
|
| 61 |
+
num of task attempts dropped reported to aggregator: 0
|
| 62 |
+
num of failed requests to aggregator: 0
|
| 63 |
+
|
| 64 |
+
[2026-02-27 00:30:32,596 I 11376 11376] metrics_agent_client.cc:42: Initializing exporter ...
|
| 65 |
+
[2026-02-27 00:30:32,597 I 11376 11518] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 66 |
+
[2026-02-27 00:30:32,597 I 11376 11518] normal_task_submitter.cc:824: Number of alive nodes:1
|
| 67 |
+
[2026-02-27 00:30:34,599 I 11376 11518] metrics_agent_client.cc:54: Exporter initialized.
|
| 68 |
+
[2026-02-27 00:31:00,501 I 11376 11518] core_worker_shutdown_executor.cc:184: Executing handle exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: -1ms)
|
| 69 |
+
[2026-02-27 00:31:00,501 I 11376 11518] core_worker_shutdown_executor.cc:94: Executing worker exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: 10000ms)
|
| 70 |
+
[2026-02-27 00:31:00,501 I 11376 11376] core_worker_shutdown_executor.cc:128: Wait for currently executing tasks in the underlying thread pools to finish.
|
| 71 |
+
[2026-02-27 00:31:00,501 I 11376 11376] core_worker_shutdown_executor.cc:162: Releasing local references, then draining reference counter.
|
| 72 |
+
[2026-02-27 00:31:00,505 I 11376 11376] core_worker_shutdown_executor.cc:217: Try killing all child processes of this worker as it exits. Child process pids:
|
| 73 |
+
[2026-02-27 00:31:00,505 I 11376 11376] core_worker_shutdown_executor.cc:262: Sending disconnect message to the local raylet.
|
| 74 |
+
[2026-02-27 00:31:00,507 I 11376 11376] raylet_ipc_client.cc:135: RayletIpcClient::Disconnect, exit_type=INTENDED_SYSTEM_EXIT, exit_detail=Worker exited because it was idle for a long time, has creation_task_exception_pb_bytes=0
|
| 75 |
+
[2026-02-27 00:31:00,507 I 11376 11376] core_worker_shutdown_executor.cc:279: Disconnected from the local raylet.
|
| 76 |
+
[2026-02-27 00:31:00,507 I 11376 11376] task_event_buffer.cc:491: Shutting down TaskEventBuffer.
|
| 77 |
+
[2026-02-27 00:31:00,508 I 11376 11577] task_event_buffer.cc:459: Task event buffer io service stopped.
|
| 78 |
+
[2026-02-27 00:31:00,508 I 11376 11376] core_worker_shutdown_executor.cc:54: Waiting for joining a core worker io thread. If it hangs here, there might be deadlock or a high load in the core worker io service.
|
| 79 |
+
[2026-02-27 00:31:00,508 I 11376 11518] core_worker_process.cc:194: Core worker main io service stopped.
|
| 80 |
+
[2026-02-27 00:31:00,511 I 11376 11376] core_worker_shutdown_executor.cc:72: Disconnecting a GCS client.
|
| 81 |
+
[2026-02-27 00:31:00,511 I 11376 11376] core_worker_shutdown_executor.cc:79: Core worker ready to be deallocated.
|
| 82 |
+
[2026-02-27 00:31:00,511 I 11376 11376] core_worker_process.cc:950: Task execution loop terminated. Removing the global worker.
|
| 83 |
+
[2026-02-27 00:31:00,511 I 11376 11376] core_worker.cc:539: Core worker is destructed
|
| 84 |
+
[2026-02-27 00:31:00,511 I 11376 11376] task_event_buffer.cc:491: Shutting down TaskEventBuffer.
|
| 85 |
+
[2026-02-27 00:31:00,512 I 11376 11376] core_worker_process.cc:846: Destructing CoreWorkerProcessImpl. pid: 11376
|
| 86 |
+
[2026-02-27 00:31:00,514 I 11376 11376] stats.h:149: Stats module has shutdown.
|
| 87 |
+
[2026-02-27 00:31:00,546 W 11376 11376] core_worker_process.cc:860: The core worker process is not initialized yet or already shutdown.
|
python-core-worker-b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5_12477.log
ADDED
@@ -0,0 +1,73 @@
| 1 |
+
[2026-02-27 00:31:33,129 I 12477 12477] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 12477
|
| 2 |
+
[2026-02-27 00:31:33,135 I 12477 12477] event.cc:499: Ray Event initialized for CORE_WORKER
|
| 3 |
+
[2026-02-27 00:31:33,139 I 12477 12477] event.cc:499: Ray Event initialized for EXPORT_TASK
|
| 4 |
+
[2026-02-27 00:31:33,139 I 12477 12477] event.cc:332: Set ray event level to warning
|
| 5 |
+
[2026-02-27 00:31:33,139 I 12477 12477] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 55678
|
| 6 |
+
[2026-02-27 00:31:33,141 I 12477 12477] grpc_server.cc:143: worker server started, listening on port 50119.
|
| 7 |
+
[2026-02-27 00:31:33,177 I 12477 12477] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50119 worker_id=b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 8 |
+
[2026-02-27 00:31:33,180 I 12477 12477] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
|
| 9 |
+
[2026-02-27 00:31:33,183 I 12477 12477] core_worker.cc:515: Adjusted worker niceness to 15
|
| 10 |
+
[2026-02-27 00:31:33,183 I 12477 12477] metrics_agent_client.cc:42: Initializing exporter ...
|
| 11 |
+
[2026-02-27 00:31:33,183 I 12477 12788] core_worker.cc:455: Event stats:
|
| 12 |
+
|
| 13 |
+
|
| 14 |
+
Global stats: 12 total (10 active)
|
| 15 |
+
Queueing time: mean = 0.04ms, max = 0.50ms, min = 0.03ms, total = 0.54ms
|
| 16 |
+
Execution time: mean = 0.00ms, total = 0.04ms
|
| 17 |
+
Event stats:
|
| 18 |
+
PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.01ms, total = 0.04ms, Queueing time: mean = 0.08ms, max = 0.50ms, min = 0.03ms, total = 0.54ms
|
| 19 |
+
Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 20 |
+
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 21 |
+
CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 22 |
+
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 23 |
+
ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 24 |
+
|
| 25 |
+
-----------------
|
| 26 |
+
Task execution event stats:
|
| 27 |
+
|
| 28 |
+
Global stats: 0 total (0 active)
|
| 29 |
+
Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 30 |
+
Execution time: mean = -nanms, total = 0.00ms
|
| 31 |
+
Event stats:
|
| 32 |
+
|
| 33 |
+
-----------------
|
| 34 |
+
Task Event stats:
|
| 35 |
+
|
| 36 |
+
IO Service Stats:
|
| 37 |
+
|
| 38 |
+
Global stats: 4 total (1 active)
|
| 39 |
+
Queueing time: mean = 0.24ms, max = 0.92ms, min = 0.03ms, total = 0.95ms
|
| 40 |
+
Execution time: mean = 0.31ms, total = 1.22ms
|
| 41 |
+
Event stats:
|
| 42 |
+
CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 43 |
+
PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.24ms, total = 0.24ms, Queueing time: mean = 0.92ms, max = 0.92ms, min = 0.92ms, total = 0.92ms
|
| 44 |
+
ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.95ms, total = 0.95ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 45 |
+
ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.04ms, total = 0.04ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms
|
| 46 |
+
Other Stats:
|
| 47 |
+
gcs_grpc_in_progress:0
|
| 48 |
+
event_aggregator_grpc_in_progress:0
|
| 49 |
+
current number of task status events in buffer: 0
|
| 50 |
+
current number of profile events in buffer: 0
|
| 51 |
+
current number of dropped task attempts tracked: 0
|
| 52 |
+
total task events sent: 0 MiB
|
| 53 |
+
total number of task attempts sent: 0
|
| 54 |
+
total number of task attempts dropped reported: 0
|
| 55 |
+
total number of sent failure: 0
|
| 56 |
+
num status task events dropped: 0
|
| 57 |
+
num profile task events dropped: 0
|
| 58 |
+
num ray task events reported to aggregator: 0
|
| 59 |
+
num ray task events failed to report to aggregator: 0
|
| 60 |
+
num of task attempts dropped reported to aggregator: 0
|
| 61 |
+
num of failed requests to aggregator: 0
|
| 62 |
+
|
| 63 |
+
[2026-02-27 00:31:33,190 I 12477 12788] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 64 |
+
[2026-02-27 00:31:33,190 I 12477 12788] normal_task_submitter.cc:824: Number of alive nodes:1
|
| 65 |
+
[2026-02-27 00:31:33,191 I 12477 12788] metrics_agent_client.cc:54: Exporter initialized.
|
| 66 |
+
[2026-02-27 00:31:33,244 I 12477 12477] actor_task_submitter.cc:74: Set actor max pending calls to -1 actor_id=2110253750be447ceb2be14f01000000
|
| 67 |
+
[2026-02-27 00:31:33,256 I 12477 12477] core_worker.cc:2903: Creating actor actor_id=2110253750be447ceb2be14f01000000
|
| 68 |
+
[2026-02-27 00:31:44,737 I 12477 12477] task_receiver.cc:142: Actor creation task finished, task_id: ffffffffffffffff2110253750be447ceb2be14f01000000, actor_id: 2110253750be447ceb2be14f01000000, actor_repr_name:
|
| 69 |
+
[2026-02-27 00:32:12,486 I 12477 12788] core_worker_shutdown_executor.cc:217: Try killing all child processes of this worker as it exits. Child process pids:
|
| 70 |
+
[2026-02-27 00:32:12,489 I 12477 12788] core_worker_shutdown_executor.cc:262: Sending disconnect message to the local raylet.
|
| 71 |
+
[2026-02-27 00:32:12,490 I 12477 12788] raylet_ipc_client.cc:135: RayletIpcClient::Disconnect, exit_type=INTENDED_USER_EXIT, exit_detail=Worker force exited because its job has finished, has creation_task_exception_pb_bytes=0
|
| 72 |
+
[2026-02-27 00:32:12,508 I 12477 12788] core_worker_shutdown_executor.cc:279: Disconnected from the local raylet.
|
| 73 |
+
[2026-02-27 00:32:12,508 W 12477 12788] core_worker_shutdown_executor.cc:288: Quick exit - terminating process immediately
|
raylet.out
ADDED
@@ -0,0 +1,579 @@
| 1 |
+
[2026-02-27 00:30:30,443 I 11302 11302] (raylet) main.cc:271: Setting cluster ID to: 1055e483f4dc49122b1241989fd976e87d4b63a2cfd37b9f5e0a28de
|
| 2 |
+
[2026-02-27 00:30:30,459 I 11302 11302] (raylet) main.cc:461: Per-worker process group cleanup is DISABLED, subreaper is DISABLED
|
| 3 |
+
[2026-02-27 00:30:30,459 I 11302 11302] (raylet) main.cc:595: Setting node ID node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 4 |
+
[2026-02-27 00:30:30,459 I 11302 11302] (raylet) store_runner.cc:50: Allowing the Plasma store to use up to 8.91152GB of memory.
|
| 5 |
+
[2026-02-27 00:30:30,459 I 11302 11302] (raylet) store_runner.cc:66: Starting object store with directory /dev/shm, fallback /tmp/ray/session_2026-02-27_00-30-26_175126_10593, and huge page support disabled
|
| 6 |
+
[2026-02-27 00:30:30,460 I 11302 11317] (raylet) dlmalloc.cc:324: Setting dlmalloc config: plasma_directory=/dev/shm, fallback_directory=/tmp/ray/session_2026-02-27_00-30-26_175126_10593, hugepage_enabled=0, fallback_enabled=1
|
| 7 |
+
[2026-02-27 00:30:30,460 I 11302 11317] (raylet) dlmalloc.cc:153: create_and_mmap_buffer(8911585288, /dev/shm/plasmaXXXXXX)
|
| 8 |
+
[2026-02-27 00:30:30,461 I 11302 11317] (raylet) store.cc:572: Plasma store debug dump:
|
| 9 |
+
Current usage: 0 / 8.91152 GB
|
| 10 |
+
- num bytes created total: 0
|
| 11 |
+
0 pending objects of total size 0MB
|
| 12 |
+
- objects spillable: 0
|
| 13 |
+
- bytes spillable: 0
|
| 14 |
+
- objects unsealed: 0
|
| 15 |
+
- bytes unsealed: 0
|
| 16 |
+
- objects in use: 0
|
| 17 |
+
- bytes in use: 0
|
| 18 |
+
- objects evictable: 0
|
| 19 |
+
- bytes evictable: 0
|
| 20 |
+
|
| 21 |
+
- objects created by worker: 0
|
| 22 |
+
- bytes created by worker: 0
|
| 23 |
+
- objects restored: 0
|
| 24 |
+
- bytes restored: 0
|
| 25 |
+
- objects received: 0
|
| 26 |
+
- bytes received: 0
|
| 27 |
+
- objects errored: 0
|
| 28 |
+
- bytes errored: 0
|
| 29 |
+
|
| 30 |
+
[2026-02-27 00:30:31,464 I 11302 11302] (raylet) grpc_server.cc:143: ObjectManager server started, listening on port 50479.
|
| 31 |
+
[2026-02-27 00:30:31,469 I 11302 11302] (raylet) memory_monitor.cc:47: MemoryMonitor initialized with usage threshold at 31969628160 bytes (0.95 system memory), total system memory bytes: 33652240384
|
| 32 |
+
[2026-02-27 00:30:31,469 I 11302 11302] (raylet) node_manager.cc:241: Initializing NodeManager node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 33 |
+
[2026-02-27 00:30:31,470 I 11302 11302] (raylet) grpc_server.cc:143: NodeManager server started, listening on port 50449.
|
| 34 |
+
[2026-02-27 00:30:31,486 I 11302 11358] (raylet) agent_manager.cc:81: Monitor agent process with name dashboard_agent
|
| 35 |
+
[2026-02-27 00:30:31,487 I 11302 11360] (raylet) agent_manager.cc:81: Monitor agent process with name runtime_env_agent
|
| 36 |
+
[2026-02-27 00:30:31,487 I 11302 11302] (raylet) metrics_agent_client.cc:42: Initializing exporter ...
|
| 37 |
+
[2026-02-27 00:30:31,489 I 11302 11302] (raylet) event.cc:499: Ray Event initialized for RAYLET
|
| 38 |
+
[2026-02-27 00:30:31,489 I 11302 11302] (raylet) event.cc:332: Set ray event level to warning
|
| 39 |
+
[2026-02-27 00:30:31,492 I 11302 11302] (raylet) node_manager.cc:292: Raylet of id, d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120 started. Raylet consists of node_manager and object_manager. node_manager address: 10.128.0.163:0 object_manager address: 10.128.0.163:50479 hostname: cs-01kje4289qf3k6pv20jzcef9t8
|
| 40 |
+
[2026-02-27 00:30:31,496 I 11302 11302] (raylet) node_manager.cc:440: [state-dump] NodeManager:
|
| 41 |
+
[state-dump] Node ID: d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 42 |
+
[state-dump] Node name: 10.128.0.163
|
| 43 |
+
[state-dump] InitialConfigResources: {node:__internal_head__: 1, GPU: 1, accelerator_type:L4: 1, memory: 2.07935e+10, object_store_memory: 8.91152e+09, node:10.128.0.163: 1, CPU: 8}
|
| 44 |
+
[state-dump] ClusterLeaseManager:
|
| 45 |
+
[state-dump] ========== Node: d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120 =================
|
| 46 |
+
[state-dump] Infeasible queue length: 0
|
| 47 |
+
[state-dump] Schedule queue length: 0
|
| 48 |
+
[state-dump] Grant queue length: 0
|
| 49 |
+
[state-dump] num_waiting_for_resource: 0
|
| 50 |
+
[state-dump] num_waiting_for_plasma_memory: 0
|
| 51 |
+
[state-dump] num_waiting_for_remote_node_resources: 0
|
| 52 |
+
[state-dump] num_worker_not_started_by_job_config_not_exist: 0
|
| 53 |
+
[state-dump] num_worker_not_started_by_registration_timeout: 0
|
| 54 |
+
[state-dump] num_tasks_waiting_for_workers: 0
|
| 55 |
+
[state-dump] num_cancelled_leases: 0
|
| 56 |
+
[state-dump] cluster_resource_scheduler state:
|
| 57 |
+
[state-dump] Local id: 9054801897395801548 Local resources: {"total":{node:10.128.0.163: [10000], GPU: [10000], memory: [207935393800000], accelerator_type:L4: [10000], CPU: [80000], object_store_memory: [89115168760000], node:__internal_head__: [10000]}}, "available": {node:10.128.0.163: [10000], GPU: [10000], memory: [207935393800000], accelerator_type:L4: [10000], CPU: [80000], object_store_memory: [89115168760000], node:__internal_head__: [10000]}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120",} is_draining: 0 is_idle: 1 Cluster resources (at most 20 nodes are shown): node id: 9054801897395801548{"total":{node:__internal_head__: 10000, GPU: 10000, accelerator_type:L4: 10000, memory: 207935393800000, object_store_memory: 89115168760000, node:10.128.0.163: 10000, CPU: 80000}}, "available": {GPU: 10000, CPU: 80000, node:__internal_head__: 10000, accelerator_type:L4: 10000, memory: 207935393800000, object_store_memory: 89115168760000, node:10.128.0.163: 10000}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placement group locations": [], "node to bundles": []}
|
| 58 |
+
[state-dump] Waiting leases size: 0
|
| 59 |
+
[state-dump] Number of granted lease arguments: 0
|
| 60 |
+
[state-dump] Number of pinned lease arguments: 0
|
| 61 |
+
[state-dump] Number of total spilled leases: 0
|
| 62 |
+
[state-dump] Number of spilled waiting leases: 0
|
| 63 |
+
[state-dump] Number of spilled unschedulable leases: 0
|
| 64 |
+
[state-dump] Resource usage {
|
| 65 |
+
[state-dump] }
|
| 66 |
+
[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
|
| 67 |
+
[state-dump]
|
| 68 |
+
[state-dump] Granted leases by scheduling class:
|
| 69 |
+
[state-dump] ==================================================
|
| 70 |
+
[state-dump]
|
| 71 |
+
[state-dump] ClusterResources:
|
| 72 |
+
[state-dump] LocalObjectManager:
|
| 73 |
+
[state-dump] - num pinned objects: 0
|
| 74 |
+
[state-dump] - pinned objects size: 0
|
| 75 |
+
[state-dump] - num objects pending restore: 0
|
| 76 |
+
[state-dump] - num objects pending spill: 0
|
| 77 |
+
[state-dump] - num bytes pending spill: 0
|
| 78 |
+
[state-dump] - num bytes currently spilled: 0
|
| 79 |
+
[state-dump] - cumulative spill requests: 0
|
| 80 |
+
[state-dump] - cumulative restore requests: 0
|
| 81 |
+
[state-dump] - spilled objects pending delete: 0
|
| 82 |
+
[state-dump]
|
| 83 |
+
[state-dump] ObjectManager:
|
| 84 |
+
[state-dump] - num local objects: 0
|
| 85 |
+
[state-dump] - num unfulfilled push requests: 0
|
| 86 |
+
[state-dump] - num object pull requests: 0
|
| 87 |
+
[state-dump] - num chunks received total: 0
|
| 88 |
+
[state-dump] - num chunks received failed (all): 0
|
| 89 |
+
[state-dump] - num chunks received failed / cancelled: 0
|
| 90 |
+
[state-dump] - num chunks received failed / plasma error: 0
|
| 91 |
+
[state-dump] Event stats:
|
| 92 |
+
[state-dump] Global stats: 0 total (0 active)
|
| 93 |
+
[state-dump] Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 94 |
+
[state-dump] Execution time: mean = -nanms, total = 0.00ms
|
| 95 |
+
[state-dump] Event stats:
|
| 96 |
+
[state-dump] PushManager:
|
| 97 |
+
[state-dump] - num pushes remaining: 0
|
| 98 |
+
[state-dump] - num chunks in flight: 0
|
| 99 |
+
[state-dump] - num chunks remaining: 0
|
| 100 |
+
[state-dump] - max chunks allowed: 409
|
| 101 |
+
[state-dump] OwnershipBasedObjectDirectory:
|
| 102 |
+
[state-dump] - num listeners: 0
|
| 103 |
+
[state-dump] - cumulative location updates: 0
|
| 104 |
+
[state-dump] - num location updates per second: 0.000
|
| 105 |
+
[state-dump] - num location lookups per second: 0.000
|
| 106 |
+
[state-dump] - num locations added per second: 0.000
|
| 107 |
+
[state-dump] - num locations removed per second: 0.000
|
| 108 |
+
[state-dump] BufferPool:
|
| 109 |
+
[state-dump] - create buffer state map size: 0
|
| 110 |
+
[state-dump] PullManager:
|
| 111 |
+
[state-dump] - num bytes available for pulled objects: 8911516876
|
| 112 |
+
[state-dump] - num bytes being pulled (all): 0
|
| 113 |
+
[state-dump] - num bytes being pulled / pinned: 0
|
| 114 |
+
[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
|
| 115 |
+
[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
|
| 116 |
+
[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
|
| 117 |
+
[state-dump] - first get request bundle: N/A
|
| 118 |
+
[state-dump] - first wait request bundle: N/A
|
| 119 |
+
[state-dump] - first task request bundle: N/A
|
| 120 |
+
[state-dump] - num objects queued: 0
|
| 121 |
+
[state-dump] - num objects actively pulled (all): 0
|
| 122 |
+
[state-dump] - num objects actively pulled / pinned: 0
|
| 123 |
+
[state-dump] - num bundles being pulled: 0
|
| 124 |
+
[state-dump] - num pull retries: 0
|
| 125 |
+
[state-dump] - max timeout seconds: 0
|
| 126 |
+
[state-dump] - max timeout request is already processed. No entry.
|
| 127 |
+
[state-dump]
|
| 128 |
+
[state-dump] WorkerPool:
|
| 129 |
+
[state-dump] - registered jobs: 0
|
| 130 |
+
[state-dump] - process_failed_job_config_missing: 0
|
| 131 |
+
[state-dump] - process_failed_rate_limited: 0
|
| 132 |
+
[state-dump] - process_failed_pending_registration: 0
|
| 133 |
+
[state-dump] - process_failed_runtime_env_setup_failed: 0
|
| 134 |
+
[state-dump] - num PYTHON workers: 0
|
| 135 |
+
[state-dump] - num PYTHON drivers: 0
|
| 136 |
+
[state-dump] - num PYTHON pending start requests: 0
|
| 137 |
+
[state-dump] - num PYTHON pending registration requests: 0
|
| 138 |
+
[state-dump] - num object spill callbacks queued: 0
|
| 139 |
+
[state-dump] - num object restore queued: 0
|
| 140 |
+
[state-dump] - num util functions queued: 0
|
| 141 |
+
[state-dump] - num idle workers: 0
|
| 142 |
+
[state-dump] LeaseDependencyManager:
|
| 143 |
+
[state-dump] - lease deps map size: 0
|
| 144 |
+
[state-dump] - get req map size: 0
|
| 145 |
+
[state-dump] - wait req map size: 0
|
| 146 |
+
[state-dump] - local objects map size: 0
|
| 147 |
+
[state-dump] WaitManager:
|
| 148 |
+
[state-dump] - num active wait requests: 0
|
| 149 |
+
[state-dump] Subscriber:
|
| 150 |
+
[state-dump] Channel WORKER_OBJECT_EVICTION
|
| 151 |
+
[state-dump] - cumulative subscribe requests: 0
|
| 152 |
+
[state-dump] - cumulative unsubscribe requests: 0
|
| 153 |
+
[state-dump] - active subscribed publishers: 0
|
| 154 |
+
[state-dump] - cumulative published messages: 0
|
| 155 |
+
[state-dump] - cumulative processed messages: 0
|
| 156 |
+
[state-dump] Channel WORKER_REF_REMOVED_CHANNEL
|
| 157 |
+
[state-dump] - cumulative subscribe requests: 0
|
| 158 |
+
[state-dump] - cumulative unsubscribe requests: 0
|
| 159 |
+
[state-dump] - active subscribed publishers: 0
|
| 160 |
+
[state-dump] - cumulative published messages: 0
|
| 161 |
+
[state-dump] - cumulative processed messages: 0
|
| 162 |
+
[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL
|
| 163 |
+
[state-dump] - cumulative subscribe requests: 0
|
| 164 |
+
[state-dump] - cumulative unsubscribe requests: 0
|
| 165 |
+
[state-dump] - active subscribed publishers: 0
|
| 166 |
+
[state-dump] - cumulative published messages: 0
|
| 167 |
+
[state-dump] - cumulative processed messages: 0
|
| 168 |
+
[state-dump] num async plasma notifications: 0
|
| 169 |
+
[state-dump] Event stats:
|
| 170 |
+
[state-dump] Global stats: 35 total (15 active)
|
| 171 |
+
[state-dump] Queueing time: mean = 3.20ms, max = 21.50ms, min = 0.02ms, total = 112.11ms
|
| 172 |
+
[state-dump] Execution time: mean = 29.80ms, total = 1043.02ms
|
| 173 |
+
[state-dump] Event stats:
|
| 174 |
+
[state-dump] PeriodicalRunner.RunFnPeriodically - 12 total (2 active, 1 running), Execution time: mean = 0.19ms, total = 2.26ms, Queueing time: mean = 8.08ms, max = 21.50ms, min = 0.08ms, total = 96.99ms
|
| 175 |
+
[state-dump] event_loop_lag_probe - 2 total (0 active), Execution time: mean = 0.01ms, total = 0.03ms, Queueing time: mean = 6.89ms, max = 13.76ms, min = 0.02ms, total = 13.78ms
|
| 176 |
+
[state-dump] MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 177 |
+
[state-dump] ReporterService.grpc_client.HealthCheck - 1 total (0 active), Execution time: mean = 1.14ms, total = 1.14ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 178 |
+
[state-dump] NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 179 |
+
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.38ms, total = 1.38ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 180 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 2.09ms, total = 2.09ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 181 |
+
[state-dump] NodeManager.deadline_timer.spill_objects_when_over_threshold - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 182 |
+
[state-dump] NodeManager.deadline_timer.flush_free_objects - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 183 |
+
[state-dump] NodeManager.deadline_timer.debug_state_dump - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 184 |
+
[state-dump] RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 185 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 3.05ms, total = 3.05ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 186 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 187 |
+
[state-dump] ReporterService.grpc_client.HealthCheck.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.04ms, total = 0.04ms, Queueing time: mean = 1.22ms, max = 1.22ms, min = 1.22ms, total = 1.22ms
|
| 188 |
+
[state-dump] NodeManager.ScheduleAndGrantLeases - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 189 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 190 |
+
[state-dump] ObjectManager.UpdateAvailableMemory - 1 total (0 active), Execution time: mean = 0.04ms, total = 0.04ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
|
| 191 |
+
[state-dump] NodeManager.deadline_timer.record_metrics - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 192 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.91ms, total = 0.91ms, Queueing time: mean = 0.09ms, max = 0.09ms, min = 0.09ms, total = 0.09ms
|
| 193 |
+
[state-dump] ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 194 |
+
[state-dump] NodeManager.CheckForUnexpectedWorkerDisconnects - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 195 |
+
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 1032.07ms, total = 1032.07ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
|
| 196 |
+
[state-dump] MetricsAgentClient.WaitForServerReadyWithRetry - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 197 |
+
[state-dump] DebugString() time ms: 0
|
| 198 |
+
[state-dump]
|
| 199 |
+
[state-dump]
|
| 200 |
+
[2026-02-27 00:30:31,498 I 11302 11302] (raylet) accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 201 |
+
[2026-02-27 00:30:31,550 I 11302 11302] (raylet) worker_pool.cc:750: [Eagerly] Start install runtime environment for job 01000000.
|
| 202 |
+
[2026-02-27 00:30:31,553 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11372, the token is 0
|
| 203 |
+
[2026-02-27 00:30:31,556 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11373, the token is 1
|
| 204 |
+
[2026-02-27 00:30:31,558 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11374, the token is 2
|
| 205 |
+
[2026-02-27 00:30:31,561 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11375, the token is 3
|
| 206 |
+
[2026-02-27 00:30:31,564 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11376, the token is 4
|
| 207 |
+
[2026-02-27 00:30:31,567 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11377, the token is 5
|
| 208 |
+
[2026-02-27 00:30:31,570 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11378, the token is 6
|
| 209 |
+
[2026-02-27 00:30:31,574 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11379, the token is 7
|
| 210 |
+
[2026-02-27 00:30:31,575 I 11302 11302] (raylet) runtime_env_agent_client.cc:350: Runtime Env Agent network error: NotFound: on_connect Connection refused, the server may be still starting or is already failed. Scheduling a retry in 1000ms...
|
| 211 |
+
[2026-02-27 00:30:32,419 I 11302 11317] (raylet) object_store.cc:37: Object store current usage 8e-09 / 8.91152 GB.
|
| 212 |
+
[2026-02-27 00:30:32,582 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 213 |
+
[2026-02-27 00:30:32,582 I 11302 11302] (raylet) worker_pool.cc:761: [Eagerly] Create runtime env successful for job 01000000.
|
| 214 |
+
[2026-02-27 00:30:32,753 I 11302 11302] (raylet) worker_pool.cc:740: Job 01000000 already started in worker pool.
|
| 215 |
+
[2026-02-27 00:30:33,898 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=fc14e0d4e4b6acb4ecead813c2d960587eefa7859aac6d8e19aeec98 job_id=NIL_ID
|
| 216 |
+
[2026-02-27 00:30:33,903 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
|
| 217 |
+
[2026-02-27 00:30:34,492 I 11302 11302] (raylet) metrics_agent_client.cc:54: Exporter initialized.
|
| 218 |
+
[2026-02-27 00:30:36,525 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 219 |
+
[2026-02-27 00:30:36,529 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 11872, the token is 8
|
| 220 |
+
[2026-02-27 00:30:57,313 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 221 |
+
[2026-02-27 00:30:57,316 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12083, the token is 9
|
| 222 |
+
[2026-02-27 00:30:57,504 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=100e3eeb4a57ce034285a311628f834885904c1e1ea9caa911a3c4da job_id=NIL_ID
|
| 223 |
+
[2026-02-27 00:30:57,506 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=bb0f50c5405699ae07f957ec3f7c03f2bdf40be03f6e39b39232dc16 job_id=NIL_ID
|
| 224 |
+
[2026-02-27 00:30:57,507 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf job_id=NIL_ID
|
| 225 |
+
[2026-02-27 00:30:57,511 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
|
| 226 |
+
[2026-02-27 00:30:57,512 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
|
| 227 |
+
[2026-02-27 00:30:57,759 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
|
| 228 |
+
[2026-02-27 00:31:00,166 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 229 |
+
[2026-02-27 00:31:00,170 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12197, the token is 10
|
| 230 |
+
[2026-02-27 00:31:00,507 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8 job_id=NIL_ID
|
| 231 |
+
[2026-02-27 00:31:00,512 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
|
| 232 |
+
[2026-02-27 00:31:29,865 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 233 |
+
[2026-02-27 00:31:29,868 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12365, the token is 11
|
| 234 |
+
[2026-02-27 00:31:29,875 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 235 |
+
[2026-02-27 00:31:29,878 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12366, the token is 12
|
| 236 |
+
[2026-02-27 00:31:29,889 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 237 |
+
[2026-02-27 00:31:29,892 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12367, the token is 13
|
| 238 |
+
[2026-02-27 00:31:29,903 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 239 |
+
[2026-02-27 00:31:29,907 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12368, the token is 14
|
| 240 |
+
[2026-02-27 00:31:29,920 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 241 |
+
[2026-02-27 00:31:29,923 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12369, the token is 15
|
| 242 |
+
[2026-02-27 00:31:29,939 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 243 |
+
[2026-02-27 00:31:29,944 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12370, the token is 16
|
| 244 |
+
[2026-02-27 00:31:29,964 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 245 |
+
[2026-02-27 00:31:29,968 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12371, the token is 17
|
| 246 |
+
[2026-02-27 00:31:29,985 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 247 |
+
[2026-02-27 00:31:29,989 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 12372, the token is 18
|
| 248 |
+
[2026-02-27 00:31:30,461 I 11302 11317] (raylet) store.cc:572: Plasma store debug dump:
|
| 249 |
+
Current usage: 0 / 8.91152 GB
|
| 250 |
+
- num bytes created total: 96
|
| 251 |
+
0 pending objects of total size 0MB
|
| 252 |
+
- objects spillable: 0
|
| 253 |
+
- bytes spillable: 0
|
| 254 |
+
- objects unsealed: 0
|
| 255 |
+
- bytes unsealed: 0
|
| 256 |
+
- objects in use: 0
|
| 257 |
+
- bytes in use: 0
|
| 258 |
+
- objects evictable: 0
|
| 259 |
+
- bytes evictable: 0
|
| 260 |
+
|
| 261 |
+
- objects created by worker: 0
|
| 262 |
+
- bytes created by worker: 0
|
| 263 |
+
- objects restored: 0
|
| 264 |
+
- bytes restored: 0
|
| 265 |
+
- objects received: 0
|
| 266 |
+
- bytes received: 0
|
| 267 |
+
- objects errored: 0
|
| 268 |
+
- bytes errored: 0
|
| 269 |
+
|
| 270 |
+
[2026-02-27 00:31:31,498 I 11302 11302] (raylet) node_manager.cc:440: [state-dump] NodeManager:
|
| 271 |
+
[state-dump] Node ID: d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
|
| 272 |
+
[state-dump] Node name: 10.128.0.163
|
| 273 |
+
[state-dump] InitialConfigResources: {node:__internal_head__: 1, GPU: 1, accelerator_type:L4: 1, memory: 2.07935e+10, object_store_memory: 8.91152e+09, node:10.128.0.163: 1, CPU: 8}
|
| 274 |
+
[state-dump] ClusterLeaseManager:
|
| 275 |
+
[state-dump] ========== Node: d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120 =================
|
| 276 |
+
[state-dump] Infeasible queue length: 0
|
| 277 |
+
[state-dump] Schedule queue length: 0
|
| 278 |
+
[state-dump] Grant queue length: 8
|
| 279 |
+
[state-dump] num_waiting_for_resource: 0
|
| 280 |
+
[state-dump] num_waiting_for_plasma_memory: 0
|
| 281 |
+
[state-dump] num_waiting_for_remote_node_resources: 0
|
| 282 |
+
[state-dump] num_worker_not_started_by_job_config_not_exist: 0
|
| 283 |
+
[state-dump] num_worker_not_started_by_registration_timeout: 0
|
| 284 |
+
[state-dump] num_tasks_waiting_for_workers: 8
|
| 285 |
+
[state-dump] num_cancelled_leases: 0
|
| 286 |
+
[state-dump] cluster_resource_scheduler state:
|
| 287 |
+
[state-dump] Local id: 9054801897395801548 Local resources: {"total":{memory: [207935393800000], accelerator_type:L4: [10000], CPU_group_6834375140417d94aa5cc2a5c3d701000000: [30000], node:__internal_head__: [10000], CPU: [80000], object_store_memory: [89115168760000], CPU_group_0_6834375140417d94aa5cc2a5c3d701000000: [30000], GPU_group_0_6834375140417d94aa5cc2a5c3d701000000: [10000], GPU_group_6834375140417d94aa5cc2a5c3d701000000: [10000], GPU: [10000], bundle_group_0_6834375140417d94aa5cc2a5c3d701000000: [10000000], bundle_group_6834375140417d94aa5cc2a5c3d701000000: [10000000], node:10.128.0.163: [10000]}}, "available": {memory: [207935393800000], accelerator_type:L4: [10000], CPU_group_6834375140417d94aa5cc2a5c3d701000000: [20000], node:__internal_head__: [10000], CPU: [40000], object_store_memory: [89115168760000], CPU_group_0_6834375140417d94aa5cc2a5c3d701000000: [20000], GPU_group_0_6834375140417d94aa5cc2a5c3d701000000: [6667], GPU_group_6834375140417d94aa5cc2a5c3d701000000: [6667], GPU: [0], bundle_group_0_6834375140417d94aa5cc2a5c3d701000000: [9999990], bundle_group_6834375140417d94aa5cc2a5c3d701000000: [9999990], node:10.128.0.163: [10000]}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120",} is_draining: 0 is_idle: 0 Cluster resources (at most 20 nodes are shown): node id: 9054801897395801548{"total":{node:__internal_head__: 10000, GPU: 10000, object_store_memory: 89115168760000, GPU_group_6834375140417d94aa5cc2a5c3d701000000: 10000, accelerator_type:L4: 10000, memory: 207935393800000, CPU_group_0_6834375140417d94aa5cc2a5c3d701000000: 30000, CPU: 80000, bundle_group_0_6834375140417d94aa5cc2a5c3d701000000: 10000000, bundle_group_6834375140417d94aa5cc2a5c3d701000000: 10000000, node:10.128.0.163: 10000, CPU_group_6834375140417d94aa5cc2a5c3d701000000: 30000, GPU_group_0_6834375140417d94aa5cc2a5c3d701000000: 10000}}, "available": {node:__internal_head__: 10000, CPU_group_0_6834375140417d94aa5cc2a5c3d701000000: 20000, GPU_group_6834375140417d94aa5cc2a5c3d701000000: 6667, object_store_memory: 89115168760000, CPU: 40000, accelerator_type:L4: 10000, memory: 207935393800000, bundle_group_0_6834375140417d94aa5cc2a5c3d701000000: 9999990, bundle_group_6834375140417d94aa5cc2a5c3d701000000: 9999990, node:10.128.0.163: 10000, CPU_group_6834375140417d94aa5cc2a5c3d701000000: 20000, GPU_group_0_6834375140417d94aa5cc2a5c3d701000000: 6667}}, "labels":{"ray.io/accelerator-type":"L4","ray.io/node-id":"d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placement group locations": [], "node to bundles": []}
|
| 288 |
+
[state-dump] Waiting leases size: 0
|
| 289 |
+
[state-dump] Number of granted lease arguments: 8
|
| 290 |
+
[state-dump] Number of pinned lease arguments: 0
|
| 291 |
+
[state-dump] Number of total spilled leases: 0
|
| 292 |
+
[state-dump] Number of spilled waiting leases: 0
|
| 293 |
+
[state-dump] Number of spilled unschedulable leases: 0
|
| 294 |
+
[state-dump] Resource usage {
|
| 295 |
+
[state-dump] - (language=PYTHON actor_or_taskWorkerDict.__init__ pid=12223 worker_id=a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610): {GPU_group_6834375140417d94aa5cc2a5c3d701000000: 0.3333, CPU_group_6834375140417d94aa5cc2a5c3d701000000: 1, bundle_group_6834375140417d94aa5cc2a5c3d701000000: 0.001, bundle_group_0_6834375140417d94aa5cc2a5c3d701000000: 0.001, CPU_group_0_6834375140417d94aa5cc2a5c3d701000000: 1, GPU_group_0_6834375140417d94aa5cc2a5c3d701000000: 0.3333}
|
| 296 |
+
[state-dump] - (language=PYTHON actor_or_taskTaskRunner.__init__ pid=11896 worker_id=8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8): {CPU: 1}
|
| 297 |
+
[state-dump] }
|
| 298 |
+
[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
|
| 299 |
+
[state-dump]
|
| 300 |
+
[state-dump] Granted leases by scheduling class:
|
| 301 |
+
[state-dump] - {depth=2 function_descriptor={type=PythonFunctionDescriptor, module_name=verl.experimental.reward_loop.reward_loop, class_name=RewardLoopWorker, function_name=__init__, function_hash=4aa985cb1b55482cac9e9973b48c0507} scheduling_strategy=node_affinity_scheduling_strategy {
|
| 302 |
+
[state-dump] node_id: "\331\026\r\302{\002nx\177\037\201\264FZ6\3619\317\000\363/\362\202\016B\362\001 "
|
| 303 |
+
[state-dump] soft: true
|
| 304 |
+
[state-dump] }
|
| 305 |
+
[state-dump] resource_set={CPU : 1, }label_selector={}}fallback_strategy=[]: 8/8
|
| 306 |
+
[state-dump] ==================================================
|
| 307 |
+
[state-dump]
|
| 308 |
+
[state-dump] ClusterResources:
|
| 309 |
+
[state-dump] LocalObjectManager:
|
| 310 |
+
[state-dump] - num pinned objects: 0
|
| 311 |
+
[state-dump] - pinned objects size: 0
|
| 312 |
+
[state-dump] - num objects pending restore: 0
|
| 313 |
+
[state-dump] - num objects pending spill: 0
|
| 314 |
+
[state-dump] - num bytes pending spill: 0
|
| 315 |
+
[state-dump] - num bytes currently spilled: 0
|
| 316 |
+
[state-dump] - cumulative spill requests: 0
|
| 317 |
+
[state-dump] - cumulative restore requests: 0
|
| 318 |
+
[state-dump] - spilled objects pending delete: 0
|
| 319 |
+
[state-dump]
|
| 320 |
+
[state-dump] ObjectManager:
|
| 321 |
+
[state-dump] - num local objects: 0
|
| 322 |
+
[state-dump] - num unfulfilled push requests: 0
|
| 323 |
+
[state-dump] - num object pull requests: 0
|
| 324 |
+
[state-dump] - num chunks received total: 0
|
| 325 |
+
[state-dump] - num chunks received failed (all): 0
|
| 326 |
+
[state-dump] - num chunks received failed / cancelled: 0
|
| 327 |
+
[state-dump] - num chunks received failed / plasma error: 0
|
| 328 |
+
[state-dump] Event stats:
|
| 329 |
+
[state-dump] Global stats: 0 total (0 active)
|
| 330 |
+
[state-dump] Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 331 |
+
[state-dump] Execution time: mean = -nanms, total = 0.00ms
|
| 332 |
+
[state-dump] Event stats:
|
| 333 |
+
[state-dump] PushManager:
|
| 334 |
+
[state-dump] - num pushes remaining: 0
|
| 335 |
+
[state-dump] - num chunks in flight: 0
|
| 336 |
+
[state-dump] - num chunks remaining: 0
|
| 337 |
+
[state-dump] - max chunks allowed: 409
|
| 338 |
+
[state-dump] OwnershipBasedObjectDirectory:
|
| 339 |
+
[state-dump] - num listeners: 0
|
| 340 |
+
[state-dump] - cumulative location updates: 0
|
| 341 |
+
[state-dump] - num location updates per second: 0.000
|
| 342 |
+
[state-dump] - num location lookups per second: 0.000
|
| 343 |
+
[state-dump] - num locations added per second: 0.000
|
| 344 |
+
[state-dump] - num locations removed per second: 0.000
|
| 345 |
+
[state-dump] BufferPool:
|
| 346 |
+
[state-dump] - create buffer state map size: 0
|
| 347 |
+
[state-dump] PullManager:
|
| 348 |
+
[state-dump] - num bytes available for pulled objects: 8911516876
|
| 349 |
+
[state-dump] - num bytes being pulled (all): 0
|
| 350 |
+
[state-dump] - num bytes being pulled / pinned: 0
|
| 351 |
+
[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
|
| 352 |
+
[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
|
| 353 |
+
[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
|
| 354 |
+
[state-dump] - first get request bundle: N/A
|
| 355 |
+
[state-dump] - first wait request bundle: N/A
|
| 356 |
+
[state-dump] - first task request bundle: N/A
|
| 357 |
+
[state-dump] - num objects queued: 0
|
| 358 |
+
[state-dump] - num objects actively pulled (all): 0
|
| 359 |
+
[state-dump] - num objects actively pulled / pinned: 0
|
| 360 |
+
[state-dump] - num bundles being pulled: 0
|
| 361 |
+
[state-dump] - num pull retries: 0
|
| 362 |
+
[state-dump] - max timeout seconds: 0
|
| 363 |
+
[state-dump] - max timeout request is already processed. No entry.
|
| 364 |
+
[state-dump]
|
| 365 |
+
[state-dump] WorkerPool:
|
| 366 |
+
[state-dump] - registered jobs: 1
|
| 367 |
+
[state-dump] - process_failed_job_config_missing: 0
|
| 368 |
+
[state-dump] - process_failed_rate_limited: 0
|
| 369 |
+
[state-dump] - process_failed_pending_registration: 0
|
| 370 |
+
[state-dump] - process_failed_runtime_env_setup_failed: 0
|
| 371 |
+
[state-dump] - num PYTHON workers: 6
|
| 372 |
+
[state-dump] - num PYTHON drivers: 1
|
| 373 |
+
[state-dump] - num PYTHON pending start requests: 0
|
| 374 |
+
[state-dump] - num PYTHON pending registration requests: 8
|
| 375 |
+
[state-dump] - num object spill callbacks queued: 0
|
| 376 |
+
[state-dump] - num object restore queued: 0
|
| 377 |
+
[state-dump] - num util functions queued: 0
|
| 378 |
+
[state-dump] - num idle workers: 4
|
| 379 |
+
[state-dump] LeaseDependencyManager:
|
| 380 |
+
[state-dump] - lease deps map size: 0
|
| 381 |
+
[state-dump] - get req map size: 0
|
| 382 |
+
[state-dump] - wait req map size: 0
|
| 383 |
+
[state-dump] - local objects map size: 0
|
| 384 |
+
[state-dump] WaitManager:
|
| 385 |
+
[state-dump] - num active wait requests: 0
|
| 386 |
+
[state-dump] Subscriber:
|
| 387 |
+
[state-dump] Channel WORKER_OBJECT_EVICTION
|
| 388 |
+
[state-dump] - cumulative subscribe requests: 0
|
| 389 |
+
[state-dump] - cumulative unsubscribe requests: 0
|
| 390 |
+
[state-dump] - active subscribed publishers: 0
|
| 391 |
+
[state-dump] - cumulative published messages: 0
|
| 392 |
+
[state-dump] - cumulative processed messages: 0
|
| 393 |
+
[state-dump] Channel WORKER_REF_REMOVED_CHANNEL
|
| 394 |
+
[state-dump] - cumulative subscribe requests: 0
|
| 395 |
+
[state-dump] - cumulative unsubscribe requests: 0
|
| 396 |
+
[state-dump] - active subscribed publishers: 0
|
| 397 |
+
[state-dump] - cumulative published messages: 0
|
| 398 |
+
[state-dump] - cumulative processed messages: 0
|
| 399 |
+
[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL
|
| 400 |
+
[state-dump] - cumulative subscribe requests: 0
|
| 401 |
+
[state-dump] - cumulative unsubscribe requests: 0
|
| 402 |
+
[state-dump] - active subscribed publishers: 0
|
| 403 |
+
[state-dump] - cumulative published messages: 0
|
| 404 |
+
[state-dump] - cumulative processed messages: 0
|
| 405 |
+
[state-dump] num async plasma notifications: 0
|
| 406 |
+
[state-dump] Event stats:
|
| 407 |
+
[state-dump] Global stats: 4057 total (30 active)
|
| 408 |
+
[state-dump] Queueing time: mean = 33.16ms, max = 27904.78ms, min = 0.00ms, total = 134513.84ms
|
| 409 |
+
[state-dump] Execution time: mean = 9.96ms, total = 40404.99ms
|
| 410 |
+
[state-dump] Event stats:
|
| 411 |
+
[state-dump] NodeManager.CheckGC - 600 total (1 active), Execution time: mean = 0.00ms, total = 1.44ms, Queueing time: mean = 0.07ms, max = 3.47ms, min = 0.01ms, total = 42.49ms
|
| 412 |
+
[state-dump] ObjectManager.UpdateAvailableMemory - 600 total (0 active), Execution time: mean = 0.00ms, total = 2.14ms, Queueing time: mean = 0.03ms, max = 2.65ms, min = 0.00ms, total = 17.58ms
|
| 413 |
+
[state-dump] RaySyncer.OnDemandBroadcasting - 600 total (1 active), Execution time: mean = 0.01ms, total = 7.47ms, Queueing time: mean = 0.06ms, max = 3.47ms, min = 0.01ms, total = 36.78ms
|
| 414 |
+
[state-dump] NodeManagerService.grpc_server.ReportWorkerBacklog.HandleRequestImpl - 458 total (0 active), Execution time: mean = 0.07ms, total = 31.22ms, Queueing time: mean = 0.05ms, max = 3.58ms, min = 0.01ms, total = 21.26ms
|
| 415 |
+
[state-dump] NodeManagerService.grpc_server.ReportWorkerBacklog - 458 total (0 active), Execution time: mean = 0.32ms, total = 144.36ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 416 |
+
[state-dump] RayletWorkerPool.deadline_timer.kill_idle_workers - 300 total (1 active), Execution time: mean = 0.02ms, total = 4.58ms, Queueing time: mean = 0.06ms, max = 3.83ms, min = 0.02ms, total = 18.60ms
|
| 417 |
+
[state-dump] MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 240 total (1 active), Execution time: mean = 0.25ms, total = 58.95ms, Queueing time: mean = 0.07ms, max = 3.57ms, min = 0.02ms, total = 16.08ms
|
| 418 |
+
[state-dump] NodeManager.CheckForUnexpectedWorkerDisconnects - 61 total (1 active), Execution time: mean = 0.02ms, total = 1.20ms, Queueing time: mean = 0.04ms, max = 0.13ms, min = 0.01ms, total = 2.45ms
|
| 419 |
+
[state-dump] NodeManager.ScheduleAndGrantLeases - 61 total (1 active), Execution time: mean = 0.01ms, total = 0.88ms, Queueing time: mean = 0.05ms, max = 0.13ms, min = 0.03ms, total = 2.83ms
|
| 420 |
+
[state-dump] NodeManager.deadline_timer.spill_objects_when_over_threshold - 60 total (1 active), Execution time: mean = 0.00ms, total = 0.14ms, Queueing time: mean = 0.10ms, max = 0.82ms, min = 0.02ms, total = 5.83ms
|
| 421 |
+
[state-dump] NodeManager.deadline_timer.flush_free_objects - 60 total (1 active), Execution time: mean = 0.01ms, total = 0.35ms, Queueing time: mean = 0.09ms, max = 0.83ms, min = 0.02ms, total = 5.68ms
|
| 422 |
+
[state-dump] NodeManagerService.grpc_server.GetResourceLoad.HandleRequestImpl - 60 total (0 active), Execution time: mean = 0.12ms, total = 7.42ms, Queueing time: mean = 0.06ms, max = 2.00ms, min = 0.01ms, total = 3.76ms
|
| 423 |
+
[state-dump] NodeManagerService.grpc_server.GetResourceLoad - 60 total (0 active), Execution time: mean = 0.40ms, total = 24.02ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 424 |
+
[state-dump] ClientConnection.async_read.ProcessMessageHeader - 55 total (7 active), Execution time: mean = 0.01ms, total = 0.36ms, Queueing time: mean = 2385.40ms, max = 27904.78ms, min = 0.02ms, total = 131197.05ms
|
| 425 |
+
[state-dump] ClientConnection.async_read.ProcessMessage - 48 total (0 active), Execution time: mean = 0.74ms, total = 35.68ms, Queueing time: mean = 0.02ms, max = 0.62ms, min = 0.00ms, total = 1.18ms
|
| 426 |
+
[state-dump] CoreWorkerService.grpc_client.GetCoreWorkerStats.OnReplyReceived - 30 total (0 active), Execution time: mean = 0.02ms, total = 0.69ms, Queueing time: mean = 0.19ms, max = 1.14ms, min = 0.01ms, total = 5.64ms
|
| 427 |
+
[state-dump] CoreWorkerService.grpc_client.GetCoreWorkerStats - 30 total (0 active), Execution time: mean = 1.49ms, total = 44.73ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 428 |
+
[state-dump] ClusterResourceManager.ResetRemoteNodeView - 21 total (1 active), Execution time: mean = 0.01ms, total = 0.19ms, Queueing time: mean = 0.05ms, max = 0.14ms, min = 0.03ms, total = 0.98ms
|
| 429 |
+
[state-dump] PeriodicalRunner.RunFnPeriodically - 14 total (0 active), Execution time: mean = 0.25ms, total = 3.56ms, Queueing time: mean = 7.37ms, max = 21.50ms, min = 0.08ms, total = 103.24ms
|
| 430 |
+
[state-dump] ClientConnection.async_write.DoAsyncWrites - 13 total (0 active), Execution time: mean = 0.00ms, total = 0.01ms, Queueing time: mean = 0.03ms, max = 0.08ms, min = 0.01ms, total = 0.40ms
|
| 431 |
+
[state-dump] NodeManagerService.grpc_server.GetSystemConfig.HandleRequestImpl - 12 total (0 active), Execution time: mean = 0.08ms, total = 0.93ms, Queueing time: mean = 0.04ms, max = 0.16ms, min = 0.01ms, total = 0.46ms
|
| 432 |
+
[state-dump] ObjectManager.ObjectDeleted - 12 total (0 active), Execution time: mean = 0.03ms, total = 0.31ms, Queueing time: mean = 0.16ms, max = 0.56ms, min = 0.03ms, total = 1.86ms
|
| 433 |
+
[state-dump] NodeManagerService.grpc_server.RequestWorkerLease.HandleRequestImpl - 12 total (0 active), Execution time: mean = 0.43ms, total = 5.14ms, Queueing time: mean = 0.07ms, max = 0.49ms, min = 0.02ms, total = 0.81ms
|
| 434 |
+
[state-dump] NodeManager.deadline_timer.record_metrics - 12 total (1 active), Execution time: mean = 0.24ms, total = 2.83ms, Queueing time: mean = 0.24ms, max = 0.65ms, min = 0.03ms, total = 2.93ms
|
| 435 |
+
[state-dump] NodeManagerService.grpc_server.GetWorkerPIDs.HandleRequestImpl - 12 total (0 active), Execution time: mean = 0.09ms, total = 1.04ms, Queueing time: mean = 0.03ms, max = 0.05ms, min = 0.02ms, total = 0.39ms
|
| 436 |
+
[state-dump] NodeManagerService.grpc_server.GetWorkerPIDs - 12 total (0 active), Execution time: mean = 0.33ms, total = 3.92ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 437 |
+
[state-dump] ObjectManager.ObjectAdded - 12 total (0 active), Execution time: mean = 0.02ms, total = 0.19ms, Queueing time: mean = 0.14ms, max = 1.31ms, min = 0.02ms, total = 1.67ms
|
| 438 |
+
[state-dump] NodeManagerService.grpc_server.GetSystemConfig - 12 total (0 active), Execution time: mean = 0.65ms, total = 7.85ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 439 |
+
[state-dump] NodeManagerService.grpc_server.RequestWorkerLease - 12 total (8 active), Execution time: mean = 825.43ms, total = 9905.14ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 440 |
+
[state-dump] RaySyncer.BroadcastMessage - 9 total (0 active), Execution time: mean = 0.20ms, total = 1.83ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms
|
| 441 |
+
[state-dump] - 9 total (0 active), Execution time: mean = 0.00ms, total = 0.01ms, Queueing time: mean = 0.33ms, max = 2.76ms, min = 0.01ms, total = 3.00ms
|
| 442 |
+
[state-dump] event_loop_lag_probe - 8 total (0 active), Execution time: mean = 0.02ms, total = 0.12ms, Queueing time: mean = 1.73ms, max = 13.76ms, min = 0.00ms, total = 13.80ms
|
| 443 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 7 total (1 active), Execution time: mean = 4145.00ms, total = 29015.02ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 444 |
+
[state-dump] NodeManager.deadline_timer.debug_state_dump - 6 total (1 active), Execution time: mean = 0.91ms, total = 5.44ms, Queueing time: mean = 0.03ms, max = 0.05ms, min = 0.03ms, total = 0.16ms
|
| 445 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll.OnReplyReceived - 6 total (0 active), Execution time: mean = 0.16ms, total = 0.98ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.02ms, total = 0.18ms
|
| 446 |
+
[state-dump] Subscriber.HandlePublishedMessage_GCS_WORKER_DELTA_CHANNEL - 5 total (0 active), Execution time: mean = 0.01ms, total = 0.04ms, Queueing time: mean = 0.19ms, max = 0.24ms, min = 0.13ms, total = 0.97ms
|
| 447 |
+
[state-dump] CoreWorkerService.grpc_client.Exit.OnReplyReceived - 5 total (0 active), Execution time: mean = 0.06ms, total = 0.32ms, Queueing time: mean = 0.04ms, max = 0.06ms, min = 0.02ms, total = 0.18ms
|
| 448 |
+
[state-dump] ray::rpc::WorkerInfoGcsService.grpc_client.ReportWorkerFailure - 5 total (0 active), Execution time: mean = 1.56ms, total = 7.81ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 449 |
+
[state-dump] ray::rpc::WorkerInfoGcsService.grpc_client.ReportWorkerFailure.OnReplyReceived - 5 total (0 active), Execution time: mean = 0.02ms, total = 0.11ms, Queueing time: mean = 0.23ms, max = 0.46ms, min = 0.01ms, total = 1.13ms
|
| 450 |
+
[state-dump] CoreWorkerService.grpc_client.Exit - 5 total (0 active), Execution time: mean = 1.01ms, total = 5.04ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 451 |
+
[state-dump] ReporterService.grpc_client.HealthCheck - 4 total (0 active), Execution time: mean = 0.71ms, total = 2.85ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 452 |
+
[state-dump] ReporterService.grpc_client.HealthCheck.OnReplyReceived - 4 total (0 active), Execution time: mean = 0.12ms, total = 0.50ms, Queueing time: mean = 0.36ms, max = 1.22ms, min = 0.02ms, total = 1.43ms
|
| 453 |
+
[state-dump] NodeManagerService.grpc_server.GetNodeStats.HandleRequestImpl - 4 total (0 active), Execution time: mean = 1.51ms, total = 6.05ms, Queueing time: mean = 0.03ms, max = 0.04ms, min = 0.03ms, total = 0.12ms
|
| 454 |
+
[state-dump] NodeManagerService.grpc_server.GetNodeStats - 4 total (0 active), Execution time: mean = 3.02ms, total = 12.08ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 455 |
+
[state-dump] MetricsAgentClient.WaitForServerReadyWithRetry - 3 total (0 active), Execution time: mean = 0.19ms, total = 0.58ms, Queueing time: mean = 1000.04ms, max = 1000.06ms, min = 1000.02ms, total = 3000.13ms
|
| 456 |
+
[state-dump] NodeManagerService.grpc_server.ReturnWorkerLease - 2 total (0 active), Execution time: mean = 0.35ms, total = 0.69ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 457 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 2 total (0 active), Execution time: mean = 1.59ms, total = 3.18ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 458 |
+
[state-dump] ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 2 total (0 active), Execution time: mean = 0.28ms, total = 0.57ms, Queueing time: mean = 0.96ms, max = 1.83ms, min = 0.08ms, total = 1.91ms
|
| 459 |
+
[state-dump] NodeManagerService.grpc_server.ReturnWorkerLease.HandleRequestImpl - 2 total (0 active), Execution time: mean = 0.13ms, total = 0.27ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.02ms, total = 0.05ms
|
| 460 |
+
[state-dump] RaySyncerRegister - 2 total (0 active), Execution time: mean = 0.00ms, total = 0.01ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms
|
| 461 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness - 1 total (0 active), Execution time: mean = 1.28ms, total = 1.28ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 462 |
+
[state-dump] NodeManager.GcsCheckAlive - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 463 |
+
[state-dump] ray::rpc::JobInfoGcsService.grpc_client.GetAllJobInfo.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.16ms, max = 0.16ms, min = 0.16ms, total = 0.16ms
|
| 464 |
+
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 1032.07ms, total = 1032.07ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
|
| 465 |
+
[state-dump] ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.38ms, total = 1.38ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 466 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 3.05ms, total = 3.05ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 467 |
+
[state-dump] WorkerPool.PopWorkerCallback - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.01ms, max = 0.01ms, min = 0.01ms, total = 0.01ms
|
| 468 |
+
[state-dump] ray::rpc::JobInfoGcsService.grpc_client.AddJob.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.07ms, total = 0.07ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms
|
| 469 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.91ms, total = 0.91ms, Queueing time: mean = 0.09ms, max = 0.09ms, min = 0.09ms, total = 0.09ms
|
| 470 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.CheckAlive.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.04ms, total = 0.04ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
|
| 471 |
+
[state-dump] NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 472 |
+
[state-dump] ray::rpc::JobInfoGcsService.grpc_client.GetAllJobInfo - 1 total (0 active), Execution time: mean = 1.01ms, total = 1.01ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 473 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.86ms, total = 0.86ms, Queueing time: mean = 0.33ms, max = 0.33ms, min = 0.33ms, total = 0.33ms
|
| 474 |
+
[state-dump] ray::rpc::NodeInfoGcsService.grpc_client.CheckAlive - 1 total (0 active), Execution time: mean = 1.16ms, total = 1.16ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 475 |
+
[state-dump] NodeManagerService.grpc_server.CommitBundleResources - 1 total (0 active), Execution time: mean = 0.48ms, total = 0.48ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 476 |
+
[state-dump] NodeManagerService.grpc_server.PrepareBundleResources.HandleRequestImpl - 1 total (0 active), Execution time: mean = 0.26ms, total = 0.26ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms
|
| 477 |
+
[state-dump] ray::rpc::JobInfoGcsService.grpc_client.AddJob - 1 total (0 active), Execution time: mean = 1.09ms, total = 1.09ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 478 |
+
[state-dump] Subscriber.HandlePublishedMessage_GCS_JOB_CHANNEL - 1 total (0 active), Execution time: mean = 0.26ms, total = 0.26ms, Queueing time: mean = 0.10ms, max = 0.10ms, min = 0.10ms, total = 0.10ms
|
| 479 |
+
[state-dump] NodeManager.deadline_timer.print_event_loop_stats - 1 total (1 active, 1 running), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 480 |
+
[state-dump] NodeManagerService.grpc_server.CommitBundleResources.HandleRequestImpl - 1 total (0 active), Execution time: mean = 0.26ms, total = 0.26ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms
|
| 481 |
+
[state-dump] NodeManagerService.grpc_server.PrepareBundleResources - 1 total (0 active), Execution time: mean = 0.51ms, total = 0.51ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
|
| 482 |
+
[state-dump] DebugString() time ms: 1
|
| 483 |
+
[state-dump]
|
| 484 |
+
[state-dump]
|
| 485 |
+
[2026-02-27 00:31:41,791 I 11302 11302] (raylet) runtime_env_agent_client.cc:393: Create runtime env for job 01000000
|
| 486 |
+
[2026-02-27 00:31:41,795 I 11302 11302] (raylet) worker_pool.cc:531: Started worker process with pid 13081, the token is 19
|
| 487 |
+
[2026-02-27 00:32:12,445 I 11302 11302] (raylet) node_manager.cc:1444: Disconnecting driver, graceful=true, disconnect_type=3 worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff job_id=01000000
|
| 488 |
+
[2026-02-27 00:32:12,446 I 11302 11302] (raylet) node_manager.cc:1573: Driver (pid=10593) is disconnected. worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff job_id=01000000
|
| 489 |
+
[2026-02-27 00:32:12,448 I 11302 11302] (raylet) node_manager.cc:1007: Killing leased worker because its owner died. worker_id=8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8 owner_worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff
[2026-02-27 00:32:12,449 I 11302 11302] (raylet) worker_pool.cc:740: Job 01000000 already started in worker pool.
[2026-02-27 00:32:12,449 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=9224fcd6abcfd04deeca6990e3ac522c58f6eec637ba09c0e927aaef job_id=01000000
[2026-02-27 00:32:12,449 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5 job_id=01000000
[2026-02-27 00:32:12,449 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=e98598eeddae739fb0211beef22a201ab9028016b2b64fe185d8c813 job_id=01000000
[2026-02-27 00:32:12,449 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=af6e4d2eae80c226c783dd6717832e015ec8fc0144d801649c12abfe job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=33b9d0a21a51ca22dda2aa2142cb264d1ee4f9d53a55dc567b49496c job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8 job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=15c410d5d6a75625cb50c80927d18090e899b8edc49402fe08e50ee6 job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940 job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=f46103e29121f0b748164b47d1653310da7f304c2c8c8df73871f0e5 job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610 job_id=01000000
[2026-02-27 00:32:12,450 I 11302 11302] (raylet) node_manager.cc:564: Killing leased worker because its job finished. worker_id=772052e39bd349442253d65d82fc94825e9e58c75098b1b473bedce2 job_id=01000000
[2026-02-27 00:32:12,451 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=false, disconnect_type=1, has_creation_task_exception=false worker_id=a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610 job_id=01000000
[2026-02-27 00:32:12,451 I 11302 11302] (raylet) node_manager.cc:1448: Got disconnect message from an unregistered client, ignoring.
[2026-02-27 00:32:12,462 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=772052e39bd349442253d65d82fc94825e9e58c75098b1b473bedce2 job_id=01000000
[2026-02-27 00:32:12,467 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=f46103e29121f0b748164b47d1653310da7f304c2c8c8df73871f0e5 job_id=01000000
[2026-02-27 00:32:12,474 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=e98598eeddae739fb0211beef22a201ab9028016b2b64fe185d8c813 job_id=01000000
[2026-02-27 00:32:12,478 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
[2026-02-27 00:32:12,479 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=33b9d0a21a51ca22dda2aa2142cb264d1ee4f9d53a55dc567b49496c job_id=01000000
[2026-02-27 00:32:12,485 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=1, has_creation_task_exception=false worker_id=8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8 job_id=01000000
[2026-02-27 00:32:12,488 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=15c410d5d6a75625cb50c80927d18090e899b8edc49402fe08e50ee6 job_id=01000000
[2026-02-27 00:32:12,490 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=af6e4d2eae80c226c783dd6717832e015ec8fc0144d801649c12abfe job_id=01000000
[2026-02-27 00:32:12,498 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940 job_id=01000000
[2026-02-27 00:32:12,501 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=9224fcd6abcfd04deeca6990e3ac522c58f6eec637ba09c0e927aaef job_id=01000000
[2026-02-27 00:32:12,506 I 11302 11302] (raylet) node_manager.cc:1437: Disconnecting worker, graceful=true, disconnect_type=3, has_creation_task_exception=false worker_id=b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5 job_id=01000000
[2026-02-27 00:32:12,508 I 11302 11302] (raylet) main.cc:1033: received SIGTERM. Existing local drain request = None
[2026-02-27 00:32:12,508 I 11302 11302] (raylet) main.cc:415: Raylet graceful shutdown triggered with death info: reason: EXPECTED_TERMINATION
reason_message: "received SIGTERM"
[2026-02-27 00:32:12,508 I 11302 11302] (raylet) accessor.cc:186: Unregistering node node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
[2026-02-27 00:32:12,512 I 11302 11302] (raylet) accessor.cc:194: Finished unregistering node info, status = OK node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
[2026-02-27 00:32:12,513 W 11302 11317] (raylet) store.cc:365: Disconnecting client due to connection error with code 2: End of file
[2026-02-27 00:32:12,516 I 11302 11302] (raylet) agent_manager.cc:116: Killing agent dashboard_agent, pid 11357.
[2026-02-27 00:32:12,544 I 11302 11358] (raylet) agent_manager.cc:83: Agent process with name dashboard_agent exited, exit code 0.
[2026-02-27 00:32:12,545 I 11302 11302] (raylet) agent_manager.cc:116: Killing agent runtime_env_agent, pid 11359.
[2026-02-27 00:32:12,560 I 11302 11360] (raylet) agent_manager.cc:83: Agent process with name runtime_env_agent exited, exit code 0.
[2026-02-27 00:32:12,566 I 11302 11302] (raylet) stats.h:149: Stats module has shutdown.
Processing events...
Generated:
No reports were generated
Collecting data...
Generating '/tmp/nsys-report-8870.qdstrm'
Generating '/tmp/nsys-report-6f58.qdstrm'
Generating '/tmp/nsys-report-f42d.qdstrm'
Generating '/tmp/nsys-report-4fd8.qdstrm'
Generating '/tmp/nsys-report-1fa1.qdstrm'
Generating '/tmp/nsys-report-f960.qdstrm'
Generating '/tmp/nsys-report-9015.qdstrm'
Generating '/tmp/nsys-report-6c6d.qdstrm'
Generating '/tmp/nsys-report-2e2d.qdstrm'
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12507.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12567.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12500.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12563.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12584.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12477.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12481.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_12586.nsys-rep
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_11896.nsys-rep
Collecting data...
Generating '/tmp/nsys-report-dbf2.qdstrm'
Generated:
/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs/nsight/worker_process_13106.nsys-rep
worker-15c410d5d6a75625cb50c80927d18090e899b8edc49402fe08e50ee6-01000000-12567.out
ADDED
@@ -0,0 +1,2 @@
:job_id:01000000
:actor_name:RewardLoopWorker
worker-2cda7ffb1fdfeaaf98e6be62760ae2627c565d43cb10409f83c0a748-ffffffff-11372.out
ADDED
File without changes
worker-33b9d0a21a51ca22dda2aa2142cb264d1ee4f9d53a55dc567b49496c-01000000-12500.out
ADDED
@@ -0,0 +1,2 @@
:job_id:01000000
:actor_name:RewardLoopWorker
worker-389d4ca43c5eadc5290ba2907f911210cffe11839a5cfe9496d636c1-01000000-12110.out
ADDED
@@ -0,0 +1,3 @@
:job_id:01000000
:task_name:bundle_reservation_check_func
:task_name:get_master_addr_port
worker-5ad3b871f9c47a0419d1c26aa73c88d3ae2d40ede3aeceeef3079ef2-ffffffff-11379.out
ADDED
File without changes
worker-a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940-01000000-13106.out
ADDED
@@ -0,0 +1,52 @@
:job_id:01000000
:actor_name:vLLMHttpServer
['serve',
 'Qwen/Qwen2.5-0.5B-Instruct',
 '--dtype',
 'bfloat16',
 '--load_format',
 'dummy',
 '--distributed_executor_backend',
 'mp',
 '--worker_extension_cls',
 'verl.workers.rollout.vllm_rollout.utils.vLLMColocateWorkerExtension',
 '--max_model_len',
 '32768',
 '--max_num_seqs',
 '1024',
 '--enable_chunked_prefill',
 '--max_num_batched_tokens',
 '8192',
 '--enable_prefix_caching',
 '--enable_sleep_mode',
 '--logprobs_mode',
 'processed_logprobs',
 '--gpu_memory_utilization',
 '0.4',
 '--disable_log_stats',
 '--tensor_parallel_size',
 '1',
 '--seed',
 '0',
 '--override_generation_config',
 '{"temperature": 1.0, "top_k": -1, "top_p": 1, "repetition_penalty": 1.0, '
 '"max_new_tokens": 512}',
 '--hf_overrides',
 '{}',
 '--scheduling_policy',
 'fcfs',
 '--compilation_config',
 '{"cudagraph_mode": "FULL_AND_PIECEWISE"}']
WARNING 02-27 00:32:03 [system_utils.py:136] We must use the `spawn` multiprocessing start method. Overriding VLLM_WORKER_MULTIPROC_METHOD to 'spawn'. See https://docs.vllm.ai/en/latest/usage/troubleshooting.html#python-multiprocessing for more information. Reasons: In a Ray actor and can only be spawned; CUDA is initialized
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750] WorkerProc failed to start.
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750] Traceback (most recent call last):
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 725, in worker_main
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]     ready_writer.send(
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]   File "/usr/lib/python3.12/multiprocessing/connection.py", line 206, in send
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]     self._send_bytes(_ForkingPickler.dumps(obj))
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]   File "/usr/lib/python3.12/multiprocessing/connection.py", line 427, in _send_bytes
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]     self._send(header + buf)
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]   File "/usr/lib/python3.12/multiprocessing/connection.py", line 384, in _send
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]     n = write(self._handle, buf)
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750]     ^^^^^^^^^^^^^^^^^^^^^^^^
(Worker pid=13325) ERROR 02-27 00:32:38 [multiproc_executor.py:750] BrokenPipeError: [Errno 32] Broken pipe
worker-af6e4d2eae80c226c783dd6717832e015ec8fc0144d801649c12abfe-01000000-12563.out
ADDED
@@ -0,0 +1,2 @@
:job_id:01000000
:actor_name:RewardLoopWorker
worker-bb0f50c5405699ae07f957ec3f7c03f2bdf40be03f6e39b39232dc16-ffffffff-11378.out
ADDED
File without changes
worker-d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf-ffffffff-11373.out
ADDED
File without changes
worker-fc14e0d4e4b6acb4ecead813c2d960587eefa7859aac6d8e19aeec98-ffffffff-11374.err
ADDED
File without changes