XPK Start: Sun Apr 19 11:42:39 UTC 2026
2026-04-19 11:42:43.653820: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1776598963.666623 11 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1776598963.670314 11 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1776598963.681527 11 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1776598963.681544 11 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1776598963.681547 11 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1776598963.681549 11 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2026-04-19 11:43:18.509404: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0419 11:43:19.027064 134408679552832 max_utils.py:273] Attempting to initialize the jax distributed system...
INFO:2026-04-19 11:43:28,067:jax._src.distributed:157: Connecting to JAX distributed service on mt-11-optimizer-offload-vpmx3-slice-job-0-0.mt-11-optimizer-offload-vpmx3:8482
I0419 11:43:28.067309 134408679552832 distributed.py:157] Connecting to JAX distributed service on mt-11-optimizer-offload-vpmx3-slice-job-0-0.mt-11-optimizer-offload-vpmx3:8482
I0419 11:43:59.020709 134408679552832 max_utils.py:284] Jax distributed system initialized!
I0419 11:44:06.388625 134408679552832 max_utils.py:800] System Information: Jax Version: 0.8.1
I0419 11:44:06.388741 134408679552832 max_utils.py:801] System Information: Jaxlib Version: 0.8.1
I0419 11:44:06.388783 134408679552832 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Nov 12 2025 14:16:36 (1762985796) cl/831091709
I0419 11:44:06.388816 134408679552832 train_utils.py:347] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 727, in <module>
    app.run(main)
  File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 316, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 261, in _run_main
    sys.exit(main(argv))
             ^^^^^^^^^^
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 723, in main
    train_func()
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 713, in train_func
    run(config, recorder, diagnostic_config)
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 692, in run
    train_loop(config, recorder)
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 517, in train_loop
    ) = train_utils.setup_train_loop(config, recorder)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/deps/src/maxtext/utils/train_utils.py", line 217, in setup_train_loop
    raise NotImplementedError("Pure NNX support has not been implemented yet.")
NotImplementedError: Pure NNX support has not been implemented yet.
XPK End: Sun Apr 19 11:44:15 UTC 2026
EXIT_CODE=1