
Log Summary

XPK Start: Tue Apr 21 02:46:45 UTC 2026
2026-04-21 02:46:49.768418: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1776739609.781129      10 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1776739609.784813      10 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1776739609.795952      10 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1776739609.795968      10 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1776739609.795971      10 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1776739609.795973      10 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2026-04-21 02:47:08.843522: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0421 02:47:09.358703 136685102896960 max_utils.py:273] Attempting to initialize the jax distributed system...
INFO:2026-04-21 02:47:18,400:jax._src.distributed:157: Connecting to JAX distributed service on mt-10-shardy-true-kfsfk-slice-job-0-0.mt-10-shardy-true-kfsfk:8482
I0421 02:47:18.400447 136685102896960 distributed.py:157] Connecting to JAX distributed service on mt-10-shardy-true-kfsfk-slice-job-0-0.mt-10-shardy-true-kfsfk:8482
I0421 02:47:18.993428 136685102896960 max_utils.py:284] Jax distributed system initialized!
I0421 02:47:24.716399 136685102896960 max_utils.py:800] System Information: Jax Version: 0.8.1
I0421 02:47:24.716511 136685102896960 max_utils.py:801] System Information: Jaxlib Version: 0.8.1
I0421 02:47:24.716553 136685102896960 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Nov 12 2025 14:16:36 (1762985796) cl/831091709
I0421 02:47:24.716589 136685102896960 train_utils.py:347] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 727, in <module>
    app.run(main)
  File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 316, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 261, in _run_main
    sys.exit(main(argv))
             ^^^^^^^^^^
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 723, in main
    train_func()
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 713, in train_func
    run(config, recorder, diagnostic_config)
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 692, in run
    train_loop(config, recorder)
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 517, in train_loop
    ) = train_utils.setup_train_loop(config, recorder)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/deps/src/maxtext/utils/train_utils.py", line 217, in setup_train_loop
    raise NotImplementedError("Pure NNX support has not been implemented yet.")
NotImplementedError: Pure NNX support has not been implemented yet.
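The traceback frames show the absl entry-point pattern: `app.run(main)` ends in `sys.exit(main(argv))`, and the `NotImplementedError` raised in `setup_train_loop` propagates uncaught, so the interpreter exits with status 1, which the launcher records as `EXIT_CODE=1` below. A simplified stdlib-only stand-in for that flow (not absl's actual implementation, and `main` here is a hypothetical placeholder for maxtext's real entry point):

```python
import sys

def main(argv):
    # Hypothetical stand-in for maxtext's train.py main():
    # the run fails during train-loop setup, as in the log above.
    del argv  # absl passes remaining command-line args; unused here
    raise NotImplementedError("Pure NNX support has not been implemented yet.")

def run(main_func):
    # Simplified stand-in for absl.app.run: call main and exit with its
    # return value. An exception raised inside main propagates uncaught,
    # so the process terminates with a non-zero status (EXIT_CODE=1).
    sys.exit(main_func(sys.argv))
```

Because the exception is never caught, the stack trace printed to stderr walks from the runpy/absl frames down to the `raise` site, exactly as seen in the log.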
XPK End: Tue Apr 21 02:47:31 UTC 2026
EXIT_CODE=1