Log Summary

2026-04-16 02:05:09.524892: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0416 02:05:09.641574 130005558180992 max_utils.py:238] Skipping jax distributed system due to skip_jax_distributed_system=True flag.
I0416 02:05:40.201155 130005558180992 max_utils.py:800] System Information: Jax Version: 0.9.2
I0416 02:05:40.201275 130005558180992 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0416 02:05:40.201312 130005558180992 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Apr 6 2026 20:48:10 (1775533690) cl/895581894
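The version and backend lines above come from standard JAX introspection; a minimal sketch of reproducing them on this host (the comments assume the 8-device TPU v6e setup shown in the log):

```python
# Minimal sketch: querying the same runtime facts MaxText logs above.
import jax
import jaxlib

print(jax.__version__)         # Jax Version, e.g. 0.9.2
print(jaxlib.__version__)      # Jaxlib Version
print(jax.default_backend())   # 'tpu' when the PJRT TPU plugin is loaded
print(jax.device_count())      # 8 devices on this host
```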
I0416 02:05:40.201340 130005558180992 train_utils.py:364] WARNING: 'dataset_path' might be pointing your local file system
I0416 02:05:40.201427 130005558180992 train.py:812] [DECOUPLED NO-OP] skipping cloud diagnostics wrapper.
W0416 02:05:40.297706 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 02:05:40.717425 130005558180992 maxtext_utils.py:1687] Num_devices: 8, shape (1, 1, 1, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1)
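The 13-axis shape above is MaxText's device mesh with all 8 devices on a single axis. A minimal sketch of building a mesh like this with plain JAX; axis names other than 'fsdp' (confirmed by the "fsdp: 8" line later in the log) are illustrative assumptions, and the trivial size-1 axes are collapsed for brevity:

```python
# Minimal sketch, not MaxText's actual mesh-creation code: a multi-axis
# mesh where only the 'fsdp' axis spans the 8 devices.
import jax
from jax.experimental import mesh_utils
from jax.sharding import Mesh

mesh_shape = (1, 1, 8, 1)                        # size-1 axes collapsed
axis_names = ("data", "stage", "fsdp", "tensor")  # hypothetical subset
devices = mesh_utils.create_device_mesh(mesh_shape)
mesh = Mesh(devices, axis_names)
print(dict(mesh.shape))  # {'data': 1, 'stage': 1, 'fsdp': 8, 'tensor': 1}
```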
I0416 02:05:40.815174 130005558180992 checkpointing.py:688] Setting up checkpoint logger...
I0416 02:05:40.815280 130005558180992 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0416 02:05:40.815321 130005558180992 pytree_checkpoint_handler.py:577] save_device_host_concurrent_bytes=None
I0416 02:05:40.815510 130005558180992 base_pytree_checkpoint_handler.py:411] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x763cb0ffcf20>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0416 02:05:43.565864 130005558180992 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0416 02:05:43.566299 130005558180992 checkpoint_manager.py:702] [process=0][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7636337ab0b0>}, handler_registry=None
I0416 02:05:43.566823 130005558180992 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7636337ab0b0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0416 02:05:43.566869 130005558180992 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7636337d04a0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0416 02:05:43.566902 130005558180992 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7636337ab0b0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7636337ab0b0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7636337d04a0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7636337d04a0>}).
I0416 02:05:43.567510 130005558180992 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.28
I0416 02:05:43.567586 130005558180992 async_checkpointer.py:177] [process=0][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>.<lambda> at 0x76363478d940> timeout: 600 secs and primary_host=0 for async checkpoint writes
I0416 02:05:43.695189 130005558180992 checkpoint_manager.py:1788] Found 0 checkpoint steps in gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints
I0416 02:05:43.695460 130005558180992 checkpoint_manager.py:921] [process=0][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_hns=False, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False), root_directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x76363370e810>
I0416 02:05:43.695566 130005558180992 checkpointing.py:302] Checkpoint manager created!
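The setup logged above (async writes, FixedIntervalPolicy(interval=10), OCDBT + Zarr3) can be approximated with the public orbax.checkpoint API. A minimal sketch, not MaxText's checkpointing.py; the bucket path and train-state pytree are placeholders, and ocdbt/zarr3 are handler-level flags rather than manager options:

```python
# Minimal sketch using the public orbax.checkpoint API.
import orbax.checkpoint as ocp

options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,           # matches FixedIntervalPolicy(interval=10)
    enable_async_checkpointing=True,  # writes continue on a background thread
)
mngr = ocp.CheckpointManager("gs://my-bucket/run/checkpoints", options=options)

# Inside the training loop (state is the train-state pytree):
#   if mngr.should_save(step):
#       mngr.save(step, args=ocp.args.StandardSave(state))
# mngr.wait_until_finished()  # block until the background save completes
```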
I0416 02:05:43.777825 130005558180992 dataset_info.py:707] Load dataset info from tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0
I0416 02:05:43.781096 130005558180992 reader.py:262] Creating a tf.data.Dataset reading 8 files located in folders: tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0.
I0416 02:05:43.837271 130005558180992 logging_logger.py:49] Constructing tf.data.Dataset __local_c4_builder for split train, from tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0
I0416 02:05:43.870148 130005558180992 tokenizer.py:245] Tokenizer path: src/maxtext/assets/tokenizers/tokenizer.llama2
I0416 02:05:43.870216 130005558180992 tokenizer.py:187] Loading sentencepiece tokenizer: src/maxtext/assets/tokenizers/tokenizer.llama2
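Loading the tokenizer above needs nothing MaxText-specific; a minimal sketch with the plain sentencepiece package, bypassing MaxText's tokenizer.py wrapper (the encode/decode round trip is illustrative):

```python
# Minimal sketch: loading the SentencePiece model referenced in the log.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(
    model_file="src/maxtext/assets/tokenizers/tokenizer.llama2")
ids = sp.encode("The quick brown fox", out_type=int)
print(ids)             # Llama-2 vocabulary ids
print(sp.decode(ids))  # round-trips back to the input text
```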
I0416 02:05:44.929231 130005558180992 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0416 02:05:44.929349 130005558180992 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
[DECOUPLED NO-OP] gcs_storage: using stubs.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] workload_monitor: using stub.
[DECOUPLED NO-OP] vertex_tensorboard: using stub.
fsdp: 8

I0416 02:05:47.137206 130005558180992 nnx_decoders.py:465] nnx_decoders/carry Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 02:05:47.137305 130005558180992 nnx_decoders.py:465] nnx_decoders/carry Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:05:47.142355 130005558180992 nnx_decoders.py:465] Unknown Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 02:05:47.142401 130005558180992 nnx_decoders.py:465] Unknown Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:05:47.157368 130005558180992 attentions.py:1088] attentions/inputs_q Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 02:05:47.157418 130005558180992 attentions.py:1088] attentions/inputs_q Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:05:47.171796 130005558180992 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 02:05:47.171850 130005558180992 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:05:47.193572 130005558180992 attentions.py:1154] attentions/query Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 02:05:47.193630 130005558180992 attentions.py:1154] attentions/query Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:05:47.208197 130005558180992 attentions.py:1155] attentions/key Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 02:05:47.208251 130005558180992 attentions.py:1155] attentions/key Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:05:47.222656 130005558180992 attentions.py:1156] attentions/value Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 02:05:47.222704 130005558180992 attentions.py:1156] attentions/value Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:05:47.250278 130005558180992 attentions.py:1197] attentions/out Logical: bfloat16[8,2048,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0416 02:05:47.250343 130005558180992 attentions.py:1197] attentions/out Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:05:47.268361 130005558180992 linears.py:525] linears/x Logical: bfloat16[8,2048,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0416 02:05:47.268414 130005558180992 linears.py:525] linears/x Physical: bfloat16[8,2048,7168]....................................... ('fsdp', None, None).
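Each pair of lines above maps a logical annotation to a physical PartitionSpec: only the batch dimension lands on 'fsdp'; everything else is replicated. A minimal sketch of applying the resulting spec with plain JAX, reusing the hypothetical mesh from the earlier sketch:

```python
# Minimal sketch: the physical spec ('fsdp', None, None) from the log,
# applied with plain JAX sharding APIs. 'mesh' is the illustrative mesh
# built in the earlier mesh sketch.
import jax
import jax.numpy as jnp
from jax.sharding import NamedSharding, PartitionSpec as P

spec = P("fsdp", None, None)             # batch on 'fsdp', rest replicated
sharding = NamedSharding(mesh, spec)
x = jnp.zeros((8, 2048, 2048), dtype=jnp.bfloat16)
x = jax.device_put(x, sharding)          # one batch row per device
```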
I0416 02:05:54.808367 130005558180992 max_utils.py:791] Total memory size: 3.6 GB, Output size: 1.5 GB, Temp size: 2.0 GB, Argument size: 1.5 GB, Host temp size: 0.0 GB.
I0416 02:05:54.809190 130005558180992 max_utils.py:194] tensorboardX not available; using no-op SummaryWriter.
I0416 02:05:54.810968 130005558180992 metric_logger.py:289] number parameters: 1.104 billion
I0416 02:06:04.859095 130005558180992 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0416 02:06:04.989519 130005558180992 checkpointing.py:798] Waited 0.13040542602539062 seconds for step 0 to finish before starting checkpointing.
I0416 02:06:04.990058 130005558180992 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 02:06:04.990232 130005558180992 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 0
I0416 02:06:04.990680 130005558180992 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0.
I0416 02:06:05.114274 130005558180992 signaling_client.py:373] Using ThreadSafeKeyValueSignalingClient
I0416 02:06:05.222167 129890680571456 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0
I0416 02:06:05.227524 130005558180992 jax_array_handlers.py:347] Scheduling D2H of 69 prioritized jax.Array.
I0416 02:06:05.227628 130005558180992 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 02:06:05.933901 129890670085696 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/items
W0416 02:06:06.022510 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.043567 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.057151 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.065374 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.070913 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.076004 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.080970 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.085867 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 02:06:06.102936 129890598782528 checkpoint.py:188] Wrote Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776305165848019363, 'commit_timestamp_nsecs': None, 'custom_metadata': {}}, json={"item_handlers": null, "metrics": {}, "performance_metrics": {}, "init_timestamp_nsecs": 1776305165848019363, "commit_timestamp_nsecs": null, "custom_metadata": {}} to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/_CHECKPOINT_METADATA
I0416 02:06:06.510802 3525643 google_auth_provider.cc:149] Using credentials at ~/.config/gcloud/application_default_credentials.json
I0416 02:06:06.510861 3525643 google_auth_provider.cc:156] Using OAuth2 AuthProvider
I0416 02:06:06.620682 130005558180992 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 1.394369s
I0416 02:06:06.628064 130005558180992 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 8.160 GiB/s (total gbytes: 12.3 GiB) (time elapsed: a second) (per-host)
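The 12.3 GiB payload is consistent with a full-precision train state for the 1.104 billion parameters reported earlier. A back-of-envelope check, assuming fp32 weights plus two fp32 Adam moments (an assumption; the log does not state the checkpoint layout):

```python
# Back-of-envelope check (assumes fp32 params + two fp32 Adam moments).
params = 1.104e9
total_bytes = params * 4 * 3     # weights + Adam m + Adam v, 4 bytes each
print(total_bytes / 2**30)       # ~12.34 GiB, matching the logged 12.3 GiB
```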
I0416 02:06:06.628241 130005558180992 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 1.512522s (batch_requests_ready=0.101332s, total_serialization_initiated=1.403857s, others=0.007332s)
I0416 02:06:06.628368 130005558180992 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.513203s (all_items=0.000027s, per_item={'items': '0.00002694'}, temp_paths=1.513176)
I0416 02:06:06.629967 129890525382208 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 02:06:06.630072 130005558180992 async_checkpointer.py:561] Finished blocking save. Time taken: 1.639797s. Continuing background save to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0.
I0416 02:06:06.630277 130005558180992 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 02:06:06.630455 129890691057216 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 02:06:06.630554 130005558180992 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776305164.9900308, 'wait_for_prev_duration_secs': 5.793571472167969e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776305164.9902544, 'checkpointer_blocking_duration_secs': 1.6399345397949219, 'get_old_steps_start_time': 1776305166.6302104, 'get_old_steps_duration_secs': 2.9802322387695312e-05, 'checkpoint_manager_blocking_start_time': 1776305164.9899323, 'checkpoint_manager_blocking_duration_secs': 1.6405937671661377}
I0416 02:06:06.630682 130005558180992 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0416 02:06:06.630738 130005558180992 max_utils.py:750] 
Memstats: After params initialized:
I0416 02:06:06.630797 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_0(process=0,(0,0,0,0))
I0416 02:06:06.630824 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_1(process=0,(1,0,0,0))
I0416 02:06:06.630846 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_2(process=0,(0,1,0,0))
I0416 02:06:06.630867 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_3(process=0,(1,1,0,0))
I0416 02:06:06.630887 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_4(process=0,(0,2,0,0))
I0416 02:06:06.630908 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_5(process=0,(1,2,0,0))
I0416 02:06:06.630927 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_6(process=0,(0,3,0,0))
I0416 02:06:06.630944 130005558180992 max_utils.py:756] 	Using (GB) 1.59 / 31.25 (5.088000%) on TPU_7(process=0,(1,3,0,0))
W0416 02:06:06.638298 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.644247 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:06:06.649144 3522579 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 02:06:07.016780 130005558180992 metric_logger.py:185] completed step: 0, seconds: 10.047, TFLOP/s/device: 1.352, Tokens/s/device: 203.849, total_weights: 13328, loss: 10.880
I0416 02:06:07.018973 130005558180992 metric_logger.py:269] To see full metrics 'tensorboard --logdir=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/tensorboard/'
I0416 02:06:07.129155 130005558180992 metric_logger.py:185] completed step: 1, seconds: 2.153, TFLOP/s/device: 6.311, Tokens/s/device: 951.217, total_weights: 12332, loss: 10.862
I0416 02:06:07.246510 130005558180992 metric_logger.py:185] completed step: 2, seconds: 0.024, TFLOP/s/device: 577.389, Tokens/s/device: 87030.427, total_weights: 15161, loss: 9.926
I0416 02:06:07.363709 130005558180992 metric_logger.py:185] completed step: 3, seconds: 0.104, TFLOP/s/device: 130.638, Tokens/s/device: 19691.172, total_weights: 13327, loss: 9.396
I0416 02:06:07.586547 129890556839488 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 69 array_metadata.ArrayMetadata to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/items/array_metadatas/process_0
I0416 02:06:27.068216 130005558180992 metric_logger.py:185] completed step: 4, seconds: 0.118, TFLOP/s/device: 115.510, Tokens/s/device: 17410.969, total_weights: 11939, loss: 9.005
I0416 02:06:27.081703 130005558180992 metric_logger.py:185] completed step: 5, seconds: 0.117, TFLOP/s/device: 115.906, Tokens/s/device: 17470.676, total_weights: 15502, loss: 8.861
I0416 02:06:27.196373 130005558180992 metric_logger.py:185] completed step: 6, seconds: 19.705, TFLOP/s/device: 0.690, Tokens/s/device: 103.932, total_weights: 13864, loss: 8.738
I0416 02:06:27.313680 130005558180992 metric_logger.py:185] completed step: 7, seconds: 0.009, TFLOP/s/device: 1597.358, Tokens/s/device: 240771.220, total_weights: 12988, loss: 8.651
I0416 02:06:27.430981 130005558180992 metric_logger.py:185] completed step: 8, seconds: 0.116, TFLOP/s/device: 117.269, Tokens/s/device: 17676.048, total_weights: 13820, loss: 8.671
I0416 02:06:27.548914 130005558180992 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0416 02:06:27.550137 130005558180992 checkpointing.py:798] Waited 0.0012373924255371094 seconds for step 9 to finish before starting checkpointing.
I0416 02:06:27.550402 130005558180992 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 02:06:31.757572 129890546353728 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 25.128668s (commit=24.682851s, array_metadata_write=0.445817s)
I0416 02:06:31.758822 129890525382208 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 474.299 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 26 seconds) (per-host)
I0416 02:06:31.758878 129890525382208 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 25.128760s.
I0416 02:06:31.980996 129890525382208 checkpoint.py:228] Read Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776305165848019363, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} from gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/_CHECKPOINT_METADATA
I0416 02:06:32.163349 129890525382208 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 02:06:32.353267 129890598782528 checkpoint.py:247] Updated Metadata={'item_handlers': {'items': 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler'}, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776305165848019363, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/_CHECKPOINT_METADATA
I0416 02:06:32.651651 129890525382208 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 02:06:32.652314 129890525382208 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.628403s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/items
I0416 02:06:32.653138 129890525382208 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0/items
I0416 02:06:32.900139 129890525382208 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0
I0416 02:06:33.546155 129890525382208 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0`.
I0416 02:06:33.546894 129890525382208 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 28.556626s. directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/0
I0416 02:06:33.546971 129890525382208 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 26.916854s.
I0416 02:06:33.547162 129890691057216 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 02:06:33.547282 129890691057216 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 02:06:33.547342 129890691057216 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0416 02:06:33.547387 129890691057216 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0416 02:06:33.547525 130005558180992 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0416 02:06:33.547657 130005558180992 checkpoint_manager.py:1441] Waiting for previous save to complete took 5.997258 seconds. If this number is high, consider checkpointing less frequently.
I0416 02:06:33.548719 130005558180992 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 9
I0416 02:06:33.549007 130005558180992 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9.
I0416 02:06:33.712486 130005558180992 jax_array_handlers.py:347] Scheduling D2H of 69 prioritized jax.Array.
I0416 02:06:33.712589 130005558180992 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 02:06:33.717057 129890691057216 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9
I0416 02:06:34.454682 129890514896448 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9/items
I0416 02:06:39.997485 130005558180992 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 6.286068s
I0416 02:06:40.003666 130005558180992 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 1.933 GiB/s (total gbytes: 12.3 GiB) (time elapsed: 6 seconds) (per-host)
I0416 02:06:40.003741 130005558180992 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 6.382859s (batch_requests_ready=0.083027s, total_serialization_initiated=6.293769s, others=0.006063s)
I0416 02:06:40.003820 130005558180992 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 6.383400s (all_items=0.000017s, per_item={'items': '0.00001669'}, temp_paths=6.383383)
I0416 02:06:40.005294 129890546353728 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 02:06:40.005404 130005558180992 async_checkpointer.py:561] Finished blocking save. Time taken: 6.456640s. Continuing background save to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9.
I0416 02:06:40.005603 130005558180992 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 02:06:40.005758 129890525382208 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 02:06:40.005868 130005558180992 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776305187.5503693, 'wait_for_prev_duration_secs': 5.997257709503174, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776305193.5487437, 'checkpointer_blocking_duration_secs': 6.456773042678833, 'get_old_steps_start_time': 1776305200.005533, 'get_old_steps_duration_secs': 3.123283386230469e-05, 'checkpoint_manager_blocking_start_time': 1776305187.5503247, 'checkpoint_manager_blocking_duration_secs': 12.455516576766968}
I0416 02:06:40.005988 130005558180992 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0416 02:06:40.006022 130005558180992 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 02:06:40.717854 129890577811008 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 69 array_metadata.ArrayMetadata to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9/items/array_metadatas/process_0
I0416 02:07:16.741683 129890514896448 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 36.737301s (commit=36.294042s, array_metadata_write=0.443260s)
I0416 02:07:16.743061 129890546353728 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 293.047 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 43 seconds) (per-host)
I0416 02:07:16.743180 129890546353728 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 36.737728s.
I0416 02:07:17.159023 129890546353728 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 02:07:17.607444 129890546353728 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 02:07:17.608245 129890546353728 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.596120s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9/items
I0416 02:07:17.609031 129890546353728 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9/items
I0416 02:07:17.856924 129890546353728 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9
I0416 02:07:18.557988 129890546353728 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9`.
I0416 02:07:18.558737 129890546353728 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 45.009980s. directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_01_base/checkpoints/9
I0416 02:07:18.558830 129890546353728 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 38.553380s.
I0416 02:07:18.559052 129890525382208 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 02:07:18.559175 129890525382208 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 02:07:18.559244 129890525382208 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0416 02:07:18.559286 129890525382208 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0416 02:07:18.559465 130005558180992 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0416 02:07:18.559670 130005558180992 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 02:07:18.560645 130005558180992 metric_logger.py:185] completed step: 9, seconds: 0.118, TFLOP/s/device: 114.901, Tokens/s/device: 17319.092, total_weights: 12300, loss: 8.668
Per train step:
 Total TFLOPs: 13.59 
 split as 93.93% learnable weight flops and 6.07% attention flops
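The per-step FLOP figure checks out against the steady-state step lines above; a quick consistency check, reading 13.59 TFLOPs as a per-device, per-step figure (which the arithmetic supports):

```python
# Consistency check against the step-9 line above.
tflops_per_step = 13.59      # "Total TFLOPs" per train step, per device
step_seconds = 0.118         # step 9 wall time
print(tflops_per_step / step_seconds)  # ~115.2 vs the logged 114.901 TFLOP/s/device
```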