2026-04-16 04:07:59.782604: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0416 04:07:59.901029 125411028421760 max_utils.py:238] Skipping jax distributed system due to skip_jax_distributed_system=True flag.
I0416 04:08:56.862286 125411028421760 max_utils.py:800] System Information: Jax Version: 0.9.2
I0416 04:08:56.862399 125411028421760 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0416 04:08:56.862434 125411028421760 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Apr 6 2026 20:48:10 (1775533690) cl/895581894
I0416 04:08:56.862460 125411028421760 train_utils.py:364] WARNING: 'dataset_path' might be pointing your local file system
I0416 04:08:56.862542 125411028421760 train.py:818] [DECOUPLED NO-OP] skipping cloud diagnostics wrapper.
I0416 04:08:57.375740 125411028421760 maxtext_utils.py:1687] Num_devices: 8, shape (1, 1, 1, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0416 04:08:57.375904 125411028421760 maxtext_utils.py:1687] Num_devices: 8, shape (1, 1, 1, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0416 04:08:57.376082 125411028421760 checkpointing.py:688] Setting up checkpoint logger...
I0416 04:08:57.376123 125411028421760 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0416 04:08:57.376163 125411028421760 pytree_checkpoint_handler.py:577] save_device_host_concurrent_bytes=None
I0416 04:08:57.376588 125411028421760 base_pytree_checkpoint_handler.py:411] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x720ef11f4e00>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0416 04:08:59.755619 125411028421760 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0416 04:08:59.755797 125411028421760 checkpoint_manager.py:702] [process=0][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x720874f31eb0>}, handler_registry=None
I0416 04:08:59.756073 125411028421760 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x720874f31eb0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0416 04:08:59.756117 125411028421760 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x720874f331d0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
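
The entries above show Orbax constructing the checkpoint manager with ocdbt=True, zarr3=True, and a fixed-interval save policy. A minimal sketch of a comparable setup with the public orbax.checkpoint API (the directory is a placeholder, and the option values mirror the CheckpointManagerOptions dump later in the log):

import orbax.checkpoint as ocp

# Placeholder run directory; the log writes under gs://wanglance-maxtext/...
directory = "gs://my-bucket/my-run/checkpoints"

# Mirrors the logged options: async writes enabled and a fixed save interval
# (the log's save_decision_policy is FixedIntervalPolicy(interval=10)).
options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,
    enable_async_checkpointing=True,
)
mngr = ocp.CheckpointManager(directory, options=options)
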
I0416 04:08:59.756147 125411028421760 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x720874f31eb0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x720874f31eb0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x720874f331d0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x720874f331d0>}).
I0416 04:08:59.756556 125411028421760 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.28
I0416 04:08:59.756608 125411028421760 async_checkpointer.py:177] [process=0][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>.<lambda> at 0x720874f3e0c0> timeout: 600 secs and primary_host=0 for async checkpoint writes
I0416 04:08:59.875203 125411028421760 checkpoint_manager.py:1788] Found 0 checkpoint steps in gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints
I0416 04:08:59.875421 125411028421760 checkpoint_manager.py:921] [process=0][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_hns=False, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False), root_directory=gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x720874f31160>
I0416 04:08:59.875499 125411028421760 checkpointing.py:302] Checkpoint manager created!
I0416 04:08:59.954575 125411028421760 dataset_info.py:707] Load dataset info from tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0
I0416 04:08:59.957550 125411028421760 reader.py:262] Creating a tf.data.Dataset reading 8 files located in folders: tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0.
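
The manager then finds 0 existing checkpoint steps, and the input pipeline reads a prepared minimal copy of C4 through TFDS. A hedged sketch of loading such a dataset directory directly (tfds.builder_from_directory is the standard entry point; the path is the one from the log):

import tensorflow_datasets as tfds

# Read a fully prepared TFDS dataset straight from its data directory,
# as reader.py does above for the 8-file minimal C4 copy.
data_dir = "tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0"
builder = tfds.builder_from_directory(data_dir)
ds = builder.as_dataset(split="train")
for example in ds.take(1):
    print(example["text"])  # C4 records carry a "text" feature
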
I0416 04:09:00.013892 125411028421760 logging_logger.py:49] Constructing tf.data.Dataset __local_c4_builder for split train, from tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0
I0416 04:09:00.048743 125411028421760 tokenizer.py:245] Tokenizer path: src/maxtext/assets/tokenizers/tokenizer.llama2
I0416 04:09:00.048816 125411028421760 tokenizer.py:187] Loading sentencepiece tokenizer: src/maxtext/assets/tokenizers/tokenizer.llama2
I0416 04:09:01.084329 125411028421760 nnx_wrappers.py:437] Unknown Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 04:09:01.084432 125411028421760 nnx_wrappers.py:437] Unknown Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 04:09:01.187112 125411028421760 attentions.py:1088] attentions/inputs_q Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 04:09:01.187192 125411028421760 attentions.py:1088] attentions/inputs_q Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 04:09:01.202337 125411028421760 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 04:09:01.202387 125411028421760 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 04:09:01.225090 125411028421760 attentions.py:1154] attentions/query Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 04:09:01.225147 125411028421760 attentions.py:1154] attentions/query Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 04:09:01.240162 125411028421760 attentions.py:1155] attentions/key Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 04:09:01.240208 125411028421760 attentions.py:1155] attentions/key Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 04:09:01.255175 125411028421760 attentions.py:1156] attentions/value Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 04:09:01.255229 125411028421760 attentions.py:1156] attentions/value Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 04:09:01.277568 125411028421760 attentions.py:1197] attentions/out Logical: bfloat16[8,2048,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0416 04:09:01.277637 125411028421760 attentions.py:1197] attentions/out Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 04:09:01.296497 125411028421760 linears.py:525] linears/x Logical: bfloat16[8,2048,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
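
The tokenizer lines load a Llama-2 SentencePiece model, and the Logical/Physical pairs that follow record how each activation's logical axis names resolve to mesh axes (here everything lands on 'fsdp' over the batch dimension). A minimal sketch of the tokenizer load, assuming only the standard sentencepiece package:

import sentencepiece as spm

# Load the SentencePiece model file named in the log and round-trip a string.
sp = spm.SentencePieceProcessor(
    model_file="src/maxtext/assets/tokenizers/tokenizer.llama2"
)
ids = sp.encode("The quick brown fox", out_type=int)
print(ids, sp.decode(ids))
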
I0416 04:09:01.296549 125411028421760 linears.py:525] linears/x Physical: bfloat16[8,2048,7168]....................................... ('fsdp', None, None).
I0416 04:09:01.491999 125411028421760 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0416 04:09:01.492109 125411028421760 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
[DECOUPLED NO-OP] gcs_storage: using stubs.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] workload_monitor: using stub.
[DECOUPLED NO-OP] vertex_tensorboard: using stub.
fsdp: 8
I0416 04:09:02.872182 125411028421760 maxtext_utils.py:1790] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0416 04:09:02.872293 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0416 04:09:02.872336 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0416 04:09:02.872380 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0416 04:09:02.872422 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0416 04:09:02.872451 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0416 04:09:02.872490 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0416 04:09:02.872529 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0416 04:09:02.872557 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0416 04:09:02.872585 125411028421760 maxtext_utils.py:1790] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0416 04:09:02.872620 125411028421760 maxtext_utils.py:1790] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0416 04:09:02.872668 125411028421760 maxtext_utils.py:1790] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0416 04:09:03.310324 125411028421760 train.py:157] train/xent Logical: float32[8,2048]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0416 04:09:03.310405 125411028421760 train.py:157] train/xent Physical: float32[8,2048]............................................. ('fsdp', None).
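
The parameter dump above is the same logical-to-physical story for weights: the mesh on this host is one-dimensional (fsdp: 8), so logical axes such as 'embed' shard over 'fsdp' while the rest stay replicated. A small sketch of that mapping in plain JAX (assumes 8 visible devices, as in the log; the array shape matches the logged activations):

import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P
from jax.experimental import mesh_utils

# A 1-D 'fsdp' mesh over 8 chips, matching "Num_devices: 8" and "fsdp: 8".
mesh = Mesh(mesh_utils.create_device_mesh((8,)), axis_names=("fsdp",))

# The resolution the log prints, e.g. ('activation_batch', ...) -> ('fsdp', None, None):
sharding = NamedSharding(mesh, P("fsdp", None, None))
x = jax.device_put(jnp.zeros((8, 2048, 2048), dtype=jnp.bfloat16), sharding)
print(x.sharding)  # NamedSharding with PartitionSpec('fsdp', None, None)
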
I0416 04:09:03.325136 125411028421760 train.py:164] train/z_loss Logical: float32[8,2048]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0416 04:09:03.325185 125411028421760 train.py:164] train/z_loss Physical: float32[8,2048]............................................. ('fsdp', None).
W0416 04:09:03.610138 3660425 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 04:09:03.721691 125411028421760 max_utils.py:791] Total memory size: 3.6 GB, Output size: 1.5 GB, Temp size: 2.0 GB, Argument size: 1.5 GB, Host temp size: 0.0 GB.
I0416 04:09:03.722191 125411028421760 max_utils.py:194] tensorboardX not available; using no-op SummaryWriter.
I0416 04:09:03.722344 125411028421760 metric_logger.py:289] number parameters: 1.104 billion
W0416 04:09:06.771944 3660425 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 04:09:06.900418 125411028421760 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0416 04:09:07.022069 125411028421760 checkpointing.py:798] Waited 0.12163066864013672 seconds for step 0 to finish before starting checkpointing.
I0416 04:09:07.022512 125411028421760 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 04:09:07.022671 125411028421760 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 0
I0416 04:09:07.023058 125411028421760 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0.
I0416 04:09:07.101912 125411028421760 signaling_client.py:373] Using ThreadSafeKeyValueSignalingClient
I0416 04:09:07.196242 125411028421760 jax_array_handlers.py:347] Scheduling D2H of 39 prioritized jax.Array.
I0416 04:09:07.196331 125411028421760 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 04:09:07.209231 125296172860992 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0
I0416 04:09:07.872693 125411028421760 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.676875s
I0416 04:09:07.873073 125411028421760 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 16.030 GiB/s (total gbytes: 12.3 GiB) (time elapsed: 769 milliseconds) (per-host)
I0416 04:09:07.873125 125411028421760 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 0.769966s (batch_requests_ready=0.089183s, total_serialization_initiated=0.680504s, others=0.000280s)
I0416 04:09:07.873184 125411028421760 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.770792s (all_items=0.000022s, per_item={'items': '0.00002193'}, temp_paths=0.770770)
I0416 04:09:07.873917 125296084780608 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 04:09:07.874016 125411028421760 async_checkpointer.py:561] Finished blocking save. Time taken: 0.851296s. Continuing background save to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0.
I0416 04:09:07.874206 125411028421760 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 04:09:07.874428 125296183346752 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 04:09:07.874533 125411028421760 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776312547.0225, 'wait_for_prev_duration_secs': 4.1961669921875e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776312547.0227, 'checkpointer_blocking_duration_secs': 0.8514280319213867, 'get_old_steps_start_time': 1776312547.8741474, 'get_old_steps_duration_secs': 2.1457672119140625e-05, 'checkpoint_manager_blocking_start_time': 1776312547.0224013, 'checkpoint_manager_blocking_duration_secs': 0.8521008491516113}
I0416 04:09:07.874639 125411028421760 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0416 04:09:07.874681 125411028421760 max_utils.py:750] Memstats: After params initialized:
I0416 04:09:07.874758 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_0(process=0,(0,0,0,0))
I0416 04:09:07.874788 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_1(process=0,(1,0,0,0))
I0416 04:09:07.874809 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_2(process=0,(0,1,0,0))
I0416 04:09:07.874826 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_3(process=0,(1,1,0,0))
I0416 04:09:07.874843 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_4(process=0,(0,2,0,0))
I0416 04:09:07.874860 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_5(process=0,(1,2,0,0))
I0416 04:09:07.874877 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_6(process=0,(0,3,0,0))
I0416 04:09:07.874893 125411028421760 max_utils.py:756] Using (GB) 1.59 / 31.25 (5.088000%) on TPU_7(process=0,(1,3,0,0))
I0416 04:09:07.971732 125296162375232 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/items
I0416 04:09:08.120948 125296074294848 checkpoint.py:188] Wrote Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776312547883391615, 'commit_timestamp_nsecs': None, 'custom_metadata': {}}, json={"item_handlers": null, "metrics": {}, "performance_metrics": {}, "init_timestamp_nsecs": 1776312547883391615, "commit_timestamp_nsecs": null, "custom_metadata": {}} to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/_CHECKPOINT_METADATA
I0416 04:09:08.289735 125411028421760 metric_logger.py:185] completed step: 0, seconds: 3.178, TFLOP/s/device: 4.276, Tokens/s/device: 644.511, total_weights: 13328, loss: 10.828
I0416 04:09:08.290728 125411028421760 metric_logger.py:269] To see full metrics 'tensorboard --logdir=gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/tensorboard/'
I0416 04:09:08.405887 125411028421760 metric_logger.py:185] completed step: 1, seconds: 1.387, TFLOP/s/device: 9.793, Tokens/s/device: 1476.132, total_weights: 12332, loss: 10.828
I0416 04:09:08.523694 125411028421760 metric_logger.py:185] completed step: 2, seconds: 0.009, TFLOP/s/device: 1475.739, Tokens/s/device: 222439.448, total_weights: 15161, loss: 9.823
I0416 04:09:08.641235 125411028421760 metric_logger.py:185] completed step: 3, seconds: 0.117, TFLOP/s/device: 116.117, Tokens/s/device: 17502.478, total_weights: 13327, loss: 9.405
I0416 04:09:08.680671 3663607 google_auth_provider.cc:149] Using credentials at ~/.config/gcloud/application_default_credentials.json
I0416 04:09:08.680728 3663607 google_auth_provider.cc:156] Using OAuth2 AuthProvider
I0416 04:09:09.615312 125296105752128 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/items/array_metadatas/process_0
I0416 04:09:27.917093 125411028421760 metric_logger.py:185] completed step: 4, seconds: 0.118, TFLOP/s/device: 115.365, Tokens/s/device: 17389.089, total_weights: 11939, loss: 9.045
I0416 04:09:27.926932 125411028421760 metric_logger.py:185] completed step: 5, seconds: 0.118, TFLOP/s/device: 115.013, Tokens/s/device: 17335.952, total_weights: 15502, loss: 8.953
I0416 04:09:28.043518 125411028421760 metric_logger.py:185] completed step: 6, seconds: 19.276, TFLOP/s/device: 0.705, Tokens/s/device: 106.248, total_weights: 13864, loss: 8.775
I0416 04:09:28.161014 125411028421760 metric_logger.py:185] completed step: 7, seconds: 0.006, TFLOP/s/device: 2181.270, Tokens/s/device: 328784.717, total_weights: 12988, loss: 8.688
I0416 04:09:28.278553 125411028421760 metric_logger.py:185] completed step: 8, seconds: 0.118, TFLOP/s/device: 115.287, Tokens/s/device: 17377.286, total_weights: 13820, loss: 8.740
I0416 04:09:28.396168 125411028421760 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0416 04:09:28.396921 125411028421760 checkpointing.py:798] Waited 0.0007803440093994141 seconds for step 9 to finish before starting checkpointing.
I0416 04:09:28.397235 125411028421760 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 04:09:37.736347 125296095266368 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 29.052727s (commit=28.611681s, array_metadata_write=0.441047s)
I0416 04:09:37.737463 125296084780608 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 412.505 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 30 seconds) (per-host)
I0416 04:09:37.737596 125296084780608 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 29.863498s.
I0416 04:09:37.965153 125296084780608 checkpoint.py:228] Read Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776312547883391615, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} from gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/_CHECKPOINT_METADATA
I0416 04:09:38.152084 125296084780608 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 04:09:38.344161 125296074294848 checkpoint.py:247] Updated Metadata={'item_handlers': {'items': 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler'}, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776312547883391615, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/_CHECKPOINT_METADATA
I0416 04:09:38.597682 125296084780608 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 04:09:38.598259 125296084780608 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.592804s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/items
I0416 04:09:38.598874 125296084780608 atomicity.py:608] Finalizing gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0/items
I0416 04:09:38.844353 125296084780608 atomicity.py:608] Finalizing gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0
I0416 04:09:39.498536 125296084780608 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0`.
I0416 04:09:39.499189 125296084780608 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 32.476473s. directory=gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/0
I0416 04:09:39.499267 125296084780608 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 31.625193s.
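
Step 0's save illustrates the two-phase async pattern in these entries: a short blocking phase (0.85 s for the device-to-host copy) and a roughly 30 s background commit to GCS. A hedged driver-side sketch of that pattern against the orbax.checkpoint API (mngr and train_state as in the earlier sketch; ocp.args.StandardSave is one standard way to pass a pytree):

import orbax.checkpoint as ocp

def maybe_save(mngr: ocp.CheckpointManager, step: int, train_state):
    # Returns once the blocking D2H transfer is done; the GCS commit
    # continues on the background async_save thread seen in the log.
    mngr.save(step, args=ocp.args.StandardSave(train_state))

# Before the next save (or at shutdown), join the background work --
# this is the wait that took 11.1 s before the step-9 save below.
# mngr.wait_until_finished()
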
I0416 04:09:39.499457 125296183346752 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 04:09:39.499564 125296183346752 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 04:09:39.499652 125296183346752 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0416 04:09:39.499701 125296183346752 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0416 04:09:39.499880 125411028421760 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0416 04:09:39.500006 125411028421760 checkpoint_manager.py:1441] Waiting for previous save to complete took 11.102795 seconds. If this number is high, consider checkpointing less frequently.
I0416 04:09:39.501262 125411028421760 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 9
I0416 04:09:39.501560 125411028421760 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9.
I0416 04:09:39.690371 125296183346752 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9
I0416 04:09:39.695670 125411028421760 jax_array_handlers.py:347] Scheduling D2H of 39 prioritized jax.Array.
I0416 04:09:39.695772 125411028421760 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 04:09:40.387630 125295937979968 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9/items
I0416 04:09:45.191144 125411028421760 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 5.495888s
I0416 04:09:45.195245 125411028421760 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 2.205 GiB/s (total gbytes: 12.3 GiB) (time elapsed: 5 seconds) (per-host)
I0416 04:09:45.195311 125411028421760 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 5.597783s (batch_requests_ready=0.094705s, total_serialization_initiated=5.499088s, others=0.003991s)
I0416 04:09:45.195384 125411028421760 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 5.598290s (all_items=0.000023s, per_item={'items': '0.00002289'}, temp_paths=5.598267)
I0416 04:09:45.197592 125296183346752 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 04:09:45.197674 125411028421760 async_checkpointer.py:561] Finished blocking save. Time taken: 5.696365s. Continuing background save to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9.
I0416 04:09:45.197885 125411028421760 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 04:09:45.198077 125296084780608 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 04:09:45.198188 125411028421760 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776312568.3971844, 'wait_for_prev_duration_secs': 11.102795124053955, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776312579.5012882, 'checkpointer_blocking_duration_secs': 5.696494102478027, 'get_old_steps_start_time': 1776312585.1978009, 'get_old_steps_duration_secs': 4.38690185546875e-05, 'checkpoint_manager_blocking_start_time': 1776312568.3971386, 'checkpoint_manager_blocking_duration_secs': 16.801023244857788}
I0416 04:09:45.198314 125411028421760 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0416 04:09:45.198348 125411028421760 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 04:09:45.919955 125296151889472 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9/items/array_metadatas/process_0
I0416 04:10:23.901663 125295937979968 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 38.705570s (commit=38.209890s, array_metadata_write=0.495680s)
I0416 04:10:23.902651 125296183346752 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 285.222 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 44 seconds) (per-host)
I0416 04:10:23.902712 125296183346752 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 38.704992s.
I0416 04:10:24.326128 125296183346752 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 04:10:24.742341 125296183346752 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 04:10:24.742992 125296183346752 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.550574s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9/items
I0416 04:10:24.743620 125296183346752 atomicity.py:608] Finalizing gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9/items
I0416 04:10:24.976944 125296183346752 atomicity.py:608] Finalizing gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9
I0416 04:10:25.670652 125296183346752 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9`.
I0416 04:10:25.671307 125296183346752 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 46.170004s. directory=gs://wanglance-maxtext/linen_ckpt_feat_nnx_trainstate_and_training_loop_20260416_033532/linen_feat_nnx_trainstate_and_training_loop_20260416_033532_10_shardy_false/checkpoints/9
I0416 04:10:25.671379 125296183346752 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 40.473660s.
I0416 04:10:25.671544 125296084780608 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 04:10:25.671642 125296084780608 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 04:10:25.671689 125296084780608 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0416 04:10:25.671726 125296084780608 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0416 04:10:25.672090 125411028421760 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0416 04:10:25.672330 125411028421760 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 04:10:25.672894 125411028421760 metric_logger.py:185] completed step: 9, seconds: 0.117, TFLOP/s/device: 116.200, Tokens/s/device: 17514.902, total_weights: 12300, loss: 8.710
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
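
The closing metrics are internally consistent: with a global batch of 8 sequences of length 2048 on 8 devices (matching the bfloat16[8,2048,...] activations above), the per-step figures reproduce the logged rates. A quick arithmetic check:

# Step 0: 13.59 TFLOPs per device per step over a 3.178 s step.
print(13.59 / 3.178)          # ~4.276 TFLOP/s/device, as logged

# 8 sequences x 2048 tokens = 16384 tokens/step, i.e. 2048 per device.
print(8 * 2048 / 8 / 3.178)   # ~644 tokens/s/device, as logged
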