2026-04-16 19:52:40.479753: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0416 19:52:40.595794 131282426465408 max_utils.py:238] Skipping jax distributed system due to skip_jax_distributed_system=True flag.
I0416 19:53:39.746158 131282426465408 max_utils.py:800] System Information: Jax Version: 0.9.2
I0416 19:53:39.746266 131282426465408 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0416 19:53:39.746300 131282426465408 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Apr 6 2026 20:48:10 (1775533690) cl/895581894
I0416 19:53:39.746325 131282426465408 train_utils.py:364] WARNING: 'dataset_path' might be pointing your local file system
I0416 19:53:39.746346 131282426465408 train_utils.py:377] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0416 19:53:39.746420 131282426465408 train.py:811] [DECOUPLED NO-OP] skipping cloud diagnostics wrapper.
W0416 19:53:39.838667 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 19:53:40.242889 131282426465408 maxtext_utils.py:1687] Num_devices: 8, shape (1, 1, 1, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0416 19:53:40.340749 131282426465408 checkpointing.py:688] Setting up checkpoint logger...
I0416 19:53:40.340845 131282426465408 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0416 19:53:40.340883 131282426465408 pytree_checkpoint_handler.py:577] save_device_host_concurrent_bytes=None
I0416 19:53:40.341073 131282426465408 base_pytree_checkpoint_handler.py:411] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7765fc1dbc80>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0416 19:53:42.689882 131282426465408 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0416 19:53:42.690329 131282426465408 checkpoint_manager.py:702] [process=0][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x775f7ed3d9a0>}, handler_registry=None
I0416 19:53:42.690913 131282426465408 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x775f7ed3d9a0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0416 19:53:42.690964 131282426465408 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x775f80d2f170>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0416 19:53:42.690995 131282426465408 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x775f7ed3d9a0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x775f7ed3d9a0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x775f80d2f170>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x775f80d2f170>}).
I0416 19:53:42.691856 131282426465408 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.28
I0416 19:53:42.691941 131282426465408 async_checkpointer.py:177] [process=0][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>.<lambda> at 0x775f7fd816c0> timeout: 600 secs and primary_host=0 for async checkpoint writes
I0416 19:53:42.825283 131282426465408 checkpoint_manager.py:1788] Found 0 checkpoint steps in gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints
I0416 19:53:42.825556 131282426465408 checkpoint_manager.py:921] [process=0][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_hns=False, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False), root_directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x775f7ed0b3b0>
I0416 19:53:42.825660 131282426465408 checkpointing.py:302] Checkpoint manager created!
I0416 19:53:43.706089 131282426465408 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0416 19:53:43.706196 131282426465408 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
W0416 19:53:43.817190 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
[DECOUPLED NO-OP] gcs_storage: using stubs.
[DECOUPLED NO-OP] mldiagnostics: using stub.
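The options dump above pins down the Orbax configuration this run used: a PyTree handler with OCDBT and Zarr3, async writes with a 600 s barrier timeout, and a FixedIntervalPolicy(interval=10) save decision. Below is a minimal stand-alone sketch of an equivalent CheckpointManager for reference; it is illustrative only (placeholder directory, orbax-checkpoint 0.11.x per the logged version), not MaxText's actual checkpointing.py.

```python
# Sketch: stand-alone Orbax CheckpointManager approximating the logged options.
# Assumes orbax-checkpoint 0.11.28 (per the log); the directory is a placeholder.
import orbax.checkpoint as ocp

directory = "gs://<bucket>/<run>/checkpoints"

# Matches "Creating checkpoint manager with ocdbt=True and zarr3=True".
handler = ocp.PyTreeCheckpointHandler(use_ocdbt=True, use_zarr3=True)

options = ocp.CheckpointManagerOptions(
    save_interval_steps=1,            # as logged; the effective cadence here comes from
                                      # save_decision_policy=FixedIntervalPolicy(interval=10)
    max_to_keep=None,                 # keep all steps (preservation_policy=LatestN(n=None))
    create=True,
    enable_async_checkpointing=True,  # async save, finalized on a background thread
)

mngr = ocp.CheckpointManager(
    directory,
    options=options,
    item_names=("items",),
    item_handlers={"items": handler},
)
```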
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] workload_monitor: using stub.
[DECOUPLED NO-OP] vertex_tensorboard: using stub.
fsdp: 8
I0416 19:53:43.908254 131282426465408 maxtext_utils.py:1805] decoder/decoder_norm/scale/value Shape: float32[2048] Physical: (None,)
I0416 19:53:43.908354 131282426465408 maxtext_utils.py:1805] decoder/layers/mlp/wi_0/kernel/value Shape: float32[2048,16,7168] Physical: ('fsdp', None, None)
I0416 19:53:43.908394 131282426465408 maxtext_utils.py:1805] decoder/layers/mlp/wi_1/kernel/value Shape: float32[2048,16,7168] Physical: ('fsdp', None, None)
I0416 19:53:43.908428 131282426465408 maxtext_utils.py:1805] decoder/layers/mlp/wo/kernel/value Shape: float32[7168,16,2048] Physical: (None, None, 'fsdp')
I0416 19:53:43.908460 131282426465408 maxtext_utils.py:1805] decoder/layers/post_self_attention_layer_norm/scale/value Shape: float32[2048,16] Physical: (None, None)
I0416 19:53:43.908484 131282426465408 maxtext_utils.py:1805] decoder/layers/pre_self_attention_layer_norm/scale/value Shape: float32[2048,16] Physical: (None, None)
I0416 19:53:43.908512 131282426465408 maxtext_utils.py:1805] decoder/layers/self_attention/key/kernel/value Shape: float32[2048,16,16,128] Physical: ('fsdp', None, None, None)
I0416 19:53:43.908537 131282426465408 maxtext_utils.py:1805] decoder/layers/self_attention/out/kernel/value Shape: float32[16,16,128,2048] Physical: (None, None, None, 'fsdp')
I0416 19:53:43.908561 131282426465408 maxtext_utils.py:1805] decoder/layers/self_attention/query/kernel/value Shape: float32[2048,16,16,128] Physical: ('fsdp', None, None, None)
I0416 19:53:43.908586 131282426465408 maxtext_utils.py:1805] decoder/layers/self_attention/value/kernel/value Shape: float32[2048,16,16,128] Physical: ('fsdp', None, None, None)
I0416 19:53:43.908608 131282426465408 maxtext_utils.py:1805] decoder/logits_dense/kernel/value Shape: float32[2048,32000] Physical: ('fsdp', None)
I0416 19:53:43.908632 131282426465408 maxtext_utils.py:1805] token_embedder/embedding/value Shape: float32[32000,2048] Physical: (None, 'fsdp')
I0416 19:53:44.002997 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048]............................................... Unknown.
I0416 19:53:44.003086 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048]............................................... (None,).
I0416 19:53:44.016398 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16,7168]....................................... Unknown.
I0416 19:53:44.016444 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16,7168]....................................... ('fsdp', None, None).
I0416 19:53:44.042902 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[7168,16,2048]....................................... Unknown.
I0416 19:53:44.042966 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[7168,16,2048]....................................... (None, None, 'fsdp').
I0416 19:53:44.056174 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16]............................................ Unknown.
I0416 19:53:44.056218 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16]............................................ (None, None).
I0416 19:53:44.082547 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16,16,128]..................................... Unknown.
I0416 19:53:44.082600 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16,16,128]..................................... ('fsdp', None, None, None).
I0416 19:53:44.095771 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[16,16,128,2048]..................................... Unknown.
I0416 19:53:44.095818 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[16,16,128,2048]..................................... (None, None, None, 'fsdp').
I0416 19:53:44.135278 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,32000]......................................... Unknown.
I0416 19:53:44.135337 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,32000]......................................... ('fsdp', None).
I0416 19:53:44.148540 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[32000,2048]......................................... Unknown.
I0416 19:53:44.148588 131282426465408 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[32000,2048]......................................... (None, 'fsdp').
I0416 19:53:44.359084 131282426465408 nnx_decoders.py:465] nnx_decoders/carry Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 19:53:44.359297 131282426465408 nnx_decoders.py:465] nnx_decoders/carry Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 19:53:44.367206 131282426465408 nnx_decoders.py:465] Unknown Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 19:53:44.367301 131282426465408 nnx_decoders.py:465] Unknown Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 19:53:44.382257 131282426465408 attentions.py:1088] attentions/inputs_q Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 19:53:44.382376 131282426465408 attentions.py:1088] attentions/inputs_q Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 19:53:44.396539 131282426465408 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 19:53:44.396657 131282426465408 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 19:53:44.424357 131282426465408 attentions.py:1154] attentions/query Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 19:53:44.424486 131282426465408 attentions.py:1154] attentions/query Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 19:53:44.438874 131282426465408 attentions.py:1155] attentions/key Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 19:53:44.439000 131282426465408 attentions.py:1155] attentions/key Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 19:53:44.453202 131282426465408 attentions.py:1156] attentions/value Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 19:53:44.453321 131282426465408 attentions.py:1156] attentions/value Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 19:53:44.478875 131282426465408 attentions.py:1197] attentions/out Logical: bfloat16[8,2048,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0416 19:53:44.479014 131282426465408 attentions.py:1197] attentions/out Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 19:53:44.498354 131282426465408 linears.py:525] linears/x Logical: bfloat16[8,2048,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0416 19:53:44.498450 131282426465408 linears.py:525] linears/x Physical: bfloat16[8,2048,7168]....................................... ('fsdp', None, None).
W0416 19:53:45.409671 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 19:53:45.549479 131282426465408 max_utils.py:791] Total memory size: 4.5 GB, Output size: 1.5 GB, Temp size: 2.9 GB, Argument size: 1.5 GB, Host temp size: 0.0 GB.
I0416 19:53:45.550013 131282426465408 max_utils.py:194] tensorboardX not available; using no-op SummaryWriter.
I0416 19:53:45.551348 131282426465408 metric_logger.py:289] number parameters: 1.104 billion
W0416 19:53:46.714574 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 19:53:46.852435 131282426465408 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0416 19:53:47.279059 131282426465408 checkpointing.py:798] Waited 0.4266035556793213 seconds for step 0 to finish before starting checkpointing.
I0416 19:53:47.279690 131282426465408 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 19:53:47.279869 131282426465408 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 0
I0416 19:53:47.280381 131282426465408 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0.
I0416 19:53:47.364576 131282426465408 signaling_client.py:373] Using ThreadSafeKeyValueSignalingClient
I0416 19:53:47.456299 131174921930304 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0
I0416 19:53:47.468599 131282426465408 jax_array_handlers.py:347] Scheduling D2H of 69 prioritized jax.Array.
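The Logical/Physical pairs above show every large tensor sharded only along the 'fsdp' mesh axis, the single non-unit axis (size 8) in the logged mesh shape. As a minimal sketch of what one such annotation means, assuming a simplified 1-D 'fsdp' mesh over the 8 chips (MaxText's actual mesh has many named axes and logical-to-physical rules not reproduced here):

```python
# Sketch: the logged Physical spec ('fsdp', None, None) for
# decoder/layers/mlp/wi_0/kernel, float32[2048,16,7168], on a 1-D
# 'fsdp' mesh of size 8. Requires 8 visible devices.
import jax
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

mesh = Mesh(np.array(jax.devices()[:8]), axis_names=("fsdp",))

# Dim 0 (2048) is split 8 ways across chips; dims 1 and 2 are replicated.
wi_0 = jax.device_put(
    np.zeros((2048, 16, 7168), np.float32),
    NamedSharding(mesh, P("fsdp", None, None)),
)
assert wi_0.sharding.shard_shape(wi_0.shape) == (256, 16, 7168)
```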
I0416 19:53:47.468692 131282426465408 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 19:53:48.089351 131174911444544 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/items
W0416 19:53:48.123805 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.134860 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.146514 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.152422 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.165537 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.175398 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.181430 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.185436 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 19:53:48.253441 131174871598656 checkpoint.py:188] Wrote Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776369228005475147, 'commit_timestamp_nsecs': None, 'custom_metadata': {}}, json={"item_handlers": null, "metrics": {}, "performance_metrics": {}, "init_timestamp_nsecs": 1776369228005475147, "commit_timestamp_nsecs": null, "custom_metadata": {}} to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/_CHECKPOINT_METADATA
I0416 19:53:48.633165 4068422 google_auth_provider.cc:149] Using credentials at ~/.config/gcloud/application_default_credentials.json
I0416 19:53:48.633215 4068422 google_auth_provider.cc:156] Using OAuth2 AuthProvider
I0416 19:53:48.707707 131282426465408 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 1.240145s
I0416 19:53:48.714983 131282426465408 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 9.149 GiB/s (total gbytes: 12.3 GiB) (time elapsed: a second) (per-host)
I0416 19:53:48.715060 131282426465408 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 1.348921s (batch_requests_ready=0.093863s, total_serialization_initiated=1.247875s, others=0.007183s)
I0416 19:53:48.715135 131282426465408 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.349821s (all_items=0.000038s, per_item={'items': '0.00003791'}, temp_paths=1.349783)
I0416 19:53:48.716318 131174932416064 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 19:53:48.716400 131282426465408 async_checkpointer.py:561] Finished blocking save. Time taken: 1.436486s. Continuing background save to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0.
I0416 19:53:48.716614 131282426465408 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 19:53:48.716793 131174787712576 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 19:53:48.716883 131282426465408 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776369227.27967, 'wait_for_prev_duration_secs': 4.601478576660156e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776369227.2798905, 'checkpointer_blocking_duration_secs': 1.4366283416748047, 'get_old_steps_start_time': 1776369228.7165406, 'get_old_steps_duration_secs': 3.361701965332031e-05, 'checkpoint_manager_blocking_start_time': 1776369227.2795036, 'checkpoint_manager_blocking_duration_secs': 1.4373538494110107}
I0416 19:53:48.716994 131282426465408 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0416 19:53:48.717059 131282426465408 max_utils.py:750] Memstats: After params initialized:
I0416 19:53:48.717103 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_0(process=0,(0,0,0,0))
I0416 19:53:48.717127 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_1(process=0,(1,0,0,0))
I0416 19:53:48.717146 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_2(process=0,(0,1,0,0))
I0416 19:53:48.717165 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_3(process=0,(1,1,0,0))
I0416 19:53:48.717182 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_4(process=0,(0,2,0,0))
I0416 19:53:48.717200 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_5(process=0,(1,2,0,0))
I0416 19:53:48.717216 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_6(process=0,(0,3,0,0))
I0416 19:53:48.717232 131282426465408 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_7(process=0,(1,3,0,0))
W0416 19:53:48.722765 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.727878 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 19:53:48.732272 4066106 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
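The Memstats block above reports 1.6 GB of each chip's 31.25 GB HBM in use (5.12%) once parameters are initialized. A sketch of reading the same numbers directly from JAX; device.memory_stats() and its key names are backend-dependent (the keys below are the ones typically exposed on TPU), so treat this as illustrative rather than MaxText's max_utils implementation:

```python
# Sketch: per-device "Using (GB) x / y" memstats, as in the log above.
import jax

for d in jax.local_devices():
    stats = d.memory_stats()  # may be None on backends without support
    if stats is None:
        continue
    used_gb = stats["bytes_in_use"] / 1024**3
    limit_gb = stats["bytes_limit"] / 1024**3
    print(f"Using (GB) {used_gb:.1f} / {limit_gb:.2f} "
          f"({100 * used_gb / limit_gb:.6f}%) on {d}")
```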
I0416 19:53:49.040105 131282426465408 metric_logger.py:185] completed step: 0, seconds: 1.300, TFLOP/s/device: 41.813, Tokens/s/device: 6302.435, total_weights: 65536, loss: 10.880
I0416 19:53:49.041127 131282426465408 metric_logger.py:269] To see full metrics 'tensorboard --logdir=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/tensorboard/'
I0416 19:53:49.458279 131282426465408 metric_logger.py:185] completed step: 1, seconds: 2.184, TFLOP/s/device: 24.885, Tokens/s/device: 3750.983, total_weights: 65536, loss: 10.880
I0416 19:53:49.478254 131174819169856 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 69 array_metadata.ArrayMetadata to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/items/array_metadatas/process_0
I0416 19:53:49.893421 131282426465408 metric_logger.py:185] completed step: 2, seconds: 0.018, TFLOP/s/device: 2988.481, Tokens/s/device: 450456.395, total_weights: 65536, loss: 10.270
I0416 19:53:50.323079 131282426465408 metric_logger.py:185] completed step: 3, seconds: 0.411, TFLOP/s/device: 132.360, Tokens/s/device: 19950.805, total_weights: 65536, loss: 9.741
I0416 19:54:28.682774 131282426465408 metric_logger.py:185] completed step: 4, seconds: 0.437, TFLOP/s/device: 124.378, Tokens/s/device: 18747.626, total_weights: 65536, loss: 9.284
I0416 19:54:28.693859 131282426465408 metric_logger.py:185] completed step: 5, seconds: 0.454, TFLOP/s/device: 119.823, Tokens/s/device: 18061.040, total_weights: 65536, loss: 8.897
I0416 19:54:29.113354 131282426465408 metric_logger.py:185] completed step: 6, seconds: 38.336, TFLOP/s/device: 1.418, Tokens/s/device: 213.692, total_weights: 65536, loss: 8.598
I0416 19:54:29.536765 131282426465408 metric_logger.py:185] completed step: 7, seconds: 0.007, TFLOP/s/device: 7536.890, Tokens/s/device: 1136042.158, total_weights: 65536, loss: 8.390
I0416 19:54:29.971163 131282426465408 metric_logger.py:185] completed step: 8, seconds: 0.420, TFLOP/s/device: 129.283, Tokens/s/device: 19486.992, total_weights: 65536, loss: 8.261
I0416 19:54:30.401827 131282426465408 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0416 19:54:30.402950 131282426465408 checkpointing.py:798] Waited 0.0011382102966308594 seconds for step 9 to finish before starting checkpointing.
I0416 19:54:30.403280 131282426465408 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 19:54:31.471859 131174808684096 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 42.756106s (commit=42.286618s, array_metadata_write=0.469488s)
I0416 19:54:31.473032 131174932416064 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 286.504 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 44 seconds) (per-host)
I0416 19:54:31.473105 131174932416064 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 42.756652s.
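The metric_logger lines above are internally consistent: Tokens/s/device is total_weights (tokens per global step) divided by device count and measured step time, and TFLOP/s/device is the per-device per-step FLOPs (the 54.35 TFLOP figure in the closing summary) divided by step time. A quick check against the step-0 line, using the rounded values as printed:

```python
# Sanity-check the step-0 metrics line:
#   seconds: 1.300, TFLOP/s/device: 41.813, Tokens/s/device: 6302.435
total_weights = 65536            # tokens consumed per global step
num_devices = 8
step_seconds = 1.300             # rounded in the log; exact value differs slightly
per_device_step_tflops = 54.35   # "Per train step: Total TFLOPs: 54.35"

print(total_weights / num_devices / step_seconds)  # ~6301.5 (logged: 6302.435)
print(per_device_step_tflops / step_seconds)       # ~41.81  (logged: 41.813)
```

The wild per-step swings (0.007 s at step 7, 38.336 s at step 6) are likely timing-attribution artifacts of JAX's asynchronous dispatch while the step-0 checkpoint commit was in flight, rather than real compute-time variation; the losses decrease smoothly throughout.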
I0416 19:54:31.687231 131174932416064 checkpoint.py:228] Read Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776369228005475147, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} from gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/_CHECKPOINT_METADATA
I0416 19:54:31.859706 131174932416064 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 19:54:32.088392 131174871598656 checkpoint.py:247] Updated Metadata={'item_handlers': {'items': 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler'}, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776369228005475147, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/_CHECKPOINT_METADATA
I0416 19:54:32.267110 131174932416064 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 19:54:32.267715 131174932416064 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.540533s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/items
I0416 19:54:32.268425 131174932416064 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0/items
I0416 19:54:32.500395 131174932416064 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0
I0416 19:54:33.182477 131174932416064 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0`.
I0416 19:54:33.183119 131174932416064 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 45.903209s. directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/0
I0416 19:54:33.183197 131174932416064 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 44.466748s.
I0416 19:54:33.183388 131174787712576 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 19:54:33.183498 131174787712576 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 19:54:33.183565 131174787712576 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0416 19:54:33.183636 131174787712576 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0416 19:54:33.183867 131282426465408 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0416 19:54:33.183993 131282426465408 checkpoint_manager.py:1441] Waiting for previous save to complete took 2.780754 seconds. If this number is high, consider checkpointing less frequently.
I0416 19:54:33.184937 131282426465408 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 9
I0416 19:54:33.185256 131282426465408 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9.
I0416 19:54:33.353977 131174787712576 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9
I0416 19:54:33.376309 131282426465408 jax_array_handlers.py:347] Scheduling D2H of 69 prioritized jax.Array.
I0416 19:54:33.376406 131282426465408 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 19:54:34.034190 131174424905280 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9/items
I0416 19:54:36.350198 131282426465408 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 2.974873s
I0416 19:54:36.357625 131282426465408 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 3.996 GiB/s (total gbytes: 12.3 GiB) (time elapsed: 3 seconds) (per-host)
I0416 19:54:36.357693 131282426465408 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 3.088616s (batch_requests_ready=0.098955s, total_serialization_initiated=2.982446s, others=0.007215s)
I0416 19:54:36.357785 131282426465408 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 3.089204s (all_items=0.000025s, per_item={'items': '0.00002480'}, temp_paths=3.089179)
I0416 19:54:36.359338 131174829655616 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 19:54:36.359439 131282426465408 async_checkpointer.py:561] Finished blocking save. Time taken: 3.174457s. Continuing background save to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9.
I0416 19:54:36.359756 131282426465408 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 19:54:36.359935 131174932416064 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 19:54:36.360037 131282426465408 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776369270.4032102, 'wait_for_prev_duration_secs': 2.780754327774048, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776369273.1849604, 'checkpointer_blocking_duration_secs': 3.174659013748169, 'get_old_steps_start_time': 1776369276.359654, 'get_old_steps_duration_secs': 5.340576171875e-05, 'checkpoint_manager_blocking_start_time': 1776369270.4031515, 'checkpoint_manager_blocking_duration_secs': 5.956845760345459}
I0416 19:54:36.360149 131282426465408 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0416 19:54:36.360183 131282426465408 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 19:54:37.093241 131174840141376 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 69 array_metadata.ArrayMetadata to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9/items/array_metadatas/process_0
I0416 19:55:14.526316 131174787712576 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 38.167842s (commit=37.732038s, array_metadata_write=0.435803s)
I0416 19:55:14.527556 131174829655616 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 306.284 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 41 seconds) (per-host)
I0416 19:55:14.527667 131174829655616 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 38.168183s.
I0416 19:55:14.937082 131174829655616 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 19:55:15.450811 131174829655616 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 19:55:15.451426 131174829655616 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.652072s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9/items
I0416 19:55:15.452086 131174829655616 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9/items
I0416 19:55:15.689690 131174829655616 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9
I0416 19:55:16.370364 131174829655616 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9`.
I0416 19:55:16.371029 131174829655616 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 43.186055s. directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_post_train_fixes_20260416_181521/nnx_feat_nnx_post_train_fixes_20260416_181521_06_grad_accum/checkpoints/9
I0416 19:55:16.371114 131174829655616 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 40.011631s.
I0416 19:55:16.371266 131174932416064 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 19:55:16.371318 131174932416064 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 19:55:16.371386 131174932416064 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0416 19:55:16.371429 131174932416064 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0416 19:55:16.372795 131282426465408 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0416 19:55:16.372979 131282426465408 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 19:55:16.374474 131282426465408 metric_logger.py:185] completed step: 9, seconds: 0.425, TFLOP/s/device: 127.857, Tokens/s/device: 19271.939, total_weights: 65536, loss: 8.185
Per train step: Total TFLOPs: 54.35 split as 93.93% learnable weight flops and 6.07% attention flops
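The two checkpoint-write throughput figures per save describe different windows: blocking_gbytes_per_sec covers only the device-to-host serialization that stalls training (~9.1 GiB/s at step 0), while gbytes_per_sec covers the full background commit to GCS (~286-306 MiB/s over ~40 s). Both follow from the 12.3 GiB per-host checkpoint, and that size in turn squares with the parameter count, assuming a float32 train state with two optimizer moment buffers (the optimizer is not named in the log). A quick check:

```python
# Check the step-0 checkpoint-write throughput figures against the log.
ckpt_gib = 12.3  # "total gbytes: 12.3 GiB" (per-host)

# Blocking phase: device-to-host serialization only.
blocking_secs = 1.348        # "Initiated Pytree async_save. Time taken: 1.348921s"
print(ckpt_gib / blocking_secs)       # ~9.1 GiB/s (logged: 9.149 GiB/s)

# Background phase: full commit to GCS.
commit_secs = 44.0           # "time elapsed: 44 seconds"
print(ckpt_gib * 1024 / commit_secs)  # ~286 MiB/s (logged: 286.504 MiB/s)

# Size cross-check: 12.3 GiB is about 3 * 1.104e9 params * 4 bytes, i.e.
# float32 params plus two moment buffers (assumption: an Adam-style optimizer).
print(3 * 1.104e9 * 4 / 1024**3)      # ~12.34 GiB
```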