2026-04-16 02:12:50.202241: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0416 02:12:50.319557 126158329224320 max_utils.py:238] Skipping jax distributed system due to skip_jax_distributed_system=True flag.
I0416 02:13:21.177665 126158329224320 max_utils.py:800] System Information: Jax Version: 0.9.2
I0416 02:13:21.177785 126158329224320 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0416 02:13:21.177820 126158329224320 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Apr 6 2026 20:48:10 (1775533690) cl/895581894
I0416 02:13:21.177845 126158329224320 train_utils.py:364] WARNING: 'dataset_path' might be pointing your local file system
I0416 02:13:21.177927 126158329224320 train.py:812] [DECOUPLED NO-OP] skipping cloud diagnostics wrapper.
W0416 02:13:21.272425 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 02:13:21.689864 126158329224320 maxtext_utils.py:1687] Num_devices: 8, shape (1, 1, 1, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0416 02:13:22.177489 126158329224320 checkpointing.py:688] Setting up checkpoint logger...
I0416 02:13:22.177708 126158329224320 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0416 02:13:22.177803 126158329224320 pytree_checkpoint_handler.py:577] save_device_host_concurrent_bytes=None
I0416 02:13:22.178093 126158329224320 base_pytree_checkpoint_handler.py:411] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x72bcef7d37d0>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0416 02:13:24.525238 126158329224320 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0416 02:13:24.525521 126158329224320 checkpoint_manager.py:702] [process=0][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72b672111d30>}, handler_registry=None
I0416 02:13:24.525966 126158329224320 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72b672111d30>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0416 02:13:24.526008 126158329224320 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x72b672110fe0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
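The checkpoint-manager settings recorded above (use_ocdbt=True, use_zarr3=True, asynchronous writes, and a fixed checkpoint interval of 10 steps) correspond roughly to the following orbax-checkpoint setup. This is a minimal sketch assuming the 0.11.x API shown in the log; the bucket path is a placeholder, and the run's FixedIntervalPolicy(interval=10) is approximated here with save_interval_steps.

import orbax.checkpoint as ocp

# Placeholder path; the real run writes under a gs://.../checkpoints directory.
ckpt_dir = "gs://<bucket>/<run_name>/checkpoints"

# PyTree handler with the OCDBT + Zarr3 storage format seen in the log.
handler = ocp.PyTreeCheckpointHandler(use_ocdbt=True, use_zarr3=True)

options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,           # stands in for FixedIntervalPolicy(interval=10)
    enable_async_checkpointing=True,  # saves finish on a background thread
    create=True,
)
manager = ocp.CheckpointManager(
    ckpt_dir,
    item_handlers={"items": handler},
    options=options,
)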
I0416 02:13:24.526040 126158329224320 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72b672111d30>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72b672111d30>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x72b672110fe0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x72b672110fe0>}).
I0416 02:13:24.526379 126158329224320 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.28
I0416 02:13:24.526445 126158329224320 async_checkpointer.py:177] [process=0][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>.<lambda> at 0x72b67212ac00> timeout: 600 secs and primary_host=0 for async checkpoint writes
I0416 02:13:24.660138 126158329224320 checkpoint_manager.py:1788] Found 0 checkpoint steps in gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints
I0416 02:13:24.660403 126158329224320 checkpoint_manager.py:921] [process=0][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_hns=False, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False), root_directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x72b672110740>
I0416 02:13:24.660495 126158329224320 checkpointing.py:302] Checkpoint manager created!
I0416 02:13:24.744459 126158329224320 dataset_info.py:707] Load dataset info from tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0
I0416 02:13:24.747720 126158329224320 reader.py:262] Creating a tf.data.Dataset reading 8 files located in folders: tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0.
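The two dataset lines above build a tf.data pipeline from a locally materialized TFDS copy of c4/en. As an illustration only (this is not the exact MaxText input pipeline), that corresponds roughly to:

import tensorflow_datasets as tfds

# Path taken from the log: the minimal local c4/en dataset used by this test run.
data_dir = "tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0"

builder = tfds.builder_from_directory(data_dir)
ds = builder.as_dataset(split="train", shuffle_files=True)  # tf.data.Dataset over the 8 files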
I0416 02:13:24.804527 126158329224320 logging_logger.py:49] Constructing tf.data.Dataset __local_c4_builder for split train, from tests/assets/local_datasets/c4_en_dataset_minimal/c4/en/3.1.0
I0416 02:13:24.837591 126158329224320 tokenizer.py:245] Tokenizer path: src/maxtext/assets/tokenizers/tokenizer.llama2
I0416 02:13:24.837656 126158329224320 tokenizer.py:187] Loading sentencepiece tokenizer: src/maxtext/assets/tokenizers/tokenizer.llama2
I0416 02:13:26.230353 126158329224320 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0416 02:13:26.230497 126158329224320 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
[DECOUPLED NO-OP] gcs_storage: using stubs.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] mldiagnostics: using stub.
[DECOUPLED NO-OP] workload_monitor: using stub.
[DECOUPLED NO-OP] vertex_tensorboard: using stub.
fsdp: 8
I0416 02:13:29.269059 126158329224320 nnx_decoders.py:465] nnx_decoders/carry Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 02:13:29.269212 126158329224320 nnx_decoders.py:465] nnx_decoders/carry Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:13:29.274642 126158329224320 nnx_decoders.py:465] Unknown Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0416 02:13:29.274689 126158329224320 nnx_decoders.py:465] Unknown Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:13:29.289949 126158329224320 attentions.py:1088] attentions/inputs_q Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 02:13:29.290011 126158329224320 attentions.py:1088] attentions/inputs_q Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:13:29.304648 126158329224320 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0416 02:13:29.304702 126158329224320 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, None).
I0416 02:13:29.375841 126158329224320 attentions.py:1154] attentions/query Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 02:13:29.375928 126158329224320 attentions.py:1154] attentions/query Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:13:29.390712 126158329224320 attentions.py:1155] attentions/key Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 02:13:29.390771 126158329224320 attentions.py:1155] attentions/key Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:13:29.405299 126158329224320 attentions.py:1156] attentions/value Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0416 02:13:29.405350 126158329224320 attentions.py:1156] attentions/value Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:13:29.433315 126158329224320 attentions.py:1197] attentions/out Logical: bfloat16[8,2048,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0416 02:13:29.433377 126158329224320 attentions.py:1197] attentions/out Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, None, None).
I0416 02:13:29.504405 126158329224320 linears.py:525] linears/x Logical: bfloat16[8,2048,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0416 02:13:29.504488 126158329224320 linears.py:525] linears/x Physical: bfloat16[8,2048,7168]....................................... ('fsdp', None, None).
I0416 02:13:45.784286 126158329224320 max_utils.py:791] Total memory size: 3.7 GB, Output size: 1.5 GB, Temp size: 2.2 GB, Argument size: 1.5 GB, Host temp size: 0.0 GB.
I0416 02:13:45.785366 126158329224320 max_utils.py:194] tensorboardX not available; using no-op SummaryWriter.
I0416 02:13:45.788213 126158329224320 metric_logger.py:289] number parameters: 1.104 billion
I0416 02:14:03.954136 126158329224320 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0416 02:14:04.102483 126158329224320 checkpointing.py:798] Waited 0.14839649200439453 seconds for step 0 to finish before starting checkpointing.
I0416 02:14:04.103434 126158329224320 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 02:14:04.103688 126158329224320 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 0
I0416 02:14:04.104536 126158329224320 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0.
I0416 02:14:04.199067 126158329224320 signaling_client.py:373] Using ThreadSafeKeyValueSignalingClient
I0416 02:14:04.309791 126042847053376 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0
I0416 02:14:04.310429 126158329224320 jax_array_handlers.py:347] Scheduling D2H of 111 prioritized jax.Array.
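The Logical/Physical pairs above record how MaxText's logical axis names resolve onto the single 'fsdp' mesh axis ("fsdp: 8"): only the batch dimension is sharded and the remaining dimensions are replicated. A minimal JAX sketch of that mapping, assuming 8 local devices and using an array shape taken from the log:

import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# 8-way mesh with a single 'fsdp' axis, matching "fsdp: 8" above.
mesh = Mesh(np.array(jax.devices()).reshape(8), axis_names=("fsdp",))

# Logical ('activation_batch', 'activation_norm_length', 'activation_embed')
# resolves to physical ('fsdp', None, None): shard the batch, replicate the rest.
sharding = NamedSharding(mesh, P("fsdp", None, None))
carry = jax.device_put(jnp.zeros((8, 2048, 2048), jnp.bfloat16), sharding)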
I0416 02:14:04.311404 126158329224320 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 02:14:05.023443 126042836567616 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/items
I0416 02:14:05.537809 126042796721728 checkpoint.py:188] Wrote Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776305644932734546, 'commit_timestamp_nsecs': None, 'custom_metadata': {}}, json={"item_handlers": null, "metrics": {}, "performance_metrics": {}, "init_timestamp_nsecs": 1776305644932734546, "commit_timestamp_nsecs": null, "custom_metadata": {}} to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/_CHECKPOINT_METADATA
W0416 02:14:05.565795 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.576572 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.583764 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.589598 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.595565 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.600789 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.605943 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:05.611110 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0416 02:14:05.934527 3535558 google_auth_provider.cc:149] Using credentials at ~/.config/gcloud/application_default_credentials.json
I0416 02:14:05.934584 3535558 google_auth_provider.cc:156] Using OAuth2 AuthProvider
I0416 02:14:06.117810 126158329224320 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 1.809630s
I0416 02:14:06.127126 126158329224320 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 6.407 GiB/s (total gbytes: 12.3 GiB) (time elapsed: a second) (per-host)
I0416 02:14:06.127204 126158329224320 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 1.926223s (batch_requests_ready=0.091713s, total_serialization_initiated=1.825301s, others=0.009209s)
I0416 02:14:06.127288 126158329224320 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.927604s (all_items=0.000051s, per_item={'items': '0.00005102'}, temp_paths=1.927553)
I0416 02:14:06.129789 126042723321408 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 02:14:06.129899 126158329224320 async_checkpointer.py:561] Finished blocking save. Time taken: 2.026159s. Continuing background save to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0.
I0416 02:14:06.130155 126158329224320 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 02:14:06.130366 126042712835648 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 02:14:06.130475 126158329224320 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776305644.103407, 'wait_for_prev_duration_secs': 6.628036499023438e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776305644.1037118, 'checkpointer_blocking_duration_secs': 2.026326894760132, 'get_old_steps_start_time': 1776305646.1300704, 'get_old_steps_duration_secs': 4.363059997558594e-05, 'checkpoint_manager_blocking_start_time': 1776305644.103181, 'checkpoint_manager_blocking_duration_secs': 2.027263879776001}
I0416 02:14:06.130607 126158329224320 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0416 02:14:06.130690 126158329224320 max_utils.py:750] Memstats: After params initialized:
I0416 02:14:06.130745 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_0(process=0,(0,0,0,0))
I0416 02:14:06.130769 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_1(process=0,(1,0,0,0))
I0416 02:14:06.130789 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_2(process=0,(0,1,0,0))
I0416 02:14:06.130808 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_3(process=0,(1,1,0,0))
I0416 02:14:06.130826 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_4(process=0,(0,2,0,0))
I0416 02:14:06.130843 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_5(process=0,(1,2,0,0))
I0416 02:14:06.130860 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_6(process=0,(0,3,0,0))
I0416 02:14:06.130876 126158329224320 max_utils.py:756] Using (GB) 1.6 / 31.25 (5.120000%) on TPU_7(process=0,(1,3,0,0))
W0416 02:14:06.137902 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:06.144486 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
W0416 02:14:06.149931 3532269 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
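The sequence above is Orbax's asynchronous save path: the main thread blocks only for the device-to-host transfer and serialization handoff (about 2 seconds here), while the GCS write and commit continue on the async_save/save_finalize background threads. A minimal sketch of the calling pattern, assuming the orbax-checkpoint 0.11.x API recorded in the log (the function name maybe_save is illustrative):

import orbax.checkpoint as ocp

def maybe_save(manager: ocp.CheckpointManager, step: int, state) -> None:
  # save() returns after the blocking device-to-host portion; the write to
  # object storage and the finalize/commit continue on a background thread.
  if manager.should_save(step):
    manager.save(step, args=ocp.args.Composite(items=ocp.args.PyTreeSave(state)))

# Before exiting (or before restoring), block until the background save commits:
# manager.wait_until_finished()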
I0416 02:14:06.516354 126158329224320 metric_logger.py:185] completed step: 0, seconds: 18.164, TFLOP/s/device: 0.748, Tokens/s/device: 112.753, total_weights: 13328, loss: 10.880
I0416 02:14:06.518833 126158329224320 metric_logger.py:269] To see full metrics 'tensorboard --logdir=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/tensorboard/'
I0416 02:14:06.625694 126158329224320 metric_logger.py:185] completed step: 1, seconds: 2.553, TFLOP/s/device: 5.322, Tokens/s/device: 802.201, total_weights: 12332, loss: 10.851
I0416 02:14:06.746891 126158329224320 metric_logger.py:185] completed step: 2, seconds: 0.040, TFLOP/s/device: 336.490, Tokens/s/device: 50719.433, total_weights: 15161, loss: 9.862
I0416 02:14:06.868099 126158329224320 metric_logger.py:185] completed step: 3, seconds: 0.091, TFLOP/s/device: 149.436, Tokens/s/device: 22524.554, total_weights: 13327, loss: 9.404
I0416 02:14:06.884474 126042754778688 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 111 array_metadata.ArrayMetadata to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/items/array_metadatas/process_0
I0416 02:14:34.319456 126158329224320 metric_logger.py:185] completed step: 4, seconds: 0.121, TFLOP/s/device: 112.550, Tokens/s/device: 16964.737, total_weights: 11939, loss: 9.133
I0416 02:14:34.334413 126158329224320 metric_logger.py:185] completed step: 5, seconds: 0.121, TFLOP/s/device: 112.012, Tokens/s/device: 16883.620, total_weights: 15502, loss: 9.029
I0416 02:14:34.452090 126158329224320 metric_logger.py:185] completed step: 6, seconds: 27.452, TFLOP/s/device: 0.495, Tokens/s/device: 74.602, total_weights: 13864, loss: 8.894
I0416 02:14:34.573569 126158329224320 metric_logger.py:185] completed step: 7, seconds: 0.010, TFLOP/s/device: 1393.838, Tokens/s/device: 210094.378, total_weights: 12988, loss: 8.806
I0416 02:14:34.695126 126158329224320 metric_logger.py:185] completed step: 8, seconds: 0.119, TFLOP/s/device: 114.371, Tokens/s/device: 17239.202, total_weights: 13820, loss: 8.803
I0416 02:14:34.818568 126158329224320 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0416 02:14:34.820400 126158329224320 checkpointing.py:798] Waited 0.0018415451049804688 seconds for step 9 to finish before starting checkpointing.
I0416 02:14:34.820845 126158329224320 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 02:14:38.849864 126042744292928 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 32.721890s (commit=32.294870s, array_metadata_write=0.427021s)
I0416 02:14:38.851236 126042723321408 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 364.697 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 34 seconds) (per-host)
I0416 02:14:38.851362 126042723321408 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 32.721399s.
I0416 02:14:39.064648 126042723321408 checkpoint.py:228] Read Metadata={'item_handlers': None, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776305644932734546, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} from gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/_CHECKPOINT_METADATA
I0416 02:14:39.247169 126042723321408 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 02:14:39.423247 126042796721728 checkpoint.py:247] Updated Metadata={'item_handlers': {'items': 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler'}, 'metrics': {}, 'performance_metrics': {}, 'init_timestamp_nsecs': 1776305644932734546, 'commit_timestamp_nsecs': None, 'custom_metadata': {}} to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/_CHECKPOINT_METADATA
I0416 02:14:39.641017 126042723321408 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 02:14:39.642007 126042723321408 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.540427s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/items
I0416 02:14:39.642822 126042723321408 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0/items
I0416 02:14:39.882260 126042723321408 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0
I0416 02:14:40.537599 126042723321408 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0`.
I0416 02:14:40.538339 126042723321408 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 36.434606s. directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/0
I0416 02:14:40.538424 126042723321408 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 34.408465s.
I0416 02:14:40.538649 126042712835648 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 02:14:40.538775 126042712835648 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 02:14:40.538848 126042712835648 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0416 02:14:40.538908 126042712835648 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0416 02:14:40.539021 126158329224320 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0416 02:14:40.539105 126158329224320 checkpoint_manager.py:1441] Waiting for previous save to complete took 5.718271 seconds. If this number is high, consider checkpointing less frequently.
I0416 02:14:40.539640 126158329224320 checkpoint_manager.py:1501] [process=0] Saving checkpoint at step 9
I0416 02:14:40.539948 126158329224320 async_checkpointer.py:452] [process=0] Started async saving checkpoint to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9.
I0416 02:14:40.734415 126042712835648 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9
I0416 02:14:40.734601 126158329224320 jax_array_handlers.py:347] Scheduling D2H of 111 prioritized jax.Array.
I0416 02:14:40.734856 126158329224320 replica_slices.py:410] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0416 02:14:41.468090 126042857539136 atomicity.py:137] Creating tmp directory gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9/items
I0416 02:14:45.059122 126158329224320 base_pytree_checkpoint_handler.py:153] [process=0][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 4.326508s
I0416 02:14:45.068125 126158329224320 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/blocking_gbytes_per_sec: 2.778 GiB/s (total gbytes: 12.3 GiB) (time elapsed: 4 seconds) (per-host)
I0416 02:14:45.068194 126158329224320 base_pytree_checkpoint_handler.py:732] [process=0][thread=MainThread] Initiated Pytree async_save. Time taken: 4.442709s (batch_requests_ready=0.092940s, total_serialization_initiated=4.340899s, others=0.008870s)
I0416 02:14:45.068279 126158329224320 composite_checkpoint_handler.py:715] [process=0][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 4.443296s (all_items=0.000017s, per_item={'items': '0.00001693'}, temp_paths=4.443280)
I0416 02:14:45.069802 126042765264448 async_checkpointer.py:79] [process=0][thread=async_save] Background save thread started.
I0416 02:14:45.069905 126158329224320 async_checkpointer.py:561] Finished blocking save. Time taken: 4.530210s. Continuing background save to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9.
I0416 02:14:45.070161 126158329224320 checkpoint_manager.py:1549] [process=0][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0416 02:14:45.070319 126042723321408 async_checkpointer.py:265] [process=0][thread=save_finalize] Waiting for background save thread=async_save.
I0416 02:14:45.070415 126158329224320 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776305674.8208144, 'wait_for_prev_duration_secs': 5.718271017074585, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776305680.539664, 'checkpointer_blocking_duration_secs': 4.530403137207031, 'get_old_steps_start_time': 1776305685.0700915, 'get_old_steps_duration_secs': 2.47955322265625e-05, 'checkpoint_manager_blocking_start_time': 1776305674.8207629, 'checkpoint_manager_blocking_duration_secs': 10.24962329864502}
I0416 02:14:45.070740 126158329224320 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0416 02:14:45.070775 126158329224320 checkpoint_manager.py:1994] [process=0][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0416 02:14:45.794945 126042775750208 array_metadata_store.py:203] [process=0][thread=array_type_handler] Wrote 111 array_metadata.ArrayMetadata to gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9/items/array_metadatas/process_0
I0416 02:15:14.520780 126042712835648 base_pytree_checkpoint_handler.py:1217] [process=0][thread=write_metadata_after_commits] Commit + Array metadata written. Time taken: 29.451802s (commit=29.024343s, array_metadata_write=0.427459s)
I0416 02:15:14.522118 126042765264448 base_pytree_checkpoint_handler.py:128] [process=0] /jax/checkpoint/write/gbytes_per_sec: 372.804 MiB/s (total gbytes: 12.3 GiB) (time elapsed: 33 seconds) (per-host)
I0416 02:15:14.522186 126042765264448 async_checkpointer.py:90] [process=0][thread=async_save] 3 Handler Commit operations completed. Time taken: 29.452221s.
I0416 02:15:14.943097 126042765264448 array_metadata_store.py:367] [process=0][thread=async_save] Skipped cross-host ArrayMetadata validation because only one process is found: process_index=0.
I0416 02:15:15.364385 126042765264448 ocdbt_utils.py:56] Param validation support for Zarr3 will be added later (b/362328389).
I0416 02:15:15.365497 126042765264448 base_pytree_checkpoint_handler.py:1342] [process=0][thread=async_save] Pytree save finalize (merge_ocdbt + ArrayMetadata validation) completed. Time taken: 0.548508s. use_zarr3=True, enable_post_merge_validation=True, directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9/items
I0416 02:15:15.366243 126042765264448 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9/items
I0416 02:15:15.596633 126042765264448 atomicity.py:608] Finalizing gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9
I0416 02:15:16.253687 126042765264448 atomicity.py:794] [process=0][thread=async_save] Finished saving checkpoint (finalized tmp dir) to `gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9`.
I0416 02:15:16.254436 126042765264448 async_checkpointer.py:420] Finished async_save (blocking + background). Time taken: 35.714760s. directory=gs://wanglance-maxtext/nnx_ckpt_feat_nnx_trainstate_and_training_loop_20260416_004836/nnx_feat_nnx_trainstate_and_training_loop_20260416_004836_04_int8/checkpoints/9
I0416 02:15:16.254509 126042765264448 async_checkpointer.py:144] [process=0][thread=async_save] Background save thread done. Time taken: 31.184544s.
I0416 02:15:16.254671 126042723321408 async_checkpointer.py:273] [process=0][thread=save_finalize] Done with waiting for background save thread=async_save.
I0416 02:15:16.254791 126042723321408 async_checkpointer.py:283] [process=0][thread=save_finalize] No errors found in background save thread=async_save.
I0416 02:15:16.254891 126042723321408 checkpoint_manager.py:2103] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0416 02:15:16.254936 126042723321408 checkpoint_manager.py:2112] [process=0][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0416 02:15:16.255130 126158329224320 checkpoint_manager.py:2006] [process=0][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0416 02:15:16.255318 126158329224320 checkpoint_manager.py:1983] [process=0][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0416 02:15:16.256345 126158329224320 metric_logger.py:185] completed step: 9, seconds: 0.122, TFLOP/s/device: 111.682, Tokens/s/device: 16833.937, total_weights: 12300, loss: 8.755
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
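A quick sanity check of the throughput figures, treating the 13.59 TFLOPs number as per device per train step (which is how the TFLOP/s/device column appears to be derived) and using the 8-device, 2048-tokens-per-device batch visible in the sharding logs:

# Step 9 from the log: seconds: 0.122, TFLOP/s/device: 111.682, Tokens/s/device: 16833.937.
per_device_tflops_per_step = 13.59   # "Per train step: Total TFLOPs: 13.59"
step_seconds = 0.122
print(per_device_tflops_per_step / step_seconds)   # ~111.4 TFLOP/s/device, close to the logged 111.682

per_device_tokens_per_step = 2048    # global batch 8 x sequence length 2048, over 8 devices
print(per_device_tokens_per_step / step_seconds)   # ~16787 tokens/s/device, close to the logged 16833.937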