Log Summary

XPK Start: Fri Apr 24 12:12:46 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
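The four transformers warnings above come from RoPE config validation: integer values (40, 32, 1) were passed where the standardized `rope_parameters` fields require floats, and the legacy `rope_scaling` key was used instead of `rope_parameters`. A minimal sketch of an override that would satisfy those checks (the checkpoint id and `rope_type` value are assumptions; the accepted keys depend on the installed transformers version):

    from transformers import AutoConfig

    # Floats satisfy the type checks that the integer values tripped; the
    # field names come from the warnings above. The checkpoint id is
    # hypothetical, for illustration only.
    config = AutoConfig.from_pretrained(
        "deepseek-ai/DeepSeek-V3.2",
        rope_parameters={"rope_type": "yarn", "factor": 40.0,
                         "beta_fast": 32.0, "beta_slow": 1.0},
    )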
2026-04-24 12:13:11.440322: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0424 12:13:11.653586 136755528881984 max_utils.py:273] Attempting to initialize the jax distributed system...
I0424 12:13:20.695039 136755528881984 distributed.py:149] Starting JAX distributed service on [::]:8482
I0424 12:13:20.697438 136755528881984 distributed.py:172] Connecting to JAX distributed service on mt-03-dropout-hyzbf-slice-job-0-0.mt-03-dropout-hyzbf:8482
I0424 12:13:22.137372 136755528881984 max_utils.py:284] Jax distributed system initialized!
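The three lines above are MaxText's max_utils wrapper bringing up the JAX distributed runtime. A minimal sketch of the underlying call (on Cloud TPU the arguments are normally auto-detected, so the explicit values below are placeholders, not this job's):

    import jax

    # jax.distributed.initialize() with no arguments usually suffices on TPU;
    # explicit coordinator/process values are shown for illustration only.
    jax.distributed.initialize(
        coordinator_address="10.0.0.1:8482",
        num_processes=8,
        process_id=0,
    )
    print(jax.device_count())        # global device count (32 in this run)
    print(jax.local_device_count())  # devices on this host (4 here: TPU_24/25/28/29)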
I0424 12:13:28.233647 136755528881984 max_utils.py:800] System Information: Jax Version: 0.9.2
I0424 12:13:28.233754 136755528881984 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0424 12:13:28.233793 136755528881984 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0424 12:13:28.233828 136755528881984 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0424 12:13:28.935302 136755528881984 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 12:13:28.935886 136755528881984 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 12:13:28.936066 136755528881984 checkpointing.py:688] Setting up checkpoint logger...
I0424 12:13:28.936130 136755528881984 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0424 12:13:28.936174 136755528881984 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0424 12:13:28.936538 136755528881984 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7c6029050ec0>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0424 12:13:31.850528 136755528881984 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0424 12:13:31.850763 136755528881984 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c5f1cfa9a00>}, handler_registry=None
I0424 12:13:31.851010 136755528881984 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c5f1cfa9a00>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0424 12:13:31.851060 136755528881984 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7c4bfc5b1520>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0424 12:13:31.851108 136755528881984 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c5f1cfa9a00>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c5f1cfa9a00>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7c4bfc5b1520>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7c4bfc5b1520>}).
I0424 12:13:31.851431 136755528881984 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0424 12:13:31.851505 136755528881984 async_checkpointer.py:192] [process=6][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7c4b1c685e40> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0424 12:13:33.193690 136755528881984 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints
I0424 12:13:33.211019 136755528881984 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7c4bfc5b0a10>
I0424 12:13:33.211162 136755528881984 checkpointing.py:302] Checkpoint manager created!
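The CheckpointManagerOptions dump above reduces to two effective knobs: async checkpointing and a fixed 10-step save policy. A minimal Orbax sketch with roughly the same behavior (the GCS path and train state are placeholders; MaxText builds its actual manager in checkpointing.py):

    import orbax.checkpoint as ocp

    options = ocp.CheckpointManagerOptions(
        save_interval_steps=10,           # cf. FixedIntervalPolicy(interval=10)
        enable_async_checkpointing=True,  # save() returns after the blocking D2H copy
    )
    mngr = ocp.CheckpointManager("gs://my-bucket/my-run/checkpoints", options=options)

    state = {"step": 0, "w": [0.0, 1.0]}             # stand-in for a real train state
    mngr.save(0, args=ocp.args.StandardSave(state))  # background thread finishes the write
    mngr.wait_until_finished()                       # block until the save is committed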
I0424 12:13:33.566815 136755528881984 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0424 12:13:33.566922 136755528881984 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0424 12:13:33.948999 136755528881984 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,128,2048]....................................... ('activation_batch_attn', 'activation_length_attn', 'activation_embed_attn').
I0424 12:13:33.949091 136755528881984 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0424 12:13:33.965599 136755528881984 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,128,2048]....................................... ('activation_batch_attn', 'activation_length_attn', 'activation_embed_attn').
I0424 12:13:33.965657 136755528881984 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0424 12:13:33.989233 136755528881984 attentions.py:1154] attentions/query Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 12:13:33.989302 136755528881984 attentions.py:1154] attentions/query Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0424 12:13:34.005859 136755528881984 attentions.py:1155] attentions/key Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 12:13:34.005921 136755528881984 attentions.py:1155] attentions/key Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0424 12:13:34.022460 136755528881984 attentions.py:1156] attentions/value Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 12:13:34.022527 136755528881984 attentions.py:1156] attentions/value Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0424 12:13:34.047634 136755528881984 attentions.py:1198] attentions/out Logical: bfloat16[32,128,16,128]..................................... ('activation_batch_attn', 'activation_length_attn', 'activation_heads', 'activation_kv').
I0424 12:13:34.047704 136755528881984 attentions.py:1198] attentions/out Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0424 12:13:34.074625 136755528881984 linears.py:525] linears/x Logical: bfloat16[32,128,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0424 12:13:34.074694 136755528881984 linears.py:525] linears/x Physical: bfloat16[32,128,7168]....................................... ('fsdp', None, None).
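Every Logical/Physical pair above resolves the same way: the logical batch axis lands on the 32-way 'fsdp' mesh axis and the remaining axes are replicated. A sketch of that mapping with jax.sharding, using a flat one-axis mesh rather than this job's full 13-axis mesh:

    import jax
    import jax.numpy as jnp
    import numpy as np
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # A single 'fsdp' mesh axis spanning all devices (32 in this run).
    mesh = Mesh(np.array(jax.devices()), axis_names=("fsdp",))
    # Shard the batch axis, replicate the rest: the ('fsdp', None, None) spec above.
    sharding = NamedSharding(mesh, P("fsdp", None, None))
    x = jax.device_put(jnp.zeros((32, 128, 2048), jnp.bfloat16), sharding)
    print(x.sharding.spec)  # PartitionSpec('fsdp', None, None)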
I0424 12:13:34.305864 136755528881984 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0424 12:13:34.305972 136755528881984 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0424 12:13:35.764491 136755528881984 maxtext_utils.py:1880]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  (None,)
I0424 12:13:35.764617 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0424 12:13:35.764672 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0424 12:13:35.764730 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  (None, None, 'fsdp')
I0424 12:13:35.764781 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0424 12:13:35.764819 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0424 12:13:35.764870 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0424 12:13:35.764922 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  (None, None, None, 'fsdp')
I0424 12:13:35.764960 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, None, None)
I0424 12:13:35.764996 136755528881984 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0424 12:13:35.765043 136755528881984 maxtext_utils.py:1880]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', None)
I0424 12:13:35.765107 136755528881984 maxtext_utils.py:1880]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  (None, 'fsdp')
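The parameter specs above all follow one rule: whichever logical axis is 'embed' (or 'embed_vocab') maps to 'fsdp', and everything else is replicated. Flax's logical partitioning rules perform this name-to-mesh-axis translation; a sketch with illustrative rules (MaxText's real rules come from its config, not shown here):

    import flax.linen as nn

    # Illustrative rules: only 'embed' is bound to a mesh axis.
    rules = (("embed", "fsdp"), ("layers", None), ("mlp", None))
    spec = nn.logical_to_mesh_axes(("embed", "layers", "mlp"), rules)
    print(spec)  # PartitionSpec('fsdp', None, None), as logged for mlp/wi_0/kernel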

I0424 12:13:36.292967 136755528881984 train.py:157] train/xent Logical: float32[32,128]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0424 12:13:36.293062 136755528881984 train.py:157] train/xent Physical: float32[32,128]............................................. ('fsdp', None).
I0424 12:13:36.308821 136755528881984 train.py:164] train/z_loss Logical: float32[32,128]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0424 12:13:36.308885 136755528881984 train.py:164] train/z_loss Physical: float32[32,128]............................................. ('fsdp', None).
I0424 12:13:39.992684 136755528881984 max_utils.py:791] Total memory size: 0.8 GB, Output size: 0.4 GB, Temp size: 0.4 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0424 12:13:39.993485 136755528881984 metric_logger.py:301] number parameters: 1.104 billion
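The 1.104 billion figure checks out against the shapes logged above (the 16 in each kernel is the scanned 'layers' axis):

    mlp   = 3 * 2048 * 16 * 7168       # wi_0 + wi_1 + wo
    attn  = 4 * 2048 * 16 * 16 * 128   # query, key, value, out
    norms = 2048 + 2 * 2048 * 16       # decoder_norm + pre/post attention norms
    vocab = 2 * 32000 * 2048           # logits_dense + token_embedder
    print((mlp + attn + norms + vocab) / 1e9)  # 1.104...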
I0424 12:13:44.649955 136755528881984 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0424 12:13:44.723999 136755528881984 checkpointing.py:798] Waited 0.07402944564819336 seconds for step 0 to finish before starting checkpointing.
I0424 12:13:44.726446 136755528881984 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 12:13:44.728405 136755528881984 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0424 12:13:44.729849 136755528881984 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints/0.
I0424 12:13:45.416939 136755528881984 signaling_client.py:364] Using JaxDistributedSignalingClient
I0424 12:13:45.417976 136755528881984 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0424 12:13:45.418035 136755528881984 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 12:13:45.699783 136755528881984 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.282919s
I0424 12:13:45.699958 136755528881984 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 5.347 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.288470983505249 s) (per-host)
I0424 12:13:45.700009 136755528881984 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.288532s (batch_requests_ready=0.002298s, total_serialization_initiated=0.286165s, others=0.000069s)
I0424 12:13:45.700134 136755528881984 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.292783s (all_items=0.000018s, per_item={'items': '0.00001788'}, temp_paths=0.292765)
I0424 12:13:45.700861 136755528881984 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.97 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints/0.
I0424 12:13:45.701213 136627775059712 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 12:33:45.701176
I0424 12:13:46.148196 136755528881984 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 12:13:46.148602 136625622615808 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0424 12:13:46.148771 136755528881984 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777032824.7264283, 'wait_for_prev_duration_secs': 6.222724914550781e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777032824.728444, 'checkpointer_blocking_duration_secs': 0.9729235172271729, 'get_old_steps_start_time': 1777032825.7013907, 'get_old_steps_duration_secs': 3.075599670410156e-05, 'checkpoint_manager_blocking_start_time': 1777032824.7245095, 'checkpoint_manager_blocking_duration_secs': 1.4242186546325684}
I0424 12:13:46.148884 136755528881984 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0424 12:13:46.148935 136755528881984 max_utils.py:750] 
Memstats: After params initialized:
I0424 12:13:46.148990 136755528881984 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_24(process=6,(0,6,0,0))
I0424 12:13:46.149023 136755528881984 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_25(process=6,(1,6,0,0))
I0424 12:13:46.149051 136755528881984 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_28(process=6,(0,7,0,0))
I0424 12:13:46.149075 136755528881984 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_29(process=6,(1,7,0,0))
I0424 12:13:46.472021 136755528881984 metric_logger.py:196] completed step: 0, seconds: 4.656, TFLOP/s/device: 0.172, Tokens/s/device: 27.489, total_weights: 4096, loss: 10.889, lm_loss: 10.889, perplexity: 53560.699
I0424 12:13:46.554493 136755528881984 metric_logger.py:196] completed step: 1, seconds: 1.821, TFLOP/s/device: 0.440, Tokens/s/device: 70.308, total_weights: 4096, loss: 10.902, lm_loss: 10.902, perplexity: 54298.523
I0424 12:13:46.999675 136755528881984 metric_logger.py:196] completed step: 2, seconds: 0.012, TFLOP/s/device: 66.918, Tokens/s/device: 10695.187, total_weights: 4096, loss: 9.956, lm_loss: 9.956, perplexity: 21082.051
I0424 12:13:47.070309 136755528881984 metric_logger.py:196] completed step: 3, seconds: 0.446, TFLOP/s/device: 1.797, Tokens/s/device: 287.204, total_weights: 4096, loss: 9.139, lm_loss: 9.139, perplexity: 9314.691
I0424 12:13:47.212166 136755528881984 metric_logger.py:196] completed step: 4, seconds: 0.076, TFLOP/s/device: 10.528, Tokens/s/device: 1682.705, total_weights: 4096, loss: 8.459, lm_loss: 8.459, perplexity: 4715.493
I0424 12:13:47.218228 136755528881984 metric_logger.py:196] completed step: 5, seconds: 0.071, TFLOP/s/device: 11.349, Tokens/s/device: 1813.776, total_weights: 4096, loss: 7.924, lm_loss: 7.924, perplexity: 2762.373
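The logged perplexity is just exp(loss). For step 0:

    import math

    # ~53.6k; matches "perplexity: 53560.699" up to rounding of the printed loss.
    print(math.exp(10.889))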
I0424 12:13:48.596203    2806 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0424 12:13:51.172135 136626168653568 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints/0/items/array_metadatas/process_6
I0424 12:13:56.670817 136755528881984 metric_logger.py:196] completed step: 6, seconds: 0.142, TFLOP/s/device: 5.624, Tokens/s/device: 898.801, total_weights: 4096, loss: 7.553, lm_loss: 7.553, perplexity: 1906.617
I0424 12:13:56.741431 136755528881984 metric_logger.py:196] completed step: 7, seconds: 9.383, TFLOP/s/device: 0.085, Tokens/s/device: 13.641, total_weights: 4096, loss: 7.267, lm_loss: 7.267, perplexity: 1431.653
I0424 12:13:56.812115 136755528881984 metric_logger.py:196] completed step: 8, seconds: 0.075, TFLOP/s/device: 10.699, Tokens/s/device: 1709.973, total_weights: 4096, loss: 7.111, lm_loss: 7.111, perplexity: 1225.983
I0424 12:13:56.882043 136755528881984 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0424 12:13:56.882714 136755528881984 checkpointing.py:798] Waited 0.0006883144378662109 seconds for step 9 to finish before starting checkpointing.
I0424 12:13:56.884745 136755528881984 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 12:14:17.966122 136627775059712 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 48.521 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 32.55458664894104 s) (per-host)
I0424 12:14:17.966228 136627775059712 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 32.264898s.
I0424 12:14:26.849570 136627775059712 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 41.148224s.
I0424 12:14:26.849887 136625622615808 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 12:14:26.850012 136625622615808 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0424 12:14:26.850062 136625622615808 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0424 12:14:26.851864 136625622615808 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0424 12:14:26.852059 136755528881984 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0424 12:14:26.852201 136755528881984 checkpoint_manager.py:1452] Waiting for previous save to complete took 29.967455 seconds. If this number is high, consider checkpointing less frequently.
I0424 12:14:26.854113 136755528881984 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 9
I0424 12:14:26.856111 136755528881984 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints/9.
I0424 12:14:27.136856 136755528881984 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0424 12:14:27.136944 136755528881984 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 12:14:27.173284 136755528881984 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.037608s
I0424 12:14:27.173456 136755528881984 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 37.485 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.041152000427246094 s) (per-host)
I0424 12:14:27.173509 136755528881984 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.041213s (batch_requests_ready=0.001899s, total_serialization_initiated=0.039246s, others=0.000069s)
I0424 12:14:27.173601 136755528881984 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.045691s (all_items=0.000015s, per_item={'items': '0.00001478'}, temp_paths=0.045677)
I0424 12:14:27.174221 136755528881984 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.32 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints/9.
I0424 12:14:27.174532 136626168653568 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 12:34:27.174497
I0424 12:14:27.176326 136755528881984 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 12:14:27.176567 136625622615808 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0424 12:14:27.176669 136755528881984 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777032836.8847153, 'wait_for_prev_duration_secs': 29.9674551486969, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777032866.8541515, 'checkpointer_blocking_duration_secs': 0.3205256462097168, 'get_old_steps_start_time': 1777032867.1746976, 'get_old_steps_duration_secs': 2.8848648071289062e-05, 'checkpoint_manager_blocking_start_time': 1777032836.8829358, 'checkpoint_manager_blocking_duration_secs': 30.293700456619263}
I0424 12:14:27.176834 136755528881984 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0424 12:14:27.176881 136755528881984 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 12:14:32.584252 136602948867840 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260424_120657/linen_xpk_feat_nnx_post_train_fixes_20260424_120657_03_dropout/checkpoints/9/items/array_metadatas/process_6
I0424 12:15:08.390806 136626168653568 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 38.285 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 41.25846600532532 s) (per-host)
I0424 12:15:08.390916 136626168653568 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 41.216273s.
I0424 12:15:17.433508 136626168653568 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 50.258849s.
I0424 12:15:17.433801 136625622615808 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 12:15:17.433922 136625622615808 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0424 12:15:17.433968 136625622615808 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0424 12:15:17.435527 136625622615808 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0424 12:15:17.435689 136755528881984 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0424 12:15:17.435835 136755528881984 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 12:15:17.436747 136755528881984 metric_logger.py:196] completed step: 9, seconds: 0.070, TFLOP/s/device: 11.362, Tokens/s/device: 1815.938, total_weights: 4096, loss: 6.981, lm_loss: 6.981, perplexity: 1075.867
Per train step:
 Total TFLOPs: 0.80 
 split as 99.60% learnable weight flops and 0.40% attention flops
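These totals tie back to the per-step metrics: 0.80 TFLOPs per device per step divided by wall-clock step time reproduces the logged TFLOP/s/device, and 4096 tokens over 32 devices is 128 tokens per device per step. Using step 2's 0.012 s:

    print(0.80 / 0.012)  # ~66.7 TFLOP/s/device (logged: 66.918)
    print(128 / 0.012)   # ~10667 tokens/s/device (logged: 10695.187)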
XPK End: Fri Apr 24 12:15:26 UTC 2026
EXIT_CODE=0