MaxView

Case: 14_async_ckpt_false_save

Metrics: main (8a17c3d19) vs test/pipeline-scan-nnx (5b21b66fc)

Metric      | main (8a17c3d19) | test/pipeline-scan-nnx (5b21b66fc) | Diff (test/pipeline-scan-nnx − main)
Parameters  | 1.104 billion    | 1.104 billion                      |
Final loss  | 7.1810           | 7.1810                             | 0
TFLOP/s     | 92.076           | 90.342                             | -1.734
Tok/s       | 13878.7          | 13617.3                            | -261.431
Avg s/step  | 13.961           | 14.035                             | +0.074
Memory %    | 1.38             | 1.38                               | 0
JAX         | 0.9.2            | 0.9.2                              |

Diff = branch value − main value. For throughput metrics (TFLOP/s, Tok/s) a negative diff means the branch regressed; for Avg s/step a positive diff means the branch regressed.
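The Diff column can be reproduced directly from the two branch columns. A minimal sketch, with the values transcribed by hand from this report rather than read out of MaxView:

```python
# Recompute the Diff column (branch value - main value) for the table above.
# Values are hardcoded from this report; MaxView computes the same quantity.
metrics = {
    # name: (main, branch)
    "Final loss": (7.1810, 7.1810),
    "TFLOP/s":    (92.076, 90.342),
    "Tok/s":      (13878.7, 13617.3),
    "Avg s/step": (13.961, 14.035),
    "Memory %":   (1.38, 1.38),
}

for name, (main_val, branch_val) in metrics.items():
    diff = branch_val - main_val
    pct = 100.0 * diff / main_val
    print(f"{name:11s} diff={diff:+9.3f} ({pct:+.2f}%)")
```

The relative change (about -1.9% on both TFLOP/s and Tok/s here) is often more useful than the raw diff when comparing runs of different scales.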

main  ·  8a17c3d19  ·  main_20260422_071422  ·  full log
XPK Start: Wed Apr 22 08:02:51 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
2026-04-22 08:03:16.688771: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0422 08:03:16.902641 140495987898176 max_utils.py:273] Attempting to initialize the jax distributed system...
I0422 08:03:25.944175 140495987898176 distributed.py:149] Starting JAX distributed service on [::]:8482
I0422 08:03:25.946637 140495987898176 distributed.py:172] Connecting to JAX distributed service on mt-14-async-ckpt-false--a30fr-slice-job-0-0.mt-14-async-ckpt-false--a30fr:8482
I0422 08:03:27.136630 140495987898176 max_utils.py:284] Jax distributed system initialized!
I0422 08:03:33.178517 140495987898176 max_utils.py:800] System Information: Jax Version: 0.9.2
I0422 08:03:33.178622 140495987898176 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0422 08:03:33.178665 140495987898176 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0422 08:03:33.178701 140495987898176 train_utils.py:361] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0422 08:03:33.878379 140495987898176 maxtext_utils.py:1565] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0422 08:03:33.878660 140495987898176 checkpointing.py:677] Setting up checkpoint logger...
I0422 08:03:33.878727 140495987898176 checkpointing.py:233] Creating checkpoint manager with ocdbt=True and zarr3=True
I0422 08:03:33.878773 140495987898176 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0422 08:03:33.879111 140495987898176 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7fc72271cb30>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0422 08:03:36.778207 140495987898176 checkpointing.py:265] Enabling policy for fixed interval checkpointing.
I0422 08:03:36.778443 140495987898176 checkpoint_manager.py:708] [process=2][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7fb3007b5c10>}, handler_registry=None
I0422 08:03:36.778681 140495987898176 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7fb3007b5c10>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0422 08:03:36.778742 140495987898176 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7fb3007b7200>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0422 08:03:36.778781 140495987898176 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7fb3007b5c10>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7fb3007b5c10>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7fb3007b7200>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7fb3007b7200>}).
I0422 08:03:36.779106 140495987898176 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0422 08:03:37.874363 140495987898176 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints
I0422 08:03:38.320190 140495987898176 checkpoint_manager.py:929] [process=2][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=False, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=5), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7fb32013be30>
I0422 08:03:38.320366 140495987898176 checkpointing.py:301] Checkpoint manager created!
I0422 08:03:39.251370 140495987898176 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0422 08:03:39.251495 140495987898176 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 08:03:39.634537 140495987898176 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0422 08:03:39.634631 140495987898176 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 08:03:39.651384 140495987898176 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0422 08:03:39.651453 140495987898176 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 08:03:39.675600 140495987898176 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 08:03:39.675669 140495987898176 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 08:03:39.692471 140495987898176 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 08:03:39.692533 140495987898176 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 08:03:39.709197 140495987898176 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 08:03:39.709259 140495987898176 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 08:03:39.734417 140495987898176 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0422 08:03:39.734495 140495987898176 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 08:03:39.756904 140495987898176 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0422 08:03:39.756982 140495987898176 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0422 08:03:39.970249 140495987898176 checkpointing.py:577] checkpoint manager exists so trying to load this run's existing checkpoint
I0422 08:03:39.970357 140495987898176 checkpointing.py:665] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0422 08:03:41.401160 140495987898176 maxtext_utils.py:1668]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  (None,)
I0422 08:03:41.401312 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0422 08:03:41.401371 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0422 08:03:41.401429 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  (None, None, 'fsdp')
I0422 08:03:41.401482 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0422 08:03:41.401522 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0422 08:03:41.401576 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0422 08:03:41.401631 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  (None, None, None, 'fsdp')
I0422 08:03:41.401674 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, None, None)
I0422 08:03:41.401795 140495987898176 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0422 08:03:41.401906 140495987898176 maxtext_utils.py:1668]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', None)
I0422 08:03:41.402005 140495987898176 maxtext_utils.py:1668]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  (None, 'fsdp')

I0422 08:03:41.896525 140495987898176 train.py:155] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0422 08:03:41.896624 140495987898176 train.py:155] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0422 08:03:41.912287 140495987898176 train.py:162] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0422 08:03:41.912349 140495987898176 train.py:162] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0422 08:03:52.944402 140495987898176 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0422 08:03:52.945148 140495987898176 metric_logger.py:301] number parameters: 1.104 billion
I0422 08:04:04.384413 140495987898176 checkpointing.py:772] Waiting for step 0 to finish before checkpoint...
I0422 08:04:04.563305 140495987898176 checkpointing.py:776] Waited 0.17887282371520996 seconds for step 0 to finish before starting checkpointing.
I0422 08:04:04.565620 140495987898176 checkpoint_manager.py:2009] [process=2][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 08:04:04.567290 140495987898176 checkpoint_manager.py:1512] [process=2] Saving checkpoint at step 0
I0422 08:04:04.568584 140495987898176 event_tracking.py:70] [process=2] [sync] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/0.
I0422 08:04:08.127854 140495987898176 signaling_client.py:364] Using JaxDistributedSignalingClient
I0422 08:04:08.129228 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=1] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:04:09.978343 140495987898176 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 08:04:09.978438 140495987898176 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 08:04:10.259701 140495987898176 base_pytree_checkpoint_handler.py:154] [process=2][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.282604s
I0422 08:04:10.259902 140495987898176 base_pytree_checkpoint_handler.py:130] [process=2] /jax/orbax/write/blocking_gbytes_per_sec: 5.346 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.2885594367980957 s) (per-host)
I0422 08:04:10.259960 140495987898176 base_pytree_checkpoint_handler.py:768] [process=2][thread=MainThread] Initiated Pytree async_save. Time taken: 0.288628s (batch_requests_ready=0.002662s, total_serialization_initiated=0.285889s, others=0.000077s)
I0422 08:04:10.260061 140495987898176 composite_checkpoint_handler.py:715] [process=2][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 2.132291s (all_items=0.000034s, per_item={'items': '0.00003433'}, temp_paths=2.132257)
I0422 08:04:10.260108 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=1] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:04:10.260149 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=1] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:04:10.271404    2798 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0422 08:04:12.328279 140366678267648 array_metadata_store.py:203] [process=2][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/0/items/array_metadatas/process_2
I0422 08:04:44.364286 140495987898176 base_pytree_checkpoint_handler.py:130] [process=2] /jax/orbax/write/gbytes_per_sec: 45.928 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 34.392887115478516 s) (per-host)
I0422 08:04:48.809377 140495987898176 event_tracking.py:125] [process=2] [sync] Finished blocking save in 44.24 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/0.
I0422 08:04:53.571806 140495987898176 event_tracking.py:138] [process=2] [sync] Finished save in 49.00 seconds @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/0
I0422 08:04:53.573696 140495987898176 checkpoint_manager.py:2137] [process=2][thread=MainThread][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0422 08:04:53.575495 140495987898176 checkpoint_manager.py:2146] [process=2][thread=MainThread][step=0] CheckpointManager Save Finalize is done on all hosts.
I0422 08:04:53.575545 140495987898176 checkpoint_manager.py:1581] [process=2][thread=MainThread][step=0] Finished synchronous save.
I0422 08:04:53.575600 140495987898176 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': True, 'wait_for_prev_start_time': 1776845044.5656025, 'wait_for_prev_duration_secs': 5.91278076171875e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776845044.5673308, 'checkpointer_blocking_duration_secs': 49.00459623336792, 'get_old_steps_start_time': 1776845093.5719512, 'get_old_steps_duration_secs': 2.4318695068359375e-05, 'checkpoint_manager_blocking_start_time': 1776845044.5638444, 'checkpoint_manager_blocking_duration_secs': 49.01172590255737}
I0422 08:04:53.575689 140495987898176 checkpointing.py:410] Saved a checkpoint at step 0.
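The standard_logger save event above is a flat dict; because this case runs with enable_async_checkpointing=False, its blocking durations are also the time the train loop stalls. A small sketch pulling out the relevant fields of the step-0 event (values copied from the log line above):

```python
# Fields transcribed from the step-0 save event in the log above.
event = {
    "step": 0,
    "synchronous": True,
    "checkpointer_blocking_duration_secs": 49.00459623336792,
    "checkpoint_manager_blocking_duration_secs": 49.01172590255737,
}

# With synchronous (non-async) checkpointing the full save blocks training,
# which is why the next step's "seconds" reading absorbs the ~49 s save.
blocking = event["checkpointer_blocking_duration_secs"]
assert event["synchronous"]
print(f"step {event['step']} blocked training for {blocking:.2f} s")
```

This matches the "Finished save in 49.00 seconds" line and the 49.5 s reported for step 1 below, versus the ~0.15 s steady-state step time.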
I0422 08:04:53.575745 140495987898176 max_utils.py:750] 
Memstats: After params initialized:
I0422 08:04:53.575792 140495987898176 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_8(process=2,(0,2,0,0))
I0422 08:04:53.575822 140495987898176 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_9(process=2,(1,2,0,0))
I0422 08:04:53.575849 140495987898176 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_12(process=2,(0,3,0,0))
I0422 08:04:53.575872 140495987898176 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_13(process=2,(1,3,0,0))
I0422 08:04:53.891792 140495987898176 metric_logger.py:196] completed step: 0, seconds: 11.439, TFLOP/s/device: 1.188, Tokens/s/device: 179.034, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0422 08:04:54.481966 140495987898176 metric_logger.py:196] completed step: 1, seconds: 49.500, TFLOP/s/device: 0.274, Tokens/s/device: 41.374, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0422 08:04:54.629484 140495987898176 metric_logger.py:196] completed step: 2, seconds: 0.450, TFLOP/s/device: 30.167, Tokens/s/device: 4547.140, total_weights: 65536, loss: 10.560, lm_loss: 10.560, perplexity: 38547.719
I0422 08:04:54.776902 140495987898176 metric_logger.py:196] completed step: 3, seconds: 0.152, TFLOP/s/device: 89.537, Tokens/s/device: 13495.970, total_weights: 65536, loss: 9.980, lm_loss: 9.980, perplexity: 21590.506
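The metric_logger lines report both loss and perplexity, but the two are redundant: perplexity is exp(cross-entropy loss). A quick check against the step-0 values above (loss 10.874, perplexity 52779.875; the small mismatch comes from the loss being logged rounded to three decimals):

```python
import math

# Perplexity is exp(loss); verify against the logged step-0 values.
loss = 10.874           # rounded loss from the log
perplexity = 52779.875  # perplexity from the same log line

assert math.isclose(math.exp(loss), perplexity, rel_tol=5e-4)
print(math.exp(loss))
```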
I0422 08:04:54.781429 140495987898176 checkpointing.py:772] Waiting for step 5 to finish before checkpoint...
I0422 08:04:55.070913 140495987898176 checkpointing.py:776] Waited 0.2894461154937744 seconds for step 5 to finish before starting checkpointing.
I0422 08:04:55.073765 140495987898176 checkpoint_manager.py:2009] [process=2][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 08:04:55.075452 140495987898176 checkpoint_manager.py:1512] [process=2] Saving checkpoint at step 5
I0422 08:04:55.077126 140495987898176 event_tracking.py:70] [process=2] [sync] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/5.
I0422 08:04:58.858564 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=2] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:05:00.238579 140495987898176 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 08:05:00.238677 140495987898176 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 08:05:00.274747 140495987898176 base_pytree_checkpoint_handler.py:154] [process=2][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.037436s
I0422 08:05:00.274929 140495987898176 base_pytree_checkpoint_handler.py:130] [process=2] /jax/orbax/write/blocking_gbytes_per_sec: 37.358 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.04129195213317871 s) (per-host)
I0422 08:05:00.274986 140495987898176 base_pytree_checkpoint_handler.py:768] [process=2][thread=MainThread] Initiated Pytree async_save. Time taken: 0.041358s (batch_requests_ready=0.001861s, total_serialization_initiated=0.039423s, others=0.000074s)
I0422 08:05:00.275085 140495987898176 composite_checkpoint_handler.py:715] [process=2][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.417835s (all_items=0.000027s, per_item={'items': '0.00002718'}, temp_paths=1.417808)
I0422 08:05:00.275129 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=2] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:05:00.275166 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=2] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:05:02.186886 140366678267648 array_metadata_store.py:203] [process=2][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/5/items/array_metadatas/process_2
I0422 08:05:39.467065 140495987898176 base_pytree_checkpoint_handler.py:130] [process=2] /jax/orbax/write/gbytes_per_sec: 40.262 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 39.23338961601257 s) (per-host)
I0422 08:05:43.114797 140495987898176 event_tracking.py:125] [process=2] [sync] Finished blocking save in 48.04 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/5.
I0422 08:05:47.359539 140495987898176 event_tracking.py:138] [process=2] [sync] Finished save in 52.28 seconds @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/5
I0422 08:05:47.361454 140495987898176 checkpoint_manager.py:2137] [process=2][thread=MainThread][step=5] CheckpointManager Save Finalize is syncing with other hosts...
I0422 08:05:47.363417 140495987898176 checkpoint_manager.py:2146] [process=2][thread=MainThread][step=5] CheckpointManager Save Finalize is done on all hosts.
I0422 08:05:47.363467 140495987898176 checkpoint_manager.py:1581] [process=2][thread=MainThread][step=5] Finished synchronous save.
I0422 08:05:47.363523 140495987898176 standard_logger.py:34] {'step': 5, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': True, 'wait_for_prev_start_time': 1776845095.073744, 'wait_for_prev_duration_secs': 6.031990051269531e-05, 'time_between_consecutive_saves_sec': 1.4982106685638428, 'checkpointer_blocking_start_time': 1776845095.07549, 'checkpointer_blocking_duration_secs': 52.28417730331421, 'get_old_steps_start_time': 1776845147.3596895, 'get_old_steps_duration_secs': 3.790855407714844e-05, 'checkpoint_manager_blocking_start_time': 1776845095.071548, 'checkpoint_manager_blocking_duration_secs': 52.29194974899292}
I0422 08:05:47.363612 140495987898176 checkpointing.py:410] Saved a checkpoint at step 5.
I0422 08:05:47.364438 140495987898176 metric_logger.py:196] completed step: 4, seconds: 0.147, TFLOP/s/device: 92.155, Tokens/s/device: 13890.679, total_weights: 65536, loss: 9.460, lm_loss: 9.460, perplexity: 12840.355
I0422 08:05:47.381500 140495987898176 metric_logger.py:196] completed step: 5, seconds: 0.147, TFLOP/s/device: 92.204, Tokens/s/device: 13898.031, total_weights: 65536, loss: 8.959, lm_loss: 8.959, perplexity: 7776.467
I0422 08:06:09.882964 140495987898176 metric_logger.py:196] completed step: 6, seconds: 52.588, TFLOP/s/device: 0.258, Tokens/s/device: 38.945, total_weights: 65536, loss: 8.469, lm_loss: 8.469, perplexity: 4762.740
I0422 08:06:10.030769 140495987898176 metric_logger.py:196] completed step: 7, seconds: 22.366, TFLOP/s/device: 0.607, Tokens/s/device: 91.566, total_weights: 65536, loss: 8.003, lm_loss: 8.003, perplexity: 2989.151
I0422 08:06:10.178293 140495987898176 metric_logger.py:196] completed step: 8, seconds: 0.152, TFLOP/s/device: 89.140, Tokens/s/device: 13436.116, total_weights: 65536, loss: 7.572, lm_loss: 7.572, perplexity: 1943.098
I0422 08:06:10.325129 140495987898176 checkpointing.py:772] Waiting for step 9 to finish before checkpoint...
I0422 08:06:10.325857 140495987898176 checkpointing.py:776] Waited 0.0007460117340087891 seconds for step 9 to finish before starting checkpointing.
I0422 08:06:10.327908 140495987898176 checkpoint_manager.py:2009] [process=2][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 08:06:10.329895 140495987898176 checkpoint_manager.py:1512] [process=2] Saving checkpoint at step 9
I0422 08:06:10.331211 140495987898176 event_tracking.py:70] [process=2] [sync] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/9.
I0422 08:06:13.316981 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=3] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:06:15.184128 140495987898176 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 08:06:15.184224 140495987898176 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 08:06:15.220050 140495987898176 base_pytree_checkpoint_handler.py:154] [process=2][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.036995s
I0422 08:06:15.220219 140495987898176 base_pytree_checkpoint_handler.py:130] [process=2] /jax/orbax/write/blocking_gbytes_per_sec: 37.635 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.04098796844482422 s) (per-host)
I0422 08:06:15.220285 140495987898176 base_pytree_checkpoint_handler.py:768] [process=2][thread=MainThread] Initiated Pytree async_save. Time taken: 0.041062s (batch_requests_ready=0.001780s, total_serialization_initiated=0.039204s, others=0.000079s)
I0422 08:06:15.220414 140495987898176 composite_checkpoint_handler.py:715] [process=2][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.904814s (all_items=0.000025s, per_item={'items': '0.00002527'}, temp_paths=1.904789)
I0422 08:06:15.220472 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=3] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:06:15.220524 140495987898176 future.py:372] [process=2][thread=MainThread][operation_id=3] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 08:06:17.294601 140369876023040 array_metadata_store.py:203] [process=2][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/9/items/array_metadatas/process_2
I0422 08:06:54.568014 140495987898176 base_pytree_checkpoint_handler.py:130] [process=2] /jax/orbax/write/gbytes_per_sec: 40.103 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 39.38874626159668 s) (per-host)
I0422 08:06:57.695990 140495987898176 event_tracking.py:125] [process=2] [sync] Finished blocking save in 47.37 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/9.
I0422 08:07:01.837840 140495987898176 event_tracking.py:138] [process=2] [sync] Finished save in 51.51 seconds @ gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints/9
I0422 08:07:01.839885 140495987898176 checkpoint_manager.py:2137] [process=2][thread=MainThread][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0422 08:07:01.841584 140495987898176 checkpoint_manager.py:2146] [process=2][thread=MainThread][step=9] CheckpointManager Save Finalize is done on all hosts.
I0422 08:07:01.841635 140495987898176 checkpoint_manager.py:1581] [process=2][thread=MainThread][step=9] Finished synchronous save.
I0422 08:07:01.841689 140495987898176 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260422_071422/linen_xpk_main_20260422_071422_14_async_ckpt_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': True, 'wait_for_prev_start_time': 1776845170.3278913, 'wait_for_prev_duration_secs': 5.793571472167969e-05, 'time_between_consecutive_saves_sec': 22.964435577392578, 'checkpointer_blocking_start_time': 1776845170.3299341, 'checkpointer_blocking_duration_secs': 51.508028984069824, 'get_old_steps_start_time': 1776845221.837977, 'get_old_steps_duration_secs': 2.193450927734375e-05, 'checkpoint_manager_blocking_start_time': 1776845170.326112, 'checkpoint_manager_blocking_duration_secs': 51.51555013656616}
I0422 08:07:01.841818 140495987898176 checkpointing.py:410] Saved a checkpoint at step 9.
I0422 08:07:01.841872 140495987898176 checkpoint_manager.py:2009] [process=2][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 08:07:01.841918 140495987898176 checkpoint_manager.py:2009] [process=2][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 08:07:01.842619 140495987898176 metric_logger.py:196] completed step: 9, seconds: 0.148, TFLOP/s/device: 92.076, Tokens/s/device: 13878.724, total_weights: 65536, loss: 7.181, lm_loss: 7.181, perplexity: 1314.495
Per train step:
 Total TFLOPs: 13.59
 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Wed Apr 22 08:07:12 UTC 2026
EXIT_CODE=0
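The `standard_logger.py` lines in both runs emit each save event as a Python-style dict literal (step, `checkpointer_blocking_duration_secs`, and so on). A minimal sketch for extracting those fields when comparing the `main` and `test/pipeline-scan-nnx` logs — `parse_save_event` is a hypothetical helper, not part of MaxText or Orbax, and it assumes the dict starts at the first `{` as in the lines above:

```python
import ast


def parse_save_event(log_line: str) -> dict:
    """Extract the save-event dict from a standard_logger line.

    Hypothetical helper: assumes the event payload is a Python dict
    literal beginning at the first '{' in the line, as observed in
    the logs above. ast.literal_eval safely parses literals
    (numbers, strings, None, True/False) without executing code.
    """
    payload = log_line[log_line.index("{"):]
    return ast.literal_eval(payload)


# Example with fields observed in the log (values shortened for illustration):
line = (
    "I0422 08:07:01.841689 140495987898176 standard_logger.py:34] "
    "{'step': 9, 'event_type': 'save', 'synchronous': True, "
    "'preemption_received_at': None, "
    "'checkpointer_blocking_duration_secs': 51.508}"
)
event = parse_save_event(line)
print(event["step"], event["checkpointer_blocking_duration_secs"])
```

With events from both runs parsed this way, the blocking-save durations (51.51 s vs 51.94 s at step 9) can be diffed directly instead of read off by eye.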
XPK Start: Wed Apr 22 22:14:46 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
2026-04-22 22:15:11.145153: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0422 22:15:11.355818 138916018452288 max_utils.py:273] Attempting to initialize the jax distributed system...
I0422 22:15:20.398844 138916018452288 distributed.py:149] Starting JAX distributed service on [::]:8482
I0422 22:15:20.401191 138916018452288 distributed.py:172] Connecting to JAX distributed service on mt-14-async-ckpt-false--z93n3-slice-job-0-0.mt-14-async-ckpt-false--z93n3:8482
I0422 22:15:21.573841 138916018452288 max_utils.py:284] Jax distributed system initialized!
I0422 22:15:26.702132 138916018452288 max_utils.py:800] System Information: Jax Version: 0.9.2
I0422 22:15:26.702236 138916018452288 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0422 22:15:26.702277 138916018452288 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0422 22:15:26.702317 138916018452288 train_utils.py:361] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0422 22:15:27.398022 138916018452288 maxtext_utils.py:1565] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0422 22:15:27.398330 138916018452288 checkpointing.py:677] Setting up checkpoint logger...
I0422 22:15:27.398383 138916018452288 checkpointing.py:233] Creating checkpoint manager with ocdbt=True and zarr3=True
I0422 22:15:27.398427 138916018452288 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0422 22:15:27.398765 138916018452288 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7e573044d130>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0422 22:15:30.876454 138916018452288 checkpointing.py:265] Enabling policy for fixed interval checkpointing.
I0422 22:15:30.876703 138916018452288 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e42a8533ce0>}, handler_registry=None
I0422 22:15:30.876942 138916018452288 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e42a8533ce0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0422 22:15:30.876990 138916018452288 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7e42a8535190>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0422 22:15:30.877026 138916018452288 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e42a8533ce0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e42a8533ce0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7e42a8535190>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7e42a8535190>}).
I0422 22:15:30.877362 138916018452288 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0422 22:15:32.016659 138916018452288 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints
I0422 22:15:32.058280 138916018452288 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=False, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=5), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7e42a8534260>
I0422 22:15:32.058428 138916018452288 checkpointing.py:301] Checkpoint manager created!
I0422 22:15:32.993141 138916018452288 nnx_wrappers.py:453] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0422 22:15:32.993261 138916018452288 nnx_wrappers.py:453] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 22:15:33.373222 138916018452288 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0422 22:15:33.373316 138916018452288 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 22:15:33.389802 138916018452288 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0422 22:15:33.389865 138916018452288 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 22:15:33.413597 138916018452288 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 22:15:33.413664 138916018452288 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 22:15:33.430263 138916018452288 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 22:15:33.430328 138916018452288 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 22:15:33.446890 138916018452288 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 22:15:33.446952 138916018452288 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 22:15:33.471943 138916018452288 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0422 22:15:33.472016 138916018452288 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 22:15:33.493217 138916018452288 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0422 22:15:33.493286 138916018452288 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0422 22:15:33.703299 138916018452288 checkpointing.py:577] checkpoint manager exists so trying to load this run's existing checkpoint
I0422 22:15:33.703410 138916018452288 checkpointing.py:665] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0422 22:15:35.132592 138916018452288 maxtext_utils.py:1668]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  (None,)
I0422 22:15:35.132721 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0422 22:15:35.132774 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0422 22:15:35.132832 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  (None, None, 'fsdp')
I0422 22:15:35.132884 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0422 22:15:35.132922 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0422 22:15:35.132973 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0422 22:15:35.133027 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  (None, None, None, 'fsdp')
I0422 22:15:35.133066 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, None, None)
I0422 22:15:35.133125 138916018452288 maxtext_utils.py:1668]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0422 22:15:35.133178 138916018452288 maxtext_utils.py:1668]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', None)
I0422 22:15:35.133227 138916018452288 maxtext_utils.py:1668]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  (None, 'fsdp')
I0422 22:15:35.625232 138916018452288 train.py:155] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0422 22:15:35.625334 138916018452288 train.py:155] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0422 22:15:35.640844 138916018452288 train.py:162] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0422 22:15:35.640906 138916018452288 train.py:162] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0422 22:15:46.677766 138916018452288 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0422 22:15:46.678561 138916018452288 metric_logger.py:301] number parameters: 1.104 billion
I0422 22:15:58.515357 138916018452288 checkpointing.py:772] Waiting for step 0 to finish before checkpoint...
I0422 22:15:58.667711 138916018452288 checkpointing.py:776] Waited 0.15233373641967773 seconds for step 0 to finish before starting checkpointing.
I0422 22:15:58.670238 138916018452288 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 22:15:58.671821 138916018452288 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0422 22:15:58.673548 138916018452288 event_tracking.py:70] [process=6] [sync] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/0.
I0422 22:16:01.671791 138916018452288 signaling_client.py:364] Using JaxDistributedSignalingClient
I0422 22:16:01.673086 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=1] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:16:03.127886 138916018452288 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 22:16:03.127982 138916018452288 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 22:16:03.401810 138916018452288 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.274889s
I0422 22:16:03.401994 138916018452288 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 5.494 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.28075623512268066 s) (per-host)
I0422 22:16:03.402052 138916018452288 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.280823s (batch_requests_ready=0.002489s, total_serialization_initiated=0.278260s, others=0.000075s)
I0422 22:16:03.402171 138916018452288 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.730464s (all_items=0.000058s, per_item={'items': '0.00005770'}, temp_paths=1.730407)
I0422 22:16:03.402222 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=1] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:16:03.402266 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=1] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:16:03.415613    2737 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0422 22:16:05.922316 138788953122560 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/0/items/array_metadatas/process_6
I0422 22:16:37.519771 138916018452288 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 45.921 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 34.39849519729614 s) (per-host)
I0422 22:16:42.780587 138916018452288 event_tracking.py:125] [process=6] [sync] Finished blocking save in 44.11 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/0.
I0422 22:16:46.845862 138916018452288 event_tracking.py:138] [process=6] [sync] Finished save in 48.17 seconds @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/0
I0422 22:16:46.848512 138916018452288 checkpoint_manager.py:2137] [process=6][thread=MainThread][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0422 22:16:46.850353 138916018452288 checkpoint_manager.py:2146] [process=6][thread=MainThread][step=0] CheckpointManager Save Finalize is done on all hosts.
I0422 22:16:46.850404 138916018452288 checkpoint_manager.py:1581] [process=6][thread=MainThread][step=0] Finished synchronous save.
I0422 22:16:46.850461 138916018452288 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': True, 'wait_for_prev_start_time': 1776896158.6702204, 'wait_for_prev_duration_secs': 6.0558319091796875e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776896158.671906, 'checkpointer_blocking_duration_secs': 48.174073696136475, 'get_old_steps_start_time': 1776896206.846004, 'get_old_steps_duration_secs': 2.5272369384765625e-05, 'checkpoint_manager_blocking_start_time': 1776896158.6682467, 'checkpoint_manager_blocking_duration_secs': 48.18218445777893}
I0422 22:16:46.850553 138916018452288 checkpointing.py:410] Saved a checkpoint at step 0.
I0422 22:16:46.850596 138916018452288 max_utils.py:750] 
Memstats: After params initialized:
I0422 22:16:46.850639 138916018452288 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_24(process=6,(0,6,0,0))
I0422 22:16:46.850669 138916018452288 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_25(process=6,(1,6,0,0))
I0422 22:16:46.850694 138916018452288 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_28(process=6,(0,7,0,0))
I0422 22:16:46.850718 138916018452288 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_29(process=6,(1,7,0,0))
I0422 22:16:47.165581 138916018452288 metric_logger.py:196] completed step: 0, seconds: 11.837, TFLOP/s/device: 1.148, Tokens/s/device: 173.021, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0422 22:16:47.754491 138916018452288 metric_logger.py:196] completed step: 1, seconds: 48.643, TFLOP/s/device: 0.279, Tokens/s/device: 42.103, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0422 22:16:47.902077 138916018452288 metric_logger.py:196] completed step: 2, seconds: 0.448, TFLOP/s/device: 30.295, Tokens/s/device: 4566.465, total_weights: 65536, loss: 10.560, lm_loss: 10.560, perplexity: 38547.719
I0422 22:16:48.049438 138916018452288 metric_logger.py:196] completed step: 3, seconds: 0.152, TFLOP/s/device: 89.467, Tokens/s/device: 13485.395, total_weights: 65536, loss: 9.980, lm_loss: 9.980, perplexity: 21590.506
I0422 22:16:48.053678 138916018452288 checkpointing.py:772] Waiting for step 5 to finish before checkpoint...
I0422 22:16:48.343674 138916018452288 checkpointing.py:776] Waited 0.2899606227874756 seconds for step 5 to finish before starting checkpointing.
I0422 22:16:48.346531 138916018452288 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 22:16:48.348011 138916018452288 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 5
I0422 22:16:48.349826 138916018452288 event_tracking.py:70] [process=6] [sync] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/5.
I0422 22:16:50.782265 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=2] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:16:52.569413 138916018452288 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 22:16:52.569509 138916018452288 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 22:16:52.606283 138916018452288 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.038271s
I0422 22:16:52.606460 138916018452288 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 36.687 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.04204678535461426 s) (per-host)
I0422 22:16:52.606515 138916018452288 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.042111s (batch_requests_ready=0.002063s, total_serialization_initiated=0.039977s, others=0.000072s)
I0422 22:16:52.606615 138916018452288 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.825535s (all_items=0.000028s, per_item={'items': '0.00002837'}, temp_paths=1.825507)
I0422 22:16:52.606659 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=2] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:16:52.606697 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=2] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:16:55.009686 138788953122560 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/5/items/array_metadatas/process_6
I0422 22:17:31.429599 138916018452288 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 40.643 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 38.86514639854431 s) (per-host)
I0422 22:17:37.370396 138916018452288 event_tracking.py:125] [process=6] [sync] Finished blocking save in 49.02 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/5.
I0422 22:17:41.747212 138916018452288 event_tracking.py:138] [process=6] [sync] Finished save in 53.40 seconds @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/5
I0422 22:17:41.749162 138916018452288 checkpoint_manager.py:2137] [process=6][thread=MainThread][step=5] CheckpointManager Save Finalize is syncing with other hosts...
I0422 22:17:41.750829 138916018452288 checkpoint_manager.py:2146] [process=6][thread=MainThread][step=5] CheckpointManager Save Finalize is done on all hosts.
I0422 22:17:41.750885 138916018452288 checkpoint_manager.py:1581] [process=6][thread=MainThread][step=5] Finished synchronous save.
I0422 22:17:41.750941 138916018452288 standard_logger.py:34] {'step': 5, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': True, 'wait_for_prev_start_time': 1776896208.3465102, 'wait_for_prev_duration_secs': 6.151199340820312e-05, 'time_between_consecutive_saves_sec': 1.496117353439331, 'checkpointer_blocking_start_time': 1776896208.3480482, 'checkpointer_blocking_duration_secs': 53.39929437637329, 'get_old_steps_start_time': 1776896261.747363, 'get_old_steps_duration_secs': 2.6464462280273438e-05, 'checkpoint_manager_blocking_start_time': 1776896208.3443384, 'checkpoint_manager_blocking_duration_secs': 53.40657639503479}
I0422 22:17:41.751032 138916018452288 checkpointing.py:410] Saved a checkpoint at step 5.
I0422 22:17:41.751850 138916018452288 metric_logger.py:196] completed step: 4, seconds: 0.148, TFLOP/s/device: 92.102, Tokens/s/device: 13882.675, total_weights: 65536, loss: 9.460, lm_loss: 9.460, perplexity: 12840.355
I0422 22:17:41.768640 138916018452288 metric_logger.py:196] completed step: 5, seconds: 0.147, TFLOP/s/device: 92.188, Tokens/s/device: 13895.580, total_weights: 65536, loss: 8.959, lm_loss: 8.959, perplexity: 7776.467
I0422 22:18:04.675010 138916018452288 metric_logger.py:196] completed step: 6, seconds: 53.702, TFLOP/s/device: 0.253, Tokens/s/device: 38.136, total_weights: 65536, loss: 8.469, lm_loss: 8.469, perplexity: 4762.740
I0422 22:18:04.822372 138916018452288 metric_logger.py:196] completed step: 7, seconds: 22.771, TFLOP/s/device: 0.597, Tokens/s/device: 89.938, total_weights: 65536, loss: 8.003, lm_loss: 8.003, perplexity: 2989.151
I0422 22:18:04.969815 138916018452288 metric_logger.py:196] completed step: 8, seconds: 0.152, TFLOP/s/device: 89.385, Tokens/s/device: 13473.064, total_weights: 65536, loss: 7.572, lm_loss: 7.572, perplexity: 1943.098
I0422 22:18:05.116382 138916018452288 checkpointing.py:772] Waiting for step 9 to finish before checkpoint...
I0422 22:18:05.117029 138916018452288 checkpointing.py:776] Waited 0.00067138671875 seconds for step 9 to finish before starting checkpointing.
I0422 22:18:05.119334 138916018452288 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 22:18:05.121020 138916018452288 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 9
I0422 22:18:05.122427 138916018452288 event_tracking.py:70] [process=6] [sync] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/9.
I0422 22:18:07.872259 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=3] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:18:09.309973 138916018452288 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 22:18:09.310071 138916018452288 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 22:18:09.350906 138916018452288 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.042050s
I0422 22:18:09.351086 138916018452288 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 33.660 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.04582858085632324 s) (per-host)
I0422 22:18:09.351151 138916018452288 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.045905s (batch_requests_ready=0.001889s, total_serialization_initiated=0.043935s, others=0.000081s)
I0422 22:18:09.351253 138916018452288 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 1.480309s (all_items=0.000041s, per_item={'items': '0.00004077'}, temp_paths=1.480269)
I0422 22:18:09.351295 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=3] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:18:09.351356 138916018452288 future.py:372] [process=6][thread=MainThread][operation_id=3] _SignalingThread.join() waiting for signals ([]) blocking the main thread will slow down blocking save times. This is likely due to main thread calling result() on a CommitFuture.
I0422 22:18:11.824390 138793814324992 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/9/items/array_metadatas/process_6
I0422 22:18:48.410407 138916018452288 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 40.394 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 39.1051139831543 s) (per-host)
I0422 22:18:52.718608 138916018452288 event_tracking.py:125] [process=6] [sync] Finished blocking save in 47.60 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/9.
I0422 22:18:57.063739 138916018452288 event_tracking.py:138] [process=6] [sync] Finished save in 51.94 seconds @ gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints/9
I0422 22:18:57.066158 138916018452288 checkpoint_manager.py:2137] [process=6][thread=MainThread][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0422 22:18:57.067442 138916018452288 checkpoint_manager.py:2146] [process=6][thread=MainThread][step=9] CheckpointManager Save Finalize is done on all hosts.
I0422 22:18:57.067493 138916018452288 checkpoint_manager.py:1581] [process=6][thread=MainThread][step=9] Finished synchronous save.
I0422 22:18:57.067543 138916018452288 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_test_pipeline_scan_nnx_20260422_212603/linen_xpk_test_pipeline_scan_nnx_20260422_212603_14_async_ckpt_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': True, 'wait_for_prev_start_time': 1776896285.1193154, 'wait_for_prev_duration_secs': 5.817413330078125e-05, 'time_between_consecutive_saves_sec': 23.368441343307495, 'checkpointer_blocking_start_time': 1776896285.121058, 'checkpointer_blocking_duration_secs': 51.94280743598938, 'get_old_steps_start_time': 1776896337.0638802, 'get_old_steps_duration_secs': 2.002716064453125e-05, 'checkpoint_manager_blocking_start_time': 1776896285.1172762, 'checkpoint_manager_blocking_duration_secs': 51.95024347305298}
I0422 22:18:57.067636 138916018452288 checkpointing.py:410] Saved a checkpoint at step 9.
I0422 22:18:57.067669 138916018452288 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 22:18:57.067699 138916018452288 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 22:18:57.068412 138916018452288 metric_logger.py:196] completed step: 9, seconds: 0.150, TFLOP/s/device: 90.342, Tokens/s/device: 13617.293, total_weights: 65536, loss: 7.181, lm_loss: 7.181, perplexity: 1314.495
Per train step:
 Total TFLOPs: 13.59 
 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Wed Apr 22 22:19:06 UTC 2026
EXIT_CODE=0