Log Summary

XPK Start: Sat Apr 18 06:15:12 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2026-04-18 06:15:37.312469: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0418 06:15:37.493368 139655138232128 max_utils.py:273] Attempting to initialize the jax distributed system...
I0418 06:15:46.535242 139655138232128 distributed.py:149] Starting JAX distributed service on [::]:8482
I0418 06:15:46.537690 139655138232128 distributed.py:172] Connecting to JAX distributed service on mt-04-int8-djpul-slice-job-0-0.mt-04-int8-djpul:8482
I0418 06:15:48.134695 139655138232128 max_utils.py:284] Jax distributed system initialized!
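The two lines above are JAX's standard multi-host bring-up: one process hosts the coordinator service on port 8482 and every process connects to it. A minimal sketch of the same call follows (MaxText's max_utils wraps this; on Cloud TPU all three arguments are normally auto-detected, and the process counts below are assumptions, not values taken from this log):

    import jax

    # Bring up the JAX distributed service. On Cloud TPU these
    # arguments can usually be omitted entirely.
    jax.distributed.initialize(
        coordinator_address="mt-04-int8-djpul-slice-job-0-0.mt-04-int8-djpul:8482",
        num_processes=8,   # assumption: 32 chips spread across 8 hosts
        process_id=6,      # this log comes from process 6
    )
    print(jax.process_index(), jax.device_count())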
I0418 06:15:53.381285 139655138232128 max_utils.py:800] System Information: Jax Version: 0.9.2
I0418 06:15:53.381388 139655138232128 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0418 06:15:53.381428 139655138232128 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0418 06:15:53.381464 139655138232128 train_utils.py:378] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0418 06:15:54.433964 139655138232128 maxtext_utils.py:1718] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0418 06:15:54.434542 139655138232128 maxtext_utils.py:1718] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
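The 13-axis shape (1, 1, 1, 32, 1, ...) places all 32 devices on a single parallelism axis; the later "fsdp: 32" line confirms it is the FSDP axis. A minimal sketch of an equivalent mesh (this collapses the size-1 axes; MaxText's real mesh keeps all 13 named axes, whose exact names are not shown in this log):

    import jax
    from jax.experimental import mesh_utils
    from jax.sharding import Mesh

    # All 32 devices on one 'fsdp' axis, equivalent to the logged
    # (1, 1, 1, 32, 1, ...) shape with the size-1 axes dropped.
    devices = mesh_utils.create_device_mesh((32,))
    mesh = Mesh(devices, ("fsdp",))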
I0418 06:15:54.434857 139655138232128 checkpointing.py:688] Setting up checkpoint logger...
I0418 06:15:54.434914 139655138232128 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0418 06:15:54.434956 139655138232128 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0418 06:15:54.435278 139655138232128 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7f03475481d0>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0418 06:15:57.588247 139655138232128 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0418 06:15:57.588593 139655138232128 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f023ac95700>}, handler_registry=None
I0418 06:15:57.588845 139655138232128 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f023ac95700>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0418 06:15:57.588894 139655138232128 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f023add7200>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0418 06:15:57.588929 139655138232128 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f023ac95700>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f023ac95700>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f023add7200>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f023add7200>}).
I0418 06:15:57.589246 139655138232128 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0418 06:15:57.589315 139655138232128 async_checkpointer.py:192] [process=6][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7eeec84ef740> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0418 06:16:00.001181 139655138232128 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints
I0418 06:16:00.912124 139655138232128 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7f023ac94860>
I0418 06:16:00.912299 139655138232128 checkpointing.py:302] Checkpoint manager created!
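The options dump above boils down to: async saves enabled, a FixedIntervalPolicy(interval=10) save cadence, OCDBT + zarr3 storage (set at the PyTree handler level), and a 20-minute barrier timeout. A rough equivalent using the public orbax.checkpoint API (paths are placeholders, and MaxText's checkpointing.py configures more than this sketch shows):

    import orbax.checkpoint as ocp

    options = ocp.CheckpointManagerOptions(
        save_interval_steps=10,           # mirrors FixedIntervalPolicy(interval=10)
        enable_async_checkpointing=True,  # commits happen on a background thread
    )
    mngr = ocp.CheckpointManager(
        "gs://my-bucket/run/checkpoints",  # hypothetical; the real path is in the log
        options=options,
    )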
I0418 06:16:02.081090 139655138232128 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0418 06:16:02.081190 139655138232128 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0418 06:16:02.467809 139655138232128 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0418 06:16:02.467912 139655138232128 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0418 06:16:02.484341 139655138232128 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0418 06:16:02.484398 139655138232128 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0418 06:16:02.550425 139655138232128 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0418 06:16:02.550511 139655138232128 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0418 06:16:02.567021 139655138232128 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0418 06:16:02.567081 139655138232128 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0418 06:16:02.583773 139655138232128 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0418 06:16:02.583853 139655138232128 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0418 06:16:02.608445 139655138232128 attentions.py:1197] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0418 06:16:02.608515 139655138232128 attentions.py:1197] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0418 06:16:02.674335 139655138232128 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0418 06:16:02.674414 139655138232128 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0418 06:16:03.193494 139655138232128 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0418 06:16:03.193607 139655138232128 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
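This restore-or-initialize branch is the usual CheckpointManager pattern; a sketch, with init_train_state as a hypothetical stand-in for MaxText's state setup:

    latest = mngr.latest_step()        # None here: "Found 0 checkpoint steps"
    if latest is not None:
        state = mngr.restore(latest)   # resume this run's existing checkpoint
    else:
        state = init_train_state()     # fall through to fresh initialization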
fsdp: 32
I0418 06:16:05.237080 139655138232128 maxtext_utils.py:1821]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  (None,)
I0418 06:16:05.237203 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0418 06:16:05.237256 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0418 06:16:05.237313 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  (None, None, 'fsdp')
I0418 06:16:05.237365 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0418 06:16:05.237403 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0418 06:16:05.237455 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0418 06:16:05.237508 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  (None, None, None, 'fsdp')
I0418 06:16:05.237548 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, None, None)
I0418 06:16:05.237584 139655138232128 maxtext_utils.py:1821]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0418 06:16:05.237631 139655138232128 maxtext_utils.py:1821]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', None)
I0418 06:16:05.237676 139655138232128 maxtext_utils.py:1821]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  (None, 'fsdp')
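Every entry in this dump follows one mapping: the logical axis 'embed' (like the activation batch axes earlier) shards over 'fsdp', and every other logical axis is replicated. Flax's logical partitioning utilities express this directly; the rule list below is abbreviated and inferred from the dump, not copied from MaxText's config:

    import flax.linen as nn

    # Rules inferred from the dump: only 'embed' maps to a mesh axis.
    rules = (("embed", "fsdp"), ("layers", None), ("mlp", None), ("norm", None))
    logical = ("embed", "layers", "mlp")            # e.g. mlp/wi_0/kernel
    print(nn.logical_to_mesh_axes(logical, rules))  # -> PartitionSpec('fsdp', None, None)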
I0418 06:16:07.088831 139655138232128 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0418 06:16:07.088926 139655138232128 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0418 06:16:07.104424 139655138232128 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0418 06:16:07.104483 139655138232128 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
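Here xent is the per-token cross-entropy and z_loss is the auxiliary term that keeps the softmax normalizer near zero. A sketch of the common PaLM-style formulation (the coefficient is illustrative and this is not lifted from MaxText's train.py):

    import jax
    import jax.numpy as jnp

    def xent_and_z_loss(logits, targets, z_loss_coef=1e-4):
        log_z = jax.scipy.special.logsumexp(logits, axis=-1)   # log partition function
        log_probs = logits - log_z[..., None]
        xent = -jnp.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
        z_loss = z_loss_coef * jnp.square(log_z)               # penalizes |log Z|
        return xent, z_loss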
I0418 06:16:20.281158 139655138232128 max_utils.py:791] Total memory size: 1.8 GB, Output size: 0.4 GB, Temp size: 1.4 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0418 06:16:20.281982 139655138232128 metric_logger.py:301] number parameters: 1.104 billion
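The 1.104 billion figure checks out against the shapes dumped above (embed 2048, mlp 7168, 16 layers, 16 heads of dim 128, vocab 32000):

    # Parameter-count arithmetic from the logged shapes.
    embed  = 32_000 * 2_048                 # token_embedder       ~65.5 M
    logits = 2_048 * 32_000                 # logits_dense         ~65.5 M
    mlp    = 3 * (2_048 * 7_168)            # wi_0 + wi_1 + wo, per layer
    attn   = 4 * (2_048 * 16 * 128)         # query/key/value/out, per layer
    norms  = 2 * 2_048                      # pre/post attention norms, per layer
    total  = embed + logits + 16 * (mlp + attn + norms) + 2_048  # + decoder_norm
    print(total / 1e9)                      # -> ~1.104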
I0418 06:16:35.350025 139655138232128 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0418 06:16:35.594563 139655138232128 checkpointing.py:798] Waited 0.2445213794708252 seconds for step 0 to finish before starting checkpointing.
I0418 06:16:35.597553 139655138232128 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0418 06:16:35.599435 139655138232128 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0418 06:16:35.600867 139655138232128 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints/0.
I0418 06:16:36.758458 139655138232128 signaling_client.py:364] Using JaxDistributedSignalingClient
I0418 06:16:36.759403 139655138232128 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0418 06:16:36.759461 139655138232128 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0418 06:16:37.064249 139655138232128 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.305864s
I0418 06:16:37.064414 139655138232128 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 4.956 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.3112499713897705 s) (per-host)
I0418 06:16:37.064465 139655138232128 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.311313s (batch_requests_ready=0.002209s, total_serialization_initiated=0.309033s, others=0.000071s)
I0418 06:16:37.064554 139655138232128 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.317778s (all_items=0.000018s, per_item={'items': '0.00001836'}, temp_paths=0.317760)
I0418 06:16:37.065324 139655138232128 event_tracking.py:125] [process=6] [async] Finished blocking save in 1.47 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints/0.
I0418 06:16:37.065663 139524478850816 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-18 06:36:37.065624
I0418 06:16:37.067689 139655138232128 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0418 06:16:37.068005 139523911067392 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0418 06:16:37.068137 139655138232128 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776492995.5975342, 'wait_for_prev_duration_secs': 6.437301635742188e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776492995.5994744, 'checkpointer_blocking_duration_secs': 1.4663572311401367, 'get_old_steps_start_time': 1776492997.0658572, 'get_old_steps_duration_secs': 3.0994415283203125e-05, 'checkpoint_manager_blocking_start_time': 1776492995.5951161, 'checkpoint_manager_blocking_duration_secs': 1.472982406616211}
I0418 06:16:37.068291 139655138232128 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
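The pattern behind the step-0 save: the blocking part (about 1.5 s here) is only the device-to-host copy, after which the GCS write continues on the async_save thread for another ~43 s. In Orbax terms, reusing the manager from the earlier sketch (ocp.args.StandardSave assumes the newer args-based API):

    # Non-blocking: returns once arrays are staged on the host.
    mngr.save(step, args=ocp.args.StandardSave(state))
    # ... training continues while the background thread writes to GCS ...
    mngr.wait_until_finished()  # join the async_save thread before relying on the ckpt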
I0418 06:16:37.068342 139655138232128 max_utils.py:750] 
Memstats: After params initialized:
I0418 06:16:37.068391 139655138232128 max_utils.py:756] 	Using (GB) 0.44 / 31.25 (1.408000%) on TPU_24(process=6,(0,6,0,0))
I0418 06:16:37.068422 139655138232128 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_25(process=6,(1,6,0,0))
I0418 06:16:37.068460 139655138232128 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_28(process=6,(0,7,0,0))
I0418 06:16:37.068485 139655138232128 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_29(process=6,(1,7,0,0))
I0418 06:16:37.389843 139655138232128 metric_logger.py:196] completed step: 0, seconds: 15.068, TFLOP/s/device: 0.902, Tokens/s/device: 135.918, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52796.277
I0418 06:16:37.641008 139655138232128 metric_logger.py:196] completed step: 1, seconds: 2.038, TFLOP/s/device: 6.666, Tokens/s/device: 1004.788, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52796.277
I0418 06:16:38.066019 139655138232128 metric_logger.py:196] completed step: 2, seconds: 0.022, TFLOP/s/device: 618.271, Tokens/s/device: 93192.574, total_weights: 65536, loss: 10.262, lm_loss: 10.262, perplexity: 28634.820
I0418 06:16:38.296772 139655138232128 metric_logger.py:196] completed step: 3, seconds: 0.425, TFLOP/s/device: 31.941, Tokens/s/device: 4814.507, total_weights: 65536, loss: 9.731, lm_loss: 9.731, perplexity: 16827.199
I0418 06:16:38.757309 139655138232128 metric_logger.py:196] completed step: 4, seconds: 0.235, TFLOP/s/device: 57.864, Tokens/s/device: 8721.871, total_weights: 65536, loss: 9.272, lm_loss: 9.272, perplexity: 10638.531
I0418 06:16:38.763423 139655138232128 metric_logger.py:196] completed step: 5, seconds: 0.231, TFLOP/s/device: 58.787, Tokens/s/device: 8861.083, total_weights: 65536, loss: 8.887, lm_loss: 8.887, perplexity: 7233.941
I0418 06:16:41.415647    2880 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0418 06:16:43.877037 139524455331584 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints/0/items/array_metadatas/process_6
I0418 06:17:09.419642 139655138232128 metric_logger.py:196] completed step: 6, seconds: 0.461, TFLOP/s/device: 29.481, Tokens/s/device: 4443.769, total_weights: 65536, loss: 8.588, lm_loss: 8.588, perplexity: 5365.468
I0418 06:17:09.650327 139655138232128 metric_logger.py:196] completed step: 7, seconds: 30.428, TFLOP/s/device: 0.447, Tokens/s/device: 67.306, total_weights: 65536, loss: 8.380, lm_loss: 8.380, perplexity: 4358.779
I0418 06:17:09.881041 139655138232128 metric_logger.py:196] completed step: 8, seconds: 0.234, TFLOP/s/device: 58.037, Tokens/s/device: 8747.912, total_weights: 65536, loss: 8.251, lm_loss: 8.251, perplexity: 3831.453
I0418 06:17:09.906788 139524478850816 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 47.645 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 33.15358805656433 s) (per-host)
I0418 06:17:09.906922 139524478850816 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 32.841147s.
I0418 06:17:10.109930 139655138232128 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0418 06:17:10.110698 139655138232128 checkpointing.py:798] Waited 0.0007851123809814453 seconds for step 9 to finish before starting checkpointing.
I0418 06:17:10.112875 139655138232128 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0418 06:17:21.709805 139524478850816 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 44.644014s.
I0418 06:17:21.710108 139523911067392 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0418 06:17:21.710241 139523911067392 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0418 06:17:21.710292 139523911067392 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0418 06:17:21.712769 139523911067392 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0418 06:17:21.712972 139655138232128 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0418 06:17:21.713109 139655138232128 checkpoint_manager.py:1452] Waiting for previous save to complete took 11.600235 seconds. If this number is high, consider checkpointing less frequently.
I0418 06:17:21.715085 139655138232128 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 9
I0418 06:17:21.717232 139655138232128 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints/9.
I0418 06:17:22.431860 139655138232128 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0418 06:17:22.431952 139655138232128 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0418 06:17:22.470129 139655138232128 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.039418s
I0418 06:17:22.470266 139655138232128 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 36.026 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.04281806945800781 s) (per-host)
I0418 06:17:22.470311 139655138232128 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.042873s (batch_requests_ready=0.001793s, total_serialization_initiated=0.041019s, others=0.000061s)
I0418 06:17:22.470393 139655138232128 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.046895s (all_items=0.000015s, per_item={'items': '0.00001478'}, temp_paths=0.046880)
I0418 06:17:22.471057 139655138232128 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.76 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints/9.
I0418 06:17:22.471423 139523911067392 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-18 06:37:22.471386
I0418 06:17:22.473427 139655138232128 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0418 06:17:22.473721 139499321386752 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0418 06:17:22.473894 139655138232128 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776493030.1128435, 'wait_for_prev_duration_secs': 11.600234985351562, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776493041.7151227, 'checkpointer_blocking_duration_secs': 0.7564740180969238, 'get_old_steps_start_time': 1776493042.4716215, 'get_old_steps_duration_secs': 3.0517578125e-05, 'checkpoint_manager_blocking_start_time': 1776493030.1109943, 'checkpoint_manager_blocking_duration_secs': 12.362865209579468}
I0418 06:17:22.474003 139655138232128 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0418 06:17:22.474045 139655138232128 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0418 06:17:31.590474 139524455331584 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141/linen_xpk_feat_nnx_trainstate_and_training_loop_20260418_060141_04_int8/checkpoints/9/items/array_metadatas/process_6
I0418 06:18:06.746337 139523911067392 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 35.642 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 44.31883454322815 s) (per-host)
I0418 06:18:06.746500 139523911067392 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 44.274958s.
I0418 06:18:17.245983 139523911067392 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 54.774429s.
I0418 06:18:17.246256 139499321386752 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0418 06:18:17.246378 139499321386752 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0418 06:18:17.246429 139499321386752 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0418 06:18:17.248119 139499321386752 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0418 06:18:17.248301 139655138232128 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0418 06:18:17.248456 139655138232128 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0418 06:18:17.249387 139655138232128 metric_logger.py:196] completed step: 9, seconds: 0.230, TFLOP/s/device: 58.965, Tokens/s/device: 8887.809, total_weights: 65536, loss: 8.175, lm_loss: 8.175, perplexity: 3552.333
Per train step:
 Total TFLOPs: 13.59 
 split as 93.93% learnable weight flops and 6.07% attention flops
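The summary reconciles with the per-step metric lines: 13.59 TFLOPs per device per step, divided by the steady-state step time, gives the logged throughput. Using step 9 as the example:

    tflops_per_step = 13.59               # per device, from the summary above
    step_seconds    = 0.230               # step 9
    print(tflops_per_step / step_seconds)       # ~59.1 (logged: 58.965 TFLOP/s/device)
    tokens_per_step = 65_536              # total_weights = global tokens per step
    print(tokens_per_step / 32 / step_seconds)  # ~8904 (logged: 8887.8 tokens/s/device)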
XPK End: Sat Apr 18 06:18:28 UTC 2026
EXIT_CODE=0