XPK Start: Thu Apr 23 09:50:33 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-23 09:50:58.330230: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0423 09:50:58.541360 138825087305536 max_utils.py:273] Attempting to initialize the jax distributed system...
I0423 09:51:07.583580 138825087305536 distributed.py:149] Starting JAX distributed service on [::]:8482
I0423 09:51:07.585988 138825087305536 distributed.py:172] Connecting to JAX distributed service on mt-05-fp8-0kw3o-slice-job-0-0.mt-05-fp8-0kw3o:8482
I0423 09:51:09.091899 138825087305536 max_utils.py:284] Jax distributed system initialized!
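The run is now attached to the JAX distributed service advertised above. A minimal sketch of equivalent initialization, assuming 8 hosts with 4 chips each (32 devices, consistent with the mesh and memstats later in the log); on Cloud TPU these arguments are usually auto-detected, and MaxText's max_utils.py wrapper may pass different options:

    import jax

    # Coordinator address as logged; process_id/num_processes are illustrative
    # for this host (process 6 of 8) and are normally auto-detected on TPU.
    jax.distributed.initialize(
        coordinator_address="mt-05-fp8-0kw3o-slice-job-0-0.mt-05-fp8-0kw3o:8482",
        num_processes=8,
        process_id=6,
    )
    print(jax.device_count())        # 32 global devices
    print(jax.local_device_count())  # 4 devices local to this host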
I0423 09:51:15.150042 138825087305536 max_utils.py:800] System Information: Jax Version: 0.9.2
I0423 09:51:15.150166 138825087305536 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0423 09:51:15.150209 138825087305536 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0423 09:51:15.150250 138825087305536 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0423 09:51:15.849210 138825087305536 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 09:51:15.849797 138825087305536 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 09:51:15.849979 138825087305536 checkpointing.py:688] Setting up checkpoint logger...
I0423 09:51:15.850029 138825087305536 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0423 09:51:15.850073 138825087305536 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0423 09:51:15.850423 138825087305536 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7e4220918050>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0423 09:51:18.811202 138825087305536 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0423 09:51:18.811453 138825087305536 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e2da040a960>}, handler_registry=None
I0423 09:51:18.811695 138825087305536 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e2da040a960>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0423 09:51:18.811743 138825087305536 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7e2da040c3b0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 09:51:18.811780 138825087305536 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e2da040a960>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7e2da040a960>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7e2da040c3b0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7e2da040c3b0>}).
I0423 09:51:18.812119 138825087305536 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 09:51:18.812193 138825087305536 async_checkpointer.py:192] [process=6][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7e2cd01e9e40> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 09:51:19.696192 138825087305536 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints
I0423 09:51:19.727934 138825087305536 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7e2da040be30>
I0423 09:51:19.728071 138825087305536 checkpointing.py:302] Checkpoint manager created!
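The manager above is configured with OCDBT + Zarr3 storage, async checkpointing, and an effective save interval of 10 steps (FixedIntervalPolicy(interval=10)). A rough Orbax sketch of the same setup; the placeholder path stands in for the gs://lance-maxtext/... run directory, and MaxText's checkpointing.py sets further options (e.g. the 'metrics' item) not shown here:

    import orbax.checkpoint as ocp

    options = ocp.CheckpointManagerOptions(
        save_interval_steps=10,           # matches FixedIntervalPolicy(interval=10)
        enable_async_checkpointing=True,  # writes finish on a background thread
        create=True,
    )
    mngr = ocp.CheckpointManager(
        "gs://<bucket>/<run>/checkpoints",  # placeholder for the logged GCS path
        options=options,
        item_handlers={
            "items": ocp.PyTreeCheckpointHandler(use_ocdbt=True, use_zarr3=True),
        },
    )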
I0423 09:51:20.825339 138825087305536 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 09:51:20.825449 138825087305536 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 09:51:21.214681 138825087305536 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 09:51:21.214776 138825087305536 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 09:51:21.231836 138825087305536 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 09:51:21.231895 138825087305536 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 09:51:21.284602 138825087305536 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:51:21.284690 138825087305536 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:51:21.301831 138825087305536 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:51:21.301895 138825087305536 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:51:21.318913 138825087305536 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:51:21.318976 138825087305536 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:51:21.345054 138825087305536 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 09:51:21.345135 138825087305536 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:51:21.395857 138825087305536 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 09:51:21.395936 138825087305536 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0423 09:51:21.840169 138825087305536 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 09:51:21.840291 138825087305536 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
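The Logical/Physical pairs above and below come from Flax logical partitioning: each activation or weight axis carries a logical name, and a rule table maps it onto the 1-D fsdp mesh of 32 devices (all other axes stay replicated). A small illustration of that mapping, with a hypothetical rule table covering only the three axes of the first activation shown:

    import jax
    import numpy as np
    from jax.sharding import Mesh, NamedSharding
    from flax import linen as nn

    # 1-D mesh matching "fsdp: 32" in the log.
    mesh = Mesh(np.array(jax.devices()).reshape(32), ("fsdp",))

    # Hypothetical subset of the logical-to-physical rules implied by the log:
    # batch-like axes shard over 'fsdp', the rest are replicated.
    rules = (
        ("activation_batch", "fsdp"),
        ("activation_norm_length", None),
        ("activation_embed", None),
    )

    logical_axes = ("activation_batch", "activation_norm_length", "activation_embed")
    spec = nn.logical_to_mesh_axes(logical_axes, rules)
    print(spec)                           # PartitionSpec('fsdp', None, None), as logged
    sharding = NamedSharding(mesh, spec)  # physical sharding for a [32, 2048, 2048] array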
I0423 09:51:23.760398 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760526 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760574 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760611 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760645 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760677 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760708 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760738 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760766 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760796 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760824 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760854 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760884 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760911 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760939 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.760967 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.760995 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761021 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761055 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761124 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761175 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761216 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761255 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761294 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761332 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761369 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761407 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761443 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761485 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761523 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761555 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761582 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761608 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761634 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761659 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761685 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761711 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/input_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761737 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/input_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761763 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/kernel_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761790 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/kernel_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
I0423 09:51:23.761816 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/output_grad_amax_history
Shape: float32[16,1024]
Logical: P()
Physical: ()
I0423 09:51:23.761841 138825087305536 maxtext_utils.py:1874] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/output_grad_scale
Shape: float32[16,1]
Logical: P()
Physical: ()
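All of the _overwrite_with_gradient entries above are FP8 delayed-scaling statistics rather than model weights: for each matmul there is an amax history of shape [16, 1024] (16 scanned layers, history length 1024) and a scale of shape [16, 1] for the input, kernel, and output gradient. An illustrative per-tensor update under delayed scaling (not MaxText's actual code; implementations also differ on whether "scale" stores the quantize or dequantize factor):

    import jax.numpy as jnp

    E4M3_MAX = 448.0  # largest finite value of float8_e4m3fn

    def delayed_scaling_step(x, amax_history):
        """Quantize one tensor to FP8 using the amax history, then refresh it."""
        amax = jnp.max(amax_history)                   # statistics from previous steps
        q_scale = E4M3_MAX / jnp.maximum(amax, 1e-12)  # quantization scale
        x_fp8 = jnp.clip(x * q_scale, -E4M3_MAX, E4M3_MAX).astype(jnp.float8_e4m3fn)
        new_history = jnp.roll(amax_history, 1).at[0].set(jnp.max(jnp.abs(x)))
        return x_fp8, 1.0 / q_scale, new_history       # dequant factor is 1/q_scale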
I0423 09:51:23.761891 138825087305536 maxtext_utils.py:1874] params/params/decoder/decoder_norm/scale
Shape: float32[2048]
Logical: P('norm',)
Physical: (None,)
I0423 09:51:23.761951 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/mlp/wi_0/kernel
Shape: float32[2048,16,7168]
Logical: P('embed', 'layers', 'mlp')
Physical: ('fsdp', None, None)
I0423 09:51:23.761988 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/mlp/wi_1/kernel
Shape: float32[2048,16,7168]
Logical: P('embed', 'layers', 'mlp')
Physical: ('fsdp', None, None)
I0423 09:51:23.762034 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/mlp/wo/kernel
Shape: float32[7168,16,2048]
Logical: P('mlp', 'layers', 'embed')
Physical: (None, None, 'fsdp')
I0423 09:51:23.762079 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/post_self_attention_layer_norm/scale
Shape: float32[2048,16]
Logical: P('norm', 'layers')
Physical: (None, None)
I0423 09:51:23.762154 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/pre_self_attention_layer_norm/scale
Shape: float32[2048,16]
Logical: P('norm', 'layers')
Physical: (None, None)
I0423 09:51:23.762213 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/self_attention/key/kernel
Shape: float32[2048,16,16,128]
Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim')
Physical: ('fsdp', None, None, None)
I0423 09:51:23.762267 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/self_attention/out/kernel
Shape: float32[16,16,128,2048]
Logical: P('heads', 'layers', 'kv', 'embed')
Physical: (None, None, None, 'fsdp')
I0423 09:51:23.762304 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/self_attention/query/kernel
Shape: float32[2048,16,16,128]
Logical: P('embed', 'layers', 'q_heads', 'kv')
Physical: ('fsdp', None, None, None)
I0423 09:51:23.762337 138825087305536 maxtext_utils.py:1874] params/params/decoder/layers/self_attention/value/kernel
Shape: float32[2048,16,16,128]
Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim')
Physical: ('fsdp', None, None, None)
I0423 09:51:23.762381 138825087305536 maxtext_utils.py:1874] params/params/decoder/logits_dense/kernel
Shape: float32[2048,32000]
Logical: P('embed_vocab', 'vocab')
Physical: ('fsdp', None)
I0423 09:51:23.762425 138825087305536 maxtext_utils.py:1874] params/params/token_embedder/embedding
Shape: float32[32000,2048]
Logical: P('vocab', 'embed_vocab')
Physical: (None, 'fsdp')
I0423 09:51:25.382460 138825087305536 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 09:51:25.382558 138825087305536 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 09:51:25.398348 138825087305536 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 09:51:25.398408 138825087305536 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 09:51:39.148524 138825087305536 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 09:51:39.149465 138825087305536 metric_logger.py:301] number parameters: 1.105 billion
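The 1.105 billion figure can be reproduced from the shapes dumped above (16 scanned layers, embed 2048, mlp 7168, 16 heads of dim 128, vocab 32000), including the small FP8 amax/scale statistics; a quick cross-check:

    # Parameter-count cross-check from the logged shapes.
    layers, embed, mlp, heads, head_dim, vocab = 16, 2048, 7168, 16, 128, 32000

    per_layer = (
        3 * embed * mlp                  # wi_0, wi_1, wo
        + 4 * embed * heads * head_dim   # query, key, value, out projections
        + 2 * embed                      # pre/post attention layer norms
    )
    core = layers * per_layer + 2 * vocab * embed + embed  # + embedder, logits, final norm
    fp8_stats = 7 * (3 * layers * 1024 + 3 * layers)       # amax histories + scales

    print(core / 1e9, (core + fp8_stats) / 1e9)  # ~1.104 and ~1.105 billion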
I0423 09:51:54.411923 138825087305536 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0423 09:51:54.584414 138825087305536 checkpointing.py:798] Waited 0.17247366905212402 seconds for step 0 to finish before starting checkpointing.
I0423 09:51:54.586767 138825087305536 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 09:51:54.588713 138825087305536 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0423 09:51:54.590119 138825087305536 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/0.
I0423 09:51:54.922884 138825087305536 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 09:51:54.923828 138825087305536 jax_array_handlers.py:360] Scheduling D2H of 81 prioritized jax.Array.
I0423 09:51:54.923887 138825087305536 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:51:55.331158 138825087305536 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.408391s
I0423 09:51:55.331336 138825087305536 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 3.701 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.41681694984436035 s) (per-host)
I0423 09:51:55.331393 138825087305536 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.416884s (batch_requests_ready=0.003637s, total_serialization_initiated=0.413172s, others=0.000075s)
I0423 09:51:55.331501 138825087305536 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.421484s (all_items=0.000018s, per_item={'items': '0.00001836'}, temp_paths=0.421465)
I0423 09:51:55.332354 138825087305536 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.74 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/0.
I0423 09:51:55.332662 138695060870912 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:11:55.332630
I0423 09:51:55.345950 138825087305536 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:51:55.346273 138694522914560 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 09:51:55.346429 138825087305536 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776937914.5867486, 'wait_for_prev_duration_secs': 6.031990051269531e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776937914.5887527, 'checkpointer_blocking_duration_secs': 0.7440574169158936, 'get_old_steps_start_time': 1776937915.3328357, 'get_old_steps_duration_secs': 3.0040740966796875e-05, 'checkpoint_manager_blocking_start_time': 1776937914.5849845, 'checkpoint_manager_blocking_duration_secs': 0.7614061832427979}
I0423 09:51:55.346554 138825087305536 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
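The pattern visible here (a sub-second blocking device-to-host copy, then a background async_save thread finishing the GCS write) is the usual Orbax async-save flow. A hypothetical sketch of the loop-side calls, reusing the mngr object from the earlier checkpoint-manager sketch; MaxText's training loop additionally saves a 'metrics' item:

    import orbax.checkpoint as ocp

    # Inside the training loop: `state` is the train-state pytree, `step` the step index.
    # save() blocks only for the device-to-host copy; serialization to GCS continues
    # on the async_save background thread logged above.
    if mngr.should_save(step):
        mngr.save(step, args=ocp.args.Composite(items=ocp.args.PyTreeSave(state)))

    # Before restoring or exiting, block until the background write has finalized.
    mngr.wait_until_finished()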
I0423 09:51:55.346610 138825087305536 max_utils.py:750]
Memstats: After params initialized:
I0423 09:51:55.346661 138825087305536 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_24(process=6,(0,6,0,0))
I0423 09:51:55.346694 138825087305536 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_25(process=6,(1,6,0,0))
I0423 09:51:55.346722 138825087305536 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_28(process=6,(0,7,0,0))
I0423 09:51:55.346745 138825087305536 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_29(process=6,(1,7,0,0))
I0423 09:51:55.668955 138825087305536 metric_logger.py:196] completed step: 0, seconds: 15.262, TFLOP/s/device: 0.890, Tokens/s/device: 134.186, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52776.805
I0423 09:51:55.854650 138825087305536 metric_logger.py:196] completed step: 1, seconds: 1.255, TFLOP/s/device: 10.827, Tokens/s/device: 1631.938, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52761.840
I0423 09:51:56.277973 138825087305536 metric_logger.py:196] completed step: 2, seconds: 0.028, TFLOP/s/device: 482.172, Tokens/s/device: 72678.236, total_weights: 65536, loss: 10.816, lm_loss: 10.816, perplexity: 49832.398
I0423 09:51:56.436296 138825087305536 metric_logger.py:196] completed step: 3, seconds: 0.423, TFLOP/s/device: 32.091, Tokens/s/device: 4837.171, total_weights: 65536, loss: 10.431, lm_loss: 10.431, perplexity: 33901.820
I0423 09:51:56.753606 138825087305536 metric_logger.py:196] completed step: 4, seconds: 0.164, TFLOP/s/device: 82.786, Tokens/s/device: 12478.370, total_weights: 65536, loss: 9.992, lm_loss: 9.992, perplexity: 21847.639
I0423 09:51:56.759927 138825087305536 metric_logger.py:196] completed step: 5, seconds: 0.158, TFLOP/s/device: 85.763, Tokens/s/device: 12927.171, total_weights: 65536, loss: 9.549, lm_loss: 9.549, perplexity: 14035.006
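The throughput columns are consistent with the per-step cost summarized at the end of the log (13.59 TFLOPs per device per step) and a 65536-token global batch spread over 32 devices; for a steady-state step such as step 5 (0.158 s):

    # Throughput cross-check for a steady-state step (differences are rounding).
    tflops_per_step_per_device = 13.59        # "Per train step: Total TFLOPs" below
    tokens_per_step_per_device = 65536 / 32   # global batch tokens / 32 devices = 2048
    step_seconds = 0.158

    print(tflops_per_step_per_device / step_seconds)  # ~86.0  (log: 85.763 TFLOP/s/device)
    print(tokens_per_step_per_device / step_seconds)  # ~12962 (log: 12927 tokens/s/device)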
I0423 09:51:59.451875 2911 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 09:52:01.926505 138694531307264 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 81 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/0/items/array_metadatas/process_6
I0423 09:52:16.299363 138695060870912 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 73.873 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 21.38481092453003 s) (per-host)
I0423 09:52:16.299477 138695060870912 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 20.966703s.
I0423 09:52:22.476861 138825087305536 metric_logger.py:196] completed step: 6, seconds: 0.318, TFLOP/s/device: 42.773, Tokens/s/device: 6447.206, total_weights: 65536, loss: 9.111, lm_loss: 9.111, perplexity: 9058.688
I0423 09:52:22.635300 138825087305536 metric_logger.py:196] completed step: 7, seconds: 25.560, TFLOP/s/device: 0.532, Tokens/s/device: 80.125, total_weights: 65536, loss: 8.685, lm_loss: 8.685, perplexity: 5914.720
I0423 09:52:22.793838 138825087305536 metric_logger.py:196] completed step: 8, seconds: 0.163, TFLOP/s/device: 83.321, Tokens/s/device: 12559.024, total_weights: 65536, loss: 8.281, lm_loss: 8.281, perplexity: 3950.010
I0423 09:52:22.799377 138825087305536 checkpointing.py:794] Waiting for step 10 to finish before checkpoint...
I0423 09:52:23.109883 138825087305536 checkpointing.py:798] Waited 0.3104727268218994 seconds for step 10 to finish before starting checkpointing.
I0423 09:52:23.112922 138825087305536 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 09:52:24.570378 138695060870912 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 29.237587s.
I0423 09:52:24.570718 138694522914560 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:52:24.570850 138694522914560 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:52:24.570899 138694522914560 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:52:24.573132 138694522914560 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 09:52:24.573325 138825087305536 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0423 09:52:24.573462 138825087305536 checkpoint_manager.py:1452] Waiting for previous save to complete took 1.460543 seconds. If this number is high, consider checkpointing less frequently.
I0423 09:52:24.575304 138825087305536 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 10
I0423 09:52:24.577322 138825087305536 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/10.
I0423 09:52:25.292291 138825087305536 jax_array_handlers.py:360] Scheduling D2H of 81 prioritized jax.Array.
I0423 09:52:25.292390 138825087305536 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:52:25.347011 138825087305536 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.055698s
I0423 09:52:25.347198 138825087305536 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 24.835 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.06212019920349121 s) (per-host)
I0423 09:52:25.347253 138825087305536 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.062186s (batch_requests_ready=0.003315s, total_serialization_initiated=0.058797s, others=0.000074s)
I0423 09:52:25.347359 138825087305536 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.066195s (all_items=0.000016s, per_item={'items': '0.00001574'}, temp_paths=0.066180)
I0423 09:52:25.348076 138825087305536 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.77 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/10.
I0423 09:52:25.348376 138694522914560 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:12:25.348352
I0423 09:52:25.350284 138825087305536 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=10] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:52:25.350579 138692941309696 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 09:52:25.350747 138825087305536 standard_logger.py:34] {'step': 10, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776937943.112891, 'wait_for_prev_duration_secs': 1.460543155670166, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776937944.5753443, 'checkpointer_blocking_duration_secs': 0.7731344699859619, 'get_old_steps_start_time': 1776937945.348508, 'get_old_steps_duration_secs': 2.956390380859375e-05, 'checkpoint_manager_blocking_start_time': 1776937943.110752, 'checkpoint_manager_blocking_duration_secs': 2.2399609088897705}
I0423 09:52:25.350874 138825087305536 checkpointing.py:409] Started an asynchronous checkpoint save for step 10
I0423 09:52:25.351504 138825087305536 metric_logger.py:196] completed step: 9, seconds: 0.158, TFLOP/s/device: 85.843, Tokens/s/device: 12939.177, total_weights: 65536, loss: 7.908, lm_loss: 7.908, perplexity: 2720.116
I0423 09:52:25.358489 138825087305536 metric_logger.py:196] completed step: 10, seconds: 0.159, TFLOP/s/device: 85.397, Tokens/s/device: 12871.922, total_weights: 65536, loss: 7.568, lm_loss: 7.568, perplexity: 1936.198
I0423 09:52:25.516685 138825087305536 metric_logger.py:196] completed step: 11, seconds: 2.557, TFLOP/s/device: 5.314, Tokens/s/device: 800.997, total_weights: 65536, loss: 7.288, lm_loss: 7.288, perplexity: 1463.272
I0423 09:52:26.083756 138825087305536 metric_logger.py:196] completed step: 12, seconds: 0.007, TFLOP/s/device: 2086.476, Tokens/s/device: 314496.314, total_weights: 65536, loss: 7.032, lm_loss: 7.032, perplexity: 1132.061
I0423 09:52:26.241920 138825087305536 metric_logger.py:196] completed step: 13, seconds: 0.563, TFLOP/s/device: 24.150, Tokens/s/device: 3640.216, total_weights: 65536, loss: 6.815, lm_loss: 6.815, perplexity: 911.031
I0423 09:52:26.400151 138825087305536 metric_logger.py:196] completed step: 14, seconds: 0.163, TFLOP/s/device: 83.239, Tokens/s/device: 12546.636, total_weights: 65536, loss: 6.635, lm_loss: 6.635, perplexity: 761.080
I0423 09:52:26.558839 138825087305536 metric_logger.py:196] completed step: 15, seconds: 0.158, TFLOP/s/device: 85.919, Tokens/s/device: 12950.632, total_weights: 65536, loss: 6.492, lm_loss: 6.492, perplexity: 659.915
I0423 09:52:26.717083 138825087305536 metric_logger.py:196] completed step: 16, seconds: 0.158, TFLOP/s/device: 85.855, Tokens/s/device: 12941.058, total_weights: 65536, loss: 6.380, lm_loss: 6.380, perplexity: 589.948
I0423 09:52:26.875297 138825087305536 metric_logger.py:196] completed step: 17, seconds: 0.159, TFLOP/s/device: 85.668, Tokens/s/device: 12912.744, total_weights: 65536, loss: 6.293, lm_loss: 6.293, perplexity: 540.941
I0423 09:52:27.033605 138825087305536 metric_logger.py:196] completed step: 18, seconds: 0.158, TFLOP/s/device: 85.858, Tokens/s/device: 12941.466, total_weights: 65536, loss: 6.223, lm_loss: 6.223, perplexity: 504.108
I0423 09:52:27.191203 138825087305536 checkpointing.py:794] Waiting for step 19 to finish before checkpoint...
I0423 09:52:27.192159 138825087305536 checkpointing.py:798] Waited 0.000978708267211914 seconds for step 19 to finish before starting checkpointing.
I0423 09:52:27.194272 138825087305536 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=10][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 09:52:32.760695 138691825604352 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 81 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/10/items/array_metadatas/process_6
I0423 09:53:08.802304 138694522914560 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 36.302 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 43.51717138290405 s) (per-host)
I0423 09:53:08.802580 138694522914560 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 43.454125s.
I0423 09:53:17.119449 138694522914560 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 51.770977s.
I0423 09:53:17.119768 138692941309696 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:53:17.119882 138692941309696 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:53:17.119929 138692941309696 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=10] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:53:17.121424 138692941309696 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=10] CheckpointManager Save Finalize is done on all hosts.
I0423 09:53:17.121588 138825087305536 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=10][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=10.
W0423 09:53:17.121710 138825087305536 checkpoint_manager.py:1452] Waiting for previous save to complete took 49.927451 seconds. If this number is high, consider checkpointing less frequently.
I0423 09:53:17.123332 138825087305536 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 19
I0423 09:53:17.125597 138825087305536 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/19.
I0423 09:53:17.435825 138825087305536 jax_array_handlers.py:360] Scheduling D2H of 81 prioritized jax.Array.
I0423 09:53:17.435924 138825087305536 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:53:17.489456 138825087305536 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.054586s
I0423 09:53:17.489628 138825087305536 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 25.317 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.06093788146972656 s) (per-host)
I0423 09:53:17.489677 138825087305536 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.060997s (batch_requests_ready=0.003269s, total_serialization_initiated=0.057665s, others=0.000064s)
I0423 09:53:17.489777 138825087305536 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.065082s (all_items=0.000011s, per_item={'items': '0.00001097'}, temp_paths=0.065071)
I0423 09:53:17.490460 138825087305536 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.37 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/19.
I0423 09:53:17.490800 138692941309696 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:13:17.490762
I0423 09:53:17.492832 138825087305536 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=19] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:53:17.493067 138691825604352 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 09:53:17.493211 138825087305536 standard_logger.py:34] {'step': 19, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776937947.1942344, 'wait_for_prev_duration_secs': 49.92745113372803, 'time_between_consecutive_saves_sec': 2.6210644245147705, 'checkpointer_blocking_start_time': 1776937997.1233714, 'checkpointer_blocking_duration_secs': 0.3675723075866699, 'get_old_steps_start_time': 1776937997.490961, 'get_old_steps_duration_secs': 2.5272369384765625e-05, 'checkpoint_manager_blocking_start_time': 1776937947.1924357, 'checkpoint_manager_blocking_duration_secs': 50.300743103027344}
I0423 09:53:17.493321 138825087305536 checkpointing.py:409] Started an asynchronous checkpoint save for step 19
I0423 09:53:17.493365 138825087305536 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=19][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 09:53:24.033885 138690802218752 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 81 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_05_fp8/checkpoints/19/items/array_metadatas/process_6
I0423 09:54:00.166035 138692941309696 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 36.964 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 42.73730444908142 s) (per-host)
I0423 09:54:00.166180 138692941309696 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 42.675267s.
I0423 09:54:08.985200 138692941309696 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 51.494272s.
I0423 09:54:08.985448 138691825604352 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:54:08.985517 138691825604352 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:54:08.985597 138691825604352 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=19] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:54:08.987064 138691825604352 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=19] CheckpointManager Save Finalize is done on all hosts.
I0423 09:54:08.987228 138825087305536 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=19][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=19.
I0423 09:54:08.987381 138825087305536 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 09:54:08.988214 138825087305536 metric_logger.py:196] completed step: 19, seconds: 0.158, TFLOP/s/device: 85.923, Tokens/s/device: 12951.287, total_weights: 65536, loss: 6.167, lm_loss: 6.167, perplexity: 476.947
Per train step:
Total TFLOPs: 13.59
split as 93.93% learnable weight flops and 6.07% attention flops
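That split is roughly what a back-of-the-envelope estimate gives under common assumptions (6 x matmul-parameters x tokens for forward+backward weight FLOPs, excluding the embedding lookup, plus causal self-attention FLOPs); this is not necessarily the exact accounting MaxText uses, but the numbers land close to the reported breakdown:

    # Rough per-device, per-step FLOPs estimate (2048 tokens per device).
    params_matmul = 1.104e9 - 65.5e6   # learnable params minus the embedding table
    tokens = 2048
    weight_flops = 6 * params_matmul * tokens          # fwd + bwd matmuls
    attn_flops = 0.5 * 12 * 16 * tokens**2 * 16 * 128  # causal attention, 16 layers
    total = weight_flops + attn_flops

    print(weight_flops / 1e12, attn_flops / 1e12, total / 1e12)  # ~12.8, ~0.8, ~13.6 TFLOPs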
XPK End: Thu Apr 23 09:54:18 UTC 2026
EXIT_CODE=0