feat/nnx-trainstate-and-training-loop

| Metric | Linen f093f3730 | NNX f093f3730 | Diff (NNX − Linen) |
|---|---|---|---|
| Parameters | 1.105 billion | 1.104 billion | — |
| Final loss | 6.1670 | 10.5490 | +4.382 |
| TFLOP/s | 85.801 | 86.581 | +0.78 |
| Tok/s | 12932.8 | 13050.5 | +117.7 |
| Avg s/step | 1.597 | 1.671 | +0.074 |
| Memory % | 1.44 | 1.44 | 0 |
| JAX | 0.9.2 | 0.9.2 | — |
Diff = NNX value − Linen value. Green = NNX improved. Red = NNX regressed.
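The Diff column is a straight subtraction of the two value columns. A minimal check, using the values from the table above:

```python
# Recompute the Diff (NNX − Linen) column from the table's value columns.
linen = {"final_loss": 6.1670, "tflop_s": 85.801, "tok_s": 12932.8, "avg_s_step": 1.597}
nnx = {"final_loss": 10.5490, "tflop_s": 86.581, "tok_s": 13050.5, "avg_s_step": 1.671}

# Round to the precision shown in the table.
diff = {metric: round(nnx[metric] - linen[metric], 3) for metric in linen}
print(diff)  # final_loss diff is +4.382, avg_s_step diff is +0.074
```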
Run log (Linen run), startup and environment:

```
XPK Start: Fri Apr 24 09:22:26 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-24 09:22:51.727103: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0424 09:22:51.937585 140196258953024 max_utils.py:273] Attempting to initialize the jax distributed system...
I0424 09:23:00.979468 140196258953024 distributed.py:149] Starting JAX distributed service on [::]:8482
I0424 09:23:00.981872 140196258953024 distributed.py:172] Connecting to JAX distributed service on mt-05-fp8-py7s1-slice-job-0-0.mt-05-fp8-py7s1:8482
I0424 09:23:02.958386 140196258953024 max_utils.py:284] Jax distributed system initialized!
I0424 09:23:09.067210 140196258953024 max_utils.py:800] System Information: Jax Version: 0.9.2
I0424 09:23:09.067317 140196258953024 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0424 09:23:09.067361 140196258953024 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0424 09:23:09.067397 140196258953024 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
```
Checkpoint-manager setup and activation-sharding annotations:

```
I0424 09:23:09.757509 140196258953024 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 09:23:09.758111 140196258953024 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 09:23:09.758301 140196258953024 checkpointing.py:688] Setting up checkpoint logger...
I0424 09:23:09.758353 140196258953024 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0424 09:23:09.758396 140196258953024 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0424 09:23:09.758750 140196258953024 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7f817c319070>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0424 09:23:12.816049 140196258953024 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0424 09:23:12.816284 140196258953024 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f6c204ae6c0>}, handler_registry=None
I0424 09:23:12.816519 140196258953024 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f6c204ae6c0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0424 09:23:12.816566 140196258953024 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f6d040c9dc0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0424 09:23:12.816602 140196258953024 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f6c204ae6c0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f6c204ae6c0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f6d040c9dc0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f6d040c9dc0>}).
I0424 09:23:12.816959 140196258953024 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0424 09:23:12.817031 140196258953024 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7f6c404c9d00> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0424 09:23:13.853906 140196258953024 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints
I0424 09:23:13.856138 140196258953024 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7f6d040c7c20>
I0424 09:23:13.856249 140196258953024 checkpointing.py:302] Checkpoint manager created!
I0424 09:23:14.961193 140196258953024 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0424 09:23:14.961307 140196258953024 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 09:23:15.347503 140196258953024 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0424 09:23:15.347598 140196258953024 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 09:23:15.369243 140196258953024 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0424 09:23:15.369337 140196258953024 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 09:23:15.422025 140196258953024 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 09:23:15.422108 140196258953024 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 09:23:15.438721 140196258953024 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 09:23:15.438780 140196258953024 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 09:23:15.455378 140196258953024 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 09:23:15.455437 140196258953024 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 09:23:15.480733 140196258953024 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0424 09:23:15.480798 140196258953024 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 09:23:15.531865 140196258953024 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0424 09:23:15.532038 140196258953024 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0424 09:23:15.990643 140196258953024 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0424 09:23:15.990800 140196258953024 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
```
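The paired `Logical:`/`Physical:` lines come from resolving named logical axes through a sharding-rules table onto mesh axes; axes with no matching rule stay replicated. A simplified, self-contained sketch of that lookup (the rules table here is illustrative, not MaxText's actual configuration):

```python
# Map logical axis names to physical mesh axes via a rules table,
# mirroring the Logical:/Physical: pairs in the log above.
# RULES is illustrative; unknown axes resolve to None (replicated).
RULES = {
    "activation_batch": "fsdp",
    "activation_kv_batch": "fsdp",
    "embed": "fsdp",
}

def logical_to_physical(logical_axes):
    """Resolve each logical axis to a mesh axis name, or None (replicated)."""
    return tuple(RULES.get(axis) for axis in logical_axes)

print(logical_to_physical(
    ("activation_batch", "activation_norm_length", "activation_embed")))
# -> ('fsdp', None, None), as in the Unknown Logical/Physical pair above
```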
Parameter sharding dump (FP8 scale state and model weights):

```
fsdp: 32
I0424 09:23:17.912533 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.912668 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.912720 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.912759 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.912795 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.912828 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_0/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.912860 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.912891 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.912920 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.912949 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.912984 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913017 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wi_1/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913047 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913076 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913105 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913133 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913162 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913191 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/mlp/wo/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913218 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913246 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913273 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913300 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913327 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913354 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/key/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913381 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913407 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913434 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913461 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913488 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913516 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/out/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913542 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913569 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913596 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913622 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913648 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913691 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/query/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913719 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/input_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913746 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/input_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913775 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/kernel_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913804 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/kernel_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913831 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/output_grad_amax_history Shape: float32[16,1024] Logical: P() Physical: ()
I0424 09:23:17.913859 140196258953024 maxtext_utils.py:1835] params/_overwrite_with_gradient/decoder/layers/self_attention/value/Fp8DirectDotGeneralOp_0/output_grad_scale Shape: float32[16,1] Logical: P() Physical: ()
I0424 09:23:17.913910 140196258953024 maxtext_utils.py:1835] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0424 09:23:17.913988 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0424 09:23:17.914032 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0424 09:23:17.914082 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0424 09:23:17.914126 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0424 09:23:17.914158 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0424 09:23:17.914222 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0424 09:23:17.914296 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0424 09:23:17.914336 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0424 09:23:17.914370 140196258953024 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0424 09:23:17.914414 140196258953024 maxtext_utils.py:1835] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0424 09:23:17.914458 140196258953024 maxtext_utils.py:1835] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0424 09:23:19.536472 140196258953024 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0424 09:23:19.536570 140196258953024 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
```
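Each `Fp8DirectDotGeneralOp_0` in the dump carries an `amax_history` and a `scale` for its inputs, kernel, and output gradients. In the usual delayed-scaling recipe, the scale is derived from the maximum of a rolling amax window. A generic sketch of that update (448 is the standard FP8 E4M3 maximum; the exact update rule and scale convention here are the common recipe, not necessarily MaxText's code):

```python
# Delayed FP8 scaling: keep a rolling window of observed absolute maxima
# (amax_history) and derive the quantization scale from its maximum.
E4M3_MAX = 448.0  # largest representable magnitude in FP8 E4M3

def update_scale(amax_history, new_amax):
    """Shift the newest amax into the window and recompute the scale."""
    amax_history = amax_history[1:] + [new_amax]  # fixed-length rolling window
    amax = max(amax_history)
    scale = amax / E4M3_MAX  # values are divided by scale before the FP8 cast
    return amax_history, scale

history = [0.0] * 4  # in the log, each history holds 1024 entries per layer
history, scale = update_scale(history, 224.0)
print(scale)  # 224.0 / 448.0 = 0.5
```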
First checkpoint save and per-step metrics:

```
I0424 09:23:19.552015 140196258953024 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0424 09:23:19.552076 140196258953024 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0424 09:23:33.320062 140196258953024 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0424 09:23:33.320875 140196258953024 metric_logger.py:301] number parameters: 1.105 billion
I0424 09:23:48.616636 140196258953024 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0424 09:23:48.786233 140196258953024 checkpointing.py:798] Waited 0.1695706844329834 seconds for step 0 to finish before starting checkpointing.
I0424 09:23:48.788624 140196258953024 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 09:23:48.790459 140196258953024 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0424 09:23:48.791959 140196258953024 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/0.
I0424 09:23:49.134750 140196258953024 signaling_client.py:364] Using JaxDistributedSignalingClient
I0424 09:23:49.135808 140196258953024 jax_array_handlers.py:360] Scheduling D2H of 81 prioritized jax.Array.
I0424 09:23:49.135923 140196258953024 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 09:23:49.546353 140196258953024 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.411734s
I0424 09:23:49.546519 140196258953024 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 3.675 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.4197962284088135 s) (per-host)
I0424 09:23:49.546569 140196258953024 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.419856s (batch_requests_ready=0.003329s, total_serialization_initiated=0.416460s, others=0.000067s)
I0424 09:23:49.546662 140196258953024 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.423874s (all_items=0.000017s, per_item={'items': '0.00001740'}, temp_paths=0.423856)
I0424 09:23:49.547414 140196258953024 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.76 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/0.
I0424 09:23:49.547762 140066552411904 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 09:43:49.547725
I0424 09:23:49.563384 140196258953024 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 09:23:49.563730 140066051565312 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0424 09:23:49.563892 140196258953024 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777022628.788606, 'wait_for_prev_duration_secs': 7.176399230957031e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777022628.7904968, 'checkpointer_blocking_duration_secs': 0.757415771484375, 'get_old_steps_start_time': 1777022629.5479352, 'get_old_steps_duration_secs': 3.600120544433594e-05, 'checkpoint_manager_blocking_start_time': 1777022628.7868502, 'checkpoint_manager_blocking_duration_secs': 0.7769999504089355}
I0424 09:23:49.564009 140196258953024 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0424 09:23:49.564061 140196258953024 max_utils.py:750] Memstats: After params initialized:
I0424 09:23:49.564113 140196258953024 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_18(process=5,(2,4,0,0))
I0424 09:23:49.564147 140196258953024 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_19(process=5,(3,4,0,0))
I0424 09:23:49.564174 140196258953024 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_22(process=5,(2,5,0,0))
I0424 09:23:49.564198 140196258953024 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_23(process=5,(3,5,0,0))
I0424 09:23:49.879487 140196258953024 metric_logger.py:196] completed step: 0, seconds: 15.296, TFLOP/s/device: 0.888, Tokens/s/device: 133.894, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52776.805
I0424 09:23:50.063965 140196258953024 metric_logger.py:196] completed step: 1, seconds: 1.261, TFLOP/s/device: 10.776, Tokens/s/device: 1624.213, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52761.840
I0424 09:23:50.493400 140196258953024 metric_logger.py:196] completed step: 2, seconds: 0.027, TFLOP/s/device: 505.699, Tokens/s/device: 76224.505, total_weights: 65536, loss: 10.816, lm_loss: 10.816, perplexity: 49832.398
I0424 09:23:50.651612 140196258953024 metric_logger.py:196] completed step: 3, seconds: 0.430, TFLOP/s/device: 31.616, Tokens/s/device: 4765.506, total_weights: 65536, loss: 10.431, lm_loss: 10.431, perplexity: 33901.820
I0424 09:23:50.969015 140196258953024 metric_logger.py:196] completed step: 4, seconds: 0.164, TFLOP/s/device: 82.807, Tokens/s/device: 12481.564, total_weights: 65536, loss: 9.992, lm_loss: 9.992, perplexity: 21847.639
I0424 09:23:50.975491 140196258953024 metric_logger.py:196] completed step: 5, seconds: 0.158, TFLOP/s/device: 85.943, Tokens/s/device: 12954.318, total_weights: 65536, loss: 9.549, lm_loss: 9.549, perplexity: 14035.006
I0424 09:23:52.793588 2816 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0424 09:23:54.883628 140066059958016 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 81 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/0/items/array_metadatas/process_5
I0424 09:24:10.252106 140066552411904 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 74.780 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 21.125354766845703 s) (per-host)
I0424 09:24:10.252225 140066552411904 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 20.704350s.
I0424 09:24:15.155403 140196258953024 metric_logger.py:196] completed step: 6, seconds: 0.318, TFLOP/s/device: 42.740, Tokens/s/device: 6442.257, total_weights: 65536, loss: 9.111, lm_loss: 9.111, perplexity: 9058.688
I0424 09:24:15.313784 140196258953024 metric_logger.py:196] completed step: 7, seconds: 24.023, TFLOP/s/device: 0.566, Tokens/s/device: 85.252, total_weights: 65536, loss: 8.685, lm_loss: 8.685, perplexity: 5914.720
I0424 09:24:15.472202 140196258953024 metric_logger.py:196] completed step: 8, seconds: 0.163, TFLOP/s/device: 83.206, Tokens/s/device: 12541.719, total_weights: 65536, loss: 8.281, lm_loss: 8.281, perplexity: 3950.010
I0424 09:24:15.477278 140196258953024 checkpointing.py:794] Waiting for step 10 to finish before checkpoint...
I0424 09:24:15.789172 140196258953024 checkpointing.py:798] Waited 0.3118577003479004 seconds for step 10 to finish before starting checkpointing.
I0424 09:24:15.791851 140196258953024 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 09:24:16.937820 140066552411904 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 27.389929s.
I0424 09:24:16.938132 140066051565312 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 09:24:16.938246 140066051565312 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0424 09:24:16.938296 140066051565312 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
```
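The per-step metrics are simple derivations: perplexity is exp(loss), and Tokens/s/device is the per-device token count over step time. A check against the step-5 entry above (65536 total tokens per step, 32 devices, 0.158 s):

```python
import math

# Derive the logged metrics from the step-5 log entry.
total_tokens = 65536   # total_weights per step (32 batch x 2048 sequence length)
num_devices = 32
step_seconds = 0.158
loss = 9.549

tokens_per_device_per_s = total_tokens / num_devices / step_seconds
perplexity = math.exp(loss)  # perplexity is exp of the cross-entropy loss

print(round(tokens_per_device_per_s, 1))  # ~12962, close to the logged 12954.318
print(round(perplexity))  # ~14031; the log shows 14035.006 (logged loss is rounded)
```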
I0424 09:24:16.939941 140066051565312 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0424 09:24:16.940117 140196258953024 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0424 09:24:16.940255 140196258953024 checkpoint_manager.py:1452] Waiting for previous save to complete took 1.148402 seconds. If this number is high, consider checkpointing less frequently.
I0424 09:24:16.942089 140196258953024 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 10
I0424 09:24:16.944142 140196258953024 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/10.
I0424 09:24:17.373162 140196258953024 jax_array_handlers.py:360] Scheduling D2H of 81 prioritized jax.Array.
I0424 09:24:17.373357 140196258953024 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 09:24:17.423757 140196258953024 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.051566s
I0424 09:24:17.423930 140196258953024 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 26.757 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.05765700340270996 s) (per-host)
I0424 09:24:17.423981 140196258953024 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.057726s (batch_requests_ready=0.003011s, total_serialization_initiated=0.054640s, others=0.000075s)
I0424 09:24:17.424085 140196258953024 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.061802s (all_items=0.000015s, per_item={'items': '0.00001478'}, temp_paths=0.061787)
I0424 09:24:17.424801 140196258953024 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.48 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/10.
I0424 09:24:17.425150 140066051565312 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 09:44:17.425113
I0424 09:24:17.427245 140196258953024 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=10] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 09:24:17.427530 140063908292352 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0424 09:24:17.427703 140196258953024 standard_logger.py:34] {'step': 10, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777022655.791821, 'wait_for_prev_duration_secs': 1.148402214050293, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777022656.9421268, 'checkpointer_blocking_duration_secs': 0.483168363571167, 'get_old_steps_start_time': 1777022657.4253156, 'get_old_steps_duration_secs': 3.147125244140625e-05, 'checkpoint_manager_blocking_start_time': 1777022655.7900095, 'checkpoint_manager_blocking_duration_secs': 1.6376583576202393}
I0424 09:24:17.427811 140196258953024 checkpointing.py:409] Started an asynchronous checkpoint save for step 10
I0424 09:24:17.428472 140196258953024 metric_logger.py:196] completed step: 9, seconds: 0.158, TFLOP/s/device: 85.936, Tokens/s/device: 12953.171, total_weights: 65536, loss: 7.908, lm_loss: 7.908, perplexity: 2720.116
I0424 09:24:17.435773 140196258953024 metric_logger.py:196] completed step: 10, seconds: 0.159, TFLOP/s/device: 85.707, Tokens/s/device: 12918.690, total_weights: 65536, loss: 7.568, lm_loss: 7.568, perplexity: 1936.198
I0424 09:24:17.593841 140196258953024 metric_logger.py:196] completed step: 11, seconds: 1.956, TFLOP/s/device: 6.945, Tokens/s/device: 1046.900, total_weights: 65536, loss: 7.288, lm_loss: 7.288, perplexity: 1463.272
I0424 09:24:18.163043 140196258953024 metric_logger.py:196] completed step: 12, seconds: 0.006, TFLOP/s/device: 2147.823, Tokens/s/device: 323743.282, total_weights: 65536, loss: 7.032, lm_loss: 7.032, perplexity: 1132.061
I0424 09:24:18.321438 140196258953024 metric_logger.py:196] completed step: 13, seconds: 0.565, TFLOP/s/device: 24.047, Tokens/s/device: 3624.612, total_weights: 65536, loss: 6.815, lm_loss: 6.815, perplexity: 911.031
I0424 09:24:18.479888 140196258953024 metric_logger.py:196] completed step: 14, seconds: 0.163, TFLOP/s/device: 83.255, Tokens/s/device: 12549.173, total_weights: 65536, loss: 6.635, lm_loss: 6.635, perplexity: 761.080
I0424 09:24:18.638123 140196258953024 metric_logger.py:196] completed step: 15, seconds: 0.158, TFLOP/s/device: 85.827, Tokens/s/device: 12936.807, total_weights: 65536, loss: 6.492, lm_loss: 6.492, perplexity: 659.915
I0424 09:24:18.796403 140196258953024 metric_logger.py:196] completed step: 16, seconds: 0.158, TFLOP/s/device: 85.764, Tokens/s/device: 12927.253, total_weights: 65536, loss: 6.380, lm_loss: 6.380, perplexity: 589.948
I0424 09:24:18.954636 140196258953024 metric_logger.py:196] completed step: 17, seconds: 0.159, TFLOP/s/device: 85.561, Tokens/s/device: 12896.725, total_weights: 65536, loss: 6.293, lm_loss: 6.293, perplexity: 540.941
I0424 09:24:19.113079 140196258953024 metric_logger.py:196] completed step: 18, seconds: 0.158, TFLOP/s/device: 86.218, Tokens/s/device: 12995.748, total_weights: 65536, loss: 6.223, lm_loss: 6.223, perplexity: 504.108
I0424 09:24:19.270721 140196258953024 checkpointing.py:794] Waiting for step 19 to finish before checkpoint...
I0424 09:24:19.271690 140196258953024 checkpointing.py:798] Waited 0.0009891986846923828 seconds for step 19 to finish before starting checkpointing.
I0424 09:24:19.274002 140196258953024 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=10][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 09:24:23.534260 140062797149952 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 81 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/10/items/array_metadatas/process_5
I0424 09:25:00.560889 140066051565312 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 36.573 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 43.19458556175232 s) (per-host)
I0424 09:25:00.561025 140066051565312 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 43.135759s.
I0424 09:25:07.008139 140066051565312 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 49.582859s.
I0424 09:25:07.008398 140063908292352 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 09:25:07.008465 140063908292352 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0424 09:25:07.008535 140063908292352 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=10] CheckpointManager Save Finalize is syncing with other hosts...
I0424 09:25:07.010745 140063908292352 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=10] CheckpointManager Save Finalize is done on all hosts.
I0424 09:25:07.010946 140196258953024 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=10][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=10.
W0424 09:25:07.011092 140196258953024 checkpoint_manager.py:1452] Waiting for previous save to complete took 47.737100 seconds. If this number is high, consider checkpointing less frequently.
I0424 09:25:07.012726 140196258953024 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 19
I0424 09:25:07.014789 140196258953024 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/19.
I0424 09:25:07.396072 140196258953024 jax_array_handlers.py:360] Scheduling D2H of 81 prioritized jax.Array.
I0424 09:25:07.396164 140196258953024 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 09:25:07.445669 140196258953024 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.050565s
I0424 09:25:07.445829 140196258953024 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 27.262 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.056589603424072266 s) (per-host)
I0424 09:25:07.445878 140196258953024 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.056649s (batch_requests_ready=0.002958s, total_serialization_initiated=0.053627s, others=0.000065s)
I0424 09:25:07.445985 140196258953024 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.060974s (all_items=0.000010s, per_item={'items': '0.00001049'}, temp_paths=0.060963)
I0424 09:25:07.446689 140196258953024 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.43 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/19.
I0424 09:25:07.447016 140063908292352 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 09:45:07.446978
I0424 09:25:07.449012 140196258953024 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=19] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 09:25:07.449266 140062797149952 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0424 09:25:07.449401 140196258953024 standard_logger.py:34] {'step': 19, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777022659.2739675, 'wait_for_prev_duration_secs': 47.73709988594055, 'time_between_consecutive_saves_sec': 2.3339877128601074, 'checkpointer_blocking_start_time': 1777022707.0127673, 'checkpointer_blocking_duration_secs': 0.43439221382141113, 'get_old_steps_start_time': 1777022707.4471781, 'get_old_steps_duration_secs': 2.574920654296875e-05, 'checkpoint_manager_blocking_start_time': 1777022659.2719598, 'checkpoint_manager_blocking_duration_secs': 48.17741012573242}
I0424 09:25:07.449560 140196258953024 checkpointing.py:409] Started an asynchronous checkpoint save for step 19
I0424 09:25:07.449603 140196258953024 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=19][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 09:25:12.577094 140063363028736 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 81 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/linen_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/19/items/array_metadatas/process_5
I0424 09:25:49.388643 140063908292352 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 37.614 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 41.99935579299927 s) (per-host)
I0424 09:25:49.388767 140063908292352 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 41.941639s.
I0424 09:25:57.324144 140063908292352 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 49.877002s.
I0424 09:25:57.324453 140062797149952 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 09:25:57.324573 140062797149952 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0424 09:25:57.324615 140062797149952 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=19] CheckpointManager Save Finalize is syncing with other hosts...
I0424 09:25:57.326391 140062797149952 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=19] CheckpointManager Save Finalize is done on all hosts.
I0424 09:25:57.326559 140196258953024 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=19][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=19.
I0424 09:25:57.326727 140196258953024 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 09:25:57.327614 140196258953024 metric_logger.py:196] completed step: 19, seconds: 0.158, TFLOP/s/device: 85.801, Tokens/s/device: 12932.804, total_weights: 65536, loss: 6.167, lm_loss: 6.167, perplexity: 476.947
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Fri Apr 24 09:26:06 UTC 2026
EXIT_CODE=0
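The aggregate numbers in the summary table (final loss, TFLOP/s, Tok/s, avg s/step) come from the per-step `metric_logger.py` entries above. A minimal, hypothetical post-processing sketch follows; the regex mirrors the `completed step` line format in these logs, but the choice to skip warm-up/compile steps when averaging is an assumption, not something the logs confirm:

```python
import re

# Matches MaxText metric_logger lines of the form seen above, e.g.
# "completed step: 5, seconds: 0.158, TFLOP/s/device: 85.943, ..."
STEP_RE = re.compile(
    r"completed step: (\d+), seconds: ([\d.]+), "
    r"TFLOP/s/device: ([\d.]+), Tokens/s/device: ([\d.]+), "
    r"total_weights: (\d+), loss: ([\d.]+)"
)

def summarize(log_text, warmup_steps=1):
    """Recompute run-level metrics, skipping the first `warmup_steps`
    steps (compilation and cache warm-up dominate their timings)."""
    steps = [
        (int(m.group(1)), float(m.group(2)), float(m.group(3)),
         float(m.group(4)), float(m.group(6)))
        for m in STEP_RE.finditer(log_text)
    ]
    trimmed = [s for s in steps if s[0] >= warmup_steps]
    n = len(trimmed)
    return {
        "avg_seconds_per_step": sum(s[1] for s in trimmed) / n,
        "avg_tflops_per_device": sum(s[2] for s in trimmed) / n,
        "avg_tokens_per_sec_per_device": sum(s[3] for s in trimmed) / n,
        "final_loss": trimmed[-1][4],
    }
```

Note the per-step timings here are noisy (async checkpoint saves overlap some steps, inflating e.g. steps 7 and 11), so which steps the averaging window includes materially changes the Tok/s and s/step columns.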
XPK Start: Fri Apr 24 10:50:34 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-24 10:50:59.598540: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0424 10:50:59.810040 138356317857600 max_utils.py:273] Attempting to initialize the jax distributed system...
I0424 10:51:08.851871 138356317857600 distributed.py:149] Starting JAX distributed service on [::]:8482
I0424 10:51:08.854360 138356317857600 distributed.py:172] Connecting to JAX distributed service on mt-05-fp8-8qznm-slice-job-0-0.mt-05-fp8-8qznm:8482
I0424 10:51:10.024068 138356317857600 max_utils.py:284] Jax distributed system initialized!
I0424 10:51:16.088742 138356317857600 max_utils.py:800] System Information: Jax Version: 0.9.2
I0424 10:51:16.088847 138356317857600 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0424 10:51:16.088890 138356317857600 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0424 10:51:16.088927 138356317857600 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0424 10:51:16.783276 138356317857600 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 10:51:17.213359 138356317857600 checkpointing.py:688] Setting up checkpoint logger...
I0424 10:51:17.213484 138356317857600 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0424 10:51:17.213533 138356317857600 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0424 10:51:17.213748 138356317857600 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7dd4df644080>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0424 10:51:20.152343 138356317857600 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0424 10:51:20.152579 138356317857600 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7dd3d36e61b0>}, handler_registry=None
I0424 10:51:20.152821 138356317857600 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7dd3d36e61b0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0424 10:51:20.152870 138356317857600 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7dc0b40c9ca0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0424 10:51:20.152905 138356317857600 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7dd3d36e61b0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7dd3d36e61b0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7dc0b40c9ca0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7dc0b40c9ca0>}).
I0424 10:51:20.153252 138356317857600 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0424 10:51:20.153329 138356317857600 async_checkpointer.py:192] [process=6][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7dc09c396480> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0424 10:51:20.847479 138356317857600 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints
I0424 10:51:20.953644 138356317857600 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7dc09c3418b0>
I0424 10:51:20.953822 138356317857600 checkpointing.py:302] Checkpoint manager created!
I0424 10:51:22.226399 138356317857600 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0424 10:51:22.226524 138356317857600 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0424 10:51:24.747212 138356317857600 nnx_decoders.py:465] nnx_decoders/carry Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0424 10:51:24.747318 138356317857600 nnx_decoders.py:465] nnx_decoders/carry Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 10:51:24.752780 138356317857600 nnx_decoders.py:465] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0424 10:51:24.752839 138356317857600 nnx_decoders.py:465] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 10:51:24.769594 138356317857600 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch_attn', 'activation_length_attn', 'activation_embed_attn').
I0424 10:51:24.769653 138356317857600 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 10:51:24.785628 138356317857600 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch_attn', 'activation_length_attn', 'activation_embed_attn').
I0424 10:51:24.785687 138356317857600 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0424 10:51:24.845502 138356317857600 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 10:51:24.845587 138356317857600 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 10:51:24.861864 138356317857600 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 10:51:24.861924 138356317857600 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 10:51:24.878050 138356317857600 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 10:51:24.878122 138356317857600 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 10:51:24.909505 138356317857600 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch_attn', 'activation_length_attn', 'activation_heads', 'activation_kv').
I0424 10:51:24.909581 138356317857600 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0424 10:51:24.969449 138356317857600 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0424 10:51:24.969530 138356317857600 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0424 10:51:38.950857 138356317857600 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0424 10:51:38.954704 138356317857600 metric_logger.py:301] number parameters: 1.104 billion
I0424 10:51:52.629630 138356317857600 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0424 10:51:52.791331 138356317857600 checkpointing.py:798] Waited 0.16168975830078125 seconds for step 0 to finish before starting checkpointing.
I0424 10:51:52.793605 138356317857600 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 10:51:52.795631 138356317857600 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0424 10:51:52.797002 138356317857600 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/0.
I0424 10:51:53.156077 138356317857600 signaling_client.py:364] Using JaxDistributedSignalingClient
I0424 10:51:53.157035 138356317857600 jax_array_handlers.py:360] Scheduling D2H of 153 prioritized jax.Array.
I0424 10:51:53.157119 138356317857600 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 10:51:53.566447 138356317857600 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.412023s
I0424 10:51:53.566618 138356317857600 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 3.596 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.42904067039489746 s) (per-host)
I0424 10:51:53.566669 138356317857600 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.429102s (batch_requests_ready=0.005949s, total_serialization_initiated=0.423084s, others=0.000070s)
I0424 10:51:53.566786 138356317857600 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.433228s (all_items=0.000018s, per_item={'items': '0.00001812'}, temp_paths=0.433210)
I0424 10:51:53.567530 138356317857600 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.77 seconds. Continuing save @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/0.
I0424 10:51:53.567888 138227429209856 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 11:11:53.567849
I0424 10:51:53.587714 138356317857600 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 10:51:53.588035 138226928965376 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0424 10:51:53.588217 138356317857600 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777027912.7935858, 'wait_for_prev_duration_secs': 6.389617919921875e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777027912.7956705, 'checkpointer_blocking_duration_secs': 0.7723715305328369, 'get_old_steps_start_time': 1777027913.5680685, 'get_old_steps_duration_secs': 4.4345855712890625e-05, 'checkpoint_manager_blocking_start_time': 1777027912.791789, 'checkpoint_manager_blocking_duration_secs': 0.7963881492614746}
I0424 10:51:53.588331 138356317857600 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0424 10:51:53.588410 138356317857600 max_utils.py:750] Memstats: After params initialized:
I0424 10:51:53.588459 138356317857600 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_24(process=6,(0,6,0,0))
I0424 10:51:53.588490 138356317857600 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_25(process=6,(1,6,0,0))
I0424 10:51:53.588516 138356317857600 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_28(process=6,(0,7,0,0))
I0424 10:51:53.588542 138356317857600 max_utils.py:756] Using (GB) 0.45 / 31.25 (1.440000%) on TPU_29(process=6,(1,7,0,0))
I0424 10:51:53.910381 138356317857600 metric_logger.py:196] completed step: 0, seconds: 13.673, TFLOP/s/device: 0.994, Tokens/s/device: 149.787, total_weights: 65536, loss: 10.872, lm_loss: 10.872, perplexity: 52680.742
I0424 10:51:54.090132 138356317857600 metric_logger.py:196] completed step: 1, seconds: 1.278, TFLOP/s/device: 10.632, Tokens/s/device: 1602.529, total_weights: 65536, loss: 10.872, lm_loss: 10.872, perplexity: 52680.742
I0424 10:51:54.552425 138356317857600 metric_logger.py:196] completed step: 2, seconds: 0.027, TFLOP/s/device: 509.549, Tokens/s/device: 76804.800, total_weights: 65536, loss: 10.856, lm_loss: 10.856, perplexity: 51864.762
I0424 10:51:54.709357 138356317857600 metric_logger.py:196] completed step: 3, seconds: 0.439, TFLOP/s/device: 30.928, Tokens/s/device: 4661.761, total_weights: 65536, loss: 10.824, lm_loss: 10.824, perplexity: 50225.512
I0424 10:51:55.024064 138356317857600 metric_logger.py:196] completed step: 4, seconds: 0.187, TFLOP/s/device: 72.832, Tokens/s/device: 10977.996, total_weights: 65536, loss: 10.793, lm_loss: 10.793, perplexity: 48689.156
I0424 10:51:55.033643 138356317857600 metric_logger.py:196] completed step: 5, seconds: 0.158, TFLOP/s/device: 86.077, Tokens/s/device: 12974.425, total_weights: 65536, loss: 10.763, lm_loss: 10.763, perplexity: 47229.012
I0424 10:51:56.409976 2807 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0424 10:51:58.423775 138226937358080 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 153 array_metadata.ArrayMetadata to gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/0/items/array_metadatas/process_6
I0424 10:52:11.302725 138227429209856 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 86.967 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 18.16510772705078 s) (per-host)
I0424 10:52:11.302850 138227429209856 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 17.734847s.
I0424 10:52:19.301129 138356317857600 metric_logger.py:196] completed step: 6, seconds: 0.314, TFLOP/s/device: 43.254, Tokens/s/device: 6519.656, total_weights: 65536, loss: 10.733, lm_loss: 10.733, perplexity: 45864.258
I0424 10:52:19.310124 138356317857600 metric_logger.py:196] completed step: 7, seconds: 24.267, TFLOP/s/device: 0.560, Tokens/s/device: 84.395, total_weights: 65536, loss: 10.706, lm_loss: 10.706, perplexity: 44620.242
I0424 10:52:19.471200 138356317857600 metric_logger.py:196] completed step: 8, seconds: 0.010, TFLOP/s/device: 1375.912, Tokens/s/device: 207392.405, total_weights: 65536, loss: 10.680, lm_loss: 10.680, perplexity: 43477.055
I0424 10:52:19.479756 138356317857600 checkpointing.py:794] Waiting for step 10 to finish before checkpoint...
I0424 10:52:19.785209 138356317857600 checkpointing.py:798] Waited 0.30542707443237305 seconds for step 10 to finish before starting checkpointing.
I0424 10:52:19.787566 138356317857600 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 10:52:22.468291 138227429209856 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 28.900270s.
I0424 10:52:22.468622 138226928965376 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 10:52:22.468751 138226928965376 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0424 10:52:22.468803 138226928965376 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0424 10:52:22.471380 138226928965376 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0424 10:52:22.471608 138356317857600 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0424 10:52:22.471741 138356317857600 checkpoint_manager.py:1452] Waiting for previous save to complete took 2.684175 seconds. If this number is high, consider checkpointing less frequently.
I0424 10:52:22.473688 138356317857600 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 10
I0424 10:52:22.475758 138356317857600 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/10.
I0424 10:52:22.770488 138356317857600 jax_array_handlers.py:360] Scheduling D2H of 153 prioritized jax.Array.
I0424 10:52:22.770602 138356317857600 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 10:52:22.819383 138356317857600 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.051512s
I0424 10:52:22.819534 138356317857600 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 23.171 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.06658172607421875 s) (per-host)
I0424 10:52:22.819595 138356317857600 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.066643s (batch_requests_ready=0.005110s, total_serialization_initiated=0.061465s, others=0.000068s)
I0424 10:52:22.819718 138356317857600 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.070545s (all_items=0.000016s, per_item={'items': '0.00001597'}, temp_paths=0.070529)
I0424 10:52:22.820460 138356317857600 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.35 seconds. Continuing save @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/10.
I0424 10:52:22.820787 138226928965376 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 11:12:22.820753
I0424 10:52:22.822828 138356317857600 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=10] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 10:52:22.823140 138223707739904 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0424 10:52:22.823283 138356317857600 standard_logger.py:34] {'step': 10, 'event_type': 'save', 'directory': 'gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777027939.7875364, 'wait_for_prev_duration_secs': 2.6841747760772705, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777027942.4737275, 'checkpointer_blocking_duration_secs': 0.3471996784210205, 'get_old_steps_start_time': 1777027942.8209515, 'get_old_steps_duration_secs': 3.0040740966796875e-05, 'checkpoint_manager_blocking_start_time': 1777027939.7856557, 'checkpoint_manager_blocking_duration_secs': 3.037593126296997}
I0424 10:52:22.823457 138356317857600 checkpointing.py:409] Started an asynchronous checkpoint save for step 10
I0424 10:52:22.824066 138356317857600 metric_logger.py:196] completed step: 9, seconds: 0.008, TFLOP/s/device: 1683.659, Tokens/s/device: 253779.430, total_weights: 65536, loss: 10.656, lm_loss: 10.656, perplexity: 42467.320
I0424 10:52:22.835262 138356317857600 metric_logger.py:196] completed step: 10, seconds: 0.163, TFLOP/s/device: 83.612, Tokens/s/device: 12602.922, total_weights: 65536, loss: 10.636, lm_loss: 10.636, perplexity: 41601.941
I0424 10:52:22.988421 138356317857600 metric_logger.py:196] completed step: 11, seconds: 3.353, TFLOP/s/device: 4.053, Tokens/s/device: 610.881, total_weights: 65536, loss: 10.618, lm_loss: 10.618, perplexity: 40849.570
I0424 10:52:23.576085 138356317857600 metric_logger.py:196] completed step: 12, seconds: 0.010, TFLOP/s/device: 1338.633, Tokens/s/device: 201773.399, total_weights: 65536, loss: 10.602, lm_loss: 10.602, perplexity: 40203.926
I0424 10:52:23.732794 138356317857600 metric_logger.py:196] completed step: 13, seconds: 0.579, TFLOP/s/device: 23.470, Tokens/s/device: 3537.677, total_weights: 65536, loss: 10.588, lm_loss: 10.588, perplexity: 39652.145
I0424 10:52:23.889683 138356317857600 metric_logger.py:196] completed step: 14, seconds: 0.163, TFLOP/s/device: 83.372, Tokens/s/device: 12566.730, total_weights: 65536, loss: 10.577, lm_loss: 10.577, perplexity: 39212.879
I0424 10:52:24.046464 138356317857600 metric_logger.py:196] completed step: 15, seconds: 0.157, TFLOP/s/device: 86.701, Tokens/s/device: 13068.475, total_weights: 65536, loss: 10.568, lm_loss: 10.568, perplexity: 38879.223
I0424 10:52:24.203089 138356317857600 metric_logger.py:196] completed step: 16, seconds: 0.157, TFLOP/s/device: 86.633, Tokens/s/device: 13058.226, total_weights: 65536, loss: 10.562, lm_loss: 10.562, perplexity: 38619.105
I0424 10:52:24.359872 138356317857600 metric_logger.py:196] completed step: 17, seconds: 0.157, TFLOP/s/device: 86.460, Tokens/s/device: 13032.218, total_weights: 65536, loss: 10.556, lm_loss: 10.556, perplexity: 38395.816
I0424 10:52:24.519655 138356317857600 metric_logger.py:196] completed step: 18, seconds: 0.156, TFLOP/s/device: 86.943, Tokens/s/device: 13104.935, total_weights: 65536, loss: 10.552, lm_loss: 10.552, perplexity: 38236.652
I0424 10:52:24.678529 138356317857600 checkpointing.py:794] Waiting for step 19 to finish before checkpoint...
I0424 10:52:24.679819 138356317857600 checkpointing.py:798] Waited 0.0012989044189453125 seconds for step 19 to finish before starting checkpointing.
I0424 10:52:24.681832 138356317857600 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=10][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 10:52:28.980892 138223162476288 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 153 array_metadata.ArrayMetadata to gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/10/items/array_metadatas/process_6
I0424 10:52:43.829944 138226928965376 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 74.952 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 21.076953172683716 s) (per-host)
I0424 10:52:43.830072 138226928965376 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 21.009176s.
I0424 10:52:51.884918 138226928965376 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 29.064006s.
I0424 10:52:51.885196 138223707739904 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 10:52:51.885313 138223707739904 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0424 10:52:51.885361 138223707739904 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=10] CheckpointManager Save Finalize is syncing with other hosts...
I0424 10:52:51.887916 138223707739904 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=10] CheckpointManager Save Finalize is done on all hosts.
I0424 10:52:51.888116 138356317857600 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=10][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=10.
W0424 10:52:51.888240 138356317857600 checkpoint_manager.py:1452] Waiting for previous save to complete took 27.206411 seconds. If this number is high, consider checkpointing less frequently.
I0424 10:52:51.889804 138356317857600 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 19
I0424 10:52:51.891927 138356317857600 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/19.
I0424 10:52:52.592979 138356317857600 jax_array_handlers.py:360] Scheduling D2H of 153 prioritized jax.Array.
I0424 10:52:52.593146 138356317857600 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 10:52:52.643645 138356317857600 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.053203s
I0424 10:52:52.643784 138356317857600 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 22.892 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.06739306449890137 s) (per-host)
I0424 10:52:52.643831 138356317857600 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.067451s (batch_requests_ready=0.005112s, total_serialization_initiated=0.062277s, others=0.000062s)
I0424 10:52:52.643908 138356317857600 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.071494s (all_items=0.000011s, per_item={'items': '0.00001144'}, temp_paths=0.071482)
I0424 10:52:52.644544 138356317857600 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.75 seconds. Continuing save @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/19.
I0424 10:52:52.644865 138223179261696 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 11:12:52.644828
I0424 10:52:53.086300 138356317857600 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=19] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 10:52:53.086682 138232280889088 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0424 10:52:53.086854 138356317857600 standard_logger.py:34] {'step': 19, 'event_type': 'save', 'directory': 'gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777027944.6818037, 'wait_for_prev_duration_secs': 27.206411361694336, 'time_between_consecutive_saves_sec': 2.2103240489959717, 'checkpointer_blocking_start_time': 1777027971.8898432, 'checkpointer_blocking_duration_secs': 0.7551741600036621, 'get_old_steps_start_time': 1777027972.6450338, 'get_old_steps_duration_secs': 2.574920654296875e-05, 'checkpoint_manager_blocking_start_time': 1777027944.680156, 'checkpoint_manager_blocking_duration_secs': 28.406663179397583}
I0424 10:52:53.086974 138356317857600 checkpointing.py:409] Started an asynchronous checkpoint save for step 19
I0424 10:52:53.087018 138356317857600 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=19][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 10:52:58.002168 138224218617600 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 153 array_metadata.ArrayMetadata to gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260424_091312_05_fp8/checkpoints/19/items/array_metadatas/process_6
I0424 10:53:13.151994 138223179261696 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 76.779 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 20.57555627822876 s) (per-host)
I0424 10:53:13.152118 138223179261696 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 20.507137s.
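The `standard_logger.py` 'save' events above are emitted as Python dict literals, so their checkpoint-timing fields can be recovered with `ast.literal_eval` instead of ad-hoc string slicing. A minimal sketch (hypothetical helper, not part of MaxText; the example payload is a trimmed subset of the full step-19 event):

```python
import ast

def parse_save_event(line):
    """Extract the dict literal that standard_logger.py appends to a 'save' line.

    ast.literal_eval parses Python literals (including None and False)
    without executing code, unlike eval().
    """
    return ast.literal_eval(line[line.index("{"):])

# Trimmed subset of the step-19 save event from this log.
line = ("I0424 10:52:53.086854 138356317857600 standard_logger.py:34] "
        "{'step': 19, 'event_type': 'save', 'synchronous': False, "
        "'preemption_received_at': None, "
        "'wait_for_prev_duration_secs': 27.206411361694336, "
        "'checkpointer_blocking_duration_secs': 0.7551741600036621}")
event = parse_save_event(line)
print(event["wait_for_prev_duration_secs"])  # → 27.206411361694336
```

Comparing `wait_for_prev_duration_secs` across events (0.00006 s at step 0, 2.68 s at step 10, 27.2 s at step 19) shows how quickly the async GCS writes back up when checkpoints are saved every ~10 steps.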
I0424 10:53:20.797919 138223179261696 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 28.152923s.
I0424 10:53:20.798166 138232280889088 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 10:53:20.798220 138232280889088 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0424 10:53:20.798269 138232280889088 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=19] CheckpointManager Save Finalize is syncing with other hosts...
I0424 10:53:20.799982 138232280889088 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=19] CheckpointManager Save Finalize is done on all hosts.
I0424 10:53:20.800163 138356317857600 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=19][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=19.
I0424 10:53:20.800355 138356317857600 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 10:53:20.801279 138356317857600 metric_logger.py:196] completed step: 19, seconds: 0.157, TFLOP/s/device: 86.581, Tokens/s/device: 13050.488, total_weights: 65536, loss: 10.549, lm_loss: 10.549, perplexity: 38125.074
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Fri Apr 24 10:53:29 UTC 2026
EXIT_CODE=0
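A note on interpreting the step timings: the summary table's 1.671 s/step average for NNX is dominated by outliers (step 0 compiles for 13.673 s, and steps 7 and 11 appear to stall behind checkpoint activity), while steady-state steps settle near 0.16 s. Illustrative arithmetic over steps 13-18, with durations copied from the log above:

```python
# Per-step durations (seconds) for steps 13-18, taken from the log above.
step_secs = [0.579, 0.163, 0.157, 0.157, 0.157, 0.156]

# Steady-state mean, excluding the compile step and checkpoint-stalled steps.
avg = sum(step_secs) / len(step_secs)
print(round(avg, 3))  # → 0.228
```

So when comparing Linen and NNX throughput, the TFLOP/s and Tok/s figures from steady-state steps are a better signal than the raw average seconds per step, which mostly measures compilation and checkpoint waits in a 20-step run.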