XPK Start: Thu Apr 23 09:53:00 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-23 09:53:24.969035: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0423 09:53:25.178368 135247601997632 max_utils.py:273] Attempting to initialize the jax distributed system...
I0423 09:53:34.219909 135247601997632 distributed.py:149] Starting JAX distributed service on [::]:8482
I0423 09:53:34.222229 135247601997632 distributed.py:172] Connecting to JAX distributed service on mt-06-grad-accum-fmu3z-slice-job-0-0.mt-06-grad-accum-fmu3z:8482
I0423 09:53:36.070359 135247601997632 max_utils.py:284] Jax distributed system initialized!
I0423 09:53:42.323059 135247601997632 max_utils.py:800] System Information: Jax Version: 0.9.2
I0423 09:53:42.323163 135247601997632 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0423 09:53:42.323203 135247601997632 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0423 09:53:42.323241 135247601997632 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0423 09:53:43.017934 135247601997632 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 09:53:43.018516 135247601997632 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 09:53:43.018708 135247601997632 checkpointing.py:688] Setting up checkpoint logger...
I0423 09:53:43.018757 135247601997632 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0423 09:53:43.018801 135247601997632 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0423 09:53:43.019145 135247601997632 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7b0111950e90>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0423 09:53:45.900254 135247601997632 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0423 09:53:45.900446 135247601997632 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7aed0c29e7e0>}, handler_registry=None
I0423 09:53:45.900715 135247601997632 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7aed0c29e7e0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0423 09:53:45.900766 135247601997632 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7aed0c682ed0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 09:53:45.900803 135247601997632 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7aed0c29e7e0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7aed0c29e7e0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7aed0c682ed0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7aed0c682ed0>}).
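Note on the `rope_parameters` warnings above: the config validator wants Python floats where ints were supplied. A minimal sketch of a conforming dict, assuming YaRN-style scaling (the `rope_type` key and the surrounding config are hypothetical; the field names and required types come straight from the warnings):

```python
# Hypothetical fix for the `rope_parameters` warnings logged above:
# the validator requires floats, not ints.
rope_parameters = {
    "rope_type": "yarn",  # assumption: beta_fast/beta_slow suggest YaRN-style scaling
    "factor": 40.0,       # was 40 (int); must be a float >= 1
    "beta_fast": 32.0,    # was 32 (int); must be a float
    "beta_slow": 1.0,     # was 1 (int); must be a float
}
```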
I0423 09:53:45.901130 135247601997632 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 09:53:45.901198 135247601997632 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7aecf4161d00> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 09:53:47.990746 135247601997632 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints
I0423 09:53:47.992942 135247601997632 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7aed0c6832c0>
I0423 09:53:47.993054 135247601997632 checkpointing.py:302] Checkpoint manager created!
I0423 09:53:49.744185 135247601997632 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 09:53:49.744298 135247601997632 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 09:53:50.127353 135247601997632 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 09:53:50.127448 135247601997632 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 09:53:50.143855 135247601997632 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 09:53:50.143917 135247601997632 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 09:53:50.167701 135247601997632 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:53:50.167766 135247601997632 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:53:50.184188 135247601997632 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:53:50.184250 135247601997632 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:53:50.200525 135247601997632 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:53:50.200585 135247601997632 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:53:50.225056 135247601997632 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 09:53:50.225129 135247601997632 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 09:53:50.245731 135247601997632 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 09:53:50.245796 135247601997632 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0423 09:53:50.460327 135247601997632 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 09:53:50.460436 135247601997632 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
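Each Logical/Physical pair above is a tuple of logical axis names resolved to mesh axes. With mesh shape (1, 1, 1, 32, ...) only the 'fsdp' axis is non-trivial, so batch dimensions shard across all 32 chips and everything else replicates. A minimal JAX sketch of that mapping (the rule table here is illustrative, not MaxText's actual logical_axis_rules):

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One-axis mesh named 'fsdp'; in this run it spans all 32 TPU chips.
mesh = Mesh(np.array(jax.devices()), axis_names=("fsdp",))

# Illustrative logical->physical rules; unmatched logical names replicate (None).
rules = {"activation_batch": "fsdp"}

logical = ("activation_batch", "activation_attn_length", "activation_attn_embed")
spec = P(*(rules.get(name) for name in logical))  # -> PartitionSpec('fsdp', None, None)
sharding = NamedSharding(mesh, spec)
print(spec)  # matches the "Physical: ('fsdp', None, None)" lines above
```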
fsdp: 32
I0423 09:53:51.890536 135247601997632 maxtext_utils.py:1835] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0423 09:53:51.890871 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0423 09:53:51.891102 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0423 09:53:51.891215 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0423 09:53:51.891286 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0423 09:53:51.891329 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0423 09:53:51.891391 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0423 09:53:51.891451 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0423 09:53:51.891493 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0423 09:53:51.891531 135247601997632 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0423 09:53:51.891582 135247601997632 maxtext_utils.py:1835] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0423 09:53:51.891641 135247601997632 maxtext_utils.py:1835] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0423 09:53:51.917526 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048]............................................... Unknown.
I0423 09:53:51.917591 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048]............................................... (None,).
I0423 09:53:51.932554 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16,7168]....................................... Unknown.
I0423 09:53:51.932610 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16,7168]....................................... ('fsdp', None, None).
I0423 09:53:51.962139 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[7168,16,2048]....................................... Unknown.
I0423 09:53:51.962203 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[7168,16,2048]....................................... (None, None, 'fsdp').
I0423 09:53:51.976967 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16]............................................ Unknown.
I0423 09:53:51.977025 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16]............................................ (None, None).
I0423 09:53:52.006408 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16,16,128]..................................... Unknown.
I0423 09:53:52.006476 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16,16,128]..................................... ('fsdp', None, None, None).
I0423 09:53:52.021286 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[16,16,128,2048]..................................... Unknown.
I0423 09:53:52.021338 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[16,16,128,2048]..................................... (None, None, None, 'fsdp').
I0423 09:53:52.065629 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,32000]......................................... Unknown.
I0423 09:53:52.065712 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,32000]......................................... ('fsdp', None).
I0423 09:53:52.080476 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[32000,2048]......................................... Unknown.
I0423 09:53:52.080533 135247601997632 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[32000,2048]......................................... (None, 'fsdp').
I0423 09:53:52.751969 135247601997632 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 09:53:52.752068 135247601997632 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 09:53:52.767841 135247601997632 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 09:53:52.767901 135247601997632 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 09:54:04.113290 135247601997632 max_utils.py:791] Total memory size: 1.7 GB, Output size: 0.4 GB, Temp size: 1.3 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 09:54:04.114103 135247601997632 metric_logger.py:301] number parameters: 1.104 billion
I0423 09:54:06.372641 135247601997632 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0423 09:54:17.105145 135247601997632 checkpointing.py:798] Waited 10.73248291015625 seconds for step 0 to finish before starting checkpointing.
I0423 09:54:17.107701 135247601997632 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
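A quick sanity check using only the shapes logged above: both the parameter count and the gradient-accumulation factor fall out of simple arithmetic (a sketch; the layer structure is read off the logged kernel shapes):

```python
# Verify "number parameters: 1.104 billion" from the logged shapes.
embed = 32_000 * 2_048            # token_embedder/embedding
logits = 2_048 * 32_000           # decoder/logits_dense
per_layer = (
    3 * 2_048 * 7_168             # mlp wi_0, wi_1, wo kernels
    + 4 * 2_048 * 16 * 128        # attention query/key/value/out kernels
    + 2 * 2_048                   # pre/post self-attention layer norms
)
total = embed + logits + 16 * per_layer + 2_048  # 16 scanned layers + decoder_norm
print(total)                      # 1_104_218_112 ~ 1.104 billion

# total_weights 262_144 vs. a 32 x 2_048 token microbatch implies 4 accumulation steps.
print(262_144 // (32 * 2_048))    # 4
```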
I0423 09:54:17.109773 135247601997632 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0423 09:54:17.111231 135247601997632 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints/0.
I0423 09:54:17.436057 135247601997632 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 09:54:17.437018 135247601997632 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 09:54:17.437073 135247601997632 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:54:17.708701 135247601997632 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.272714s
I0423 09:54:17.708879 135247601997632 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 5.543 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.27827000617980957 s) (per-host)
I0423 09:54:17.708943 135247601997632 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.278342s (batch_requests_ready=0.002246s, total_serialization_initiated=0.276015s, others=0.000081s)
I0423 09:54:17.709072 135247601997632 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.282555s (all_items=0.000017s, per_item={'items': '0.00001693'}, temp_paths=0.282538)
I0423 09:54:17.709898 135247601997632 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.60 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints/0.
I0423 09:54:17.710186 135120325433088 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:14:17.710153
I0423 09:54:17.719868 135247601997632 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:54:17.720123 135118144259840 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
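The save above runs in two phases: a short blocking device-to-host copy on the main thread, then a background thread that commits to GCS. A minimal Orbax sketch of the same pattern (directory and state are placeholders; the options mirror the FixedIntervalPolicy(interval=10) and async settings logged earlier):

```python
import numpy as np
import orbax.checkpoint as ocp

state = {"step": np.int32(0), "params": np.zeros(4)}  # placeholder train state

mngr = ocp.CheckpointManager(
    "/tmp/ckpts",  # this run writes to gs://lance-maxtext/...
    options=ocp.CheckpointManagerOptions(
        save_interval_steps=10,           # cf. FixedIntervalPolicy(interval=10)
        enable_async_checkpointing=True,  # blocking D2H copy, background commit
    ),
)
mngr.save(0, args=ocp.args.StandardSave(state))  # returns after the blocking phase
mngr.wait_until_finished()                       # join the background save thread
```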
I0423 09:54:17.720252 135247601997632 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776938057.1076827, 'wait_for_prev_duration_secs': 5.936622619628906e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776938057.109814, 'checkpointer_blocking_duration_secs': 0.6005227565765381, 'get_old_steps_start_time': 1776938057.710361, 'get_old_steps_duration_secs': 2.9325485229492188e-05, 'checkpoint_manager_blocking_start_time': 1776938057.105692, 'checkpoint_manager_blocking_duration_secs': 0.6145217418670654}
I0423 09:54:17.720362 135247601997632 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0423 09:54:17.720412 135247601997632 max_utils.py:750] Memstats: After params initialized:
I0423 09:54:17.720463 135247601997632 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_18(process=5,(2,4,0,0))
I0423 09:54:17.720499 135247601997632 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_19(process=5,(3,4,0,0))
I0423 09:54:17.720524 135247601997632 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_22(process=5,(2,5,0,0))
I0423 09:54:17.720549 135247601997632 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_23(process=5,(3,5,0,0))
I0423 09:54:18.033493 135247601997632 metric_logger.py:196] completed step: 0, seconds: 2.258, TFLOP/s/device: 24.065, Tokens/s/device: 3627.295, total_weights: 262144, loss: 10.877, lm_loss: 10.877, perplexity: 52959.059
I0423 09:54:18.647247 135247601997632 metric_logger.py:196] completed step: 1, seconds: 11.659, TFLOP/s/device: 4.661, Tokens/s/device: 702.614, total_weights: 262144, loss: 10.877, lm_loss: 10.877, perplexity: 52959.059
I0423 09:54:19.225679 135247601997632 metric_logger.py:196] completed step: 2, seconds: 0.036, TFLOP/s/device: 1525.744, Tokens/s/device: 229976.699, total_weights: 262144, loss: 10.563, lm_loss: 10.563, perplexity: 38662.707
I0423 09:54:19.803749 135247601997632 metric_logger.py:196] completed step: 3, seconds: 0.584, TFLOP/s/device: 93.040, Tokens/s/device: 14023.963, total_weights: 262144, loss: 10.272, lm_loss: 10.272, perplexity: 28909.668
I0423 09:54:20.960302 135247601997632 metric_logger.py:196] completed step: 4, seconds: 0.578, TFLOP/s/device: 93.960, Tokens/s/device: 14162.670, total_weights: 262144, loss: 10.022, lm_loss: 10.022, perplexity: 22524.992
I0423 09:54:20.966141 135247601997632 metric_logger.py:196] completed step: 5, seconds: 0.578, TFLOP/s/device: 94.037, Tokens/s/device: 14174.286, total_weights: 262144, loss: 9.820, lm_loss: 9.820, perplexity: 18401.865
I0423 09:54:21.681699 2595 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 09:54:24.872402 135118152652544 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints/0/items/array_metadatas/process_5
I0423 09:54:56.271086 135120325433088 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 40.669 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 38.84043574333191 s) (per-host)
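Two of the logged metrics can be reproduced directly (a sketch; field meanings are inferred from the log, e.g. total_weights as tokens per global step):

```python
import math

# Perplexity is exp(lm_loss): exp(10.877) ~ 52_945, matching the logged
# 52959.059 up to the three-decimal rounding of the loss.
print(math.exp(10.877))

# Tokens/s/device = total_weights / num_devices / step_time:
# 262_144 / 32 / 0.578 ~ 14_173, matching "Tokens/s/device: 14174.286".
print(262_144 / 32 / 0.578)
```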
I0423 09:54:56.271213 135120325433088 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 38.560916s.
I0423 09:55:05.442876 135120325433088 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 47.732563s.
I0423 09:55:05.443113 135118144259840 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:55:05.443178 135118144259840 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:55:05.443235 135118144259840 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:55:06.609382 135118144259840 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 09:55:07.864745 135247601997632 metric_logger.py:196] completed step: 6, seconds: 1.157, TFLOP/s/device: 46.973, Tokens/s/device: 7080.264, total_weights: 262144, loss: 9.667, lm_loss: 9.667, perplexity: 15787.604
I0423 09:55:08.442841 135247601997632 metric_logger.py:196] completed step: 7, seconds: 46.321, TFLOP/s/device: 1.173, Tokens/s/device: 176.853, total_weights: 262144, loss: 9.561, lm_loss: 9.561, perplexity: 14203.827
I0423 09:55:09.021370 135247601997632 metric_logger.py:196] completed step: 8, seconds: 0.583, TFLOP/s/device: 93.201, Tokens/s/device: 14048.277, total_weights: 262144, loss: 9.496, lm_loss: 9.496, perplexity: 13302.920
I0423 09:55:09.598916 135247601997632 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0423 09:55:09.599571 135247601997632 checkpointing.py:798] Waited 0.0006749629974365234 seconds for step 9 to finish before starting checkpointing.
I0423 09:55:09.601732 135247601997632 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 09:55:09.603688 135247601997632 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 9
I0423 09:55:09.605422 135247601997632 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints/9.
I0423 09:55:10.345492 135247601997632 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 09:55:10.345585 135247601997632 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:55:10.380081 135247601997632 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.035569s
I0423 09:55:10.380222 135247601997632 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 39.526 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.039026737213134766 s) (per-host)
I0423 09:55:10.380270 135247601997632 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.039083s (batch_requests_ready=0.001674s, total_serialization_initiated=0.037346s, others=0.000063s)
I0423 09:55:10.380353 135247601997632 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.043408s (all_items=0.000014s, per_item={'items': '0.00001431'}, temp_paths=0.043394)
I0423 09:55:10.381008 135247601997632 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.78 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints/9.
I0423 09:55:10.381271 135118144259840 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:15:10.381248
I0423 09:55:10.383190 135247601997632 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:55:10.383476 135120304158464 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 09:55:10.383635 135247601997632 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776938109.6017, 'wait_for_prev_duration_secs': 7.200241088867188e-05, 'time_between_consecutive_saves_sec': 2.9922266006469727, 'checkpointer_blocking_start_time': 1776938109.6037698, 'checkpointer_blocking_duration_secs': 0.7775957584381104, 'get_old_steps_start_time': 1776938110.3813884, 'get_old_steps_duration_secs': 3.0279159545898438e-05, 'checkpoint_manager_blocking_start_time': 1776938109.5998077, 'checkpoint_manager_blocking_duration_secs': 0.7837941646575928}
I0423 09:55:10.383756 135247601997632 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0423 09:55:10.383801 135247601997632 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 09:55:16.829961 135124586829568 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_06_grad_accum/checkpoints/9/items/array_metadatas/process_5
I0423 09:55:53.435800 135118144259840 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 36.654 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 43.09455132484436 s) (per-host)
I0423 09:55:53.435927 135118144259840 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 43.054583s.
I0423 09:56:02.095070 135118144259840 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 51.713710s.
I0423 09:56:02.095345 135120304158464 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:56:02.095467 135120304158464 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:56:02.095513 135120304158464 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:56:02.097112 135120304158464 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0423 09:56:02.097293 135247601997632 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0423 09:56:02.097438 135247601997632 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 09:56:02.098342 135247601997632 metric_logger.py:196] completed step: 9, seconds: 0.578, TFLOP/s/device: 94.009, Tokens/s/device: 14170.118, total_weights: 262144, loss: 9.457, lm_loss: 9.457, perplexity: 12802.546
Per train step: Total TFLOPs: 54.35 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Thu Apr 23 09:56:10 UTC 2026
EXIT_CODE=0
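A closing cross-check, assuming the 54.35 TFLOPs figure in the summary is per device per step (the per-step throughput lines bear this out):

```python
# 54.35 TFLOPs/step divided by step time reproduces the logged TFLOP/s/device.
print(54.35 / 0.578)  # ~94.0  -> steady-state steps ("TFLOP/s/device: 94.037")
print(54.35 / 2.258)  # ~24.07 -> step 0 ("TFLOP/s/device: 24.065")
```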