XPK Start: Tue Apr 21 14:56:52 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-21 14:57:17.993034: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0421 14:57:18.202263 137649491269440 max_utils.py:273] Attempting to initialize the jax distributed system...
I0421 14:57:27.243694 137649491269440 distributed.py:149] Starting JAX distributed service on [::]:8482
I0421 14:57:27.246008 137649491269440 distributed.py:172] Connecting to JAX distributed service on mt-06-grad-accum-nyywr-slice-job-0-0.mt-06-grad-accum-nyywr:8482
I0421 14:57:28.098402 137649491269440 max_utils.py:284] Jax distributed system initialized!
I0421 14:57:34.339706 137649491269440 max_utils.py:800] System Information: Jax Version: 0.9.2
I0421 14:57:34.339812 137649491269440 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0421 14:57:34.339852 137649491269440 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0421 14:57:34.339886 137649491269440 train_utils.py:361] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0421 14:57:35.047829 137649491269440 maxtext_utils.py:1565] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0421 14:57:35.048107 137649491269440 checkpointing.py:677] Setting up checkpoint logger...
I0421 14:57:35.048161 137649491269440 checkpointing.py:233] Creating checkpoint manager with ocdbt=True and zarr3=True
I0421 14:57:35.048204 137649491269440 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0421 14:57:35.048545 137649491269440 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7d307111c3e0>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0421 14:57:38.349074 137649491269440 checkpointing.py:265] Enabling policy for fixed interval checkpointing.
I0421 14:57:38.349308 137649491269440 checkpoint_manager.py:708] [process=4][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7d1bd8495520>}, handler_registry=None
I0421 14:57:38.349545 137649491269440 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7d1bd8495520>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0421 14:57:38.349596 137649491269440 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7d1bd8499e50>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0421 14:57:38.349632 137649491269440 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7d1bd8495520>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7d1bd8495520>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7d1bd8499e50>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7d1bd8499e50>}).
I0421 14:57:38.349967 137649491269440 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0421 14:57:38.350046 137649491269440 async_checkpointer.py:192] [process=4][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7d1bc07d1800> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0421 14:57:39.062536 137649491269440 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints
I0421 14:57:39.064787 137649491269440 checkpoint_manager.py:929] [process=4][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7d1bd84967b0>
I0421 14:57:39.064898 137649491269440 checkpointing.py:301] Checkpoint manager created!
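The CheckpointManagerOptions dump above is verbose, but the settings that actually govern saving here are save_decision_policy=FixedIntervalPolicy(interval=10), enable_async_checkpointing=True, and max_to_keep=None, rooted at the GCS path shown. Below is a minimal sketch of constructing a roughly equivalent manager directly with Orbax; MaxText goes through its own checkpointing.py wrappers and sets the policy explicitly, so the save_interval_steps shortcut and the commented save/wait calls are illustrative, not the code this run executed.

```python
import orbax.checkpoint as ocp

# Roughly the behavior visible in the log: checkpoint every 10 steps
# (FixedIntervalPolicy(interval=10)), keep everything, save asynchronously.
options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,
    max_to_keep=None,
    enable_async_checkpointing=True,
)

manager = ocp.CheckpointManager(
    "gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/"
    "linen_xpk_main_20260421_144122_06_grad_accum/checkpoints",
    options=options,
)

# save() only blocks for the device-to-host copy; the GCS write happens on a
# background thread, which is exactly the pattern the log shows later.
# manager.save(step, args=ocp.args.StandardSave(train_state))
# manager.wait_until_finished()  # block before exiting or reading the checkpoint
```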
I0421 14:57:40.784849 137649491269440 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0421 14:57:40.784964 137649491269440 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0421 14:57:41.170520 137649491269440 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0421 14:57:41.170612 137649491269440 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0421 14:57:41.187389 137649491269440 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0421 14:57:41.187446 137649491269440 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0421 14:57:41.211610 137649491269440 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0421 14:57:41.211692 137649491269440 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0421 14:57:41.228582 137649491269440 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0421 14:57:41.228654 137649491269440 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0421 14:57:41.245548 137649491269440 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0421 14:57:41.245607 137649491269440 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0421 14:57:41.270768 137649491269440 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0421 14:57:41.270833 137649491269440 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0421 14:57:41.292064 137649491269440 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0421 14:57:41.292139 137649491269440 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0421 14:57:41.503665 137649491269440 checkpointing.py:577] checkpoint manager exists so trying to load this run's existing checkpoint
I0421 14:57:41.503784 137649491269440 checkpointing.py:665] No existing checkpoints found, not restoring checkpoint.
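Each Logical/Physical pair above shows a named logical sharding being resolved against the device mesh. With this configuration only the fsdp axis is non-trivial (32 devices, matching the Num_devices mesh shape), so a logical spec like ('activation_batch', 'activation_attn_length', 'activation_attn_embed') resolves to ('fsdp', None, None). A minimal sketch of what that physical annotation means, written against plain jax.sharding rather than MaxText's logical-axis-rule machinery, and assuming it runs on the same 32-device topology:

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One mesh axis named "fsdp" spanning all 32 devices, matching the logged
# mesh shape (1, 1, 1, 32, 1, ...) where only the fsdp axis is larger than 1.
mesh = Mesh(np.asarray(jax.devices()), axis_names=("fsdp",))

# Physical annotation ('fsdp', None, None): shard dim 0 across fsdp,
# replicate the remaining dimensions.
sharding = NamedSharding(mesh, P("fsdp", None, None))

# For the logged bfloat16[32,2048,2048] activation, each of the 32 devices
# then holds a [1, 2048, 2048] slice.
print(sharding.shard_shape((32, 2048, 2048)))  # (1, 2048, 2048)
```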
fsdp: 32
I0421 14:57:42.930046 137649491269440 maxtext_utils.py:1668] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0421 14:57:42.930194 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0421 14:57:42.930269 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0421 14:57:42.930363 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0421 14:57:42.930445 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0421 14:57:42.930511 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0421 14:57:42.930591 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0421 14:57:42.930682 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0421 14:57:42.930744 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0421 14:57:42.930805 137649491269440 maxtext_utils.py:1668] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0421 14:57:42.930878 137649491269440 maxtext_utils.py:1668] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0421 14:57:42.930952 137649491269440 maxtext_utils.py:1668] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0421 14:57:42.955463 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[2048]............................................... Unknown.
I0421 14:57:42.955593 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[2048]............................................... (None,).
I0421 14:57:42.970650 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[2048,16,7168]....................................... Unknown.
I0421 14:57:42.970710 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[2048,16,7168]....................................... ('fsdp', None, None).
I0421 14:57:43.001484 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[7168,16,2048]....................................... Unknown.
I0421 14:57:43.001555 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[7168,16,2048]....................................... (None, None, 'fsdp').
I0421 14:57:43.017320 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[2048,16]............................................ Unknown.
I0421 14:57:43.017374 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[2048,16]............................................ (None, None).
I0421 14:57:43.047050 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[2048,16,16,128]..................................... Unknown.
I0421 14:57:43.047114 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[2048,16,16,128]..................................... ('fsdp', None, None, None).
I0421 14:57:43.062091 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[16,16,128,2048]..................................... Unknown.
I0421 14:57:43.062148 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[16,16,128,2048]..................................... (None, None, None, 'fsdp').
I0421 14:57:43.107012 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[2048,32000]......................................... Unknown.
I0421 14:57:43.107081 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[2048,32000]......................................... ('fsdp', None).
I0421 14:57:43.121937 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Logical: float32[32000,2048]......................................... Unknown.
I0421 14:57:43.121991 137649491269440 gradient_accumulation.py:68] gradient_accumulation/inputs Physical: float32[32000,2048]......................................... (None, 'fsdp').
I0421 14:57:43.794823 137649491269440 train.py:155] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0421 14:57:43.794914 137649491269440 train.py:155] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0421 14:57:43.810726 137649491269440 train.py:162] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0421 14:57:43.810785 137649491269440 train.py:162] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0421 14:57:55.027068 137649491269440 max_utils.py:791] Total memory size: 1.7 GB, Output size: 0.4 GB, Temp size: 1.3 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0421 14:57:55.027868 137649491269440 metric_logger.py:301] number parameters: 1.104 billion
I0421 14:57:57.361713 137649491269440 checkpointing.py:772] Waiting for step 0 to finish before checkpoint...
I0421 14:58:08.169228 137649491269440 checkpointing.py:776] Waited 10.807493448257446 seconds for step 0 to finish before starting checkpointing.
I0421 14:58:08.171767 137649491269440 checkpoint_manager.py:2009] [process=4][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
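The `number parameters: 1.104 billion` line follows directly from the parameter shapes dumped by maxtext_utils.py above (embedding width 2048, 16 scanned layers, MLP width 7168, 16 attention heads of dimension 128, vocabulary 32000). A quick check of that arithmetic:

```python
# Parameter-count check using the shapes logged by maxtext_utils.py above.
embed, layers, mlp, heads, head_dim, vocab = 2048, 16, 7168, 16, 128, 32000

per_layer = (
    3 * embed * mlp                  # wi_0, wi_1, wo
    + 4 * embed * heads * head_dim   # query, key, value, out projections
    + 2 * embed                      # pre/post self-attention layer norms
)
total = (
    layers * per_layer
    + vocab * embed                  # token_embedder/embedding
    + embed * vocab                  # logits_dense/kernel
    + embed                          # decoder_norm/scale
)
print(total, round(total / 1e9, 3))  # 1104218112 1.104, matching the log
```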
I0421 14:58:08.173465 137649491269440 checkpoint_manager.py:1512] [process=4] Saving checkpoint at step 0
I0421 14:58:08.174799 137649491269440 event_tracking.py:70] [process=4] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints/0.
I0421 14:58:08.505661 137649491269440 signaling_client.py:364] Using JaxDistributedSignalingClient
I0421 14:58:08.506520 137649491269440 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0421 14:58:08.506578 137649491269440 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0421 14:58:08.780297 137649491269440 base_pytree_checkpoint_handler.py:154] [process=4][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.274725s
I0421 14:58:08.780466 137649491269440 base_pytree_checkpoint_handler.py:130] [process=4] /jax/orbax/write/blocking_gbytes_per_sec: 5.502 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.2803685665130615 s) (per-host)
I0421 14:58:08.780521 137649491269440 base_pytree_checkpoint_handler.py:768] [process=4][thread=MainThread] Initiated Pytree async_save. Time taken: 0.280433s (batch_requests_ready=0.002352s, total_serialization_initiated=0.278009s, others=0.000072s)
I0421 14:58:08.780614 137649491269440 composite_checkpoint_handler.py:715] [process=4][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.284571s (all_items=0.000018s, per_item={'items': '0.00001788'}, temp_paths=0.284553)
I0421 14:58:08.781426 137649491269440 event_tracking.py:125] [process=4] [async] Finished blocking save in 0.61 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints/0.
I0421 14:58:08.781742 137520744670976 async_checkpointer.py:76] [process=4][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-21 15:18:08.781707
I0421 14:58:09.209228 137649491269440 checkpoint_manager.py:1560] [process=4][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0421 14:58:09.209696 137520213911296 async_checkpointer.py:280] [process=4][thread=save_finalize] Waiting for background save thread=async_save.
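The "total gbytes: 1.5 GiB ... (per-host)" figure reported for this save is consistent with the model size, under two assumptions that are not stated in the log: the optimizer carries two extra float32 moment buffers per parameter (Adam-style), and the 32 chips are spread across 8 hosts. A back-of-the-envelope check:

```python
# Rough per-host checkpoint size check. Assumptions (not in the log): two
# float32 optimizer moment buffers per parameter, and 8 hosts for 32 devices.
params = 1.104e9                        # "number parameters: 1.104 billion"
bytes_total = params * 4 * 3            # float32 weights + 2 optimizer moments
per_host_gib = bytes_total / 8 / 2**30
print(round(per_host_gib, 2))           # ~1.54, close to the logged 1.5 GiB per host
```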
I0421 14:58:09.209847 137649491269440 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776783488.1717486, 'wait_for_prev_duration_secs': 5.984306335449219e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776783488.173502, 'checkpointer_blocking_duration_secs': 0.608389139175415, 'get_old_steps_start_time': 1776783488.7819157, 'get_old_steps_duration_secs': 3.0279159545898438e-05, 'checkpoint_manager_blocking_start_time': 1776783488.1696858, 'checkpoint_manager_blocking_duration_secs': 1.040118932723999}
I0421 14:58:09.210014 137649491269440 checkpointing.py:408] Started an asynchronous checkpoint save for step 0
I0421 14:58:09.210070 137649491269440 max_utils.py:750] Memstats: After params initialized:
I0421 14:58:09.210126 137649491269440 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_16(process=4,(0,4,0,0))
I0421 14:58:09.210160 137649491269440 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_17(process=4,(1,4,0,0))
I0421 14:58:09.210188 137649491269440 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_20(process=4,(0,5,0,0))
I0421 14:58:09.210214 137649491269440 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_21(process=4,(1,5,0,0))
I0421 14:58:09.523444 137649491269440 metric_logger.py:196] completed step: 0, seconds: 2.334, TFLOP/s/device: 23.288, Tokens/s/device: 3510.269, total_weights: 262144, loss: 10.877, lm_loss: 10.877, perplexity: 52959.059
I0421 14:58:10.128372 137649491269440 metric_logger.py:196] completed step: 1, seconds: 12.160, TFLOP/s/device: 4.469, Tokens/s/device: 673.669, total_weights: 262144, loss: 10.877, lm_loss: 10.877, perplexity: 52959.059
I0421 14:58:10.706759 137649491269440 metric_logger.py:196] completed step: 2, seconds: 0.027, TFLOP/s/device: 2040.722, Tokens/s/device: 307599.880, total_weights: 262144, loss: 10.563, lm_loss: 10.563, perplexity: 38662.707
I0421 14:58:11.284885 137649491269440 metric_logger.py:196] completed step: 3, seconds: 0.584, TFLOP/s/device: 93.027, Tokens/s/device: 14022.115, total_weights: 262144, loss: 10.272, lm_loss: 10.272, perplexity: 28909.668
I0421 14:58:12.455223 2556 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0421 14:58:12.460820 137649491269440 metric_logger.py:196] completed step: 4, seconds: 0.578, TFLOP/s/device: 93.991, Tokens/s/device: 14167.422, total_weights: 262144, loss: 10.022, lm_loss: 10.022, perplexity: 22524.992
I0421 14:58:12.466916 137649491269440 metric_logger.py:196] completed step: 5, seconds: 0.578, TFLOP/s/device: 93.993, Tokens/s/device: 14167.618, total_weights: 262144, loss: 9.820, lm_loss: 9.820, perplexity: 18401.865
I0421 14:58:15.286838 137520222304000 array_metadata_store.py:203] [process=4][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints/0/items/array_metadatas/process_4
I0421 14:58:46.624897 137520744670976 base_pytree_checkpoint_handler.py:130] [process=4] /jax/orbax/write/gbytes_per_sec: 41.432 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 38.124757289886475 s) (per-host)
I0421 14:58:46.625024 137520744670976 async_checkpointer.py:90] [process=4][thread=async_save] 3 Handler Commit operations completed. Time taken: 37.843170s.
I0421 14:58:55.164449 137520744670976 async_checkpointer.py:160] [process=4][thread=async_save] Background save thread done. Time taken: 46.382579s.
I0421 14:58:55.164778 137520213911296 async_checkpointer.py:288] [process=4][thread=save_finalize] Done with waiting for background save thread=async_save.
I0421 14:58:55.164905 137520213911296 async_checkpointer.py:298] [process=4][thread=save_finalize] No errors found in background save thread=async_save.
I0421 14:58:55.164957 137520213911296 checkpoint_manager.py:2137] [process=4][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0421 14:58:55.167673 137520213911296 checkpoint_manager.py:2146] [process=4][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0421 14:58:58.103793 137649491269440 metric_logger.py:196] completed step: 6, seconds: 1.177, TFLOP/s/device: 46.192, Tokens/s/device: 6962.505, total_weights: 262144, loss: 9.667, lm_loss: 9.667, perplexity: 15787.604
I0421 14:58:58.682023 137649491269440 metric_logger.py:196] completed step: 7, seconds: 45.059, TFLOP/s/device: 1.206, Tokens/s/device: 181.806, total_weights: 262144, loss: 9.561, lm_loss: 9.561, perplexity: 14203.827
I0421 14:58:59.260164 137649491269440 metric_logger.py:196] completed step: 8, seconds: 0.583, TFLOP/s/device: 93.182, Tokens/s/device: 14045.411, total_weights: 262144, loss: 9.496, lm_loss: 9.496, perplexity: 13302.920
I0421 14:58:59.838200 137649491269440 checkpointing.py:772] Waiting for step 9 to finish before checkpoint...
I0421 14:58:59.838895 137649491269440 checkpointing.py:776] Waited 0.0007131099700927734 seconds for step 9 to finish before starting checkpointing.
I0421 14:58:59.840893 137649491269440 checkpoint_manager.py:2009] [process=4][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0421 14:58:59.842468 137649491269440 checkpoint_manager.py:1512] [process=4] Saving checkpoint at step 9
I0421 14:58:59.843875 137649491269440 event_tracking.py:70] [process=4] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints/9.
I0421 14:59:00.158215 137649491269440 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0421 14:59:00.158309 137649491269440 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0421 14:59:00.193449 137649491269440 base_pytree_checkpoint_handler.py:154] [process=4][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.036185s
I0421 14:59:00.193599 137649491269440 base_pytree_checkpoint_handler.py:130] [process=4] /jax/orbax/write/blocking_gbytes_per_sec: 38.785 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.03977203369140625 s) (per-host)
I0421 14:59:00.193661 137649491269440 base_pytree_checkpoint_handler.py:768] [process=4][thread=MainThread] Initiated Pytree async_save. Time taken: 0.039847s (batch_requests_ready=0.001796s, total_serialization_initiated=0.037970s, others=0.000081s)
I0421 14:59:00.193748 137649491269440 composite_checkpoint_handler.py:715] [process=4][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.044407s (all_items=0.000016s, per_item={'items': '0.00001597'}, temp_paths=0.044391)
I0421 14:59:00.194445 137649491269440 event_tracking.py:125] [process=4] [async] Finished blocking save in 0.35 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints/9.
I0421 14:59:00.194725 137524475307776 async_checkpointer.py:76] [process=4][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-21 15:19:00.194701
I0421 14:59:00.201282 137649491269440 checkpoint_manager.py:1560] [process=4][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0421 14:59:00.201539 137520727885568 async_checkpointer.py:280] [process=4][thread=save_finalize] Waiting for background save thread=async_save.
I0421 14:59:00.201683 137649491269440 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776783539.840859, 'wait_for_prev_duration_secs': 7.62939453125e-05, 'time_between_consecutive_saves_sec': 4.673147201538086, 'checkpointer_blocking_start_time': 1776783539.8425052, 'checkpointer_blocking_duration_secs': 0.3523216247558594, 'get_old_steps_start_time': 1776783540.1948512, 'get_old_steps_duration_secs': 2.9802322387695312e-05, 'checkpoint_manager_blocking_start_time': 1776783539.8391263, 'checkpoint_manager_blocking_duration_secs': 0.36252260208129883}
I0421 14:59:00.201791 137649491269440 checkpointing.py:408] Started an asynchronous checkpoint save for step 9
I0421 14:59:00.201836 137649491269440 checkpoint_manager.py:2020] [process=4][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0421 14:59:05.696557 137520744670976 array_metadata_store.py:203] [process=4][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260421_144122/linen_xpk_main_20260421_144122_06_grad_accum/checkpoints/9/items/array_metadatas/process_4
I0421 14:59:42.300541 137524475307776 base_pytree_checkpoint_handler.py:130] [process=4] /jax/orbax/write/gbytes_per_sec: 37.479 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 42.14667630195618 s) (per-host)
I0421 14:59:42.300706 137524475307776 async_checkpointer.py:90] [process=4][thread=async_save] 3 Handler Commit operations completed. Time taken: 42.105904s.
I0421 14:59:50.862603 137524475307776 async_checkpointer.py:160] [process=4][thread=async_save] Background save thread done. Time taken: 50.667779s.
I0421 14:59:50.862879 137520727885568 async_checkpointer.py:288] [process=4][thread=save_finalize] Done with waiting for background save thread=async_save.
I0421 14:59:50.862934 137520727885568 async_checkpointer.py:298] [process=4][thread=save_finalize] No errors found in background save thread=async_save.
I0421 14:59:50.862987 137520727885568 checkpoint_manager.py:2137] [process=4][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0421 14:59:50.865099 137520727885568 checkpoint_manager.py:2146] [process=4][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0421 14:59:50.865276 137649491269440 checkpoint_manager.py:2032] [process=4][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0421 14:59:50.865433 137649491269440 checkpoint_manager.py:2009] [process=4][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0421 14:59:50.866439 137649491269440 metric_logger.py:196] completed step: 9, seconds: 0.578, TFLOP/s/device: 93.985, Tokens/s/device: 14166.466, total_weights: 262144, loss: 9.457, lm_loss: 9.457, perplexity: 12802.546
Per train step: Total TFLOPs: 54.35 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Tue Apr 21 15:00:02 UTC 2026
EXIT_CODE=0
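The closing summary ties the per-step metrics together: 54.35 TFLOPs per device per step and 262144 tokens per step (total_weights) across 32 devices. The 262144 figure is also consistent with the [32, 2048] microbatch shape logged earlier times 4 gradient-accumulation microbatches, though the accumulation setting itself never appears in the log. A quick consistency check against step 9; small differences come from the log rounding the step time and loss:

```python
import math

devices = 32              # Num_devices: 32
tokens_per_step = 262144  # total_weights in each metric_logger line
tflops_per_step = 54.35   # "Per train step: Total TFLOPs: 54.35" (per device)
step_seconds = 0.578      # completed step: 9
loss = 9.457              # completed step: 9

print(tflops_per_step / step_seconds)            # ~94.0 TFLOP/s/device (logged: 93.985)
print(tokens_per_step / step_seconds / devices)  # ~14173 Tokens/s/device (logged: 14166.466)
print(math.exp(loss))                            # ~12798, perplexity = exp(loss) (logged: 12802.546)
```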