XPK Start: Wed Apr 22 09:48:09 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-22 09:48:34.066375: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0422 09:48:34.632887 134660336293696 max_utils.py:273] Attempting to initialize the jax distributed system...
I0422 09:48:43.671529 134660336293696 distributed.py:149] Starting JAX distributed service on [::]:8482
I0422 09:48:43.673897 134660336293696 distributed.py:172] Connecting to JAX distributed service on mt-07-eval-wsa74-slice-job-0-0.mt-07-eval-wsa74:8482
I0422 09:48:45.033878 134660336293696 max_utils.py:284] Jax distributed system initialized!
I0422 09:48:51.112406 134660336293696 max_utils.py:800] System Information: Jax Version: 0.9.2
I0422 09:48:51.112514 134660336293696 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0422 09:48:51.112554 134660336293696 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0422 09:48:51.112589 134660336293696 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0422 09:48:51.810027 134660336293696 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0422 09:48:51.810615 134660336293696 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0422 09:48:51.810813 134660336293696 checkpointing.py:688] Setting up checkpoint logger...
I0422 09:48:51.810866 134660336293696 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0422 09:48:51.810908 134660336293696 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0422 09:48:51.811251 134660336293696 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7a7855e50470>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0422 09:48:54.749526 134660336293696 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0422 09:48:54.749785 134660336293696 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7a64081eedb0>}, handler_registry=None
I0422 09:48:54.750040 134660336293696 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7a64081eedb0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0422 09:48:54.750089 134660336293696 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7a64081f2930>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0422 09:48:54.750125 134660336293696 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7a64081eedb0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7a64081eedb0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7a64081f2930>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7a64081f2930>}).
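Two of the messages above are worth unpacking. The `rope_parameters` warnings fire because integer values were passed where the config check wants floats; a minimal sketch of the corrected fields, using only the three field names shown in the warnings (any other keys, such as a rope type, would be assumptions):

```python
# Hypothetical fix for the type warnings above: pass floats, not ints.
# Only the three flagged field names come from the log.
rope_parameters = {
    "factor": 40.0,     # was 40, must be a float >= 1
    "beta_fast": 32.0,  # was 32, must be a float
    "beta_slow": 1.0,   # was 1, must be a float
}
```

The `Num_devices: 32, shape (1, 1, 1, 32, ...)` lines describe the device mesh: all 32 chips sit on a single non-trivial axis. A rough sketch of building such a mesh, assuming 32 devices as in this run (the axis names are illustrative; MaxText's real mesh has 13 named axes):

```python
import numpy as np
import jax
from jax.sharding import Mesh

# Sketch only: lay out all 32 devices along one 'fsdp' axis, leaving the
# other axes trivial, to mirror the logged mesh shape.
devices = np.asarray(jax.devices()).reshape(1, 1, 32, 1)
mesh = Mesh(devices, axis_names=("data", "stage", "fsdp", "tensor"))
print(mesh.shape)  # {'data': 1, 'stage': 1, 'fsdp': 32, 'tensor': 1}
```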
I0422 09:48:54.750447 134660336293696 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0422 09:48:54.750515 134660336293696 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7a62f4579800> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0422 09:48:56.035435 134660336293696 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints
I0422 09:48:56.263540 134660336293696 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7a774a871430>
I0422 09:48:56.263746 134660336293696 checkpointing.py:302] Checkpoint manager created!
I0422 09:48:57.235742 134660336293696 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0422 09:48:57.235849 134660336293696 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 09:48:57.616123 134660336293696 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0422 09:48:57.616220 134660336293696 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 09:48:57.632767 134660336293696 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0422 09:48:57.632824 134660336293696 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0422 09:48:57.656631 134660336293696 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
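The `CheckpointManagerOptions` dump above corresponds, in current Orbax terms, to an async manager with a fixed-interval save policy. A minimal sketch of constructing a manager like the one logged, assuming a local path in place of this run's `gs://` root_directory:

```python
import orbax.checkpoint as ocp

# Minimal sketch of a manager configured like the one logged above:
# async writes, save every 10 steps (FixedIntervalPolicy(interval=10)),
# directory created on demand. The path is hypothetical.
options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,
    enable_async_checkpointing=True,
    create=True,
)
mngr = ocp.CheckpointManager("/tmp/ckpt_demo", options=options)
```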
I0422 09:48:57.656709 134660336293696 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 09:48:57.673145 134660336293696 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 09:48:57.673210 134660336293696 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 09:48:57.689792 134660336293696 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0422 09:48:57.689863 134660336293696 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 09:48:57.714964 134660336293696 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0422 09:48:57.715036 134660336293696 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0422 09:48:57.735810 134660336293696 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0422 09:48:57.735874 134660336293696 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0422 09:48:57.956979 134660336293696 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0422 09:48:57.957087 134660336293696 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
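Each `Logical:`/`Physical:` pair above is one activation annotation resolved through Flax's logical axis rules: logical names are looked up in a rules table that maps them to mesh axes (or to `None` for unsharded). A sketch of rules that would reproduce the `('fsdp', None, None)` results, assuming the single-axis mesh from the earlier sketch:

```python
import flax.linen as nn

# Sketch: rules mapping logical axis names (taken from the log) to mesh
# axes. Under these, ('activation_batch', 'activation_attn_length',
# 'activation_attn_embed') resolves to ('fsdp', None, None), as logged.
rules = (
    ("activation_batch", "fsdp"),
    ("activation_attn_length", None),
    ("activation_attn_embed", None),
)

# Applied inside a module running under the mesh, e.g.:
#   with nn.logical_axis_rules(rules):
#       x = nn.with_logical_constraint(
#           x, ("activation_batch", "activation_attn_length", "activation_attn_embed"))
```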
fsdp: 32
I0422 09:48:59.383047 134660336293696 maxtext_utils.py:1835] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0422 09:48:59.383173 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0422 09:48:59.383228 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0422 09:48:59.383290 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0422 09:48:59.383345 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0422 09:48:59.383385 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0422 09:48:59.383439 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0422 09:48:59.383494 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0422 09:48:59.383538 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0422 09:48:59.383576 134660336293696 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0422 09:48:59.383626 134660336293696 maxtext_utils.py:1835] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0422 09:48:59.383699 134660336293696 maxtext_utils.py:1835] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0422 09:48:59.878355 134660336293696 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0422 09:48:59.878448 134660336293696 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0422 09:48:59.894114 134660336293696 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0422 09:48:59.894172 134660336293696 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0422 09:49:10.609599 134660336293696 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
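To make one of these entries concrete: `wi_0/kernel` is `float32[2048,16,7168]` with physical spec `('fsdp', None, None)`, so only the first (embed) dimension is split across the 32-way fsdp axis. A worked sketch, assuming 32 local devices as in this run:

```python
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Shard a (2048, 16, 7168) kernel the way the log describes: first axis
# split 32 ways over 'fsdp', the other two axes replicated.
mesh = Mesh(np.asarray(jax.devices()), axis_names=("fsdp",))
kernel = jax.device_put(
    np.zeros((2048, 16, 7168), np.float32),
    NamedSharding(mesh, P("fsdp", None, None)),
)
# Each device holds a (64, 16, 7168) shard, since 2048 / 32 = 64.
print(kernel.addressable_shards[0].data.shape)
```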
I0422 09:49:10.610429 134660336293696 metric_logger.py:301] number parameters: 1.104 billion
I0422 09:49:21.805823 134660336293696 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0422 09:49:22.135935 134660336293696 checkpointing.py:798] Waited 0.3300914764404297 seconds for step 0 to finish before starting checkpointing.
I0422 09:49:22.138240 134660336293696 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 09:49:22.139833 134660336293696 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0422 09:49:22.141387 134660336293696 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints/0.
I0422 09:49:22.482860 134660336293696 signaling_client.py:364] Using JaxDistributedSignalingClient
I0422 09:49:22.483827 134660336293696 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 09:49:22.483882 134660336293696 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 09:49:22.755626 134660336293696 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.272839s
I0422 09:49:22.755819 134660336293696 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 5.539 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.2785041332244873 s) (per-host)
I0422 09:49:22.755875 134660336293696 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.278571s (batch_requests_ready=0.002479s, total_serialization_initiated=0.276018s, others=0.000074s)
I0422 09:49:22.755967 134660336293696 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.282669s (all_items=0.000018s, per_item={'items': '0.00001812'}, temp_paths=0.282651)
I0422 09:49:22.756818 134660336293696 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.62 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints/0.
I0422 09:49:22.757105 134531782203136 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-22 10:09:22.757072
I0422 09:49:22.779873 134660336293696 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0422 09:49:22.780205 134530132641536 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
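The sequence above is Orbax's async save path: the only work that blocks the training loop is the device-to-host copy (0.62 seconds here), after which the actual GCS write continues on the `async_save` background thread. A self-contained sketch of the same flow; the path and state are illustrative, not this run's:

```python
import numpy as np
import orbax.checkpoint as ocp

options = ocp.CheckpointManagerOptions(enable_async_checkpointing=True, create=True)
mngr = ocp.CheckpointManager("/tmp/ckpt_async_demo", options=options)

state = {"params": np.zeros((4, 4), np.float32)}  # stand-in for the train state
mngr.save(0, args=ocp.args.StandardSave(state))   # returns after the blocking copy
mngr.wait_until_finished()                        # joins the background save thread
mngr.close()
```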
I0422 09:49:22.780368 134660336293696 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776851362.138222, 'wait_for_prev_duration_secs': 5.91278076171875e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776851362.1398728, 'checkpointer_blocking_duration_secs': 0.6173892021179199, 'get_old_steps_start_time': 1776851362.7572863, 'get_old_steps_duration_secs': 3.075599670410156e-05, 'checkpoint_manager_blocking_start_time': 1776851362.136443, 'checkpoint_manager_blocking_duration_secs': 0.6438860893249512}
I0422 09:49:22.780471 134660336293696 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0422 09:49:22.780519 134660336293696 max_utils.py:750] Memstats: After params initialized:
I0422 09:49:22.780568 134660336293696 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_18(process=5,(2,4,0,0))
I0422 09:49:22.780602 134660336293696 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_19(process=5,(3,4,0,0))
I0422 09:49:22.780631 134660336293696 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_22(process=5,(2,5,0,0))
I0422 09:49:22.780671 134660336293696 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_23(process=5,(3,5,0,0))
I0422 09:49:23.089481 134660336293696 metric_logger.py:196] completed step: 0, seconds: 11.195, TFLOP/s/device: 1.214, Tokens/s/device: 182.934, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0422 09:49:23.268615 134660336293696 metric_logger.py:196] completed step: 1, seconds: 1.282, TFLOP/s/device: 10.597, Tokens/s/device: 1597.367, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0422 09:49:23.678310 134660336293696 metric_logger.py:196] completed step: 2, seconds: 0.032, TFLOP/s/device: 426.450, Tokens/s/device: 64279.213, total_weights: 65536, loss: 10.262, lm_loss: 10.262, perplexity: 28637.129
I0422 09:49:23.825533 134660336293696 metric_logger.py:196] completed step: 3, seconds: 0.410, TFLOP/s/device: 33.150, Tokens/s/device: 4996.658, total_weights: 65536, loss: 9.734, lm_loss: 9.734, perplexity: 16889.203
I0422 09:49:24.120518 134660336293696 metric_logger.py:196] completed step: 4, seconds: 0.153, TFLOP/s/device: 88.762, Tokens/s/device: 13379.237, total_weights: 65536, loss: 9.277, lm_loss: 9.277, perplexity: 10694.614
I0422 09:49:24.126150 134660336293696 metric_logger.py:196] completed step: 5, seconds: 0.147, TFLOP/s/device: 92.375, Tokens/s/device: 13923.732, total_weights: 65536, loss: 8.892, lm_loss: 8.892, perplexity: 7270.717
I0422 09:49:26.177872 2845 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0422 09:49:28.652548 134530141034240 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints/0/items/array_metadatas/process_5
I0422 09:49:46.418423 134660336293696 metric_logger.py:196] completed step: 6, seconds: 0.295, TFLOP/s/device: 46.008, Tokens/s/device: 6934.803, total_weights: 65536, loss: 8.592, lm_loss: 8.592, perplexity: 5390.759
I0422 09:49:46.565873 134660336293696 metric_logger.py:196] completed step: 7, seconds: 22.146, TFLOP/s/device: 0.614, Tokens/s/device: 92.479, total_weights: 65536, loss: 8.384, lm_loss: 8.384, perplexity: 4376.905
I0422 09:49:46.713337 134660336293696 metric_logger.py:196] completed step: 8, seconds: 0.152, TFLOP/s/device: 89.384, Tokens/s/device: 13472.886, total_weights: 65536, loss: 8.255, lm_loss: 8.255, perplexity: 3846.406
I0422 09:49:46.859899 134660336293696 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0422 09:49:46.860447 134660336293696 checkpointing.py:798] Waited 0.0005686283111572266 seconds for step 9 to finish before starting checkpointing.
I0422 09:49:46.862689 134660336293696 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0422 09:49:59.589498 134531782203136 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 42.563 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 37.112144470214844 s) (per-host)
I0422 09:49:59.589617 134531782203136 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 36.832397s.
I0422 09:50:08.015286 134531782203136 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 45.258049s.
I0422 09:50:08.015569 134530132641536 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0422 09:50:08.015714 134530132641536 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0422 09:50:08.015766 134530132641536 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0422 09:50:08.017478 134530132641536 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0422 09:50:08.017670 134660336293696 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0422 09:50:08.017802 134660336293696 checkpoint_manager.py:1452] Waiting for previous save to complete took 21.155113 seconds. If this number is high, consider checkpointing less frequently.
I0422 09:50:08.019695 134660336293696 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 9
I0422 09:50:08.021680 134660336293696 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints/9.
I0422 09:50:08.311955 134660336293696 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0422 09:50:08.312047 134660336293696 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0422 09:50:08.346698 134660336293696 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.035775s
I0422 09:50:08.346863 134660336293696 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 39.396 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.03915524482727051 s) (per-host)
I0422 09:50:08.346914 134660336293696 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.039215s (batch_requests_ready=0.001750s, total_serialization_initiated=0.037397s, others=0.000068s)
I0422 09:50:08.347011 134660336293696 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.043259s (all_items=0.000016s, per_item={'items': '0.00001574'}, temp_paths=0.043243)
I0422 09:50:08.347749 134660336293696 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.33 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints/9.
I0422 09:50:08.348098 134530132641536 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-22 10:10:08.348060
I0422 09:50:08.350032 134660336293696 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0422 09:50:08.350308 134530141034240 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0422 09:50:08.350444 134660336293696 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776851386.8626473, 'wait_for_prev_duration_secs': 21.155112981796265, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776851408.0197315, 'checkpointer_blocking_duration_secs': 0.32851195335388184, 'get_old_steps_start_time': 1776851408.3482687, 'get_old_steps_duration_secs': 3.170967102050781e-05, 'checkpoint_manager_blocking_start_time': 1776851386.860688, 'checkpoint_manager_blocking_duration_secs': 21.489721536636353}
I0422 09:50:08.350601 134660336293696 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0422 09:50:08.350646 134660336293696 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0422 09:50:14.009625 134526909552384 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106/linen_xpk_feat_nnx_trainstate_and_training_loop_20260422_093106_07_eval/checkpoints/9/items/array_metadatas/process_5
I0422 09:50:51.824002 134530132641536 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 36.299 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 43.51625418663025 s) (per-host)
I0422 09:50:51.824136 134530132641536 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 43.475926s.
I0422 09:50:58.408536 134530132641536 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 50.060311s.
I0422 09:50:58.408835 134530141034240 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0422 09:50:58.408952 134530141034240 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0422 09:50:58.409015 134530141034240 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0422 09:50:58.410553 134530141034240 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0422 09:50:58.410742 134660336293696 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0422 09:50:58.410889 134660336293696 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0422 09:50:58.411827 134660336293696 metric_logger.py:196] completed step: 9, seconds: 0.147, TFLOP/s/device: 92.194, Tokens/s/device: 13896.428, total_weights: 65536, loss: 8.179, lm_loss: 8.179, perplexity: 3564.635
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Wed Apr 22 09:51:09 UTC 2026
EXIT_CODE=0
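The logged metrics are internally consistent and can be cross-checked with a few lines, assuming `total_weights` counts tokens (global batch 32 × sequence length 2048 = 65536):

```python
import math

# Step 9: perplexity is exp(loss), up to rounding of the logged loss.
print(math.exp(8.179))             # ~3565 vs. logged perplexity 3564.635

# Throughput: per-step work divided by step time and device count.
tokens, devices, seconds = 65536, 32, 0.147
print(tokens / devices / seconds)  # ~13932 vs. logged Tokens/s/device 13896.428
print(13.59 / seconds)             # ~92 vs. logged TFLOP/s/device 92.194
```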