main (586e69205) vs feat/nnx-trainstate-and-training-loop (1abe20691)

| Metric | main 586e69205 | feat/nnx-trainstate-and-training-loop 1abe20691 | Diff (feat/nnx-trainstate-and-training-loop − main) |
|---|---|---|---|
| Parameters | 1.104 billion | 1.104 billion | — |
| Final loss | 8.1790 | 8.1790 | 0 |
| TFLOP/s | 92.246 | 92.347 | +0.101 |
| Tok/s | 13904.4 | 13919.5 | +15.121 |
| Avg s/step | 2.882 | 2.748 | -0.134 |
| Memory % | 1.38 | 1.38 | 0 |
| JAX | 0.9.2 | 0.9.2 | — |
Diff = branch value − main value. A positive Diff for TFLOP/s and Tok/s, or a negative Diff for avg s/step, means the branch improved over main.
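The Diff column above can be reproduced directly from the two runs' metrics. A minimal sketch (the dict keys and the helper name `diff_row` are illustrative, not from the benchmark tooling; note that diffs computed from the rounded table values can differ in the last digits from the reported ones, e.g. +15.121 for Tok/s was presumably derived from unrounded values):

```python
# Values copied from the table above (rounded as displayed).
main = {"tflops_per_s": 92.246, "tok_per_s": 13904.4, "avg_s_per_step": 2.882}
branch = {"tflops_per_s": 92.347, "tok_per_s": 13919.5, "avg_s_per_step": 2.748}

def diff_row(main_metrics, branch_metrics):
    # Diff = branch value − main value, per the legend.
    return {k: round(branch_metrics[k] - main_metrics[k], 3) for k in main_metrics}

print(diff_row(main, branch))
```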
main (586e69205), full log:

```
XPK Start: Thu Apr 23 07:55:24 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
2026-04-23 07:55:48.841148: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0423 07:55:49.053261 132489734272832 max_utils.py:273] Attempting to initialize the jax distributed system...
I0423 07:55:58.094779 132489734272832 distributed.py:149] Starting JAX distributed service on [::]:8482
I0423 07:55:58.097288 132489734272832 distributed.py:172] Connecting to JAX distributed service on mt-10-shardy-false-8whn7-slice-job-0-0.mt-10-shardy-false-8whn7:8482
I0423 07:55:59.199954 132489734272832 max_utils.py:284] Jax distributed system initialized!
I0423 07:56:05.383410 132489734272832 max_utils.py:800] System Information: Jax Version: 0.9.2
I0423 07:56:05.383515 132489734272832 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0423 07:56:05.383556 132489734272832 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0423 07:56:05.383593 132489734272832 train_utils.py:361] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0423 07:56:06.080533 132489734272832 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 07:56:06.080817 132489734272832 checkpointing.py:677] Setting up checkpoint logger...
I0423 07:56:06.080875 132489734272832 checkpointing.py:233] Creating checkpoint manager with ocdbt=True and zarr3=True
I0423 07:56:06.080920 132489734272832 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0423 07:56:06.081250 132489734272832 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x787f179182f0>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0423 07:56:09.427806 132489734272832 checkpointing.py:265] Enabling policy for fixed interval checkpointing.
I0423 07:56:09.428046 132489734272832 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x786a905a5f10>}, handler_registry=None
I0423 07:56:09.428290 132489734272832 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x786a905a5f10>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0423 07:56:09.428338 132489734272832 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x786a905ac170>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 07:56:09.428374 132489734272832 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x786a905a5f10>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x786a905a5f10>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x786a905ac170>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x786a905ac170>}).
I0423 07:56:09.428704 132489734272832 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 07:56:09.428777 132489734272832 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x786a901d9800> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 07:56:11.039485 132489734272832 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints
I0423 07:56:11.041841 132489734272832 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x786a905a6bd0>
I0423 07:56:11.041954 132489734272832 checkpointing.py:301] Checkpoint manager created!
I0423 07:56:11.967746 132489734272832 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 07:56:11.967861 132489734272832 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 07:56:12.347174 132489734272832 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 07:56:12.347268 132489734272832 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 07:56:12.363867 132489734272832 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 07:56:12.363925 132489734272832 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 07:56:12.387728 132489734272832 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 07:56:12.387797 132489734272832 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 07:56:12.404350 132489734272832 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 07:56:12.404412 132489734272832 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 07:56:12.420979 132489734272832 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 07:56:12.421035 132489734272832 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 07:56:12.445875 132489734272832 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 07:56:12.445944 132489734272832 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 07:56:12.467192 132489734272832 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 07:56:12.467266 132489734272832 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0423 07:56:12.678431 132489734272832 checkpointing.py:577] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 07:56:12.678536 132489734272832 checkpointing.py:665] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0423 07:56:14.060797 132489734272832 maxtext_utils.py:1707] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0423 07:56:14.060922 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0423 07:56:14.060975 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0423 07:56:14.061038 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0423 07:56:14.061089 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0423 07:56:14.061133 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0423 07:56:14.061185 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0423 07:56:14.061236 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0423 07:56:14.061275 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0423 07:56:14.061311 132489734272832 maxtext_utils.py:1707] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0423 07:56:14.061357 132489734272832 maxtext_utils.py:1707] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0423 07:56:14.061405 132489734272832 maxtext_utils.py:1707] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0423 07:56:14.559746 132489734272832 train.py:155] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 07:56:14.559848 132489734272832 train.py:155] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 07:56:14.575670 132489734272832 train.py:162] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 07:56:14.575732 132489734272832 train.py:162] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 07:56:25.046374 132489734272832 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 07:56:25.047126 132489734272832 metric_logger.py:301] number parameters: 1.104 billion
I0423 07:56:36.402561 132489734272832 checkpointing.py:772] Waiting for step 0 to finish before checkpoint...
I0423 07:56:36.828944 132489734272832 checkpointing.py:776] Waited 0.426363468170166 seconds for step 0 to finish before starting checkpointing.
I0423 07:56:36.831197 132489734272832 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 07:56:36.832779 132489734272832 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0423 07:56:36.834325 132489734272832 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints/0.
I0423 07:56:37.575447 132489734272832 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 07:56:37.576350 132489734272832 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 07:56:37.576406 132489734272832 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 07:56:37.850255 132489734272832 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.274883s
I0423 07:56:37.850438 132489734272832 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 5.504 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.2802748680114746 s) (per-host)
I0423 07:56:37.850503 132489734272832 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.280351s (batch_requests_ready=0.002192s, total_serialization_initiated=0.278074s, others=0.000085s)
I0423 07:56:37.850625 132489734272832 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.284492s (all_items=0.000019s, per_item={'items': '0.00001860'}, temp_paths=0.284473)
I0423 07:56:37.851408 132489734272832 event_tracking.py:125] [process=5] [async] Finished blocking save in 1.02 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints/0.
I0423 07:56:37.851760 132360804341504 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 08:16:37.851723
I0423 07:56:38.280864 132489734272832 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 07:56:38.281257 132360247592704 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 07:56:38.281427 132489734272832 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776930996.831179, 'wait_for_prev_duration_secs': 5.936622619628906e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776930996.8328156, 'checkpointer_blocking_duration_secs': 1.0190975666046143, 'get_old_steps_start_time': 1776930997.8519385, 'get_old_steps_duration_secs': 3.457069396972656e-05, 'checkpoint_manager_blocking_start_time': 1776930996.8293722, 'checkpoint_manager_blocking_duration_secs': 1.4520113468170166}
I0423 07:56:38.281538 132489734272832 checkpointing.py:408] Started an asynchronous checkpoint save for step 0
I0423 07:56:38.281590 132489734272832 max_utils.py:750] Memstats: After params initialized:
I0423 07:56:38.281646 132489734272832 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_18(process=5,(2,4,0,0))
I0423 07:56:38.281693 132489734272832 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_19(process=5,(3,4,0,0))
I0423 07:56:38.281723 132489734272832 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_22(process=5,(2,5,0,0))
I0423 07:56:38.281750 132489734272832 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_23(process=5,(3,5,0,0))
I0423 07:56:38.595552 132489734272832 metric_logger.py:196] completed step: 0, seconds: 11.355, TFLOP/s/device: 1.197, Tokens/s/device: 180.356, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0423 07:56:38.768605 132489734272832 metric_logger.py:196] completed step: 1, seconds: 2.191, TFLOP/s/device: 6.200, Tokens/s/device: 934.534, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0423 07:56:39.150761 132489734272832 metric_logger.py:196] completed step: 2, seconds: 0.026, TFLOP/s/device: 525.391, Tokens/s/device: 79192.607, total_weights: 65536, loss: 10.262, lm_loss: 10.262, perplexity: 28637.129
I0423 07:56:39.298031 132489734272832 metric_logger.py:196] completed step: 3, seconds: 0.382, TFLOP/s/device: 35.534, Tokens/s/device: 5356.125, total_weights: 65536, loss: 9.734, lm_loss: 9.734, perplexity: 16889.203
I0423 07:56:39.593369 132489734272832 metric_logger.py:196] completed step: 4, seconds: 0.153, TFLOP/s/device: 88.611, Tokens/s/device: 13356.377, total_weights: 65536, loss: 9.277, lm_loss: 9.277, perplexity: 10694.614
I0423 07:56:39.599736 132489734272832 metric_logger.py:196] completed step: 5, seconds: 0.147, TFLOP/s/device: 92.232, Tokens/s/device: 13902.277, total_weights: 65536, loss: 8.892, lm_loss: 8.892, perplexity: 7270.717
I0423 07:56:41.586686 2852 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 07:56:43.655354 132360775374592 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints/0/items/array_metadatas/process_5
I0423 07:57:02.185185 132489734272832 metric_logger.py:196] completed step: 6, seconds: 0.296, TFLOP/s/device: 45.926, Tokens/s/device: 6922.474, total_weights: 65536, loss: 8.592, lm_loss: 8.592, perplexity: 5390.759
I0423 07:57:02.332571 132489734272832 metric_logger.py:196] completed step: 7, seconds: 22.439, TFLOP/s/device: 0.606, Tokens/s/device: 91.271, total_weights: 65536, loss: 8.384, lm_loss: 8.384, perplexity: 4376.905
I0423 07:57:02.479972 132489734272832 metric_logger.py:196] completed step: 8, seconds: 0.153, TFLOP/s/device: 88.948, Tokens/s/device: 13407.178, total_weights: 65536, loss: 8.255, lm_loss: 8.255, perplexity: 3846.406
I0423 07:57:02.626466 132489734272832 checkpointing.py:772] Waiting for step 9 to finish before checkpoint...
I0423 07:57:02.627160 132489734272832 checkpointing.py:776] Waited 0.0007116794586181641 seconds for step 9 to finish before starting checkpointing.
I0423 07:57:02.629299 132489734272832 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 07:57:14.894633 132360804341504 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 42.321 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 37.32444524765015 s) (per-host)
I0423 07:57:14.894746 132360804341504 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 37.042874s.
I0423 07:57:23.881562 132360804341504 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 46.029673s.
I0423 07:57:23.881863 132360247592704 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 07:57:23.881989 132360247592704 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 07:57:23.882041 132360247592704 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 07:57:23.883594 132360247592704 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 07:57:23.883793 132489734272832 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0423 07:57:23.883928 132489734272832 checkpoint_manager.py:1452] Waiting for previous save to complete took 21.254626 seconds. If this number is high, consider checkpointing less frequently.
I0423 07:57:23.885646 132489734272832 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 9
I0423 07:57:23.887778 132489734272832 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints/9.
I0423 07:57:24.601198 132489734272832 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 07:57:24.601291 132489734272832 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 07:57:24.636557 132489734272832 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.036489s
I0423 07:57:24.636733 132489734272832 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 38.663 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.03989815711975098 s) (per-host)
I0423 07:57:24.636786 132489734272832 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.039958s (batch_requests_ready=0.001759s, total_serialization_initiated=0.038133s, others=0.000066s)
I0423 07:57:24.636897 132489734272832 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.044190s (all_items=0.000015s, per_item={'items': '0.00001454'}, temp_paths=0.044175)
I0423 07:57:24.637616 132489734272832 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.75 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints/9.
I0423 07:57:24.637913 132360247592704 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 08:17:24.637884
I0423 07:57:24.642695 132489734272832 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 07:57:24.642980 132360775374592 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 07:57:24.643140 132489734272832 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776931022.6292703, 'wait_for_prev_duration_secs': 21.25462555885315, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776931043.8857005, 'checkpointer_blocking_duration_secs': 0.7523386478424072, 'get_old_steps_start_time': 1776931044.638063, 'get_old_steps_duration_secs': 2.8848648071289062e-05, 'checkpoint_manager_blocking_start_time': 1776931022.6274197, 'checkpoint_manager_blocking_duration_secs': 22.015685081481934}
I0423 07:57:24.643251 132489734272832 checkpointing.py:408] Started an asynchronous checkpoint save for step 9
I0423 07:57:24.643294 132489734272832 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 07:57:29.832936 132336507143936 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260423_071538/linen_xpk_main_20260423_071538_10_shardy_false/checkpoints/9/items/array_metadatas/process_5
I0423 07:58:06.338025 132360247592704 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 37.843 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 41.74115037918091 s) (per-host)
I0423 07:58:06.338149 132360247592704 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 41.700140s.
I0423 07:58:14.652267 132360247592704 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 50.014244s.
I0423 07:58:14.652550 132360775374592 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 07:58:14.652701 132360775374592 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 07:58:14.652753 132360775374592 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0423 07:58:14.654101 132360775374592 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0423 07:58:14.654333 132489734272832 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0423 07:58:14.654474 132489734272832 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 07:58:14.655528 132489734272832 metric_logger.py:196] completed step: 9, seconds: 0.147, TFLOP/s/device: 92.246, Tokens/s/device: 13904.353, total_weights: 65536, loss: 8.179, lm_loss: 8.179, perplexity: 3564.635
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Thu Apr 23 07:58:24 UTC 2026
EXIT_CODE=0
```
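The table metrics can be cross-checked against the `metric_logger.py` lines in the log above. A hypothetical helper (the regex is inferred from the `completed step: N, seconds: S, ... loss: L` format in the excerpts; `summarize` and `warmup_steps` are illustrative names, and steps before the warmup cutoff are excluded because compilation dominates them):

```python
import re

# Matches the "completed step" lines emitted by MaxText's metric_logger.
STEP_RE = re.compile(r"completed step: (\d+), seconds: ([\d.]+), .*?loss: ([\d.]+)")

def summarize(log_text, warmup_steps=2):
    steps = [(int(s), float(sec), float(loss))
             for s, sec, loss in STEP_RE.findall(log_text)]
    # Average step time over post-warmup steps; report the last logged loss.
    timed = [sec for step, sec, _ in steps if step >= warmup_steps]
    return {"avg_s_per_step": sum(timed) / len(timed),
            "final_loss": steps[-1][2]}

sample = (
    "I0423 ... metric_logger.py:196] completed step: 2, seconds: 0.026, "
    "TFLOP/s/device: 525.391, Tokens/s/device: 79192.607, total_weights: 65536, "
    "loss: 10.262, lm_loss: 10.262, perplexity: 28637.129\n"
    "I0423 ... metric_logger.py:196] completed step: 3, seconds: 0.382, "
    "TFLOP/s/device: 35.534, Tokens/s/device: 5356.125, total_weights: 65536, "
    "loss: 9.734, lm_loss: 9.734, perplexity: 16889.203\n"
)
print(summarize(sample))
```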
feat/nnx-trainstate-and-training-loop (1abe20691), run feat_nnx_trainstate_and_training_loop_20260423_093806, full log:
XPK Start: Thu Apr 23 10:14:14 UTC 2026 PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. `rope_parameters`'s factor field must be a float >= 1, got 40 `rope_parameters`'s beta_fast field must be a float, got 32 `rope_parameters`'s beta_slow field must be a float, got 1 DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 2026-04-23 10:14:39.641210: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303) I0423 10:14:39.852530 135534863787840 max_utils.py:273] Attempting to initialize the jax distributed system... I0423 10:14:48.892251 135534863787840 distributed.py:149] Starting JAX distributed service on [::]:8482 I0423 10:14:48.894796 135534863787840 distributed.py:172] Connecting to JAX distributed service on mt-10-shardy-false-2xwz8-slice-job-0-0.mt-10-shardy-false-2xwz8:8482 I0423 10:14:50.020672 135534863787840 max_utils.py:284] Jax distributed system initialized! I0423 10:14:56.070472 135534863787840 max_utils.py:800] System Information: Jax Version: 0.9.2 I0423 10:14:56.070580 135534863787840 max_utils.py:801] System Information: Jaxlib Version: 0.9.2 I0423 10:14:56.070620 135534863787840 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365 I0423 10:14:56.070675 135534863787840 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing. 
I0423 10:14:56.762906 135534863787840 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1) I0423 10:14:56.763482 135534863787840 maxtext_utils.py:1732] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1) I0423 10:14:56.763679 135534863787840 checkpointing.py:688] Setting up checkpoint logger... I0423 10:14:56.763732 135534863787840 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True I0423 10:14:56.763775 135534863787840 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None I0423 10:14:56.764118 135534863787840 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7b44129ea6c0>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB) I0423 10:15:00.109439 135534863787840 checkpointing.py:266] Enabling policy for fixed interval checkpointing. I0423 10:15:00.109628 135534863787840 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7b302436eae0>}, handler_registry=None I0423 10:15:00.109874 135534863787840 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7b302436eae0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`. 
I0423 10:15:00.109922 135534863787840 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7b3024568560>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 10:15:00.109969 135534863787840 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7b302436eae0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7b302436eae0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7b3024568560>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7b3024568560>}).
I0423 10:15:00.110336 135534863787840 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 10:15:00.110409 135534863787840 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7b2ff467dd00> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 10:15:00.842457 135534863787840 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints
I0423 10:15:01.297086 135534863787840 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7b42cc7b1430>
I0423 10:15:01.297255 135534863787840 checkpointing.py:302] Checkpoint manager created!
I0423 10:15:02.231407 135534863787840 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 10:15:02.231515 135534863787840 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 10:15:02.608729 135534863787840 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 10:15:02.608822 135534863787840 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 10:15:02.625241 135534863787840 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 10:15:02.625302 135534863787840 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 10:15:02.649283 135534863787840 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 10:15:02.649348 135534863787840 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 10:15:02.665870 135534863787840 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 10:15:02.665927 135534863787840 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 10:15:02.682437 135534863787840 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 10:15:02.682499 135534863787840 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 10:15:02.707235 135534863787840 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 10:15:02.707308 135534863787840 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 10:15:02.728542 135534863787840 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 10:15:02.728605 135534863787840 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
I0423 10:15:02.949211 135534863787840 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 10:15:02.949320 135534863787840 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0423 10:15:04.332167 135534863787840 maxtext_utils.py:1835] params/params/decoder/decoder_norm/scale Shape: float32[2048] Logical: P('norm',) Physical: (None,)
I0423 10:15:04.332298 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_0/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0423 10:15:04.332351 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wi_1/kernel Shape: float32[2048,16,7168] Logical: P('embed', 'layers', 'mlp') Physical: ('fsdp', None, None)
I0423 10:15:04.332426 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/mlp/wo/kernel Shape: float32[7168,16,2048] Logical: P('mlp', 'layers', 'embed') Physical: (None, None, 'fsdp')
I0423 10:15:04.332490 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/post_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0423 10:15:04.332530 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/pre_self_attention_layer_norm/scale Shape: float32[2048,16] Logical: P('norm', 'layers') Physical: (None, None)
I0423 10:15:04.332581 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/key/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0423 10:15:04.332633 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/out/kernel Shape: float32[16,16,128,2048] Logical: P('heads', 'layers', 'kv', 'embed') Physical: (None, None, None, 'fsdp')
I0423 10:15:04.332685 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/query/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'q_heads', 'kv') Physical: ('fsdp', None, None, None)
I0423 10:15:04.332725 135534863787840 maxtext_utils.py:1835] params/params/decoder/layers/self_attention/value/kernel Shape: float32[2048,16,16,128] Logical: P('embed', 'layers', 'kv_heads', 'kv_head_dim') Physical: ('fsdp', None, None, None)
I0423 10:15:04.332773 135534863787840 maxtext_utils.py:1835] params/params/decoder/logits_dense/kernel Shape: float32[2048,32000] Logical: P('embed_vocab', 'vocab') Physical: ('fsdp', None)
I0423 10:15:04.332823 135534863787840 maxtext_utils.py:1835] params/params/token_embedder/embedding Shape: float32[32000,2048] Logical: P('vocab', 'embed_vocab') Physical: (None, 'fsdp')
I0423 10:15:04.828919 135534863787840 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 10:15:04.829016 135534863787840 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 10:15:04.844672 135534863787840 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 10:15:04.844735 135534863787840 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 10:15:15.613468 135534863787840 max_utils.py:791] Total memory size: 1.5 GB, Output size: 0.4 GB, Temp size: 1.1 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 10:15:15.614251 135534863787840 metric_logger.py:301] number parameters: 1.104 billion
I0423 10:15:26.742157 135534863787840 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0423 10:15:27.294751 135534863787840 checkpointing.py:798] Waited 0.5525507926940918 seconds for step 0 to finish before starting checkpointing.
I0423 10:15:27.297243 135534863787840 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 10:15:27.299024 135534863787840 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0423 10:15:27.300381 135534863787840 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints/0.
I0423 10:15:27.665144 135534863787840 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 10:15:27.666074 135534863787840 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 10:15:27.666132 135534863787840 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 10:15:27.944558 135534863787840 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.279489s
I0423 10:15:27.944751 135534863787840 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 5.408 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.28523707389831543 s) (per-host)
I0423 10:15:27.944806 135534863787840 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.285303s (batch_requests_ready=0.002321s, total_serialization_initiated=0.282907s, others=0.000076s)
I0423 10:15:27.944896 135534863787840 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.289392s (all_items=0.000017s, per_item={'items': '0.00001693'}, temp_paths=0.289375)
I0423 10:15:27.945719 135534863787840 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.65 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints/0.
I0423 10:15:27.946007 135407404156672 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:35:27.945974
I0423 10:15:28.374232 135534863787840 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 10:15:28.374609 135405781362432 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 10:15:28.374797 135534863787840 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776939327.297224, 'wait_for_prev_duration_secs': 6.222724914550781e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776939327.2990608, 'checkpointer_blocking_duration_secs': 0.6470928192138672, 'get_old_steps_start_time': 1776939327.946181, 'get_old_steps_duration_secs': 2.8848648071289062e-05, 'checkpoint_manager_blocking_start_time': 1776939327.2952945, 'checkpoint_manager_blocking_duration_secs': 1.0794610977172852}
I0423 10:15:28.374914 135534863787840 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0423 10:15:28.374977 135534863787840 max_utils.py:750] Memstats: After params initialized:
I0423 10:15:28.375036 135534863787840 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_18(process=5,(2,4,0,0))
I0423 10:15:28.375068 135534863787840 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_19(process=5,(3,4,0,0))
I0423 10:15:28.375095 135534863787840 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_22(process=5,(2,5,0,0))
I0423 10:15:28.375119 135534863787840 max_utils.py:756] Using (GB) 0.43 / 31.25 (1.376000%) on TPU_23(process=5,(3,5,0,0))
I0423 10:15:28.687330 135534863787840 metric_logger.py:196] completed step: 0, seconds: 11.128, TFLOP/s/device: 1.221, Tokens/s/device: 184.044, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0423 10:15:28.865415 135534863787840 metric_logger.py:196] completed step: 1, seconds: 1.944, TFLOP/s/device: 6.991, Tokens/s/device: 1053.705, total_weights: 65536, loss: 10.874, lm_loss: 10.874, perplexity: 52779.875
I0423 10:15:29.278394 135534863787840 metric_logger.py:196] completed step: 2, seconds: 0.031, TFLOP/s/device: 442.030, Tokens/s/device: 66627.627, total_weights: 65536, loss: 10.262, lm_loss: 10.262, perplexity: 28637.129
I0423 10:15:29.425627 135534863787840 metric_logger.py:196] completed step: 3, seconds: 0.413, TFLOP/s/device: 32.860, Tokens/s/device: 4953.009, total_weights: 65536, loss: 9.734, lm_loss: 9.734, perplexity: 16889.203
I0423 10:15:29.720639 135534863787840 metric_logger.py:196] completed step: 4, seconds: 0.153, TFLOP/s/device: 88.995, Tokens/s/device: 13414.291, total_weights: 65536, loss: 9.277, lm_loss: 9.277, perplexity: 10694.614
I0423 10:15:29.726315 135534863787840 metric_logger.py:196] completed step: 5, seconds: 0.147, TFLOP/s/device: 92.291, Tokens/s/device: 13911.059, total_weights: 65536, loss: 8.892, lm_loss: 8.892, perplexity: 7270.717
I0423 10:15:31.442047 2798 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 10:15:33.392257 135406308407040 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints/0/items/array_metadatas/process_5
I0423 10:15:51.323462 135534863787840 metric_logger.py:196] completed step: 6, seconds: 0.295, TFLOP/s/device: 45.999, Tokens/s/device: 6933.512, total_weights: 65536, loss: 8.592, lm_loss: 8.592, perplexity: 5390.759
I0423 10:15:51.470645 135534863787840 metric_logger.py:196] completed step: 7, seconds: 21.450, TFLOP/s/device: 0.633, Tokens/s/device: 95.476, total_weights: 65536, loss: 8.384, lm_loss: 8.384, perplexity: 4376.905
I0423 10:15:51.618041 135534863787840 metric_logger.py:196] completed step: 8, seconds: 0.152, TFLOP/s/device: 89.324, Tokens/s/device: 13463.941, total_weights: 65536, loss: 8.255, lm_loss: 8.255, perplexity: 3846.406
I0423 10:15:51.764680 135534863787840 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0423 10:15:51.765351 135534863787840 checkpointing.py:798] Waited 0.0007379055023193359 seconds for step 9 to finish before starting checkpointing.
I0423 10:15:52.198869 135534863787840 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 10:16:05.960793 135407404156672 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 41.241 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 38.301249504089355 s) (per-host)
I0423 10:16:05.960901 135407404156672 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 38.014782s.
I0423 10:16:13.751743 135407404156672 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 45.805608s.
I0423 10:16:13.752021 135405781362432 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 10:16:13.752142 135405781362432 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 10:16:13.752190 135405781362432 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 10:16:13.753906 135405781362432 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 10:16:13.754085 135534863787840 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0423 10:16:13.754229 135534863787840 checkpoint_manager.py:1452] Waiting for previous save to complete took 21.555367 seconds. If this number is high, consider checkpointing less frequently.
I0423 10:16:13.756101 135534863787840 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 9
I0423 10:16:13.758134 135534863787840 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints/9.
I0423 10:16:14.067153 135534863787840 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 10:16:14.067247 135534863787840 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 10:16:14.100901 135534863787840 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.034603s
I0423 10:16:14.101063 135534863787840 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 40.457 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.03812861442565918 s) (per-host)
I0423 10:16:14.101118 135534863787840 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.038190s (batch_requests_ready=0.001842s, total_serialization_initiated=0.036280s, others=0.000067s)
I0423 10:16:14.101224 135534863787840 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.042484s (all_items=0.000016s, per_item={'items': '0.00001574'}, temp_paths=0.042469)
I0423 10:16:14.101972 135534863787840 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.35 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints/9.
I0423 10:16:14.102294 135405781362432 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:36:14.102257
I0423 10:16:14.104283 135534863787840 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 10:16:14.104507 135381454415616 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 10:16:14.104627 135534863787840 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776939352.1988308, 'wait_for_prev_duration_secs': 21.55536699295044, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776939373.7561393, 'checkpointer_blocking_duration_secs': 0.3463010787963867, 'get_old_steps_start_time': 1776939374.1024635, 'get_old_steps_duration_secs': 2.8848648071289062e-05, 'checkpoint_manager_blocking_start_time': 1776939351.7655718, 'checkpoint_manager_blocking_duration_secs': 22.339022397994995}
I0423 10:16:14.104761 135534863787840 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0423 10:16:14.104807 135534863787840 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 10:16:19.641423 135406308407040 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_10_shardy_false/checkpoints/9/items/array_metadatas/process_5
I0423 10:16:55.381614 135405781362432 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 38.230 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 41.3186399936676 s) (per-host)
I0423 10:16:55.381752 135405781362432 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 41.279346s.
I0423 10:17:05.497753 135405781362432 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 51.395332s.
I0423 10:17:05.498051 135381454415616 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 10:17:05.498186 135381454415616 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 10:17:05.498250 135381454415616 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0423 10:17:05.499677 135381454415616 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0423 10:17:05.499866 135534863787840 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0423 10:17:05.500013 135534863787840 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 10:17:05.501012 135534863787840 metric_logger.py:196] completed step: 9, seconds: 0.147, TFLOP/s/device: 92.347, Tokens/s/device: 13919.474, total_weights: 65536, loss: 8.179, lm_loss: 8.179, perplexity: 3564.635
Per train step: Total TFLOPs: 13.59 split as 93.93% learnable weight flops and 6.07% attention flops
XPK End: Thu Apr 23 10:17:14 UTC 2026
EXIT_CODE=0
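The summary metrics in the comparison table (final loss, TFLOP/s, tok/s, avg s/step) derive from the `metric_logger.py:196` lines above. A minimal sketch for extracting them from a log like this one (the regex assumes the exact `completed step: ...` format shown; `parse_metrics` is a hypothetical helper, not part of MaxText):

```python
import re

# Matches MaxText metric_logger lines of the form:
# "completed step: 9, seconds: 0.147, TFLOP/s/device: 92.347,
#  Tokens/s/device: 13919.474, total_weights: 65536, loss: 8.179, ..."
PATTERN = re.compile(
    r"completed step: (\d+), seconds: ([\d.]+), TFLOP/s/device: ([\d.]+),"
    r" Tokens/s/device: ([\d.]+), total_weights: (\d+), loss: ([\d.]+)"
)

def parse_metrics(log_text):
    """Return one dict per completed training step found in log_text."""
    return [
        {
            "step": int(m.group(1)),
            "seconds": float(m.group(2)),
            "tflops_per_device": float(m.group(3)),
            "tokens_per_device": float(m.group(4)),
            "loss": float(m.group(6)),
        }
        for m in PATTERN.finditer(log_text)
    ]

sample = (
    "I0423 10:17:05.501012 135534863787840 metric_logger.py:196] "
    "completed step: 9, seconds: 0.147, TFLOP/s/device: 92.347, "
    "Tokens/s/device: 13919.474, total_weights: 65536, loss: 8.179, "
    "lm_loss: 8.179, perplexity: 3564.635"
)
print(parse_metrics(sample))
```

Averaging `seconds` over the steady-state steps (skipping warm-up and checkpoint-blocked steps such as 0, 1, and 7) is what makes the avg-s/step comparison between branches meaningful.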