Log Summary

XPK Start: Thu Apr 23 13:01:14 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
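Note: the four RoPE warnings above are validation complaints from the Hugging Face config loader, not fatal errors; the validator wants floats where the checkpoint's config supplied ints. A minimal sketch of a conforming YaRN-style dict, with the values taken from the warnings (key names beyond those in the warnings are assumptions):

```python
# Hypothetical, conforming rope_scaling dict; the int values 40, 32, 1
# from the warnings are rewritten as the floats the validator expects.
rope_scaling = {
    "rope_type": "yarn",  # assumed scaling variant (beta_fast/beta_slow are YaRN fields)
    "factor": 40.0,       # was int 40 -> "must be a float >= 1"
    "beta_fast": 32.0,    # was int 32 -> "must be a float"
    "beta_slow": 1.0,     # was int 1  -> "must be a float"
}
```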
2026-04-23 13:01:38.930611: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0423 13:01:39.138603 139991267690304 max_utils.py:273] Attempting to initialize the jax distributed system...
I0423 13:01:48.179437 139991267690304 distributed.py:149] Starting JAX distributed service on [::]:8482
I0423 13:01:48.181746 139991267690304 distributed.py:172] Connecting to JAX distributed service on mt-06-grad-accum-mrxk2-slice-job-0-0.mt-06-grad-accum-mrxk2:8482
I0423 13:01:49.588434 139991267690304 max_utils.py:284] Jax distributed system initialized!
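Note: a sketch of the call behind "Attempting to initialize the jax distributed system". On Cloud TPU the arguments are normally auto-detected, so the explicit values below are illustrative, with the host and port taken from the log lines above:

```python
import jax

# Explicit form of the auto-detected initialization; values from this log.
jax.distributed.initialize(
    coordinator_address="mt-06-grad-accum-mrxk2-slice-job-0-0:8482",
    num_processes=8,   # assumption: 32 chips / 4 chips per host (see Memstats below)
    process_id=5,      # this host appears as [process=5] throughout the log
)
```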
I0423 13:01:55.865886 139991267690304 max_utils.py:800] System Information: Jax Version: 0.9.2
I0423 13:01:55.865992 139991267690304 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0423 13:01:55.866032 139991267690304 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0423 13:01:55.866069 139991267690304 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0423 13:01:56.583246 139991267690304 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 13:01:56.583855 139991267690304 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
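Note: the mesh shape above places all 32 devices on a single nontrivial axis (the later "fsdp: 32" line names it). A simplified sketch of that mesh, collapsing MaxText's 13 named axes down to the one that matters here:

```python
import jax
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# All 32 devices on one 'fsdp' axis, mirroring shape (1,1,1,32,1,...,1).
devices = mesh_utils.create_device_mesh((32,))   # requires 32 visible devices
mesh = Mesh(devices, axis_names=("fsdp",))

# A physical spec like ('fsdp', None, None) from the sharding dump below:
sharding = NamedSharding(mesh, PartitionSpec("fsdp", None, None))
```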
I0423 13:01:56.584034 139991267690304 checkpointing.py:688] Setting up checkpoint logger...
I0423 13:01:56.584083 139991267690304 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0423 13:01:56.584128 139991267690304 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0423 13:01:56.584473 139991267690304 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7f51bf518e30>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0423 13:01:59.575177 139991267690304 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0423 13:01:59.575520 139991267690304 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f3d1c5d29c0>}, handler_registry=None
I0423 13:01:59.575766 139991267690304 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f3d1c5d29c0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0423 13:01:59.575815 139991267690304 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f3d1c5d4260>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 13:01:59.575851 139991267690304 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f3d1c5d29c0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7f3d1c5d29c0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f3d1c5d4260>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7f3d1c5d4260>}).
I0423 13:01:59.576173 139991267690304 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 13:01:59.576243 139991267690304 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7f3d1c3cdd00> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 13:02:00.287720 139991267690304 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints
I0423 13:02:00.341284 139991267690304 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7f3d1c5d3e90>
I0423 13:02:00.341438 139991267690304 checkpointing.py:302] Checkpoint manager created!
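Note: a hedged sketch of the kind of async Orbax manager the options dump above describes (background writes, fixed-interval saves). MaxText's actual wiring in checkpointing.py is more involved; interval and path here are illustrative:

```python
import orbax.checkpoint as ocp

# Options mirroring the log: async writes on, FixedIntervalPolicy(interval=10).
options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,
    enable_async_checkpointing=True,
)
mngr = ocp.CheckpointManager("gs://bucket/run/checkpoints", options=options)

# In the train loop: save() returns after the blocking D2H copy (~1 s below),
# while the GCS write continues on a background thread (~40-50 s below).
# mngr.save(step, args=ocp.args.StandardSave(train_state))
# mngr.wait_until_finished()  # block before exit so pending saves complete
```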
I0423 13:02:02.059670 139991267690304 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 13:02:02.059776 139991267690304 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 13:02:02.443850 139991267690304 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 13:02:02.443938 139991267690304 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 13:02:02.460428 139991267690304 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,2048,2048]...................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 13:02:02.460487 139991267690304 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,2048,2048]...................................... ('fsdp', None, None).
I0423 13:02:02.484179 139991267690304 attentions.py:1154] attentions/query Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 13:02:02.484250 139991267690304 attentions.py:1154] attentions/query Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 13:02:02.500769 139991267690304 attentions.py:1155] attentions/key Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 13:02:02.500831 139991267690304 attentions.py:1155] attentions/key Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 13:02:02.517367 139991267690304 attentions.py:1156] attentions/value Logical: bfloat16[32,2048,16,128].................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 13:02:02.517424 139991267690304 attentions.py:1156] attentions/value Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 13:02:02.542137 139991267690304 attentions.py:1198] attentions/out Logical: bfloat16[32,2048,16,128].................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 13:02:02.542208 139991267690304 attentions.py:1198] attentions/out Physical: bfloat16[32,2048,16,128].................................... ('fsdp', None, None, None).
I0423 13:02:02.562927 139991267690304 linears.py:525] linears/x Logical: bfloat16[32,2048,7168]...................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 13:02:02.562992 139991267690304 linears.py:525] linears/x Physical: bfloat16[32,2048,7168]...................................... ('fsdp', None, None).
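Note: each Logical/Physical pair above is one application of a logical-to-mesh-axis rule table: batch-like logical axes map to 'fsdp', everything else replicates. A small illustration of that translation (rule contents mirror the log; the helper name is made up):

```python
from jax.sharding import PartitionSpec

# Rules recovered from the pairs above; axes absent from the table replicate.
axis_rules = {
    "activation_batch": "fsdp",
    "activation_norm_length": None,
    "activation_embed": None,
}

def logical_to_physical(logical_axes):
    return PartitionSpec(*(axis_rules.get(axis) for axis in logical_axes))

print(logical_to_physical(
    ("activation_batch", "activation_norm_length", "activation_embed")))
# -> PartitionSpec('fsdp', None, None), matching the "Physical" line
```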
I0423 13:02:02.781953 139991267690304 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 13:02:02.782058 139991267690304 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0423 13:02:04.220354 139991267690304 maxtext_utils.py:1880]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  (None,)
I0423 13:02:04.220528 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0423 13:02:04.220600 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0423 13:02:04.220794 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  (None, None, 'fsdp')
I0423 13:02:04.221056 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0423 13:02:04.221119 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0423 13:02:04.221193 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0423 13:02:04.221257 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  (None, None, None, 'fsdp')
I0423 13:02:04.221307 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, None, None)
I0423 13:02:04.221348 139991267690304 maxtext_utils.py:1880]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0423 13:02:04.221403 139991267690304 maxtext_utils.py:1880]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', None)
I0423 13:02:04.221469 139991267690304 maxtext_utils.py:1880]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  (None, 'fsdp')
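Note: the shapes in this dump are enough to reproduce the "1.104 billion" parameter count reported by metric_logger below; a quick check (shapes copied verbatim from the dump):

```python
from math import prod

shapes = [
    (2048,),                # decoder_norm/scale
    (2048, 16, 7168),       # mlp/wi_0/kernel
    (2048, 16, 7168),       # mlp/wi_1/kernel
    (7168, 16, 2048),       # mlp/wo/kernel
    (2048, 16),             # post_self_attention_layer_norm/scale
    (2048, 16),             # pre_self_attention_layer_norm/scale
    (2048, 16, 16, 128),    # self_attention/key/kernel
    (16, 16, 128, 2048),    # self_attention/out/kernel
    (2048, 16, 16, 128),    # self_attention/query/kernel
    (2048, 16, 16, 128),    # self_attention/value/kernel
    (2048, 32000),          # logits_dense/kernel
    (32000, 2048),          # token_embedder/embedding
]
total = sum(prod(s) for s in shapes)
print(total)          # 1_104_218_112
print(total / 1e9)    # ~1.104, matching "number parameters: 1.104 billion"
```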

I0423 13:02:04.247766 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048]............................................... Unknown.
I0423 13:02:04.247834 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048]............................................... (None,).
I0423 13:02:04.263459 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16,7168]....................................... Unknown.
I0423 13:02:04.263519 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16,7168]....................................... ('fsdp', None, None).
I0423 13:02:04.293753 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[7168,16,2048]....................................... Unknown.
I0423 13:02:04.293814 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[7168,16,2048]....................................... (None, None, 'fsdp').
I0423 13:02:04.308774 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16]............................................ Unknown.
I0423 13:02:04.308830 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16]............................................ (None, None).
I0423 13:02:04.339013 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,16,16,128]..................................... Unknown.
I0423 13:02:04.339081 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,16,16,128]..................................... ('fsdp', None, None, None).
I0423 13:02:04.353963 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[16,16,128,2048]..................................... Unknown.
I0423 13:02:04.354020 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[16,16,128,2048]..................................... (None, None, None, 'fsdp').
I0423 13:02:04.398633 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[2048,32000]......................................... Unknown.
I0423 13:02:04.398712 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[2048,32000]......................................... ('fsdp', None).
I0423 13:02:04.413584 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Logical: float32[32000,2048]......................................... Unknown.
I0423 13:02:04.413644 139991267690304 gradient_accumulation.py:70] gradient_accumulation/inputs Physical: float32[32000,2048]......................................... (None, 'fsdp').
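Note: the gradient_accumulation.py lines above trace the accumulation carry for each parameter tree leaf. A minimal sketch of the general pattern (scan over microbatches, summing grads); MaxText's actual implementation differs in detail, and `loss_fn` here is a stand-in:

```python
import jax
import jax.numpy as jnp

def accumulate_gradients(loss_fn, params, batch, num_microbatches):
    # Split the global batch into microbatches along the leading axis.
    microbatches = jax.tree_util.tree_map(
        lambda x: x.reshape(num_microbatches, -1, *x.shape[1:]), batch)

    def step(carry, microbatch):
        grad_sum, loss_sum = carry
        loss, grads = jax.value_and_grad(loss_fn)(params, microbatch)
        grad_sum = jax.tree_util.tree_map(jnp.add, grad_sum, grads)
        return (grad_sum, loss_sum + loss), None

    zeros = jax.tree_util.tree_map(jnp.zeros_like, params)
    (grad_sum, loss_sum), _ = jax.lax.scan(step, (zeros, 0.0), microbatches)
    # Average so the update matches a single large-batch step.
    grads = jax.tree_util.tree_map(lambda g: g / num_microbatches, grad_sum)
    return grads, loss_sum / num_microbatches
```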
I0423 13:02:05.089867 139991267690304 train.py:157] train/xent Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 13:02:05.089965 139991267690304 train.py:157] train/xent Physical: float32[32,2048]............................................ ('fsdp', None).
I0423 13:02:05.111493 139991267690304 train.py:164] train/z_loss Logical: float32[32,2048]............................................ ('activation_embed_and_logits_batch', 'activation_length').
I0423 13:02:05.111593 139991267690304 train.py:164] train/z_loss Physical: float32[32,2048]............................................ ('fsdp', None).
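Note: train/xent and train/z_loss above are the two per-token loss terms. A sketch of the standard cross-entropy-with-z-loss formulation (the z-loss penalizes the log-normalizer to keep logits from drifting); the weight value is an assumption, not read from this config:

```python
import jax
import jax.numpy as jnp

def cross_entropy_with_z_loss(logits, targets, z_loss_weight=1e-4):
    # log Z per token: the softmax normalizer.
    log_z = jax.nn.logsumexp(logits, axis=-1)
    log_softmax = logits - log_z[..., None]
    xent = -jnp.take_along_axis(log_softmax, targets[..., None], axis=-1)[..., 0]
    # z-loss term: pulls log Z toward 0, stabilizing logit magnitudes.
    z_loss = z_loss_weight * jnp.square(log_z)
    return xent + z_loss, xent, z_loss
```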
I0423 13:02:16.596912 139991267690304 max_utils.py:791] Total memory size: 1.7 GB, Output size: 0.4 GB, Temp size: 1.3 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 13:02:16.597730 139991267690304 metric_logger.py:301] number parameters: 1.104 billion
I0423 13:02:18.511055 139991267690304 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0423 13:02:29.257757 139991267690304 checkpointing.py:798] Waited 10.746683120727539 seconds for step 0 to finish before starting checkpointing.
I0423 13:02:29.260136 139991267690304 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 13:02:29.261833 139991267690304 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0423 13:02:29.263250 139991267690304 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints/0.
I0423 13:02:30.028389 139991267690304 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 13:02:30.029350 139991267690304 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 13:02:30.029404 139991267690304 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 13:02:30.301907 139991267690304 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.273590s
I0423 13:02:30.302071 139991267690304 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 5.527 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.279099702835083 s) (per-host)
I0423 13:02:30.302123 139991267690304 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.279163s (batch_requests_ready=0.002151s, total_serialization_initiated=0.276942s, others=0.000070s)
I0423 13:02:30.302220 139991267690304 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.283208s (all_items=0.000018s, per_item={'items': '0.00001788'}, temp_paths=0.283190)
I0423 13:02:30.302985 139991267690304 event_tracking.py:125] [process=5] [async] Finished blocking save in 1.04 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints/0.
I0423 13:02:30.303310 139862637360896 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 13:22:30.303273
I0423 13:02:30.314283 139991267690304 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 13:02:30.314583 139860988532480 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 13:02:30.314761 139991267690304 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776949349.2601173, 'wait_for_prev_duration_secs': 6.723403930664062e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776949349.2618732, 'checkpointer_blocking_duration_secs': 1.0415849685668945, 'get_old_steps_start_time': 1776949350.3034832, 'get_old_steps_duration_secs': 2.956390380859375e-05, 'checkpoint_manager_blocking_start_time': 1776949349.2582767, 'checkpoint_manager_blocking_duration_secs': 1.0564460754394531}
I0423 13:02:30.314866 139991267690304 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0423 13:02:30.314931 139991267690304 max_utils.py:750] 
Memstats: After params initialized:
I0423 13:02:30.314979 139991267690304 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_18(process=5,(2,4,0,0))
I0423 13:02:30.315011 139991267690304 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_19(process=5,(3,4,0,0))
I0423 13:02:30.315037 139991267690304 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_22(process=5,(2,5,0,0))
I0423 13:02:30.315060 139991267690304 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_23(process=5,(3,5,0,0))
I0423 13:02:30.626817 139991267690304 metric_logger.py:196] completed step: 0, seconds: 1.913, TFLOP/s/device: 28.407, Tokens/s/device: 4281.811, total_weights: 262144, loss: 10.877, lm_loss: 10.877, perplexity: 52959.059
I0423 13:02:31.238284 139991267690304 metric_logger.py:196] completed step: 1, seconds: 12.114, TFLOP/s/device: 4.486, Tokens/s/device: 676.226, total_weights: 262144, loss: 10.877, lm_loss: 10.877, perplexity: 52959.059
I0423 13:02:31.816597 139991267690304 metric_logger.py:196] completed step: 2, seconds: 0.033, TFLOP/s/device: 1634.788, Tokens/s/device: 246412.994, total_weights: 262144, loss: 10.563, lm_loss: 10.563, perplexity: 38662.707
I0423 13:02:32.394687 139991267690304 metric_logger.py:196] completed step: 3, seconds: 0.584, TFLOP/s/device: 93.031, Tokens/s/device: 14022.667, total_weights: 262144, loss: 10.272, lm_loss: 10.272, perplexity: 28909.668
I0423 13:02:33.269145    2591 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 13:02:33.551435 139991267690304 metric_logger.py:196] completed step: 4, seconds: 0.578, TFLOP/s/device: 93.967, Tokens/s/device: 14163.797, total_weights: 262144, loss: 10.022, lm_loss: 10.022, perplexity: 22524.992
I0423 13:02:33.558231 139991267690304 metric_logger.py:196] completed step: 5, seconds: 0.578, TFLOP/s/device: 94.001, Tokens/s/device: 14168.843, total_weights: 262144, loss: 9.820, lm_loss: 9.820, perplexity: 18401.865
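Note: the steady-state throughput figures above are self-consistent. With total_weights = 262144 tokens per global step and 32 devices:

```python
total_weights = 262_144   # tokens per global step (from the log)
num_devices = 32
step_seconds = 0.578      # steady-state step time (steps 3-5)

tokens_per_device = total_weights / num_devices   # 8192
print(tokens_per_device / step_seconds)           # ~14,173 tokens/s/device,
# consistent with the logged "Tokens/s/device: 14168.843" at step 5
```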
I0423 13:02:35.071279 139862065485568 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints/0/items/array_metadatas/process_5
I0423 13:03:07.528915 139862637360896 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 42.116 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 37.5059072971344 s) (per-host)
I0423 13:03:07.529040 139862637360896 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 37.225620s.
I0423 13:03:16.563959 139862637360896 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 46.260515s.
I0423 13:03:16.564243 139860988532480 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 13:03:16.564369 139860988532480 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 13:03:16.564421 139860988532480 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 13:03:16.691155 139860988532480 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 13:03:19.459640 139991267690304 metric_logger.py:196] completed step: 6, seconds: 1.158, TFLOP/s/device: 46.941, Tokens/s/device: 7075.457, total_weights: 262144, loss: 9.667, lm_loss: 9.667, perplexity: 15787.604
I0423 13:03:20.037724 139991267690304 metric_logger.py:196] completed step: 7, seconds: 45.324, TFLOP/s/device: 1.199, Tokens/s/device: 180.744, total_weights: 262144, loss: 9.561, lm_loss: 9.561, perplexity: 14203.827
I0423 13:03:20.615994 139991267690304 metric_logger.py:196] completed step: 8, seconds: 0.584, TFLOP/s/device: 93.112, Tokens/s/device: 14034.871, total_weights: 262144, loss: 9.496, lm_loss: 9.496, perplexity: 13302.920
I0423 13:03:21.193672 139991267690304 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0423 13:03:21.194436 139991267690304 checkpointing.py:798] Waited 0.0007979869842529297 seconds for step 9 to finish before starting checkpointing.
I0423 13:03:21.196693 139991267690304 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 13:03:21.198257 139991267690304 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 9
I0423 13:03:21.199719 139991267690304 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints/9.
I0423 13:03:21.930452 139991267690304 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 13:03:21.930543 139991267690304 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 13:03:21.965509 139991267690304 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.035937s
I0423 13:03:21.965690 139991267690304 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 39.174 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.039377450942993164 s) (per-host)
I0423 13:03:21.965743 139991267690304 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.039441s (batch_requests_ready=0.001631s, total_serialization_initiated=0.037738s, others=0.000072s)
I0423 13:03:21.965834 139991267690304 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.043536s (all_items=0.000016s, per_item={'items': '0.00001621'}, temp_paths=0.043519)
I0423 13:03:21.966504 139991267690304 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.77 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints/9.
I0423 13:03:21.966857 139862120331008 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 13:23:21.966815
I0423 13:03:21.970782 139991267690304 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 13:03:21.971024 139862078367488 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0423 13:03:21.971148 139991267690304 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776949401.1966395, 'wait_for_prev_duration_secs': 0.00010013580322265625, 'time_between_consecutive_saves_sec': 4.50538969039917, 'checkpointer_blocking_start_time': 1776949401.198296, 'checkpointer_blocking_duration_secs': 0.7687146663665771, 'get_old_steps_start_time': 1776949401.9670327, 'get_old_steps_duration_secs': 3.0040740966796875e-05, 'checkpoint_manager_blocking_start_time': 1776949401.1947443, 'checkpoint_manager_blocking_duration_secs': 0.776369571685791}
I0423 13:03:21.971248 139991267690304 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0423 13:03:21.971300 139991267690304 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 13:03:28.055095 139862603671296 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_post_train_fixes_20260423_124541/linen_xpk_feat_nnx_post_train_fixes_20260423_124541_06_grad_accum/checkpoints/9/items/array_metadatas/process_5
I0423 13:04:04.267977 139862120331008 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 37.306 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 42.34162640571594 s) (per-host)
I0423 13:04:04.268106 139862120331008 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 42.301131s.
I0423 13:04:12.336756 139862120331008 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 50.369766s.
I0423 13:04:12.337152 139862078367488 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 13:04:12.337284 139862078367488 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0423 13:04:12.337331 139862078367488 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0423 13:04:12.338816 139862078367488 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0423 13:04:12.339020 139991267690304 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0423 13:04:12.339169 139991267690304 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 13:04:12.340122 139991267690304 metric_logger.py:196] completed step: 9, seconds: 0.578, TFLOP/s/device: 94.022, Tokens/s/device: 14172.005, total_weights: 262144, loss: 9.457, lm_loss: 9.457, perplexity: 12802.546
Per train step:
 Total TFLOPs: 54.35 
 split as 93.93% learnable weight flops and 6.07% attention flops
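Note: the per-step totals reconcile with the step timings and the standard 6ND estimate, assuming the weight-flops figure excludes the token-embedding lookup (which is a gather, not a matmul). A quick check with numbers from this log:

```python
params = 1_104_218_112          # total parameter count (verified above)
embed = 32_000 * 2_048          # token_embedder: lookup, excluded (assumption)
tokens = 262_144 / 32           # 8192 tokens per device per step

weight_tflops = 6 * (params - embed) * tokens / 1e12
print(weight_tflops)            # ~51.05, matching 54.35 * 93.93%
print(54.35 / 0.578)            # ~94.0, matching the steady-state TFLOP/s/device
```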
XPK End: Thu Apr 23 13:04:24 UTC 2026
EXIT_CODE=0