MaxView

Case: 03_dropout

Metrics: Linen vs NNX  ·  feat/nnx-trainstate-and-training-loop

Metric        Linen (1abe20691)   NNX (1abe20691)   Diff (NNX − Linen)
Parameters    1.104 billion       1.104 billion
Final loss    6.9810              7.0160            +0.035
TFLOP/s       11.373              11.339            -0.034
Tok/s         1817.6              1812.2            -5.379
Avg s/step    1.454               1.310             -0.144
Memory %      1.31                1.31              0
JAX           0.9.2               0.9.2

Diff = NNX value − Linen value. In the dashboard, green marks an NNX improvement and red a regression.
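
The Diff column is plain arithmetic over the two runs' final-step metrics. A minimal sketch reproducing it from the final-step values logged below (variable names are illustrative, not part of MaxView):

```python
# Diff = NNX value - Linen value, computed from the step-9 metrics of the two runs.
linen = {"final_loss": 6.9810, "tflop_s": 11.373, "tok_s": 1817.614, "avg_s_step": 1.454}
nnx   = {"final_loss": 7.0160, "tflop_s": 11.339, "tok_s": 1812.235, "avg_s_step": 1.310}

diff = {k: round(nnx[k] - linen[k], 3) for k in linen}
print(diff)  # {'final_loss': 0.035, 'tflop_s': -0.034, 'tok_s': -5.379, 'avg_s_step': -0.144}
```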

XPK Start: Thu Apr 23 09:46:04 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
2026-04-23 09:46:35.302966: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0423 09:46:35.512267 132413634778944 max_utils.py:273] Attempting to initialize the jax distributed system...
I0423 09:46:44.554508 132413634778944 distributed.py:149] Starting JAX distributed service on [::]:8482
I0423 09:46:44.556702 132413634778944 distributed.py:172] Connecting to JAX distributed service on mt-03-dropout-2qj8t-slice-job-0-0.mt-03-dropout-2qj8t:8482
I0423 09:46:52.386116 132413634778944 max_utils.py:284] Jax distributed system initialized!
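
The four lines above are MaxText's multi-host bring-up. A minimal sketch of the underlying JAX call, assuming the coordinator address comes from the XPK/GKE environment and that the 32 chips are spread over 8 hosts with 4 chips each (as the Memstats lines later suggest); MaxText's actual plumbing in max_utils.py differs:

```python
import jax

# Sketch of multi-host initialization roughly equivalent to the log lines above.
# coordinator_address is the slice's job-0 pod plus the port from the log;
# num_processes and process_id would normally be derived from the environment.
jax.distributed.initialize(
    coordinator_address="mt-03-dropout-2qj8t-slice-job-0-0.mt-03-dropout-2qj8t:8482",
    num_processes=8,   # assumption: 32 devices / 4 local devices per host
    process_id=6,      # this log was emitted by process=6
)
print(jax.process_index(), jax.device_count())  # -> 6 32
```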
I0423 09:46:58.739778 132413634778944 max_utils.py:800] System Information: Jax Version: 0.9.2
I0423 09:46:58.739883 132413634778944 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0423 09:46:58.739924 132413634778944 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0423 09:46:58.739960 132413634778944 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0423 09:46:59.430828 132413634778944 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 09:46:59.431430 132413634778944 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
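
The 13-element shape is MaxText's device mesh: every parallelism axis has degree 1 except the fourth, FSDP, which spans all 32 chips (see the `fsdp: 32` line further down). A reduced sketch of building such a mesh; the axis names other than 'fsdp' are placeholders rather than MaxText's full 13-axis list:

```python
import jax
from jax.experimental import mesh_utils
from jax.sharding import Mesh

# Reduced sketch: only the FSDP axis is non-trivial, matching the logged shape
# (1, 1, 1, 32, 1, ...). MaxText's real mesh carries 13 named axes.
devices = mesh_utils.create_device_mesh((1, 32, 1))
mesh = Mesh(devices, axis_names=("data", "fsdp", "tensor"))
print(mesh.shape)  # {'data': 1, 'fsdp': 32, 'tensor': 1}
```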
I0423 09:46:59.431612 132413634778944 checkpointing.py:688] Setting up checkpoint logger...
I0423 09:46:59.431663 132413634778944 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0423 09:46:59.431706 132413634778944 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0423 09:46:59.432039 132413634778944 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x786d3c050080>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0423 09:47:02.261621 132413634778944 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0423 09:47:02.261898 132413634778944 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x785904186f90>}, handler_registry=None
I0423 09:47:02.262144 132413634778944 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x785904186f90>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0423 09:47:02.262194 132413634778944 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7858e46b83e0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 09:47:02.262230 132413634778944 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x785904186f90>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x785904186f90>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7858e46b83e0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7858e46b83e0>}).
I0423 09:47:02.262637 132413634778944 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 09:47:02.262712 132413634778944 async_checkpointer.py:192] [process=6][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7857f4795f80> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 09:47:03.907531 132413634778944 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints
I0423 09:47:04.316713 132413634778944 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7858e46b87a0>
I0423 09:47:04.316883 132413634778944 checkpointing.py:302] Checkpoint manager created!
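
The manager constructed above is Orbax's asynchronous CheckpointManager, writing PyTree items with OCDBT + zarr3 and saving on a fixed interval (the options dump shows FixedIntervalPolicy(interval=10)). A minimal sketch of an equivalent setup; the bucket path is shortened here, and the option wiring in MaxText's checkpointing.py is more involved:

```python
import orbax.checkpoint as ocp

# Async checkpoint manager roughly matching the logged configuration:
# OCDBT + zarr3 PyTree writes, a save every 10 steps, async enabled.
options = ocp.CheckpointManagerOptions(
    save_interval_steps=10,          # the log expresses this as FixedIntervalPolicy(interval=10)
    enable_async_checkpointing=True,
    create=True,
)
manager = ocp.CheckpointManager(
    "gs://lance-maxtext/.../checkpoints",   # shortened; the full path is in the log
    options=options,
    item_names=("items",),
    item_handlers={"items": ocp.PyTreeCheckpointHandler(use_ocdbt=True, use_zarr3=True)},
)
```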
I0423 09:47:04.669907 132413634778944 nnx_wrappers.py:437] Unknown Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 09:47:04.670012 132413634778944 nnx_wrappers.py:437] Unknown Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 09:47:05.055389 132413634778944 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 09:47:05.055482 132413634778944 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 09:47:05.071984 132413634778944 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 09:47:05.072042 132413634778944 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 09:47:05.095554 132413634778944 attentions.py:1154] attentions/query Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:47:05.095625 132413634778944 attentions.py:1154] attentions/query Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 09:47:05.112281 132413634778944 attentions.py:1155] attentions/key Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:47:05.112353 132413634778944 attentions.py:1155] attentions/key Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 09:47:05.128868 132413634778944 attentions.py:1156] attentions/value Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 09:47:05.128930 132413634778944 attentions.py:1156] attentions/value Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 09:47:05.153893 132413634778944 attentions.py:1198] attentions/out Logical: bfloat16[32,128,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 09:47:05.153961 132413634778944 attentions.py:1198] attentions/out Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 09:47:05.180726 132413634778944 linears.py:525] linears/x Logical: bfloat16[32,128,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 09:47:05.180793 132413634778944 linears.py:525] linears/x Physical: bfloat16[32,128,7168]....................................... ('fsdp', None, None).
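
Each Logical/Physical pair above is an activation annotated with logical axis names and then resolved to mesh axes, with only the batch-like axis landing on 'fsdp'. A minimal Flax Linen sketch of the mechanism; the rule table here is illustrative, not MaxText's full logical_axis_rules config:

```python
import flax.linen as nn

# Illustrative logical->physical rules: batch-like axes map to 'fsdp',
# the remaining logical axes stay unsharded, as in the log above.
rules = (
    ("activation_batch", "fsdp"),
    ("activation_norm_length", None),
    ("activation_embed", None),
)

class Block(nn.Module):
  @nn.compact
  def __call__(self, x):
    # With `rules` active (via nn.logical_axis_rules) inside a Mesh context,
    # this logical spec resolves to the physical spec ('fsdp', None, None).
    return nn.with_logical_constraint(
        x, ("activation_batch", "activation_norm_length", "activation_embed")
    )
```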
I0423 09:47:05.415181 132413634778944 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 09:47:05.415292 132413634778944 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32
I0423 09:47:06.853328 132413634778944 maxtext_utils.py:1874]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  (None,)
I0423 09:47:06.853458 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0423 09:47:06.853510 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, None)
I0423 09:47:06.853565 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  (None, None, 'fsdp')
I0423 09:47:06.853616 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0423 09:47:06.853653 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  (None, None)
I0423 09:47:06.853705 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0423 09:47:06.853755 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  (None, None, None, 'fsdp')
I0423 09:47:06.853793 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, None, None)
I0423 09:47:06.853828 132413634778944 maxtext_utils.py:1874]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, None, None)
I0423 09:47:06.853873 132413634778944 maxtext_utils.py:1874]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', None)

I0423 09:47:06.853921 132413634778944 maxtext_utils.py:1874]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  (None, 'fsdp')
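
The same translation applies to parameters: each kernel's logical PartitionSpec is pushed through the axis rules, and only the 'embed'-like axes end up sharded over 'fsdp'. A small sketch with Flax's logical_to_mesh_axes; the rule set is illustrative:

```python
import flax.linen as nn

# Illustrative rules reproducing the listing above: 'embed' and 'embed_vocab'
# shard over 'fsdp', everything else is replicated.
rules = (("embed", "fsdp"), ("embed_vocab", "fsdp"),
         ("layers", None), ("mlp", None), ("vocab", None), ("norm", None))

logical = ("embed", "layers", "mlp")               # e.g. the wi_0 kernel above
print(nn.logical_to_mesh_axes(logical, rules))     # PartitionSpec('fsdp', None, None)
```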
I0423 09:47:07.380404 132413634778944 train.py:157] train/xent Logical: float32[32,128]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0423 09:47:07.380503 132413634778944 train.py:157] train/xent Physical: float32[32,128]............................................. ('fsdp', None).
I0423 09:47:07.396145 132413634778944 train.py:164] train/z_loss Logical: float32[32,128]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0423 09:47:07.396207 132413634778944 train.py:164] train/z_loss Physical: float32[32,128]............................................. ('fsdp', None).
I0423 09:47:11.031535 132413634778944 max_utils.py:791] Total memory size: 0.8 GB, Output size: 0.4 GB, Temp size: 0.4 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 09:47:11.032304 132413634778944 metric_logger.py:301] number parameters: 1.104 billion
I0423 09:47:15.696960 132413634778944 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0423 09:47:15.769665 132413634778944 checkpointing.py:798] Waited 0.07269024848937988 seconds for step 0 to finish before starting checkpointing.
I0423 09:47:15.772121 132413634778944 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 09:47:15.774253 132413634778944 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0423 09:47:15.775647 132413634778944 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/0.
I0423 09:47:16.528782 132413634778944 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 09:47:16.529906 132413634778944 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 09:47:16.529965 132413634778944 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:47:16.808299 132413634778944 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.279590s
I0423 09:47:16.808477 132413634778944 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 5.413 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.284987211227417 s) (per-host)
I0423 09:47:16.808533 132413634778944 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.285054s (batch_requests_ready=0.002162s, total_serialization_initiated=0.282816s, others=0.000075s)
I0423 09:47:16.808634 132413634778944 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.289162s (all_items=0.000017s, per_item={'items': '0.00001693'}, temp_paths=0.289145)
I0423 09:47:16.809414 132413634778944 event_tracking.py:125] [process=6] [async] Finished blocking save in 1.04 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/0.
I0423 09:47:16.809746 132282278995712 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:07:16.809707
I0423 09:47:17.244078 132413634778944 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:47:17.244457 132285479229184 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 09:47:17.244629 132413634778944 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776937635.7720814, 'wait_for_prev_duration_secs': 8.225440979003906e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776937635.7742913, 'checkpointer_blocking_duration_secs': 1.0356049537658691, 'get_old_steps_start_time': 1776937636.80992, 'get_old_steps_duration_secs': 2.9087066650390625e-05, 'checkpoint_manager_blocking_start_time': 1776937635.770179, 'checkpoint_manager_blocking_duration_secs': 1.474407434463501}
I0423 09:47:17.244739 132413634778944 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
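
The save sequence above is Orbax's async path: save() blocks only for the device-to-host copy (about one second here), then a background thread streams the bytes to GCS. A minimal sketch of the calls involved, assuming `state` is the train-state PyTree and `manager` is the CheckpointManager from earlier; MaxText wraps this in its own checkpointing helpers:

```python
import orbax.checkpoint as ocp

def save_async(manager: ocp.CheckpointManager, step: int, state):
    # Returns once the blocking device-to-host transfer is done; the GCS write
    # continues on the async_save background thread seen in the log.
    return manager.save(step, args=ocp.args.Composite(items=ocp.args.PyTreeSave(state)))

# Before the next checkpointed step, the manager waits for the background write
# and the cross-host finalize barrier:
# manager.wait_until_finished()
```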
I0423 09:47:17.244790 132413634778944 max_utils.py:750] 
Memstats: After params initialized:
I0423 09:47:17.244846 132413634778944 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_24(process=6,(0,6,0,0))
I0423 09:47:17.244879 132413634778944 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_25(process=6,(1,6,0,0))
I0423 09:47:17.244907 132413634778944 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_28(process=6,(0,7,0,0))
I0423 09:47:17.244931 132413634778944 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_29(process=6,(1,7,0,0))
I0423 09:47:17.560959 132413634778944 metric_logger.py:196] completed step: 0, seconds: 4.665, TFLOP/s/device: 0.172, Tokens/s/device: 27.441, total_weights: 4096, loss: 10.889, lm_loss: 10.889, perplexity: 53560.699
I0423 09:47:17.658612 132413634778944 metric_logger.py:196] completed step: 1, seconds: 1.863, TFLOP/s/device: 0.430, Tokens/s/device: 68.721, total_weights: 4096, loss: 10.902, lm_loss: 10.902, perplexity: 54298.523
I0423 09:47:18.077334 132413634778944 metric_logger.py:196] completed step: 2, seconds: 0.027, TFLOP/s/device: 29.360, Tokens/s/device: 4692.426, total_weights: 4096, loss: 9.956, lm_loss: 9.956, perplexity: 21082.051
I0423 09:47:18.148146 132413634778944 metric_logger.py:196] completed step: 3, seconds: 0.419, TFLOP/s/device: 1.911, Tokens/s/device: 305.494, total_weights: 4096, loss: 9.139, lm_loss: 9.139, perplexity: 9314.691
I0423 09:47:18.292713 132413634778944 metric_logger.py:196] completed step: 4, seconds: 0.076, TFLOP/s/device: 10.551, Tokens/s/device: 1686.385, total_weights: 4096, loss: 8.459, lm_loss: 8.459, perplexity: 4715.493
I0423 09:47:18.298300 132413634778944 metric_logger.py:196] completed step: 5, seconds: 0.071, TFLOP/s/device: 11.333, Tokens/s/device: 1811.235, total_weights: 4096, loss: 7.924, lm_loss: 7.924, perplexity: 2762.373
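
The step metrics are derived quantities: perplexity is exp(loss), and Tokens/s/device is the per-device token count (4096 total_weights across 32 chips) divided by the step time. A quick check against step 5 above; the small throughput gap comes from the step time being rounded to three decimals in the log:

```python
import math

loss, step_seconds = 7.924, 0.071
tokens_per_device = 4096 / 32                 # 128 tokens per chip per step

print(math.exp(loss))                         # ~2763, vs logged perplexity 2762.373
print(tokens_per_device / step_seconds)       # ~1803, vs logged Tokens/s/device 1811.235
```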
I0423 09:47:21.888012    2833 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 09:47:25.673598 132282794477312 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/0/items/array_metadatas/process_6
I0423 09:47:28.706162 132413634778944 metric_logger.py:196] completed step: 6, seconds: 0.145, TFLOP/s/device: 5.527, Tokens/s/device: 883.289, total_weights: 4096, loss: 7.553, lm_loss: 7.553, perplexity: 1906.617
I0423 09:47:28.776778 132413634778944 metric_logger.py:196] completed step: 7, seconds: 10.338, TFLOP/s/device: 0.077, Tokens/s/device: 12.381, total_weights: 4096, loss: 7.267, lm_loss: 7.267, perplexity: 1431.653
I0423 09:47:28.847461 132413634778944 metric_logger.py:196] completed step: 8, seconds: 0.075, TFLOP/s/device: 10.686, Tokens/s/device: 1707.919, total_weights: 4096, loss: 7.111, lm_loss: 7.111, perplexity: 1225.983
I0423 09:47:28.917789 132413634778944 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0423 09:47:28.918497 132413634778944 checkpointing.py:798] Waited 0.0007236003875732422 seconds for step 9 to finish before starting checkpointing.
I0423 09:47:28.920498 132413634778944 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 09:47:51.185818 132282278995712 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 45.571 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 34.66229033470154 s) (per-host)
I0423 09:47:51.185941 132282278995712 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 34.376082s.
I0423 09:48:00.455721 132282278995712 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 43.645845s.
I0423 09:48:00.455982 132285479229184 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:48:00.456122 132285479229184 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:48:00.456174 132285479229184 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:48:00.458305 132285479229184 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 09:48:00.458434 132413634778944 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0423 09:48:00.458540 132413634778944 checkpoint_manager.py:1452] Waiting for previous save to complete took 31.538043 seconds. If this number is high, consider checkpointing less frequently.
I0423 09:48:00.460644 132413634778944 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 9
I0423 09:48:00.462740 132413634778944 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/9.
I0423 09:48:01.181053 132413634778944 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0423 09:48:01.181161 132413634778944 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 09:48:01.213418 132413634778944 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.033534s
I0423 09:48:01.213604 132413634778944 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 41.692 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.03699922561645508 s) (per-host)
I0423 09:48:01.213669 132413634778944 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.037083s (batch_requests_ready=0.001786s, total_serialization_initiated=0.035204s, others=0.000092s)
I0423 09:48:01.213796 132413634778944 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.041589s (all_items=0.000015s, per_item={'items': '0.00001550'}, temp_paths=0.041574)
I0423 09:48:01.214584 132413634778944 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.75 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/9.
I0423 09:48:01.214867 132285479229184 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 10:08:01.214834
I0423 09:48:01.222512 132413634778944 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 09:48:01.222787 132282794477312 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 09:48:01.222943 132413634778944 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776937648.9204636, 'wait_for_prev_duration_secs': 31.538042545318604, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776937680.460698, 'checkpointer_blocking_duration_secs': 0.7543065547943115, 'get_old_steps_start_time': 1776937681.215027, 'get_old_steps_duration_secs': 2.9087066650390625e-05, 'checkpoint_manager_blocking_start_time': 1776937648.9187257, 'checkpoint_manager_blocking_duration_secs': 32.30418419837952}
I0423 09:48:01.223048 132413634778944 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0423 09:48:01.223103 132413634778944 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 09:48:09.051711 132278092089088 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/linen_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/9/items/array_metadatas/process_6
I0423 09:48:44.152600 132285479229184 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 36.755 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 42.97598052024841 s) (per-host)
I0423 09:48:44.152706 132285479229184 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 42.937733s.
I0423 09:48:52.305365 132285479229184 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 51.090376s.
I0423 09:48:52.305675 132282794477312 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 09:48:52.305798 132282794477312 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 09:48:52.305846 132282794477312 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0423 09:48:52.307068 132282794477312 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0423 09:48:52.307246 132413634778944 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0423 09:48:52.307389 132413634778944 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 09:48:52.308315 132413634778944 metric_logger.py:196] completed step: 9, seconds: 0.070, TFLOP/s/device: 11.373, Tokens/s/device: 1817.614, total_weights: 4096, loss: 6.981, lm_loss: 6.981, perplexity: 1075.867
Per train step:
 Total TFLOPs: 0.80 
 split as 99.60% learnable weight flops and 0.40% attention flops
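
The 0.80 TFLOPs figure is per device per step and can be reproduced with the usual 6 x params x tokens estimate, excluding the embedding lookup, plus a causal-attention term. This back-of-envelope reconstruction matches the logged split closely, but the constants are assumptions, not MaxText's exact accounting code:

```python
params_total = 1.104e9
embed_params = 32_000 * 2_048          # token_embedder is a lookup, not a matmul
tokens = 4096 / 32                     # tokens per device per step
layers, d_model, seq = 16, 2048, 128   # from the parameter listing above

weight_flops = 6 * (params_total - embed_params) * tokens
attn_flops = 6 * layers * d_model * seq**2 * (tokens / seq)   # causal: half of 12*L*d*T^2

total = weight_flops + attn_flops
print(total / 1e12)           # ~0.80 TFLOPs per device per step
print(weight_flops / total)   # ~0.996 -> the "99.60% learnable weight flops" split
```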
XPK End: Thu Apr 23 09:49:04 UTC 2026
EXIT_CODE=0
XPK Start: Thu Apr 23 11:19:14 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
2026-04-23 11:19:39.824486: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0423 11:19:40.035994 136616018200384 max_utils.py:273] Attempting to initialize the jax distributed system...
I0423 11:19:49.078462 136616018200384 distributed.py:149] Starting JAX distributed service on [::]:8482
I0423 11:19:49.080837 136616018200384 distributed.py:172] Connecting to JAX distributed service on mt-03-dropout-2enct-slice-job-0-0.mt-03-dropout-2enct:8482
I0423 11:19:50.378352 136616018200384 max_utils.py:284] Jax distributed system initialized!
I0423 11:19:56.570600 136616018200384 max_utils.py:800] System Information: Jax Version: 0.9.2
I0423 11:19:56.570706 136616018200384 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0423 11:19:56.570746 136616018200384 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0423 11:19:56.570782 136616018200384 train_utils.py:391] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0423 11:19:57.269889 136616018200384 maxtext_utils.py:1771] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1, 1)
I0423 11:19:57.382749 136616018200384 checkpointing.py:688] Setting up checkpoint logger...
I0423 11:19:57.382873 136616018200384 checkpointing.py:234] Creating checkpoint manager with ocdbt=True and zarr3=True
I0423 11:19:57.382918 136616018200384 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0423 11:19:57.383123 136616018200384 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7c3fd8fae600>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0423 11:20:00.992752 136616018200384 checkpointing.py:266] Enabling policy for fixed interval checkpointing.
I0423 11:20:00.992990 136616018200384 checkpoint_manager.py:708] [process=6][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c2b70188a10>}, handler_registry=None
I0423 11:20:00.993246 136616018200384 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c2b70188a10>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0423 11:20:00.993296 136616018200384 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7c2b707a6930>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0423 11:20:00.993333 136616018200384 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c2b70188a10>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x7c2b70188a10>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7c2b707a6930>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x7c2b707a6930>}).
I0423 11:20:00.993661 136616018200384 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0423 11:20:00.993735 136616018200384 async_checkpointer.py:192] [process=6][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x7c2b7010b7e0> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0423 11:20:01.681699 136616018200384 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints
I0423 11:20:01.716869 136616018200384 checkpoint_manager.py:929] [process=6][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x7c2b701adeb0>
I0423 11:20:01.717012 136616018200384 checkpointing.py:302] Checkpoint manager created!
I0423 11:20:02.095639 136616018200384 checkpointing.py:578] checkpoint manager exists so trying to load this run's existing checkpoint
I0423 11:20:02.095761 136616018200384 checkpointing.py:676] No existing checkpoints found, not restoring checkpoint.
fsdp: 32

I0423 11:20:03.882123 136616018200384 nnx_decoders.py:465] nnx_decoders/carry Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 11:20:03.882221 136616018200384 nnx_decoders.py:465] nnx_decoders/carry Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 11:20:03.888356 136616018200384 nnx_decoders.py:465] Unknown Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0423 11:20:03.888414 136616018200384 nnx_decoders.py:465] Unknown Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 11:20:03.905206 136616018200384 attentions.py:1088] attentions/inputs_q Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 11:20:03.905265 136616018200384 attentions.py:1088] attentions/inputs_q Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 11:20:03.921310 136616018200384 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[32,128,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0423 11:20:03.921367 136616018200384 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[32,128,2048]....................................... ('fsdp', None, None).
I0423 11:20:03.945541 136616018200384 attentions.py:1154] attentions/query Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 11:20:03.945610 136616018200384 attentions.py:1154] attentions/query Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 11:20:03.961766 136616018200384 attentions.py:1155] attentions/key Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 11:20:03.961828 136616018200384 attentions.py:1155] attentions/key Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 11:20:03.977951 136616018200384 attentions.py:1156] attentions/value Logical: bfloat16[32,128,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0423 11:20:03.978013 136616018200384 attentions.py:1156] attentions/value Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 11:20:04.008735 136616018200384 attentions.py:1198] attentions/out Logical: bfloat16[32,128,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0423 11:20:04.008807 136616018200384 attentions.py:1198] attentions/out Physical: bfloat16[32,128,16,128]..................................... ('fsdp', None, None, None).
I0423 11:20:04.035178 136616018200384 linears.py:525] linears/x Logical: bfloat16[32,128,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0423 11:20:04.035246 136616018200384 linears.py:525] linears/x Physical: bfloat16[32,128,7168]....................................... ('fsdp', None, None).
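
On the NNX side the same sharding information is carried by the module state itself: parameter metadata is attached with nnx.with_partitioning, and activation constraints go through the nnx_wrappers/nnx_decoders shims logged above. A minimal, illustrative NNX sketch; the 'fsdp' mapping and the layer are stand-ins, not MaxText's NNX modules:

```python
from flax import nnx

class Linear(nnx.Module):
    def __init__(self, din: int, dout: int, *, rngs: nnx.Rngs):
        # Attach sharding metadata to the kernel so that, under the mesh from
        # earlier, its first axis shards over 'fsdp' and the second is replicated.
        init = nnx.with_partitioning(nnx.initializers.lecun_normal(), ("fsdp", None))
        self.kernel = nnx.Param(init(rngs.params(), (din, dout)))

    def __call__(self, x):
        return x @ self.kernel.value
```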
I0423 11:20:07.782808 136616018200384 max_utils.py:791] Total memory size: 0.8 GB, Output size: 0.4 GB, Temp size: 0.4 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0423 11:20:07.785054 136616018200384 metric_logger.py:301] number parameters: 1.104 billion
I0423 11:20:12.154369 136616018200384 checkpointing.py:794] Waiting for step 0 to finish before checkpoint...
I0423 11:20:12.228781 136616018200384 checkpointing.py:798] Waited 0.07439994812011719 seconds for step 0 to finish before starting checkpointing.
I0423 11:20:12.231150 136616018200384 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 11:20:12.233132 136616018200384 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 0
I0423 11:20:12.234498 136616018200384 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/0.
I0423 11:20:12.536624 136616018200384 signaling_client.py:364] Using JaxDistributedSignalingClient
I0423 11:20:12.537617 136616018200384 jax_array_handlers.py:360] Scheduling D2H of 69 prioritized jax.Array.
I0423 11:20:12.537676 136616018200384 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 11:20:12.812515 136616018200384 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.276658s
I0423 11:20:12.812692 136616018200384 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 5.396 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.2858603000640869 s) (per-host)
I0423 11:20:12.812746 136616018200384 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.285925s (batch_requests_ready=0.003026s, total_serialization_initiated=0.282826s, others=0.000072s)
I0423 11:20:12.812846 136616018200384 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.290073s (all_items=0.000017s, per_item={'items': '0.00001717'}, temp_paths=0.290056)
I0423 11:20:12.813724 136616018200384 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.58 seconds. Continuing save @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/0.
I0423 11:20:12.814080 136488255731456 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 11:40:12.814039
I0423 11:20:12.825503 136616018200384 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 11:20:12.825804 136486624147200 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 11:20:12.825966 136616018200384 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776943212.2311318, 'wait_for_prev_duration_secs': 6.532669067382812e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776943212.2331705, 'checkpointer_blocking_duration_secs': 0.5810918807983398, 'get_old_steps_start_time': 1776943212.814293, 'get_old_steps_duration_secs': 3.457069396972656e-05, 'checkpoint_manager_blocking_start_time': 1776943212.229241, 'checkpoint_manager_blocking_duration_secs': 0.5966794490814209}
I0423 11:20:12.826100 136616018200384 checkpointing.py:409] Started an asynchronous checkpoint save for step 0
I0423 11:20:12.826185 136616018200384 max_utils.py:750] 
Memstats: After params initialized:
I0423 11:20:12.826242 136616018200384 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_24(process=6,(0,6,0,0))
I0423 11:20:12.826286 136616018200384 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_25(process=6,(1,6,0,0))
I0423 11:20:12.826319 136616018200384 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_28(process=6,(0,7,0,0))
I0423 11:20:12.826359 136616018200384 max_utils.py:756] 	Using (GB) 0.41 / 31.25 (1.312000%) on TPU_29(process=6,(1,7,0,0))
I0423 11:20:13.149604 136616018200384 metric_logger.py:196] completed step: 0, seconds: 4.368, TFLOP/s/device: 0.183, Tokens/s/device: 29.302, total_weights: 4096, loss: 10.874, lm_loss: 10.874, perplexity: 52805.141
I0423 11:20:13.240880 136616018200384 metric_logger.py:196] completed step: 1, seconds: 0.993, TFLOP/s/device: 0.806, Tokens/s/device: 128.865, total_weights: 4096, loss: 10.892, lm_loss: 10.892, perplexity: 53764.488
I0423 11:20:13.674630 136616018200384 metric_logger.py:196] completed step: 2, seconds: 0.022, TFLOP/s/device: 36.864, Tokens/s/device: 5891.830, total_weights: 4096, loss: 9.969, lm_loss: 9.969, perplexity: 21350.100
I0423 11:20:13.745213 136616018200384 metric_logger.py:196] completed step: 3, seconds: 0.424, TFLOP/s/device: 1.891, Tokens/s/device: 302.237, total_weights: 4096, loss: 9.125, lm_loss: 9.125, perplexity: 9186.262
I0423 11:20:13.887024 136616018200384 metric_logger.py:196] completed step: 4, seconds: 0.088, TFLOP/s/device: 9.128, Tokens/s/device: 1458.839, total_weights: 4096, loss: 8.469, lm_loss: 8.469, perplexity: 4765.735
I0423 11:20:13.894428 136616018200384 metric_logger.py:196] completed step: 5, seconds: 0.070, TFLOP/s/device: 11.421, Tokens/s/device: 1825.338, total_weights: 4096, loss: 7.908, lm_loss: 7.908, perplexity: 2719.938
I0423 11:20:15.670657    2788 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0423 11:20:18.980656 136486676444928 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 69 array_metadata.ArrayMetadata to gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/0/items/array_metadatas/process_6
I0423 11:20:23.863919 136616018200384 metric_logger.py:196] completed step: 6, seconds: 0.143, TFLOP/s/device: 5.618, Tokens/s/device: 897.874, total_weights: 4096, loss: 7.554, lm_loss: 7.554, perplexity: 1909.063
I0423 11:20:23.934353 136616018200384 metric_logger.py:196] completed step: 7, seconds: 9.900, TFLOP/s/device: 0.081, Tokens/s/device: 12.929, total_weights: 4096, loss: 7.313, lm_loss: 7.313, perplexity: 1500.206
I0423 11:20:24.004781 136616018200384 metric_logger.py:196] completed step: 8, seconds: 0.076, TFLOP/s/device: 10.520, Tokens/s/device: 1681.423, total_weights: 4096, loss: 7.091, lm_loss: 7.091, perplexity: 1200.912
I0423 11:20:24.075976 136616018200384 checkpointing.py:794] Waiting for step 9 to finish before checkpoint...
I0423 11:20:24.076701 136616018200384 checkpointing.py:798] Waited 0.0007348060607910156 seconds for step 9 to finish before starting checkpointing.
I0423 11:20:24.079074 136616018200384 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 11:20:45.177421 136488255731456 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 48.379 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 32.650550365448 s) (per-host)
I0423 11:20:45.177531 136488255731456 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 32.363315s.
I0423 11:20:54.099685 136488255731456 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 41.285451s.
I0423 11:20:54.099970 136486624147200 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 11:20:54.100086 136486624147200 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 11:20:54.100161 136486624147200 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0423 11:20:54.101788 136486624147200 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0423 11:20:54.101958 136616018200384 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0423 11:20:54.102108 136616018200384 checkpoint_manager.py:1452] Waiting for previous save to complete took 30.023021 seconds. If this number is high, consider checkpointing less frequently.
I0423 11:20:54.103908 136616018200384 checkpoint_manager.py:1512] [process=6] Saving checkpoint at step 9
I0423 11:20:54.105904 136616018200384 event_tracking.py:70] [process=6] [async] Started save checkpoint @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/9.
I0423 11:20:54.415453 136616018200384 jax_array_handlers.py:360] Scheduling D2H of 69 prioritized jax.Array.
I0423 11:20:54.415548 136616018200384 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0423 11:20:54.450374 136616018200384 base_pytree_checkpoint_handler.py:154] [process=6][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.036582s
I0423 11:20:54.450547 136616018200384 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/blocking_gbytes_per_sec: 35.242 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.04377031326293945 s) (per-host)
I0423 11:20:54.450604 136616018200384 base_pytree_checkpoint_handler.py:768] [process=6][thread=MainThread] Initiated Pytree async_save. Time taken: 0.043838s (batch_requests_ready=0.002743s, total_serialization_initiated=0.041021s, others=0.000073s)
I0423 11:20:54.450702 136616018200384 composite_checkpoint_handler.py:715] [process=6][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.047869s (all_items=0.000016s, per_item={'items': '0.00001621'}, temp_paths=0.047853)
I0423 11:20:54.451375 136616018200384 event_tracking.py:125] [process=6] [async] Finished blocking save in 0.35 seconds. Continuing save @ gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/9.
I0423 11:20:54.451716 136487215560448 async_checkpointer.py:76] [process=6][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-23 11:40:54.451677
I0423 11:20:54.457976 136616018200384 checkpoint_manager.py:1560] [process=6][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0423 11:20:54.458260 136486624147200 async_checkpointer.py:280] [process=6][thread=save_finalize] Waiting for background save thread=async_save.
I0423 11:20:54.458430 136616018200384 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1776943224.0790431, 'wait_for_prev_duration_secs': 30.02302098274231, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1776943254.1039486, 'checkpointer_blocking_duration_secs': 0.34790563583374023, 'get_old_steps_start_time': 1776943254.451877, 'get_old_steps_duration_secs': 2.8371810913085938e-05, 'checkpoint_manager_blocking_start_time': 1776943224.0769632, 'checkpoint_manager_blocking_duration_secs': 30.381432056427002}
I0423 11:20:54.458544 136616018200384 checkpointing.py:409] Started an asynchronous checkpoint save for step 9
I0423 11:20:54.458588 136616018200384 checkpoint_manager.py:2020] [process=6][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0423 11:20:59.675235 136487709783808 array_metadata_store.py:203] [process=6][thread=array_type_handler] Wrote 69 array_metadata.ArrayMetadata to gs://lance-maxtext/nnx_ckpt_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806/nnx_xpk_feat_nnx_trainstate_and_training_loop_20260423_093806_03_dropout/checkpoints/9/items/array_metadatas/process_6
I0423 11:21:35.254487 136487215560448 base_pytree_checkpoint_handler.py:130] [process=6] /jax/orbax/write/gbytes_per_sec: 38.670 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 40.84766507148743 s) (per-host)
I0423 11:21:35.254604 136487215560448 async_checkpointer.py:90] [process=6][thread=async_save] 3 Handler Commit operations completed. Time taken: 40.802777s.
I0423 11:21:44.090854 136487215560448 async_checkpointer.py:160] [process=6][thread=async_save] Background save thread done. Time taken: 49.639012s.
I0423 11:21:44.091144 136486624147200 async_checkpointer.py:288] [process=6][thread=save_finalize] Done with waiting for background save thread=async_save.
I0423 11:21:44.091260 136486624147200 async_checkpointer.py:298] [process=6][thread=save_finalize] No errors found in background save thread=async_save.
I0423 11:21:44.091306 136486624147200 checkpoint_manager.py:2137] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0423 11:21:44.093004 136486624147200 checkpoint_manager.py:2146] [process=6][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0423 11:21:44.093195 136616018200384 checkpoint_manager.py:2032] [process=6][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0423 11:21:44.093365 136616018200384 checkpoint_manager.py:2009] [process=6][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0423 11:21:44.094389 136616018200384 metric_logger.py:196] completed step: 9, seconds: 0.071, TFLOP/s/device: 11.339, Tokens/s/device: 1812.235, total_weights: 4096, loss: 7.016, lm_loss: 7.016, perplexity: 1113.843
Per train step:
 Total TFLOPs: 0.80 
 split as 99.60% learnable weight flops and 0.40% attention flops
XPK End: Thu Apr 23 11:21:55 UTC 2026
EXIT_CODE=0