MaxView

Case: 09_pdb_lt_1

Metrics: Linen vs NNX  ·  main

Metric        Linen (b117f50cf)   NNX (b117f50cf)   Diff (NNX − Linen)
Parameters    1.104 billion       —                 —
Final loss    5.8920              —                 —
TFLOP/s       58.396              —                 —
Tok/s         8802.1              —                 —
Avg s/step    3.284               —                 —
Memory %      1.38                —                 —
JAX           0.9.2               0.9.2             —

Diff = NNX value − Linen value. The NNX run failed (EXIT_CODE=1; see its log below), so no NNX metrics or diffs are available apart from the JAX version.

Linen  ·  b117f50cf  ·  main_20260424_070227
XPK Start: Fri Apr 24 07:35:37 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
2026-04-24 07:36:02.291438: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0424 07:36:02.504955 132714912790336 max_utils.py:273] Attempting to initialize the jax distributed system...
I0424 07:36:11.545100 132714912790336 distributed.py:149] Starting JAX distributed service on [::]:8482
I0424 07:36:11.547427 132714912790336 distributed.py:172] Connecting to JAX distributed service on mt-09-pdb-lt-1-b2b3t-slice-job-0-0.mt-09-pdb-lt-1-b2b3t:8482
I0424 07:36:12.601859 132714912790336 max_utils.py:284] Jax distributed system initialized!
I0424 07:36:18.638015 132714912790336 max_utils.py:800] System Information: Jax Version: 0.9.2
I0424 07:36:18.638118 132714912790336 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0424 07:36:18.638161 132714912790336 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0424 07:36:18.638197 132714912790336 train_utils.py:361] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
I0424 07:36:19.345382 132714912790336 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 8, 1, 1, 1, 1, 4, 1, 1, 1, 1)
I0424 07:36:19.345678 132714912790336 checkpointing.py:677] Setting up checkpoint logger...
I0424 07:36:19.345730 132714912790336 checkpointing.py:233] Creating checkpoint manager with ocdbt=True and zarr3=True
I0424 07:36:19.345775 132714912790336 pytree_checkpoint_handler.py:592] save_device_host_concurrent_bytes=None
I0424 07:36:19.346121 132714912790336 base_pytree_checkpoint_handler.py:441] Created BasePyTreeCheckpointHandler: use_ocdbt=True, use_zarr3=True, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x78b361650440>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0424 07:36:22.214681 132714912790336 checkpointing.py:265] Enabling policy for fixed interval checkpointing.
I0424 07:36:22.214927 132714912790336 checkpoint_manager.py:708] [process=5][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=('items',), item_handlers={'items': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x789ef025ddf0>}, handler_registry=None
I0424 07:36:22.215165 132714912790336 composite_checkpoint_handler.py:237] Deferred registration for item: "items". Adding handler `<orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x789ef025ddf0>` for item "items" and save args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>` to `_handler_registry`.
I0424 07:36:22.215219 132714912790336 composite_checkpoint_handler.py:237] Deferred registration for item: "metrics". Adding handler `<orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x789ef02696a0>` for item "metrics" and save args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>` and restore args `<class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>` to `_handler_registry`.
I0424 07:36:22.215255 132714912790336 composite_checkpoint_handler.py:505] Initialized registry DefaultCheckpointHandlerRegistry({('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x789ef025ddf0>, ('items', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x789ef025ddf0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x789ef02696a0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x789ef02696a0>}).
I0424 07:36:22.215577 132714912790336 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.34
I0424 07:36:22.215666 132714912790336 async_checkpointer.py:192] [process=5][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>._fn at 0x789e307f5b20> timeout: 1200 secs and primary_host=0 for async checkpoint writes
I0424 07:36:23.031838 132714912790336 checkpoint_manager.py:1812] Found 0 checkpoint steps in gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints
I0424 07:36:23.057322 132714912790336 checkpoint_manager.py:929] [process=5][thread=MainThread] CheckpointManager created,  primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=None, keep_time_interval=None, keep_period=None, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=True, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, todelete_full_path=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=None, multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=FixedIntervalPolicy(interval=10), preservation_policy=LatestN(n=None), prevent_write_metrics=False, enable_should_save_is_saving_in_progress_check=True, enable_per_process_directory_creation=False, lightweight_initialize=False), root_directory=gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x789ef0269250>
I0424 07:36:23.057452 132714912790336 checkpointing.py:301] Checkpoint manager created!
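For reference, the manager configured above (ocdbt/zarr3 storage, FixedIntervalPolicy(interval=10), async background writes) corresponds roughly to the Orbax setup below. This is a minimal sketch with a placeholder bucket path, not MaxText's actual checkpointing.py wiring.

    # Minimal Orbax sketch approximating the CheckpointManager logged above;
    # the bucket path is a placeholder.
    import orbax.checkpoint as ocp

    options = ocp.CheckpointManagerOptions(
        save_interval_steps=10,           # cf. FixedIntervalPolicy(interval=10)
        enable_async_checkpointing=True,  # background save thread, as in the log
    )
    mngr = ocp.CheckpointManager("gs://<bucket>/checkpoints", options=options)
    # In the train loop: mngr.save(step, args=ocp.args.StandardSave(state)),
    # then mngr.wait_until_finished() before exit.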
I0424 07:36:23.982100 132714912790336 nnx_wrappers.py:437] Unknown Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_norm_length', 'activation_embed').
I0424 07:36:23.982218 132714912790336 nnx_wrappers.py:437] Unknown Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, 'tensor').
I0424 07:36:24.362793 132714912790336 attentions.py:1088] attentions/inputs_q Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0424 07:36:24.362885 132714912790336 attentions.py:1088] attentions/inputs_q Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, 'tensor').
I0424 07:36:24.379324 132714912790336 attentions.py:1089] attentions/inputs_kv Logical: bfloat16[8,2048,2048]....................................... ('activation_batch', 'activation_attn_length', 'activation_attn_embed').
I0424 07:36:24.379381 132714912790336 attentions.py:1089] attentions/inputs_kv Physical: bfloat16[8,2048,2048]....................................... ('fsdp', None, 'tensor').
I0424 07:36:24.403622 132714912790336 attentions.py:1154] attentions/query Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 07:36:24.403713 132714912790336 attentions.py:1154] attentions/query Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, 'tensor', None).
I0424 07:36:24.420240 132714912790336 attentions.py:1155] attentions/key Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 07:36:24.420304 132714912790336 attentions.py:1155] attentions/key Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, 'tensor', None).
I0424 07:36:24.436618 132714912790336 attentions.py:1156] attentions/value Logical: bfloat16[8,2048,16,128]..................................... ('activation_kv_batch', 'activation_attn_length', 'activation_kv_heads', 'activation_kv_head_dim').
I0424 07:36:24.436692 132714912790336 attentions.py:1156] attentions/value Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, 'tensor', None).
I0424 07:36:24.461572 132714912790336 attentions.py:1198] attentions/out Logical: bfloat16[8,2048,16,128]..................................... ('activation_batch', 'activation_attn_length', 'activation_heads', 'activation_kv').
I0424 07:36:24.461641 132714912790336 attentions.py:1198] attentions/out Physical: bfloat16[8,2048,16,128]..................................... ('fsdp', None, 'tensor', None).
I0424 07:36:24.482792 132714912790336 linears.py:525] linears/x Logical: bfloat16[8,2048,7168]....................................... ('activation_batch', 'activation_length', 'activation_mlp').
I0424 07:36:24.482856 132714912790336 linears.py:525] linears/x Physical: bfloat16[8,2048,7168]....................................... ('fsdp', None, 'tensor').
I0424 07:36:24.692084 132714912790336 checkpointing.py:577] checkpoint manager exists so trying to load this run's existing checkpoint
I0424 07:36:24.692192 132714912790336 checkpointing.py:665] No existing checkpoints found, not restoring checkpoint.
fsdp: 8
tensor: 4
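The 13-axis device mesh logged earlier (shape (1, 1, 1, 8, 1, 1, 1, 1, 4, 1, 1, 1, 1) over 32 devices) has only two non-trivial axes, matching the fsdp/tensor sizes printed above. A collapsed two-axis sketch, not MaxText's actual mesh construction:

    # Collapsed sketch of the active mesh axes (fsdp=8, tensor=4);
    # MaxText's real mesh carries 13 named axes, the rest of size 1.
    import numpy as np
    import jax
    from jax.sharding import Mesh

    devices = np.asarray(jax.devices()).reshape(8, 4)
    mesh = Mesh(devices, axis_names=("fsdp", "tensor"))
    print(dict(mesh.shape))  # {'fsdp': 8, 'tensor': 4}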
I0424 07:36:26.133955 132714912790336 maxtext_utils.py:1707]  params/params/decoder/decoder_norm/scale
    Shape:     float32[2048]
    Logical:   P('norm',)
    Physical:  ('tensor',)
I0424 07:36:26.134086 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/mlp/wi_0/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, 'tensor')
I0424 07:36:26.134139 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/mlp/wi_1/kernel
    Shape:     float32[2048,16,7168]
    Logical:   P('embed', 'layers', 'mlp')
    Physical:  ('fsdp', None, 'tensor')
I0424 07:36:26.134197 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/mlp/wo/kernel
    Shape:     float32[7168,16,2048]
    Logical:   P('mlp', 'layers', 'embed')
    Physical:  ('tensor', None, 'fsdp')
I0424 07:36:26.134249 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/post_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  ('tensor', None)
I0424 07:36:26.134294 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/pre_self_attention_layer_norm/scale
    Shape:     float32[2048,16]
    Logical:   P('norm', 'layers')
    Physical:  ('tensor', None)
I0424 07:36:26.134349 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/self_attention/key/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, 'tensor', None)
I0424 07:36:26.134405 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/self_attention/out/kernel
    Shape:     float32[16,16,128,2048]
    Logical:   P('heads', 'layers', 'kv', 'embed')
    Physical:  ('tensor', None, None, 'fsdp')
I0424 07:36:26.134446 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/self_attention/query/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'q_heads', 'kv')
    Physical:  ('fsdp', None, 'tensor', None)
I0424 07:36:26.134483 132714912790336 maxtext_utils.py:1707]  params/params/decoder/layers/self_attention/value/kernel
    Shape:     float32[2048,16,16,128]
    Logical:   P('embed', 'layers', 'kv_heads', 'kv_head_dim')
    Physical:  ('fsdp', None, 'tensor', None)
I0424 07:36:26.134532 132714912790336 maxtext_utils.py:1707]  params/params/decoder/logits_dense/kernel
    Shape:     float32[2048,32000]
    Logical:   P('embed_vocab', 'vocab')
    Physical:  ('fsdp', 'tensor')
I0424 07:36:26.134580 132714912790336 maxtext_utils.py:1707]  params/params/token_embedder/embedding
    Shape:     float32[32000,2048]
    Logical:   P('vocab', 'embed_vocab')
    Physical:  ('tensor', 'fsdp')
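Each Logical spec above is mapped to its Physical spec through a table of logical-axis → mesh-axis rules. A hypothetical miniature of that lookup (the rule entries are inferred from the Logical/Physical pairs above; MaxText's real table is larger and lives in its config):

    # Hypothetical mini rule table inferred from the pairs logged above.
    from jax.sharding import PartitionSpec as P

    RULES = {"embed": "fsdp", "layers": None, "mlp": "tensor",
             "kv_heads": "tensor", "kv_head_dim": None}

    def logical_to_physical(*logical_axes):
        # Unlisted names also map to None (replicated).
        return P(*(RULES.get(name) for name in logical_axes))

    print(logical_to_physical("embed", "layers", "mlp"))
    # PartitionSpec('fsdp', None, 'tensor') -- matches wi_0/kernel above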

I0424 07:36:26.631970 132714912790336 train.py:155] train/xent Logical: float32[8,2048]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0424 07:36:26.632063 132714912790336 train.py:155] train/xent Physical: float32[8,2048]............................................. ('fsdp', None).
I0424 07:36:26.647638 132714912790336 train.py:162] train/z_loss Logical: float32[8,2048]............................................. ('activation_embed_and_logits_batch', 'activation_length').
I0424 07:36:26.647709 132714912790336 train.py:162] train/z_loss Physical: float32[8,2048]............................................. ('fsdp', None).
I0424 07:36:41.785103 132714912790336 max_utils.py:791] Total memory size: 0.9 GB, Output size: 0.4 GB, Temp size: 0.5 GB, Argument size: 0.4 GB, Host temp size: 0.0 GB.
I0424 07:36:41.785912 132714912790336 metric_logger.py:301] number parameters: 1.104 billion
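The 1.104 billion figure can be reproduced directly from the parameter shapes logged above:

    # Cross-check of "number parameters: 1.104 billion" from the logged shapes.
    embed  = 32_000 * 2_048               # token_embedder/embedding
    logits = 2_048 * 32_000               # decoder/logits_dense/kernel
    mlp    = 3 * (2_048 * 16 * 7_168)     # wi_0, wi_1, wo over 16 scanned layers
    attn   = 4 * (2_048 * 16 * 16 * 128)  # query, key, value, out
    norms  = 2 * (2_048 * 16) + 2_048     # pre/post layer norms + decoder_norm
    total  = embed + logits + mlp + attn + norms
    print(total, f"= {total / 1e9:.3f} billion")  # 1104218112 = 1.104 billion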
I0424 07:36:43.375324 132714912790336 checkpointing.py:772] Waiting for step 0 to finish before checkpoint...
I0424 07:36:57.501405 132714912790336 checkpointing.py:776] Waited 14.126058340072632 seconds for step 0 to finish before starting checkpointing.
I0424 07:36:57.503766 132714912790336 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 07:36:57.505686 132714912790336 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 0
I0424 07:36:57.507447 132714912790336 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints/0.
I0424 07:36:57.849629 132714912790336 signaling_client.py:364] Using JaxDistributedSignalingClient
I0424 07:36:57.850674 132714912790336 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0424 07:36:57.850731 132714912790336 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 07:36:58.113990 132714912790336 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.264434s
I0424 07:36:58.114162 132714912790336 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 5.714 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.2699697017669678 s) (per-host)
I0424 07:36:58.114214 132714912790336 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.270032s (batch_requests_ready=0.002248s, total_serialization_initiated=0.267712s, others=0.000071s)
I0424 07:36:58.114311 132714912790336 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.274315s (all_items=0.000017s, per_item={'items': '0.00001669'}, temp_paths=0.274298)
I0424 07:36:58.115146 132714912790336 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.61 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints/0.
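The ~1.5 GiB written per host is consistent with a float32 train state of three parameter-sized buffers (weights plus two optimizer moments, assuming an Adam-family optimizer and 8 hosts with 4 chips each):

    # Rough size check for the 1.5 GiB-per-host write above; assumes a
    # float32 state of params + two Adam-style optimizer moments.
    params = 1_104_218_112
    hosts = 8                           # assumed: 32 TPU devices, 4 per host
    state_bytes = 3 * params * 4        # weights + 2 moments, 4 bytes each
    print(state_bytes / hosts / 2**30)  # ~1.54 GiB per host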
I0424 07:36:58.115470 132586772883200 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 07:56:58.115432
I0424 07:36:58.126884 132714912790336 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=0] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 07:36:58.127166 132586747705088 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0424 07:36:58.127328 132714912790336 standard_logger.py:34] {'step': 0, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777016217.5037472, 'wait_for_prev_duration_secs': 6.246566772460938e-05, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777016217.505725, 'checkpointer_blocking_duration_secs': 0.6098964214324951, 'get_old_steps_start_time': 1777016218.1156473, 'get_old_steps_duration_secs': 5.030632019042969e-05, 'checkpoint_manager_blocking_start_time': 1777016217.5019639, 'checkpoint_manager_blocking_duration_secs': 0.6253242492675781}
I0424 07:36:58.127434 132714912790336 checkpointing.py:408] Started an asynchronous checkpoint save for step 0
I0424 07:36:58.127485 132714912790336 max_utils.py:750] 
Memstats: After params initialized:
I0424 07:36:58.127538 132714912790336 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_18(process=5,(2,4,0,0))
I0424 07:36:58.127572 132714912790336 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_19(process=5,(3,4,0,0))
I0424 07:36:58.127601 132714912790336 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_22(process=5,(2,5,0,0))
I0424 07:36:58.127627 132714912790336 max_utils.py:756] 	Using (GB) 0.43 / 31.25 (1.376000%) on TPU_23(process=5,(3,5,0,0))
I0424 07:36:58.442730 132714912790336 metric_logger.py:196] completed step: 0, seconds: 1.589, TFLOP/s/device: 2.137, Tokens/s/device: 322.152, total_weights: 16384, loss: 10.862, lm_loss: 10.862, perplexity: 52153.723
I0424 07:36:58.520964 132714912790336 metric_logger.py:196] completed step: 1, seconds: 15.066, TFLOP/s/device: 0.225, Tokens/s/device: 33.984, total_weights: 16384, loss: 10.862, lm_loss: 10.862, perplexity: 52153.723
I0424 07:36:58.938742 132714912790336 metric_logger.py:196] completed step: 2, seconds: 0.020, TFLOP/s/device: 168.416, Tokens/s/device: 25385.493, total_weights: 16384, loss: 9.763, lm_loss: 9.763, perplexity: 17373.129
I0424 07:36:58.996997 132714912790336 metric_logger.py:196] completed step: 3, seconds: 0.418, TFLOP/s/device: 8.134, Tokens/s/device: 1225.986, total_weights: 16384, loss: 8.823, lm_loss: 8.823, perplexity: 6791.817
I0424 07:36:59.114076 132714912790336 metric_logger.py:196] completed step: 4, seconds: 0.064, TFLOP/s/device: 53.160, Tokens/s/device: 8012.896, total_weights: 16384, loss: 7.947, lm_loss: 7.947, perplexity: 2828.417
I0424 07:36:59.119958 132714912790336 metric_logger.py:196] completed step: 5, seconds: 0.058, TFLOP/s/device: 58.408, Tokens/s/device: 8803.907, total_weights: 16384, loss: 7.212, lm_loss: 7.212, perplexity: 1355.339
I0424 07:37:01.214714    2574 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0424 07:37:03.225701 132586756097792 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints/0/items/array_metadatas/process_5
I0424 07:37:12.871280 132714912790336 metric_logger.py:196] completed step: 6, seconds: 0.118, TFLOP/s/device: 28.808, Tokens/s/device: 4342.295, total_weights: 16384, loss: 6.652, lm_loss: 6.652, perplexity: 774.659
I0424 07:37:12.929592 132714912790336 metric_logger.py:196] completed step: 7, seconds: 13.693, TFLOP/s/device: 0.248, Tokens/s/device: 37.390, total_weights: 16384, loss: 6.269, lm_loss: 6.269, perplexity: 527.696
I0424 07:37:12.987992 132714912790336 metric_logger.py:196] completed step: 8, seconds: 0.063, TFLOP/s/device: 53.885, Tokens/s/device: 8122.085, total_weights: 16384, loss: 6.031, lm_loss: 6.031, perplexity: 416.165
I0424 07:37:13.045524 132714912790336 checkpointing.py:772] Waiting for step 9 to finish before checkpoint...
I0424 07:37:13.046144 132714912790336 checkpointing.py:776] Waited 0.0006356239318847656 seconds for step 9 to finish before starting checkpointing.
I0424 07:37:13.049532 132714912790336 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=0][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 07:37:34.353734 132586772883200 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 43.265 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 36.50947594642639 s) (per-host)
I0424 07:37:34.353856 132586772883200 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 36.238279s.
I0424 07:37:42.744809 132586772883200 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 44.629216s.
I0424 07:37:42.745103 132586747705088 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 07:37:42.745233 132586747705088 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0424 07:37:42.745284 132586747705088 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is syncing with other hosts...
I0424 07:37:42.746781 132586747705088 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=0] CheckpointManager Save Finalize is done on all hosts.
I0424 07:37:42.746899 132714912790336 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=0][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=0.
W0424 07:37:42.746973 132714912790336 checkpoint_manager.py:1452] Waiting for previous save to complete took 29.697440 seconds. If this number is high, consider checkpointing less frequently.
I0424 07:37:42.749060 132714912790336 checkpoint_manager.py:1512] [process=5] Saving checkpoint at step 9
I0424 07:37:42.751064 132714912790336 event_tracking.py:70] [process=5] [async] Started save checkpoint @ gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints/9.
I0424 07:37:43.037339 132714912790336 jax_array_handlers.py:360] Scheduling D2H of 39 prioritized jax.Array.
I0424 07:37:43.037433 132714912790336 replica_slices.py:424] Transferring arrays to host memory with options: use_replica_parallel=True, min_slice_bytes_for_replica_parallel=None, max_replicas_for_replica_parallel=None, enable_pinned_host_transfer=False
I0424 07:37:43.062430 132714912790336 base_pytree_checkpoint_handler.py:154] [process=5][thread=MainThread] Initiated "orbax.checkpoint._src.serialization.jax_array_handlers.ArrayHandler".serialize. Time taken: 0.026143s
I0424 07:37:43.062556 132714912790336 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/blocking_gbytes_per_sec: 52.125 GiB/s (total gbytes: 1.5 GiB) (time elapsed: 0.029593706130981445 s) (per-host)
I0424 07:37:43.062603 132714912790336 base_pytree_checkpoint_handler.py:768] [process=5][thread=MainThread] Initiated Pytree async_save. Time taken: 0.029649s (batch_requests_ready=0.001807s, total_serialization_initiated=0.027781s, others=0.000061s)
I0424 07:37:43.062700 132714912790336 composite_checkpoint_handler.py:715] [process=5][thread=MainThread] Initiated CompositeCheckpointHandler.async_save. Time taken: 0.033810s (all_items=0.000014s, per_item={'items': '0.00001383'}, temp_paths=0.033796)
I0424 07:37:43.063427 132714912790336 event_tracking.py:125] [process=5] [async] Finished blocking save in 0.31 seconds. Continuing save @ gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints/9.
I0424 07:37:43.063753 132586747705088 async_checkpointer.py:76] [process=5][thread=async_save] Background save thread started. Deadline for this save operation is 2026-04-24 07:57:43.063714
I0424 07:37:43.068326 132714912790336 checkpoint_manager.py:1560] [process=5][thread=MainThread][step=9] Starting CheckpointManager Save Finalize thread=save_finalize
I0424 07:37:43.068599 132586761123584 async_checkpointer.py:280] [process=5][thread=save_finalize] Waiting for background save thread=async_save.
I0424 07:37:43.068782 132714912790336 standard_logger.py:34] {'step': 9, 'event_type': 'save', 'directory': 'gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints', 'reached_preemption': False, 'preemption_received_at': None, 'synchronous': False, 'wait_for_prev_start_time': 1777016233.0495017, 'wait_for_prev_duration_secs': 29.697440147399902, 'time_between_consecutive_saves_sec': None, 'checkpointer_blocking_start_time': 1777016262.7491007, 'checkpointer_blocking_duration_secs': 0.3148019313812256, 'get_old_steps_start_time': 1777016263.0639277, 'get_old_steps_duration_secs': 3.0994415283203125e-05, 'checkpoint_manager_blocking_start_time': 1777016233.046369, 'checkpoint_manager_blocking_duration_secs': 30.022374153137207}
I0424 07:37:43.068903 132714912790336 checkpointing.py:408] Started an asynchronous checkpoint save for step 9
I0424 07:37:43.068948 132714912790336 checkpoint_manager.py:2020] [process=5][thread=MainThread][step=9][wait_until_finished] Waiting for Save Finalize thread (save_finalize) to complete.
I0424 07:37:48.199904 132580322039552 array_metadata_store.py:203] [process=5][thread=array_type_handler] Wrote 39 array_metadata.ArrayMetadata to gs://lance-maxtext/linen_ckpt_xpk_main_20260424_070227/linen_xpk_main_20260424_070227_09_pdb_lt_1/checkpoints/9/items/array_metadatas/process_5
I0424 07:38:24.327916 132586747705088 base_pytree_checkpoint_handler.py:130] [process=5] /jax/orbax/write/gbytes_per_sec: 38.252 MiB/s (total gbytes: 1.5 GiB) (time elapsed: 41.29491186141968 s) (per-host)
I0424 07:38:24.328045 132586747705088 async_checkpointer.py:90] [process=5][thread=async_save] 3 Handler Commit operations completed. Time taken: 41.264172s.
I0424 07:38:32.368540 132586747705088 async_checkpointer.py:160] [process=5][thread=async_save] Background save thread done. Time taken: 49.304651s.
I0424 07:38:32.368868 132586761123584 async_checkpointer.py:288] [process=5][thread=save_finalize] Done with waiting for background save thread=async_save.
I0424 07:38:32.368985 132586761123584 async_checkpointer.py:298] [process=5][thread=save_finalize] No errors found in background save thread=async_save.
I0424 07:38:32.369042 132586761123584 checkpoint_manager.py:2137] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is syncing with other hosts...
I0424 07:38:32.370502 132586761123584 checkpoint_manager.py:2146] [process=5][thread=save_finalize][step=9] CheckpointManager Save Finalize is done on all hosts.
I0424 07:38:32.370697 132714912790336 checkpoint_manager.py:2032] [process=5][thread=MainThread][step=9][wait_until_finished] Done waiting for Save Finalize thread (save_finalize) running at step=9.
I0424 07:38:32.370846 132714912790336 checkpoint_manager.py:2009] [process=5][thread=MainThread][wait_until_finished] No Save Finalize thread to wait for. Returning.
I0424 07:38:32.371854 132714912790336 metric_logger.py:196] completed step: 9, seconds: 0.058, TFLOP/s/device: 58.396, Tokens/s/device: 8802.090, total_weights: 16384, loss: 5.892, lm_loss: 5.892, perplexity: 361.955
Per train step:
 Total TFLOPs: 3.40 
 split as 93.93% learnable weight flops and 6.07% attention flops
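The 3.40 TFLOPs-per-step figure and its 93.93% weight-flop share line up with the standard 6·N·T matmul estimate once the embedding lookup (not a matmul) is excluded. A rough check, not MaxText's exact accounting:

    # Rough check of "Total TFLOPs: 3.40" per device per step via 6*N*T.
    n_params = 1_104_218_112
    n_embed  = 32_000 * 2_048   # embedding lookup contributes no matmul flops
    tokens   = 16_384           # total_weights per step (global)
    devices  = 32

    weight_tflops = 6 * (n_params - n_embed) * tokens / devices / 1e12
    print(f"{weight_tflops:.2f}")  # ~3.19, i.e. ~93.9% of the 3.40 total;
    # at 0.058 s/step that is ~58.6 TFLOP/s/device, cf. step 9 above.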
XPK End: Fri Apr 24 07:38:43 UTC 2026
EXIT_CODE=0
NNX  ·  b117f50cf  ·  main_20260424_070227
XPK Start: Fri Apr 24 08:48:42 UTC 2026
PyTorch was not found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config. 
2026-04-24 08:49:07.419572: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0424 08:49:07.622039 139978554332992 max_utils.py:273] Attempting to initialize the jax distributed system...
I0424 08:49:16.663262 139978554332992 distributed.py:149] Starting JAX distributed service on [::]:8482
I0424 08:49:16.665708 139978554332992 distributed.py:172] Connecting to JAX distributed service on mt-09-pdb-lt-1-045my-slice-job-0-0.mt-09-pdb-lt-1-045my:8482
I0424 08:49:17.698649 139978554332992 max_utils.py:284] Jax distributed system initialized!
I0424 08:49:23.763745 139978554332992 max_utils.py:800] System Information: Jax Version: 0.9.2
I0424 08:49:23.763850 139978554332992 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0424 08:49:23.763891 139978554332992 max_utils.py:802] System Information: Jax Backend: PJRT C API
TFRT TPU v6 lite
Built on Mar 4 2026 11:32:08 (1772652728) cl/878335365
I0424 08:49:23.763926 139978554332992 train_utils.py:361] WARNING: Sequence packing is essentially ignored for synthetic data. Please use a real dataset to use sequence packing.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 744, in <module>
    app.run(main)
  File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 367, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 312, in _run_main
    sys.exit(main(argv))
             ^^^^^^^^^^
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 740, in main
    train_func()
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 730, in train_func
    run(config, recorder, diagnostic_config)
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 709, in run
    train_loop(config, recorder)
  File "/deps/src/maxtext/trainers/pre_train/train.py", line 536, in train_loop
    ) = train_utils.setup_train_loop(config, recorder)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/deps/src/maxtext/utils/train_utils.py", line 218, in setup_train_loop
    raise NotImplementedError("Pure NNX support has not been implemented yet.")
NotImplementedError: Pure NNX support has not been implemented yet.
XPK End: Fri Apr 24 08:49:32 UTC 2026
EXIT_CODE=1