XPK Start: Fri Apr 24 07:32:13 UTC 2026
`rope_parameters`'s factor field must be a float >= 1, got 40
`rope_parameters`'s beta_fast field must be a float, got 32
`rope_parameters`'s beta_slow field must be a float, got 1
DeepseekV32Config got `key=rope_scaling` in kwargs but hasn't set it as attribute. For RoPE standardization you need to set `self.rope_parameters` in model's config.
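The three `rope_parameters` messages above are type checks, not value complaints: 40, 32, and 1 are plausible YaRN-style settings, but they were passed as ints where the schema requires floats, and the DeepseekV32Config warning says the dict should be attached as `self.rope_parameters` rather than forwarded as a `rope_scaling` kwarg. A minimal sketch of a conforming dict; the field names and constraints come from the messages themselves, while `rope_type` and everything else is an assumption:

```python
# Hypothetical rope parameter dict; field names/constraints are taken from
# the validation messages above, the rest is an assumption for illustration.
rope_parameters = {
    "rope_type": "yarn",  # assumed; beta_fast/beta_slow are YaRN-style fields
    "factor": 40.0,       # must be a float >= 1 (was the int 40)
    "beta_fast": 32.0,    # must be a float (was the int 32)
    "beta_slow": 1.0,     # must be a float (was the int 1)
}

# Per the DeepseekV32Config warning, set it as a config attribute instead of
# passing a `rope_scaling` kwarg:
# config.rope_parameters = rope_parameters
```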
2026-04-24 07:32:38.070332: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
I0424 07:32:38.070459 134898864674624 max_utils.py:800] System Information: Jax Version: 0.9.2
I0424 07:32:38.070558 134898864674624 max_utils.py:801] System Information: Jaxlib Version: 0.9.2
I0424 07:32:46.068376 134898864674624 max_utils.py:802] System Information: Jax Backend: PJRT C API TFRT TPU v6 lite Built on Apr 6 2026 20:48:10 (1775533690) cl/895581894
I0424 07:32:46.270708 134898864674624 max_utils.py:238] Skipping jax distributed system due to skip_jax_distributed_system=True flag.
I0424 07:32:46.272298 134898864674624 model_creation_utils.py:269] Running on a single slice
I0424 07:32:46.272357 134898864674624 model_creation_utils.py:356] Creating reference model and also meshes for reference and rollout
/usr/local/lib/python3.12/site-packages/pydantic/main.py:464: UserWarning: Pydantic serializer warnings:
  PydanticSerializationUnexpectedValue(Expected `str` - serialized value may not be as expected [field_name='load_parameters_path', input_value=PosixGPath('gs://lance-ma...0260424_070237/0/items'), input_type=PosixGPath])
  return self.__pydantic_serializer__.to_python(
I0424 07:32:46.531864 134898864674624 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 07:32:46.641528 134898864674624 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 07:32:51.259655 134898864674624 pytree_checkpoint_handler.py:577] save_device_host_concurrent_bytes=None
I0424 07:32:51.260148 134898864674624 base_pytree_checkpoint_handler.py:411] Created BasePyTreeCheckpointHandler: use_ocdbt=False, use_zarr3=False, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7aaf91803e30>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
I0424 07:32:51.260209 134898864674624 abstract_checkpointer.py:35] orbax-checkpoint version: 0.11.28
W0424 07:32:51.645677 134898864674624 checkpoint.py:202] Metadata file does not exist: gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items/_CHECKPOINT_METADATA
I0424 07:32:52.046237 1669 google_auth_provider.cc:181] Running on GCE, using service account 562977990677-compute@developer.gserviceaccount.com
I0424 07:32:53.278324 134898864674624 checkpointer.py:304] Restoring checkpoint from gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items.
W0424 07:32:58.416703 134898864674624 transform_utils.py:230] The transformations API will eventually be replaced by an upgraded design. The current API will not be removed until this point, but it will no longer be actively worked on.
I0424 07:32:58.417007 134898864674624 transform_utils.py:288] The following keys are not loaded from the original tree after applying specified transforms: params/params/decoder/dropout/rngs/aqt/count, params/params/decoder/dropout/rngs/aqt/key, params/params/decoder/dropout/rngs/dropout/count, params/params/decoder/dropout/rngs/dropout/key, params/params/decoder/dropout/rngs/params/count, params/params/decoder/dropout/rngs/params/key, params/params/decoder/layers/mlp/dropout/rngs/aqt/count, params/params/decoder/layers/mlp/dropout/rngs/aqt/key, params/params/decoder/layers/mlp/dropout/rngs/dropout/count, params/params/decoder/layers/mlp/dropout/rngs/dropout/key, params/params/decoder/layers/mlp/dropout/rngs/params/count, params/params/decoder/layers/mlp/dropout/rngs/params/key, params/params/decoder/layers/self_attention/attention_op/rngs/aqt/count, params/params/decoder/layers/self_attention/attention_op/rngs/aqt/key, params/params/decoder/layers/self_attention/attention_op/rngs/dropout/count, params/params/decoder/layers/self_attention/attention_op/rngs/dropout/key, params/params/decoder/layers/self_attention/attention_op/rngs/params/count, params/params/decoder/layers/self_attention/attention_op/rngs/params/key, params/params/decoder/rngs/aqt/count, params/params/decoder/rngs/aqt/key, params/params/decoder/rngs/dropout/count, params/params/decoder/rngs/dropout/key, params/params/decoder/rngs/params/count, params/params/decoder/rngs/params/key
I0424 07:32:58.473664 134898864674624 checkpointer.py:318] Finished restoring checkpoint in 5.71 seconds from gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items.
I0424 07:32:58.502585 134898864674624 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 07:32:58.502736 134898864674624 model_creation_utils.py:373] Creating policy model with same config as reference model on trainer mesh
I0424 07:32:58.626677 134898864674624 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1)
I0424 07:32:58.686371 134898864674624 maxtext_utils.py:1604] Num_devices: 32, shape (1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1)
W0424 07:32:58.895275 11 pjrt_executable.cc:642] Assume version compatibility. PjRt-IFRT does not track XLA executable versions.
I0424 07:32:58.958067 134898864674624 pytree_checkpoint_handler.py:577] save_device_host_concurrent_bytes=None
I0424 07:32:58.958233 134898864674624 base_pytree_checkpoint_handler.py:411] Created BasePyTreeCheckpointHandler: use_ocdbt=False, use_zarr3=False, pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=<orbax.checkpoint._src.metadata.array_metadata_store.Store object at 0x7aaf91803e30>, enable_pinned_host_transfer=False, save_concurrent_bytes: 96000000000 (89.4 GiB), restore_concurrent_bytes: 96000000000 (89.4 GiB)
W0424 07:32:59.452335 134898864674624 checkpoint.py:202] Metadata file does not exist: gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items/_CHECKPOINT_METADATA
I0424 07:33:00.580994 134898864674624 checkpointer.py:304] Restoring checkpoint from gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items.
W0424 07:33:04.677973 134898864674624 transform_utils.py:230] The transformations API will eventually be replaced by an upgraded design. The current API will not be removed until this point, but it will no longer be actively worked on.
I0424 07:33:04.678281 134898864674624 transform_utils.py:288] The following keys are not loaded from the original tree after applying specified transforms: params/params/decoder/dropout/rngs/aqt/count, params/params/decoder/dropout/rngs/aqt/key, params/params/decoder/dropout/rngs/dropout/count, params/params/decoder/dropout/rngs/dropout/key, params/params/decoder/dropout/rngs/params/count, params/params/decoder/dropout/rngs/params/key, params/params/decoder/layers/mlp/dropout/rngs/aqt/count, params/params/decoder/layers/mlp/dropout/rngs/aqt/key, params/params/decoder/layers/mlp/dropout/rngs/dropout/count, params/params/decoder/layers/mlp/dropout/rngs/dropout/key, params/params/decoder/layers/mlp/dropout/rngs/params/count, params/params/decoder/layers/mlp/dropout/rngs/params/key, params/params/decoder/layers/self_attention/attention_op/rngs/aqt/count, params/params/decoder/layers/self_attention/attention_op/rngs/aqt/key, params/params/decoder/layers/self_attention/attention_op/rngs/dropout/count, params/params/decoder/layers/self_attention/attention_op/rngs/dropout/key, params/params/decoder/layers/self_attention/attention_op/rngs/params/count, params/params/decoder/layers/self_attention/attention_op/rngs/params/key, params/params/decoder/rngs/aqt/count, params/params/decoder/rngs/aqt/key, params/params/decoder/rngs/dropout/count, params/params/decoder/rngs/dropout/key, params/params/decoder/rngs/params/count, params/params/decoder/rngs/params/key
I0424 07:33:06.639418 134898864674624 checkpointer.py:318] Finished restoring checkpoint in 6.43 seconds from gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items.
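For orientation, the two restores above (5.71 s and 6.43 s, for the reference and policy models respectively) are Orbax PyTree restores from the GCS path in the log. A minimal sketch, assuming a plain restore; the real run goes through MaxText's checkpointing wrappers and applies the transforms that drop the rngs/* keys listed by transform_utils:

```python
import orbax.checkpoint as ocp

# GCS path taken verbatim from the log. Restoring twice (once per model) is
# what produces the duplicated transform_utils/checkpointer messages above.
CKPT_PATH = "gs://lance-maxtext/pt_ckpt_xpk_main_20260424_070237/0/items"

checkpointer = ocp.PyTreeCheckpointer()
restored = checkpointer.restore(CKPT_PATH)  # nested dict of arrays
```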
I0424 07:33:09.984452 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
I0424 07:33:09.992864 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/config.json "HTTP/1.1 200 OK"
I0424 07:33:10.002580 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/config.json "HTTP/1.1 200 OK"
I0424 07:33:10.113176 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
I0424 07:33:10.122458 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/tokenizer_config.json "HTTP/1.1 200 OK"
I0424 07:33:10.131322 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/tokenizer_config.json "HTTP/1.1 200 OK"
I0424 07:33:10.243360 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/models/Qwen/Qwen3-0.6B/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
I0424 07:33:10.353254 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/models/Qwen/Qwen3-0.6B/tree/main?recursive=true&expand=false "HTTP/1.1 200 OK"
I0424 07:33:10.460034 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/vocab.json "HTTP/1.1 307 Temporary Redirect"
I0424 07:33:10.468654 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/vocab.json "HTTP/1.1 200 OK"
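The HEAD/GET burst above and below is the Hugging Face hub resolving tokenizer and config files for Qwen/Qwen3-0.6B (config.json, tokenizer_config.json, vocab.json, merges.txt, tokenizer.json); this is the usual footprint of a single from_pretrained call, sketched here:

```python
from transformers import AutoTokenizer

# One call accounts for the whole request burst in the log: each file is
# resolved (307 to the resolve-cache) and fetched, while missing optional
# files (added_tokens.json, special_tokens_map.json, chat_template.jinja)
# show up as 404s.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
```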
I0424 07:33:10.478358 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/vocab.json "HTTP/1.1 200 OK"
I0424 07:33:10.608978 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/merges.txt "HTTP/1.1 307 Temporary Redirect"
I0424 07:33:10.618175 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/merges.txt "HTTP/1.1 200 OK"
I0424 07:33:10.627525 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/resolve-cache/models/Qwen/Qwen3-0.6B/c1899de289a04d12100db370d81485cdf75e47ca/merges.txt "HTTP/1.1 200 OK"
I0424 07:33:10.742736 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/tokenizer.json "HTTP/1.1 302 Found"
I0424 07:33:10.848700 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/models/Qwen/Qwen3-0.6B/xet-read-token/c1899de289a04d12100db370d81485cdf75e47ca "HTTP/1.1 200 OK"
I0424 07:33:11.509457 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/added_tokens.json "HTTP/1.1 404 Not Found"
I0424 07:33:11.612585 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/special_tokens_map.json "HTTP/1.1 404 Not Found"
I0424 07:33:11.719737 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/Qwen/Qwen3-0.6B/resolve/main/chat_template.jinja "HTTP/1.1 404 Not Found"
I0424 07:33:12.495040 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/models/Qwen/Qwen3-0.6B "HTTP/1.1 200 OK"
I0424 07:33:12.623633 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/main/README.md "HTTP/1.1 307 Temporary Redirect"
I0424 07:33:12.632457 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/datasets/openai/gsm8k/740312add88f781978c0658806c59bc2815b9866/README.md "HTTP/1.1 200 OK"
I0424 07:33:12.642213 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/resolve-cache/datasets/openai/gsm8k/740312add88f781978c0658806c59bc2815b9866/README.md "HTTP/1.1 200 OK"
I0424 07:33:12.746886 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/gsm8k.py "HTTP/1.1 404 Not Found"
I0424 07:33:13.066631 134898864674624 _client.py:1025] HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/openai/gsm8k/openai/gsm8k.py "HTTP/1.1 404 Not Found"
I0424 07:33:13.326926 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/datasets/openai/gsm8k/revision/740312add88f781978c0658806c59bc2815b9866 "HTTP/1.1 200 OK"
I0424 07:33:13.430803 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/.huggingface.yaml "HTTP/1.1 404 Not Found"
I0424 07:33:13.596727 134898864674624 _client.py:1025] HTTP Request: GET https://datasets-server.huggingface.co/info?dataset=openai/gsm8k "HTTP/1.1 200 OK"
I0424 07:33:13.707549 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/datasets/openai/gsm8k/tree/740312add88f781978c0658806c59bc2815b9866/main?recursive=true&expand=false "HTTP/1.1 200 OK"
I0424 07:33:13.816634 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/datasets/openai/gsm8k/tree/740312add88f781978c0658806c59bc2815b9866?recursive=false&expand=false "HTTP/1.1 200 OK"
I0424 07:33:13.935465 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/dataset_infos.json "HTTP/1.1 404 Not Found"
I0424 07:33:14.093878 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/main/train-00000-of-00001.parquet "HTTP/1.1 302 Found"
I0424 07:33:14.196702 134898864674624 _client.py:1025] HTTP Request: GET https://huggingface.co/api/datasets/openai/gsm8k/xet-read-token/740312add88f781978c0658806c59bc2815b9866 "HTTP/1.1 200 OK"
I0424 07:33:14.734207 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/main/test-00000-of-00001.parquet "HTTP/1.1 302 Found"
Generating train split: 0%| | 0/7473 [00:00<?, ? examples/s]
Generating train split: 100%|██████████| 7473/7473 [00:00<00:00, 740486.04 examples/s]
Generating test split: 0%| | 0/1319 [00:00<?, ? examples/s]
Generating test split: 100%|██████████| 1319/1319 [00:00<00:00, 479027.36 examples/s]
I0424 07:33:14.983392 134898864674624 train_rl.py:96] Loaded Hugging Face dataset openai/gsm8k with split train. Size: 7473
I0424 07:33:15.089646 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/main/README.md "HTTP/1.1 307 Temporary Redirect"
I0424 07:33:15.099297 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/api/resolve-cache/datasets/openai/gsm8k/740312add88f781978c0658806c59bc2815b9866/README.md "HTTP/1.1 200 OK"
I0424 07:33:15.211514 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/gsm8k.py "HTTP/1.1 404 Not Found"
I0424 07:33:15.303880 134898864674624 _client.py:1025] HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/openai/gsm8k/openai/gsm8k.py "HTTP/1.1 404 Not Found"
I0424 07:33:15.428602 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/.huggingface.yaml "HTTP/1.1 404 Not Found"
I0424 07:33:15.544445 134898864674624 _client.py:1025] HTTP Request: GET https://datasets-server.huggingface.co/info?dataset=openai/gsm8k "HTTP/1.1 200 OK"
I0424 07:33:15.652140 134898864674624 _client.py:1025] HTTP Request: HEAD https://huggingface.co/datasets/openai/gsm8k/resolve/740312add88f781978c0658806c59bc2815b9866/dataset_infos.json "HTTP/1.1 404 Not Found"
I0424 07:33:15.656879 134898864674624 train_rl.py:96] Loaded Hugging Face dataset openai/gsm8k with split test. Size: 1319
I0424 07:33:15.658053 134898864674624 train_rl.py:562] Train dataset samples:
I0424 07:33:15.691203 134898864674624 train_rl.py:568] Test dataset samples:
I0424 07:33:15.696085 134898864674624 train_rl.py:575] Reference Model initialized successfully
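The split generation and the two train_rl.py:96 lines above match a standard datasets load of GSM8K; the "main" config is visible in the parquet paths. A sketch:

```python
from datasets import load_dataset

# "main" config inferred from the main/train-00000-of-00001.parquet path
# in the log; sizes match the logged 7473/1319.
train_ds = load_dataset("openai/gsm8k", "main", split="train")  # 7473 rows
test_ds = load_dataset("openai/gsm8k", "main", split="test")    # 1319 rows
```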
I0424 07:33:15.709436 134898864674624 train_rl.py:577] Reference mesh shape: OrderedDict({'diloco': 1, 'data': 1, 'stage': 1, 'fsdp': 32, 'fsdp_transpose': 1, 'context': 1, 'context_autoregressive': 1, 'tensor': 1, 'tensor_transpose': 1, 'tensor_sequence': 1, 'expert': 1, 'autoregressive': 1})
I0424 07:33:15.709488 134898864674624 train_rl.py:578] Policy Model initialized successfully
{'answer': array(['["3", "3"]'], dtype='<U10'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nMaria has 4 dimes, 4 quarters, and 7 nickels in her piggy bank. Her mom gives her 5 quarters. How much money, in dollars, does Maria have now?<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U467'), 'question': array(['Maria has 4 dimes, 4 quarters, and 7 nickels in her piggy bank. Her mom gives her 5 quarters. How much money, in dollars, does Maria have now?'], dtype='<U142')}
{'answer': array(['["34", "34"]'], dtype='<U12'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nA wildlife team is monitoring the number of birds in a park. There are 3 blackbirds in each of the park’s 7 trees. There are also 13 magpies roaming around the park. How many birds are in the park in total?<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U531'), 'question': array(['A wildlife team is monitoring the number of birds in a park. There are 3 blackbirds in each of the park’s 7 trees. There are also 13 magpies roaming around the park. How many birds are in the park in total?'], dtype='<U206')}
{'answer': array(['["300", "300"]'], dtype='<U14'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nMr Hezekiah had 20 trucks from his store supplying fertiliser to different farmers in his hometown dispatched for delivery on a particular day. Each truck was carrying 20 tons of fertiliser packed in bags. Two hours after the trucks had departed for delivery, Mr Hezekiah got the news that a quarter of the number of lorries dispatched for delivery had mechanical failures on the road and could not deliver the fertilisers to the farmers. Calculate the total number of tons of fertiliser that reached the farmers that day?<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U847'), 'question': array(['Mr Hezekiah had 20 trucks from his store supplying fertiliser to different farmers in his hometown dispatched for delivery on a particular day. Each truck was carrying 20 tons of fertiliser packed in bags. Two hours after the trucks had departed for delivery, Mr Hezekiah got the news that a quarter of the number of lorries dispatched for delivery had mechanical failures on the road and could not deliver the fertilisers to the farmers. Calculate the total number of tons of fertiliser that reached the farmers that day?'], dtype='<U522')}
{'answer': array(['["450", "450"]'], dtype='<U14'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nGrandpa loves to eat jelly beans, but how many jelly beans he can eat depends on the size of the beans. It takes 75 large jelly beans to fill Grandpa up. He can eat twice as many medium-sized beans as large beans. And eating 3 small beans is the same as eating 1 medium-sized bean. How many small beans can Grandpa eat?<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U647'), 'question': array(['Grandpa loves to eat jelly beans, but how many jelly beans he can eat depends on the size of the beans. It takes 75 large jelly beans to fill Grandpa up. He can eat twice as many medium-sized beans as large beans. And eating 3 small beans is the same as eating 1 medium-sized bean. How many small beans can Grandpa eat?'], dtype='<U322')}
{'answer': array(['["320", "320"]'], dtype='<U14'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nMr. Maxim works at The Best Cookeries Around restaurant. On a particular day, 50 people entered the restaurant in the morning to eat. At around 10:00, 40 more people entered the restaurant and ordered the same amount of food as the first people. After a while, twice the number of people who entered the restaurant at 10:00 came in and ordered lunch. By evening, an additional 3 times as many people as the number that came in first had entered the restaurant. Calculate the total number of people that entered the restaurant on that day.<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U863'), 'question': array(['Mr. Maxim works at The Best Cookeries Around restaurant. On a particular day, 50 people entered the restaurant in the morning to eat. At around 10:00, 40 more people entered the restaurant and ordered the same amount of food as the first people. After a while, twice the number of people who entered the restaurant at 10:00 came in and ordered lunch. By evening, an additional 3 times as many people as the number that came in first had entered the restaurant. Calculate the total number of people that entered the restaurant on that day.'], dtype='<U538')}
{'answer': array(['["9", "9"]'], dtype='<U10'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nJackson is planting tulips. He can fit 6 red tulips in a row and 8 blue tulips in a row. If Jackson buys 36 red tulips and 24 blue tulips, how many rows of flowers will he plant?<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U503'), 'question': array(['Jackson is planting tulips. He can fit 6 red tulips in a row and 8 blue tulips in a row. If Jackson buys 36 red tulips and 24 blue tulips, how many rows of flowers will he plant?'], dtype='<U178')}
{'answer': array(['["60", "60"]'], dtype='<U12'), 'prompts': array(['<|im_start|>user\n<start_of_turn>user\nYou are given a problem. Think about the problem and provide your reasoning. Place it between <reasoning> and </reasoning>. Then, provide the final answer (i.e., just one numerical value) between <answer> and </answer>.\n\nThere are five phones on a phone plan. The main phone costs twice as much as each additional phone. If the main phone plan costs $20, how much does the whole phone plan cost?<end_of_turn>\n<start_of_turn>model<|im_end|>\n<|im_start|>assistant\n'], dtype='<U499'), 'question': array(['There are five phones on a phone plan. The main phone costs twice as much as each additional phone. If the main phone plan costs $20, how much does the whole phone plan cost?'], dtype='<U174')}
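Each sample above carries the raw GSM8K question plus a fully templated prompt; note that, exactly as printed, the template nests Gemma-style <start_of_turn> markers inside Qwen-style <|im_start|> markers. A hypothetical reconstruction of how the prompts field is built; the wrapper text is verbatim from the samples, while the constant and function names are assumptions:

```python
# Wrapper text copied verbatim from the dumped 'prompts' fields; names are
# assumptions for illustration, not the run's actual identifiers.
PROMPT_TEMPLATE = (
    "<|im_start|>user\n<start_of_turn>user\n"
    "You are given a problem. Think about the problem and provide your "
    "reasoning. Place it between <reasoning> and </reasoning>. Then, provide "
    "the final answer (i.e., just one numerical value) between <answer> and "
    "</answer>.\n\n"
    "{question}<end_of_turn>\n<start_of_turn>model<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def build_prompt(question: str) -> str:
    return PROMPT_TEMPLATE.format(question=question)
```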
TunixMaxTextAdapter( # Param: 596,049,920 (1.2 GB), RngState: 348 (2.1 KB), Total: 596,050,268 (1.2 GB) base=Transformer( # Param: 596,049,920 (1.2 GB), RngState: 348 (2.1 KB), Total: 596,050,268 (1.2 GB) audio_encoder=None, config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, decoder=NNXDecoder( # Param: 440,467,456 (880.9 MB), RngState: 348 (2.1 KB), Total: 440,467,804 (880.9 MB) config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, decoder_norm=RMSNorm( # Param: 1,024 (2.0 KB) dtype=dtype(bfloat16), epsilon=1e-06, kernel_axes=('norm',), num_features=1024, parameter_memory_host_offload=False, scale=Param( # 1,024 (2.0 KB) value=Array(shape=(1024,), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('norm',) ), scale_init=<function ones at 0x7aaf95666a20>, scale_offset=0.0, shard_mode=<ShardMode.AUTO: 'auto'>, weight_dtype=dtype(bfloat16), with_scale=True ), dropout=Dropout( # RngState: 6 (36 B) broadcast_dims=(-2,), deterministic=False, rate=0.0, rng_collection='dropout', rngs=Rngs( # RngState: 6 (36 B) aqt=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(0, dtype=uint32), eager_sharding=False, tag='aqt' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [2799984767 1105366846], eager_sharding=False, tag='aqt' ), tag='aqt' ), dropout=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(0, dtype=uint32), eager_sharding=False, tag='dropout' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [346279018 360566543], eager_sharding=False, tag='dropout' ), tag='dropout' ), params=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(0, dtype=uint32), eager_sharding=False, tag='params' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [2839387376 2467677468], eager_sharding=False, tag='params' ), tag='params' ) ) ), is_deepseek=False, is_gemma3=False, layers=Qwen3DecoderLayer( # RngState: 336 (2.0 KB), Param: 440,466,432 (880.9 MB), Total: 440,466,768 (880.9 MB) activation_axis_names=('activation_batch', 'activation_norm_length', 'activation_embed'), config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), mlp=MlpBlock( # RngState: 168 (1.0 KB), Param: 264,241,152 (528.5 MB), Total: 264,241,320 (528.5 MB) activations=['silu', 'linear'], config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, dropout=Dropout( # RngState: 168 (1.0 KB) broadcast_dims=(-2,), deterministic=False, rate=0.0, rng_collection='dropout', rngs=Rngs( # RngState: 168 (1.0 KB) aqt=RngStream( # RngState: 56 (336 B) count=RngCount( # 28 (112 B)
value=Array(shape=(28,), dtype=dtype('uint32')), eager_sharding=False, tag='aqt' ), key=RngKey( # 28 (224 B) value=Array(shape=(28,), dtype=key<fry>), eager_sharding=False, tag='aqt' ), tag='aqt' ), dropout=RngStream( # RngState: 56 (336 B) count=RngCount( # 28 (112 B) value=Array(shape=(28,), dtype=dtype('uint32')), eager_sharding=False, tag='dropout' ), key=RngKey( # 28 (224 B) value=Array(shape=(28,), dtype=key<fry>), eager_sharding=False, tag='dropout' ), tag='dropout' ), params=RngStream( # RngState: 56 (336 B) count=RngCount( # 28 (112 B) value=Array(shape=(28,), dtype=dtype('uint32')), eager_sharding=False, tag='params' ), key=RngKey( # 28 (224 B) value=Array(shape=(28,), dtype=key<fry>), eager_sharding=False, tag='params' ), tag='params' ) ) ), dtype=dtype(bfloat16), in_features=1024, intermediate_dim=3072, intermediate_dropout_rate=0.0, intermediate_logical=('activation_batch', 'activation_length', 'activation_mlp'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae61ca1ee0>, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), mlp_layer_norm=None, model_mode='train', quant=None, use_bias=False, use_pre_norm=False, weight_dtype=dtype(bfloat16), wi_0=DenseGeneral( # Param: 88,080,384 (176.2 MB) axis=(-1,), bias=None, dtype=dtype(bfloat16), in_features_shape=(1024,), kernel=Param( # 88,080,384 (176.2 MB) value=Array(shape=(1024, 28, 3072), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('embed', 'layers', 'mlp') ), kernel_axes=('embed', 'mlp'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae61ca1ee0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(3072,), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ), wi_1=DenseGeneral( # Param: 88,080,384 (176.2 MB) axis=(-1,), bias=None, dtype=dtype(bfloat16), in_features_shape=(1024,), kernel=Param( # 88,080,384 (176.2 MB) value=Array(shape=(1024, 28, 3072), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('embed', 'layers', 'mlp') ), kernel_axes=('embed', 'mlp'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae61ca1ee0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(3072,), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ), wo=DenseGeneral( # Param: 88,080,384 (176.2 MB) axis=(-1,), bias=None, dtype=dtype(bfloat16), in_features_shape=(3072,), kernel=Param( # 88,080,384 (176.2 MB) value=Array(shape=(3072, 28, 1024), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('mlp', 'layers', 'embed') ), kernel_axes=('mlp', 'embed'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae61ca1ee0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(1024,), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ) ), post_self_attention_layer_norm=RMSNorm( # Param: 28,672 (57.3 KB) dtype=dtype(bfloat16), epsilon=1e-06, kernel_axes=('norm',), num_features=1024, parameter_memory_host_offload=False, scale=Param( # 28,672 (57.3 KB) value=Array(shape=(1024, 28), dtype=dtype(bfloat16)), 
eager_sharding=False, out_sharding=('norm', 'layers') ), scale_init=<function ones at 0x7aaf95666a20>, scale_offset=0.0, shard_mode=<ShardMode.AUTO: 'auto'>, weight_dtype=dtype(bfloat16), with_scale=True ), pre_self_attention_layer_norm=RMSNorm( # Param: 28,672 (57.3 KB) dtype=dtype(bfloat16), epsilon=1e-06, kernel_axes=('norm',), num_features=1024, parameter_memory_host_offload=False, scale=Param( # 28,672 (57.3 KB) value=Array(shape=(1024, 28), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('norm', 'layers') ), scale_init=<function ones at 0x7aaf95666a20>, scale_offset=0.0, shard_mode=<ShardMode.AUTO: 'auto'>, weight_dtype=dtype(bfloat16), with_scale=True ), quant=None, self_attention=Attention( # RngState: 168 (1.0 KB), Param: 176,167,936 (352.3 MB), Total: 176,168,104 (352.3 MB) KVCache_0=None, ar_cache_axis_order=(1, 2, 0, 3), attention_kernel='dot_product', attention_op=AttentionOp( # RngState: 168 (1.0 KB) AqtEinsum_0=<function einsum at 0x7aaf95bbe520>, AqtEinsum_1=<function einsum at 0x7aaf95bbe520>, AqtEinsum_2=<function einsum at 0x7aaf95bbe520>, AqtEinsum_3=<function einsum at 0x7aaf95bbe520>, attention_kernel='dot_product', attention_type=<AttentionType.GLOBAL: 'global'>, attn_logits_soft_cap=None, cache_logical_axis_names=('cache_batch', 'cache_sequence', 'cache_heads', 'cache_kv'), cache_scale_logical_axis_names=('cache_scale_batch', 'cache_scale_sequence', 'cache_scale_heads', 'cache_scale_kv'), chunk_attn_window_size=0, compute_axis_order=(0, 1, 2, 3), config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, dropout_rate=0.0, dtype=dtype(bfloat16), flash_axis_names_kv=('activation_batch_attn', 'activation_heads', 'activation_kv_length', 'activation_kv'), flash_axis_names_q=('activation_batch_attn', 'activation_heads', 'activation_length', 'activation_kv'), flash_axis_names_splash_kernel=('activation_heads', 'activation_length'), float32_logits=False, float32_qk_product=False, key_axis_order=(2, 0, 1, 3), kv_quant=None, max_prefill_predict_length=256, max_target_length=1024, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), num_kv_heads=8, num_query_heads=16, prefill_cache_logical_axis_names=('cache_batch_prefill', 'cache_sequence', 'cache_heads', 'cache_kv'), quant=None, ragged_block_size=256, ragged_lengths_names=('cache_batch',), ragged_qkv_axis_names=('cache_batch', 'cache_heads', 'cache_sequence', 'cache_kv'), reshape_q=False, rngs=Rngs( # RngState: 168 (1.0 KB) aqt=RngStream( # RngState: 56 (336 B) count=RngCount( # 28 (112 B) value=Array(shape=(28,), dtype=dtype('uint32')), eager_sharding=False, tag='aqt' ), key=RngKey( # 28 (224 B) value=Array(shape=(28,), dtype=key<fry>), eager_sharding=False, tag='aqt' ), tag='aqt' ), dropout=RngStream( # RngState: 56 (336 B) count=RngCount( # 28 (112 B) value=Array(shape=(28,), dtype=dtype('uint32')), eager_sharding=False, tag='dropout' ), key=RngKey( # 28 (224 B) value=Array(shape=(28,), dtype=key<fry>), eager_sharding=False, tag='dropout' ), tag='dropout' ), params=RngStream( # RngState: 56 (336 B) count=RngCount( # 28 (112 B) value=Array(shape=(28,), dtype=dtype('uint32')), eager_sharding=False, tag='params' ), key=RngKey( # 28 (224 B) value=Array(shape=(28,), dtype=key<fry>), eager_sharding=False, tag='params' ), 
tag='params' ) ), sliding_window_size=None, use_ragged_attention=False ), attention_type=<AttentionType.GLOBAL: 'global'>, attn_logits_soft_cap=None, compute_axis_order=(0, 1, 2, 3), config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, decode_input_axis_names=('decode_batch', 'decode_length', 'activation_embed_attn'), decode_out_axis_names=('decode_batch', 'decode_length', 'activation_heads', 'activation_kv'), dropout_rate=0.0, dtype=dtype(bfloat16), float32_logits=False, float32_qk_product=False, head_dim=128, input_axis_names=('activation_batch_attn', 'activation_length_attn', 'activation_embed_attn'), is_nope_layer=False, is_qwen2=False, is_qwen3_next=False, is_vision=False, kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae613a59e0>, key=DenseGeneral( # Param: 29,360,128 (58.7 MB) axis=(-1,), bias=None, dtype=dtype(bfloat16), in_features_shape=(1024,), kernel=Param( # 29,360,128 (58.7 MB) value=Array(shape=(1024, 28, 8, 128), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('embed', 'layers', 'kv_heads', 'kv_head_dim') ), kernel_axes=('embed', 'kv_heads', 'kv_head_dim'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae613a59e0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(8, 128), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ), key_axis_names=('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim'), key_norm=RMSNorm( # Param: 3,584 (7.2 KB) dtype=dtype(bfloat16), epsilon=1e-06, kernel_axes=('norm',), num_features=128, parameter_memory_host_offload=False, scale=Param( # 3,584 (7.2 KB) value=Array(shape=(128, 28), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('norm', 'layers') ), scale_init=<function ones at 0x7aaf95666a20>, scale_offset=0.0, shard_mode=<ShardMode.AUTO: 'auto'>, weight_dtype=dtype(bfloat16), with_scale=True ), kv_quant=None, max_prefill_predict_length=256, max_target_length=1024, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), model_mode='train', mrope_section=[24, 20, 20], num_kv_heads=8, num_query_heads=16, out=DenseGeneral( # Param: 58,720,256 (117.4 MB) axis=(-2, -1), bias=None, dtype=dtype(bfloat16), in_features_shape=(16, 128), kernel=Param( # 58,720,256 (117.4 MB) value=Array(shape=(16, 28, 128, 1024), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('heads', 'layers', 'kv', 'embed') ), kernel_axes=('heads', 'kv', 'embed'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae613a59e0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(1024,), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ), out_axis_names=('activation_batch_attn', 'activation_length_attn', 'activation_heads', 'activation_kv'), partial_rotary_factor=None, prefill_cache_axis_order=(1, 2, 0, 3), prefill_input_axis_names=('activation_prefill_kv_batch', 'prefill_activation_length', 'activation_embed_attn'), prefill_key_axis_names=('activation_prefill_kv_batch', 'prefill_activation_length', 'activation_kv_heads', 'activation_kv_head_dim'), 
prefill_out_axis_names=('activation_prefill_kv_batch', 'prefill_activation_length', 'activation_heads', 'activation_kv'), prefill_query_axis_names=('activation_prefill_kv_batch', 'prefill_activation_length', 'activation_kv_heads', 'activation_kv_head_dim'), prefill_value_axis_names=('activation_prefill_kv_batch', 'prefill_activation_length', 'activation_kv_heads', 'activation_kv_head_dim'), quant=None, query=DenseGeneral( # Param: 58,720,256 (117.4 MB) axis=(-1,), bias=None, dtype=dtype(bfloat16), in_features_shape=(1024,), kernel=Param( # 58,720,256 (117.4 MB) value=Array(shape=(1024, 28, 16, 128), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('embed', 'layers', 'q_heads', 'kv') ), kernel_axes=('embed', 'q_heads', 'kv'), kernel_init=<function Attention.init_query_w.<locals>.query_init at 0x7a987010b7e0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(16, 128), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ), query_axis_names=('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim'), query_norm=RMSNorm( # Param: 3,584 (7.2 KB) dtype=dtype(bfloat16), epsilon=1e-06, kernel_axes=('norm',), num_features=128, parameter_memory_host_offload=False, scale=Param( # 3,584 (7.2 KB) value=Array(shape=(128, 28), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('norm', 'layers') ), scale_init=<function ones at 0x7aaf95666a20>, scale_offset=0.0, shard_mode=<ShardMode.AUTO: 'auto'>, weight_dtype=dtype(bfloat16), with_scale=True ), query_pre_attn_scalar=0.08838834764831845, ragged_block_size=256, reshape_q=False, rngs=Rngs(...), rope_max_timescale=1000000, rope_type='default', rotary_embedding=RotaryEmbedding( cast_as_fprop_dtype=True, embedding_dims=128, fprop_dtype=dtype(bfloat16), max_timescale=1000000, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), min_timescale=1, rope_linear_scaling_factor=1.0, shard_mode=<ShardMode.AUTO: 'auto'> ), share_kv_projections=False, sinks=None, sliding_window_size=None, temperature_tuning=False, temperature_tuning_floor_scale=8192.0, temperature_tuning_scale=0.1, use_bias_in_projections=False, use_mrope=False, use_qk_norm=True, use_ragged_attention=False, use_v_norm=False, value=DenseGeneral( # Param: 29,360,128 (58.7 MB) axis=(-1,), bias=None, dtype=dtype(bfloat16), in_features_shape=(1024,), kernel=Param( # 29,360,128 (58.7 MB) value=Array(shape=(1024, 28, 8, 128), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('embed', 'layers', 'kv_heads', 'kv_head_dim') ), kernel_axes=('embed', 'kv_heads', 'kv_head_dim'), kernel_init=<function nd_dense_init.<locals>.init_fn at 0x7aae613a59e0>, matmul_precision=<MatmulPrecision.DEFAULT: 'default'>, out_features_shape=(8, 128), parameter_memory_host_offload=False, quant=None, shard_mode=<ShardMode.AUTO: 'auto'>, use_bias=False, weight_dtype=dtype(bfloat16) ), value_axis_names=('activation_kv_batch', 'activation_length_attn', 'activation_kv_heads', 'activation_kv_head_dim'), value_norm=None, weight_dtype=dtype(bfloat16) ) ), mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 
'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), model_mode='train', positional_embedding=PositionalEmbedding( cast_as_fprop_dtype=False, embedding_dims=1024, fprop_dtype=bfloat16, max_wavelength=10000, rngs=None ), quant=None, rngs=Rngs( # RngState: 6 (36 B) aqt=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(2, dtype=uint32), eager_sharding=False, tag='aqt' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [4146024105 2718843009], eager_sharding=False, tag='aqt' ), tag='aqt' ), dropout=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(2, dtype=uint32), eager_sharding=False, tag='dropout' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [ 928981903 3453687069], eager_sharding=False, tag='dropout' ), tag='dropout' ), params=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(4, dtype=uint32), eager_sharding=False, tag='params' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [1797259609 2579123966], eager_sharding=False, tag='params' ), tag='params' ) ), scanned_layers=None ), hidden_states=None, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), model_mode='train', quant=None, token_embedder=Embed( # Param: 155,582,464 (311.2 MB) attend_dtype=dtype(bfloat16), cast_input_dtype=None, config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a9870268a70>, dtype=dtype(bfloat16), embedding=Param( # 155,582,464 (311.2 MB) value=Array(shape=(151936, 1024), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('vocab', 'embed_vocab') ), mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), num_embeddings=151936, num_features=1024 ), vision_encoder=None ), use_no_op_mappings=False, config=None )
I0424 07:33:15.722605 134898864674624 train_rl.py:580] Policy mesh shape: OrderedDict({'diloco': 1, 'data': 1, 'stage': 1, 'fsdp': 32, 'fsdp_transpose': 1, 'context': 1, 'context_autoregressive': 1, 'tensor': 1, 'tensor_transpose': 1, 'tensor_sequence': 1, 'expert': 1, 'autoregressive': 1})
I0424 07:33:15.722663 134898864674624 train_rl.py:581] Rollout_mesh shape: OrderedDict({'diloco': 1, 'data': 1, 'stage': 1, 'fsdp': 32, 'fsdp_transpose': 1, 'context': 1, 'context_autoregressive': 1, 'tensor': 1, 'tensor_transpose': 1, 'tensor_sequence': 1, 'expert': 1, 'autoregressive': 1})
I0424 07:33:15.722715 134898864674624 _schedule.py:129] A polynomial schedule was set with a non-positive `transition_steps` value; this results in a constant schedule with value `init_value`.
), tag='dropout' ), params=RngStream( # RngState: 2 (12 B) count=RngCount( # 1 (4 B) value=Array(4, dtype=uint32), eager_sharding=False, tag='params' ), key=RngKey( # 1 (8 B) value=Array((), dtype=key<fry>) overlaying: [1797259609 2579123966], eager_sharding=False, tag='params' ), tag='params' ) ), scanned_layers=None ), hidden_states=None, mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), model_mode='train', quant=None, token_embedder=Embed( # Param: 155,582,464 (311.2 MB) attend_dtype=dtype(bfloat16), cast_input_dtype=None, config=<maxtext.configs.pyconfig.HyperParameters object at 0x7a97d42e3770>, dtype=dtype(bfloat16), embedding=Param( # 155,582,464 (311.2 MB) value=Array(shape=(151936, 1024), dtype=dtype(bfloat16)), eager_sharding=False, out_sharding=('vocab', 'embed_vocab') ), mesh=Mesh(axis_sizes=(1, 1, 1, 32, 1, 1, 1, 1, 1, 1, 1, 1), axis_names=('diloco', 'data', 'stage', 'fsdp', 'fsdp_transpose', 'context', 'context_autoregressive', 'tensor', 'tensor_transpose', 'tensor_sequence', 'expert', 'autoregressive'), axis_types=(Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto, Auto)), num_embeddings=151936, num_features=1024 ), vision_encoder=None ), use_no_op_mappings=False, config=None W0424 07:33:15.862162 134898864674624 pyconfig.py:111] base_output_directory is not provided; Using local directory called maxtext_output I0424 07:33:15.900613 134898864674624 max_utils.py:238] Skipping jax distributed system due to skip_jax_distributed_system=True flag. I0424 07:33:15.902042 134898864674624 train_rl.py:426] Creating RL cluster... ) ERROR 04-24 07:33:16 [tpu_info.py:40] Unable to poll TPU GCE Metadata. 
Got status code: 404 and content: <!DOCTYPE html> (Google "Error 404 (Not Found)" page; HTML/CSS boilerplate omitted) ERROR 04-24 07:33:16 [tpu_info.py:40] 404. That’s an error. ERROR 04-24 07:33:16 [tpu_info.py:40] The requested URL /computeMetadata/v1/instance/attributes/instance-id was not found on this server. That’s all we know. INFO 04-24 07:33:16 [__init__.py:59] TPU info: node_name=None | tpu_type=v6e-32 | worker_id=0 | num_chips=4 | num_cores_per_chip=1 /usr/local/lib/python3.12/multiprocessing/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock. 
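The 404 above is the GCE metadata server answering a probe for an instance attribute that is not set on this node; tpu_info logs the error page, settles on node_name=None, and continues. A minimal sketch of that kind of probe, using the endpoint the error page names (illustrative only, not tpu_info's actual code):

    import requests

    METADATA_ROOT = "http://metadata.google.internal/computeMetadata/v1"

    def poll_instance_attribute(name: str) -> str | None:
        # The GCE metadata server rejects requests without this header.
        resp = requests.get(
            f"{METADATA_ROOT}/instance/attributes/{name}",
            headers={"Metadata-Flavor": "Google"},
            timeout=2,
        )
        if resp.status_code == 404:
            # Attribute not set on this VM -- the case logged above for
            # instance-id -- so report "unknown" rather than crashing.
            return None
        resp.raise_for_status()
        return resp.text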
self.pid = os.fork() WARNING: All log messages before absl::InitializeLog() is called are written to STDERR E0000 00:00:1777015999.119649 1863 descriptor_database.cc:633] File already exists in database: google/protobuf/timestamp.proto F0000 00:00:1777015999.119731 1863 descriptor.cc:2236] Check failed: GeneratedDatabase()->Add(encoded_file_descriptor, size) *** Check failure stack trace: *** @ 0x7aae608a2fe4 absl::lts_20250127::log_internal::LogMessage::SendToLog() @ 0x7aae608a2976 absl::lts_20250127::log_internal::LogMessage::Flush() @ 0x7aae608a3539 absl::lts_20250127::log_internal::LogMessageFatal::~LogMessageFatal() @ 0x7aae607955cb google::protobuf::DescriptorPool::InternalAddGeneratedFile() @ 0x7aae6080f308 google::protobuf::internal::AddDescriptors() @ 0x7aae6080f2fa google::protobuf::internal::AddDescriptors() @ 0x7ab012c95b9f __static_initialization_and_destruction_0() @ 0x7ab012c95bd2 _GLOBAL__sub_I.00102_tpu_metric_service.pb.cc @ 0x7ab097855fe2 (unknown) Fatal Python error: Aborted Current thread 0x00007ab096e8d740 (most recent call first): File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 1293 in create_module File "<frozen importlib._bootstrap>", line 813 in module_from_spec File "<frozen importlib._bootstrap>", line 921 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1415 in _handle_fromlist File "/usr/local/lib/python3.12/site-packages/tpu_info/cli_helper.py", line 56 in _check_library_safety File "/usr/local/lib/python3.12/multiprocessing/process.py", line 108 in run File "/usr/local/lib/python3.12/multiprocessing/process.py", line 314 in _bootstrap File "/usr/local/lib/python3.12/multiprocessing/popen_fork.py", line 71 in _launch File "/usr/local/lib/python3.12/multiprocessing/popen_fork.py", line 19 in __init__ File "/usr/local/lib/python3.12/multiprocessing/context.py", line 282 in _Popen File "/usr/local/lib/python3.12/multiprocessing/context.py", line 224 in _Popen File "/usr/local/lib/python3.12/multiprocessing/process.py", line 121 in start File "/usr/local/lib/python3.12/site-packages/tpu_info/cli_helper.py", line 96 in _initialize_libtpu_safely File "/usr/local/lib/python3.12/site-packages/tpu_info/cli_helper.py", line 132 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1415 in _handle_fromlist File "/usr/local/lib/python3.12/site-packages/tpu_info/cli.py", line 27 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen 
importlib._bootstrap>", line 1415 in _handle_fromlist File "/usr/local/lib/python3.12/site-packages/tpu_info/__init__.py", line 16 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/tpu_inference/platforms/tpu_platform.py", line 142 in get_device_name File "/usr/local/lib/python3.12/site-packages/tpu_inference/platforms/tpu_platform.py", line 151 in fp8_dtype File "/usr/local/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/quant_utils.py", line 20 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/vllm/v1/attention/backend.py", line 13 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/vllm/forward_context.py", line 17 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/vllm/compilation/cuda_graph.py", line 19 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/vllm/v1/metrics/stats.py", line 10 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/vllm/outputs.py", line 16 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/tunix/generate/vllm_async_driver.py", line 35 in <module> File "<frozen 
importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "/usr/local/lib/python3.12/site-packages/tunix/generate/vllm_sampler.py", line 32 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1415 in _handle_fromlist File "/usr/local/lib/python3.12/site-packages/tunix/rl/rollout/vllm_rollout.py", line 23 in <module> File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap_external>", line 999 in exec_module File "<frozen importlib._bootstrap>", line 935 in _load_unlocked File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 1360 in _find_and_load File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1415 in _handle_fromlist File "/usr/local/lib/python3.12/site-packages/tunix/rl/rl_cluster.py", line 392 in _init_cluster ... Extension modules: numpy._core._multiarray_umath, numpy.linalg._umath_linalg, pyarrow.lib, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, pyarrow._acero, pyarrow._fs, pyarrow._csv, pyarrow._json, pyarrow._substrait, pyarrow._dataset, pyarrow._dataset_orc, pyarrow._parquet, pyarrow._parquet_encryption, pyarrow._dataset_parquet_encryption, pyarrow._dataset_parquet, zstandard.backend_c, yaml._yaml, pyarrow._azurefs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, charset_normalizer.md, simplejson._speedups, requests.packages.charset_normalizer.md, requests.packages.chardet.md, multidict._multidict, yarl._quoting_c, propcache._helpers_c, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket.mask, 
aiohttp._websocket.reader_c, frozenlist._frozenlist, xxhash._xxhash, jaxlib.cpu_feature_guard, google._upb._message, msgpack._cmsgpack, grpc._cython.cygrpc, _cffi_backend, regex._regex, markupsafe._speedups, PIL._imaging, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, psutil._psutil_linux, sentencepiece._sentencepiece, h5py._errors, h5py.defs, h5py._objects, h5py.h5, h5py.utils, h5py.h5t, h5py.h5s, h5py.h5ac, h5py.h5p, h5py.h5r, h5py._npystrings, h5py._proxy, h5py._conv, h5py.h5z, h5py.h5a, h5py.h5d, h5py.h5ds, h5py.h5g, h5py.h5i, h5py.h5o, h5py.h5f, h5py.h5fd, h5py.h5pl, h5py.h5l, h5py._selector, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, _cyutility, scipy._cyutility, scipy.sparse._csparsetools, kiwisolver._cext, PIL._imagingft, scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, msgspec._core, _cbor2, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_schur_sqrtm, scipy.linalg._matfuncs_expm, scipy.linalg._linalg_pythran, scipy.linalg.cython_blas, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._slsqplib, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy._lib._uarray._uarray, scipy.special._ufuncs_cxx, scipy.special._ellip_harm_2, scipy.special._special_ufuncs, scipy.special._gufuncs, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.linalg._decomp_interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._hausdorff, scipy.spatial._distance_wrap, scipy.spatial.transform._rotation, scipy.spatial.transform._rigid_transform, scipy.optimize._direct, zmq.backend.cython._zmq, pybase64._pybase64, scipy.signal._sigtools, scipy.signal._max_len_seq_inner, scipy.signal._upfirdn_apply, scipy.signal._spline, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._dierckx, scipy.interpolate._ppoly, scipy.interpolate._interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.ndimage._nd_image, scipy.ndimage._rank_filter_1d, _ni_label, scipy.ndimage._ni_label, scipy.signal._sosfilt, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats._biasedurn, scipy.stats._stats_pythran, scipy.stats._levy_stable.levyst, scipy.stats._ansari_swilk_statistics, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._rcont.rcont, scipy.stats._qmvnt_cy, scipy.signal._peak_finding_utils, uvloop.loop (total: 230) 
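The abort above is the forked tpu_info helper dying on a protobuf double-registration: loading the tpu_metric_service bindings tries to add google/protobuf/timestamp.proto to a descriptor database that already contains it, and protobuf treats that as a fatal CHECK. It is also exactly the situation the earlier os.fork() RuntimeWarning flags. One plausible mitigation, sketched under the assumption that the probe only needs to run in a throwaway process, is a spawn context; _probe_libtpu here is a hypothetical stand-in for tpu_info's _check_library_safety:

    import multiprocessing as mp

    def _probe_libtpu() -> None:
        # Hypothetical stand-in: import the native bindings in a disposable
        # process so a hard crash (like the descriptor CHECK above) cannot
        # take down the trainer itself.
        import tpu_info  # noqa: F401

    def initialize_libtpu_safely() -> bool:
        # "spawn" starts a clean interpreter instead of fork()ing a process
        # that already runs JAX's thread pools and has protobuf descriptors
        # registered.
        ctx = mp.get_context("spawn")
        proc = ctx.Process(target=_probe_libtpu)
        proc.start()
        proc.join(timeout=30)
        return proc.exitcode == 0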
Check failed with unknown exit code: -6. INFO 04-24 07:33:36 [tpu_platform.py:152] Automatically using fp8_e5m2 for FP8 KV cache on TPU v6e. I0424 07:33:37.892677 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'model' with value 'Qwen/Qwen3-0.6B'. I0424 07:33:37.892805 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'max_model_len' with value '1280'. I0424 07:33:37.892832 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'async_scheduling' with value 'False'. I0424 07:33:37.892852 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'max_num_batched_tokens' with value 'None'. I0424 07:33:37.892870 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'max_num_seqs' with value 'None'. I0424 07:33:37.892889 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'hf_config_path' with value ''. I0424 07:33:37.892904 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'max_logprobs' with value '1'. I0424 07:33:37.892921 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'hf_overrides' with value '{}'. I0424 07:33:37.892937 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'enable_expert_parallel' with value 'False'. I0424 07:33:37.892955 134898864674624 vllm_sampler.py:102] Engine kwargs setting key 'enable_prefix_caching' with value 'True'. INFO 04-24 07:33:37 [attention_interface.py:53] Using default RPA kernel INFO 04-24 07:33:37 [importing.py:44] Triton is installed but 0 active driver(s) found (expected 1). Disabling Triton to prevent runtime errors. INFO 04-24 07:33:37 [importing.py:68] Triton not installed or not compatible; certain GPU-related functions will not be available. WARNING 04-24 07:33:38 [interface.py:240] Failed to import from vllm._C: ModuleNotFoundError("No module named 'vllm._C'") INFO 04-24 07:33:38 [tpu_platform.py:152] Automatically using fp8_e5m2 for FP8 KV cache on TPU v6e. INFO 04-24 07:33:38 [tpu_platform.py:152] Automatically using fp8_e5m2 for FP8 KV cache on TPU v6e. INFO 04-24 07:33:38 [tpu_platform.py:152] Automatically using fp8_e5m2 for FP8 KV cache on TPU v6e. INFO 04-24 07:33:38 [tpu_platform.py:152] Automatically using fp8_e5m2 for FP8 KV cache on TPU v6e. INFO 04-24 07:33:38 [nixl_utils.py:20] Setting UCX_RCACHE_MAX_UNRELEASED to '1024' to avoid a rare memory leak in UCX when using NIXL. WARNING 04-24 07:33:38 [nixl_utils.py:34] NIXL is not available WARNING 04-24 07:33:38 [nixl_utils.py:44] NIXL agent config is not available INFO 04-24 07:33:38 [__init__.py:110] Registered model loader `<class 'tpu_inference.models.jax.utils.weight_utils.JaxDummyModelLoader'>` with load format `jax_dummy` INFO 04-24 07:33:38 [__init__.py:110] Registered model loader `<class 'tpu_inference.models.common.pathways_dummy_loader.PathwaysDummyModelLoader'>` with load format `pathways_dummy` WARNING 04-24 07:33:38 [__init__.py:85] The quantization method 'awq' already exists and will be overwritten by the quantization config <class 'tpu_inference.layers.vllm.quantization.awq.VllmAWQConfig'>. WARNING 04-24 07:33:40 [__init__.py:85] The quantization method 'compressed-tensors' already exists and will be overwritten by the quantization config <class 'tpu_inference.layers.vllm.quantization.compressed_tensors.compressed_tensors.VllmCompressedTensorsConfig'>. WARNING 04-24 07:33:40 [__init__.py:85] The quantization method 'fp8' already exists and will be overwritten by the quantization config <class 'tpu_inference.layers.vllm.quantization.fp8.VllmFp8Config'>. 
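The repeated tpu_platform.py:152 line above records the KV-cache dtype decision: with an FP8 KV cache on TPU v6e, the platform settles on the e5m2 variant. In JAX terms (the constant below is mine; only the chosen dtype comes from the log):

    import jax.numpy as jnp

    # e5m2 spends 5 bits on exponent and 2 on mantissa, trading precision for
    # dynamic range -- the variant the log reports picking for the KV cache.
    KV_CACHE_DTYPE = jnp.float8_e5m2
    assert jnp.finfo(KV_CACHE_DTYPE).bits == 8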
WARNING 04-24 07:33:40 [__init__.py:85] The quantization method 'gpt_oss_mxfp4' already exists and will be overwritten by the quantization config <class 'tpu_inference.layers.vllm.quantization.mxfp4.VllmMxfp4Config'>. INFO 04-24 07:33:40 [__init__.py:110] Registered model loader `<class 'tpu_inference.models.vllm.vllm_model_loader.IncrementalModelLoader'>` with load format `tpu_streaming_loader` WARNING 04-24 07:33:40 [__init__.py:99] Load format `runai_streamer` is already registered, and will be overwritten by the new loader class `<class 'tpu_inference.models.vllm.vllm_model_loader.RunaiIncrementalModelLoader'>`. INFO 04-24 07:33:40 [__init__.py:110] Registered model loader `<class 'tpu_inference.models.vllm.vllm_model_loader.RunaiIncrementalModelLoader'>` with load format `runai_streamer` WARNING 04-24 07:33:40 [interface.py:240] Failed to import from vllm._C: ModuleNotFoundError("No module named 'vllm._C'") WARNING 04-24 07:33:40 [interface.py:240] Failed to import from vllm._C: ModuleNotFoundError("No module named 'vllm._C'") WARNING 04-24 07:33:40 [interface.py:240] Failed to import from vllm._C: ModuleNotFoundError("No module named 'vllm._C'") W0424 07:33:40.925116 134898864674624 ops_registry.py:52] Duplicate op registration for aten.__and__ WARNING 04-24 07:33:40 [tpu_platform.py:317] Pin memory is not supported on TPU. INFO 04-24 07:33:40 [__init__.py:31] Registering MaxTextForCausalLM model with tpu_inference and vllm. INFO 04-24 07:33:40 [model_loader.py:682] Registered JAX model MaxTextForCausalLM with tpu_inference and vLLM registries. INFO 04-24 07:33:40 [__init__.py:33] Successfully registered MaxTextForCausalLM model. INFO 04-24 07:33:40 [utils.py:233] non-default args: {'hf_config_path': '', 'load_format': 'dummy', 'max_model_len': 1280, 'tensor_parallel_size': 16, 'data_parallel_size': 2, 'enable_prefix_caching': True, 'gpu_memory_utilization': 0.72, 'max_logprobs': 1, 'disable_log_stats': True, 'additional_config': {'sharding': {'sharding_strategy': {'expert_parallelism': 1, 'device_indexes': [0, 4, 8, 12, 16, 20, 24, 28, 1, 5, 9, 13, 17, 21, 25, 29, 2, 6, 10, 14, 18, 22, 26, 30, 3, 7, 11, 15, 19, 23, 27, 31], 'enable_dp_attention': False}}}, 'async_scheduling': False} WARNING 04-24 07:33:41 [arg_utils.py:1440] The global random seed is set to 0. Since VLLM_ENABLE_V1_MULTIPROCESSING is set to False, this may affect the random state of the Python process that launched vLLM. INFO 04-24 07:34:00 [model.py:554] Resolved architecture: Qwen3ForCausalLM INFO 04-24 07:34:00 [model.py:1685] Using max model len 1280 INFO 04-24 07:34:00 [scheduler.py:239] Chunked prefill is enabled with max_num_batched_tokens=8192. INFO 04-24 07:34:00 [vllm.py:845] Asynchronous scheduling is disabled. INFO 04-24 07:34:00 [kernel.py:199] Final IR op priority after setting platform defaults: IrOpPriorityConfig(rms_norm=['native']) INFO 04-24 07:34:00 [tpu_platform.py:190] Initialized sharding configuration: ShardingConfigManager(total_devices=32, sharding_strategy=ShardingStrategy(tensor_parallelism=16, expert_parallelism=1, sequence_parallelism=1, data_parallelism=2, attention_data_parallelism=1, attention_data_expert_parallelism=1), device_indexes=[0, 4, 8, 12, 16, 20, 24, 28, 1, 5, 9, 13, 17, 21, 25, 29, 2, 6, 10, 14, 18, 22, 26, 30, 3, 7, 11, 15, 19, 23, 27, 31]) INFO 04-24 07:34:00 [tpu_platform.py:245] Using KV cache block size: 128 INFO 04-24 07:34:00 [tpu_platform.py:256] Force using UniProcExecutor for JAX on single host without pipeline parallelism. 
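The non-default args above request tensor_parallel_size=16 and data_parallel_size=2 over the 32 chips, with device_indexes fixing the chip order for that mesh. Reshaping the list makes the layout visible; that the stride-4 interleave matches the v6e-32 physical topology is an assumption, but the numbers themselves are copied from the log:

    import numpy as np

    # device_indexes exactly as logged in the engine args above.
    device_indexes = [
        0, 4, 8, 12, 16, 20, 24, 28, 1, 5, 9, 13, 17, 21, 25, 29,
        2, 6, 10, 14, 18, 22, 26, 30, 3, 7, 11, 15, 19, 23, 27, 31,
    ]

    # (data_parallel=2, tensor_parallel=16): each row is one DP replica,
    # each column one TP rank within that replica.
    mesh = np.asarray(device_indexes).reshape(2, 16)
    print(mesh)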
INFO 04-24 07:34:00 [compilation.py:303] Enabled custom fusions: norm_quant, act_quant INFO 04-24 07:34:01 [core.py:107] Initializing a V1 LLM engine (v0.19.2rc1.dev43+g595562651) with config: model='Qwen/Qwen3-0.6B', speculative_config=None, tokenizer='Qwen/Qwen3-0.6B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=1280, download_dir=None, load_format=dummy, tensor_parallel_size=16, pipeline_parallel_size=1, data_parallel_size=1, decode_context_parallel_size=1, dcp_comm_backend=ag_rs, disable_custom_all_reduce=True, quantization=None, quantization_config=None, enforce_eager=False, enable_return_routed_experts=False, kv_cache_dtype=auto, device_config=None, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False, enable_logging_iteration_details=False), seed=0, served_model_name=Qwen/Qwen3-0.6B, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'mode': <CompilationMode.DYNAMO_TRACE_ONCE: 2>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'openxla', 'custom_ops': ['all'], 'ir_enable_torch_wrap': False, 'splitting_ops': [], 'compile_mm_encoder': False, 'cudagraph_mm_encoder': False, 'encoder_cudagraph_token_budgets': [], 'encoder_cudagraph_max_vision_items_per_batch': 0, 'encoder_cudagraph_max_frames_per_batch': 0, 'compile_sizes': None, 'compile_ranges_endpoints': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'size_asserts': False, 'alignment_asserts': False, 'scalar_asserts': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.NONE: 0>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': None, 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': True, 'fuse_act_quant': True, 'fuse_attn_quant': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': None, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': False}, 'local_cache_dir': None, 'fast_moe_cold_start': True, 'static_all_moe_layers': []}, kernel_config=KernelConfig(ir_op_priority=IrOpPriorityConfig(rms_norm=['native']), enable_flashinfer_autotune=True, moe_backend='auto') Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/deps/src/maxtext/trainers/post_train/rl/train_rl.py", line 661, in <module> app.run(main) File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 316, in run _run_main(main, args) File "/usr/local/lib/python3.12/site-packages/absl/app.py", line 261, in _run_main sys.exit(main(argv)) ^^^^^^^^^^ File "/deps/src/maxtext/trainers/post_train/rl/train_rl.py", line 657, in main rl_train(argv, kwargs) File "/deps/src/maxtext/trainers/post_train/rl/train_rl.py", line 
583, in rl_train rl_cluster, rl_trainer, _ = create_rl_components( ^^^^^^^^^^^^^^^^^^^^^ File "/deps/src/maxtext/trainers/post_train/rl/train_rl.py", line 441, in create_rl_components rl_cluster = rl_cluster_lib.RLCluster( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tunix/rl/rl_cluster.py", line 250, in __init__ self._init_cluster() File "/usr/local/lib/python3.12/site-packages/tunix/rl/rl_cluster.py", line 415, in _init_cluster self._rollout = vllm_rollout.VllmRollout( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tunix/rl/rollout/vllm_rollout.py", line 43, in __init__ self._sampler = vllm_sampler.VllmSampler( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tunix/generate/vllm_sampler.py", line 156, in __init__ self.llm = LLM(**self.args) ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 381, in __init__ self.llm_engine = LLMEngine.from_engine_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 171, in from_engine_args return cls( ^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 105, in __init__ self.engine_core = EngineCoreClient.make_client( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 103, in make_client return InprocClient(vllm_config, executor_class, log_stats) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 285, in __init__ self.engine_core = EngineCore(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 116, in __init__ self.model_executor = executor_class(vllm_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/tracing/otel.py", line 178, in sync_wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 109, in __init__ self._init_executor() File "/usr/local/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py", line 47, in _init_executor self.driver_worker.init_device() File "/usr/local/lib/python3.12/site-packages/vllm/v1/worker/worker_base.py", line 317, in init_device self.worker.init_device() # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/tpu_inference/worker/tpu_worker.py", line 241, in init_device device = device_dict[device_index] ~~~~~~~~~~~^^^^^^^^^^^^^^ KeyError: 0 XPK End: Fri Apr 24 07:34:13 UTC 2026 EXIT_CODE=1
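The fatal error is in the last frame: TpuWorker.init_device resolves this worker's device with device_dict[device_index], index 0 is missing, and engine construction dies with KeyError: 0, so the job exits with EXIT_CODE=1. A defensive sketch of that lookup; keying device_dict by jax.Device.id is an assumption, only the failing subscript comes from the trace:

    import jax

    def resolve_device(device_index: int) -> jax.Device:
        # Assumed shape of the mapping that raises KeyError: 0 above.
        device_dict = {d.id: d for d in jax.local_devices()}
        try:
            return device_dict[device_index]
        except KeyError:
            # Surface what *is* visible: a miss like this typically means the
            # requested index refers to a device this process cannot see, e.g.
            # global ids used against a host-local device map.
            raise RuntimeError(
                f"device index {device_index} not among local devices "
                f"{sorted(device_dict)}"
            ) from None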