Dataset Viewer

| timestamp | end_timestamp | stage_name | stage_number | level | message | stdout_content | stderr_content | experiment_name | elapsed_time_seconds | stage_complete |
|---|---|---|---|---|---|---|---|---|---|---|
| string | string | string | int64 | string | string | string | string | string | float64 | bool |

Row 1:

- timestamp: 2025-11-23T19:35:33.935920
- end_timestamp: 2025-11-23T19:39:15.499867
- stage_name: verl_rl
- stage_number: 1
- level: INFO
- message: Complete log capture for stage: verl_rl
- stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Starting checkpoint monitoring for intermediate uploads...
[INFO] Intermediate checkpoint upload enabled
[DEBUG] Found 0 global_step directories
[DEBUG] Running verl command:
python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=25 trainer.test_freq=25 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=256 actor_rollout_ref.actor.ppo_mini_batch_size=32 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=1 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=2 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=2 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=True actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=16384 actor_rollout_ref.rollout.max_num_seqs=2048 
hydra.run.dir=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/hydra hydra.output_subdir=null hydra.job.chdir=False actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_prompt_length=512 data.max_response_length=4096 actor_rollout_ref.model.path=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=1123_newmodels__olmo7b_sft_ours_ct3arg_rl data.train_files=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/data/train.parquet data.val_files=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/data/test.parquet custom_reward_function.path=/scratch/yl11330/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=1 trainer.n_gpus_per_node=4
[DEBUG] Found 0 global_step directories
2025-11-23 19:36:59,382 INFO worker.py:1918 -- Started a local Ray instance. View the dashboard at 127.0.0.1:8265
[DEBUG] Found 0 global_step directories
(TaskRunner pid=1660693) TaskRunner hostname: gh123.hpc.nyu.edu, PID: 1660693
(TaskRunner pid=1660693) {'actor_rollout_ref': {'actor': {'checkpoint': {'load_contents': ['model',
(TaskRunner pid=1660693) 'optimizer',
(TaskRunner pid=1660693) 'extra'],
(TaskRunner pid=1660693) 'save_contents': ['model',
(TaskRunner pid=1660693) 'optimizer',
(TaskRunner pid=1660693) 'extra']},
(TaskRunner pid=1660693) 'clip_ratio': 0.2,
(TaskRunner pid=1660693) 'clip_ratio_c': 3.0,
(TaskRunner pid=1660693) 'clip_ratio_high': 0.2,
(TaskRunner pid=1660693) 'clip_ratio_low': 0.2,
(TaskRunner pid=1660693) 'entropy_checkpointing': False,
(TaskRunner pid=1660693) 'entropy_coeff': 0,
(TaskRunner pid=1660693) 'entropy_from_logits_with_chunking': False,
(TaskRunner pid=1660693) 'fsdp_config': {'forward_prefetch': True,
(TaskRunner pid=1660693) 'fsdp_size': -1,
(TaskRunner pid=1660693) 'offload_policy': False,
(TaskRunner pid=1660693) 'optimizer_offload': False,
(TaskRunner pid=1660693) 'param_offload': False,
(TaskRunner pid=1660693) 'reshard_after_forward': True,
(TaskRunner pid=1660693) 'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=1660693) 'grad_clip': 1.0,
(TaskRunner pid=1660693) 'kl_loss_coef': 0.001,
(TaskRunner pid=1660693) 'kl_loss_type': 'low_var_kl',
(TaskRunner pid=1660693) 'loss_agg_mode': 'token-mean',
(TaskRunner pid=1660693) 'optim': {'lr': 1e-06,
(TaskRunner pid=1660693) 'lr_warmup_steps': -1,
(TaskRunner pid=1660693) 'lr_warmup_steps_ratio': 0.0,
(TaskRunner pid=1660693) 'min_lr_ratio': 0.0,
(TaskRunner pid=1660693) 'num_cycles': 0.5,
(TaskRunner pid=1660693) 'total_training_steps': -1,
(TaskRunner pid=1660693) 'warmup_style': 'constant',
(TaskRunner pid=1660693) 'weight_decay': 0.01},
(TaskRunner pid=1660693) 'policy_loss': {'clip_cov_lb': 1.0,
(TaskRunner pid=1660693) 'clip_cov_ratio': 0.0002,
(TaskRunner pid=1660693) 'clip_cov_ub': 5.0,
(TaskRunner pid=1660693) 'kl_cov_ratio': 0.0002,
(TaskRunner pid=1660693) 'loss_mode': 'vanilla',
(TaskRunner pid=1660693) 'ppo_kl_coef': 0.1},
(TaskRunner pid=1660693) 'ppo_epochs': 1,
(TaskRunner pid=1660693) 'ppo_max_token_len_per_gpu': 16384,
(TaskRunner pid=1660693) 'ppo_micro_batch_size': None,
(TaskRunner pid=1660693) 'ppo_micro_batch_size_per_gpu': 1,
(TaskRunner pid=1660693) 'ppo_mini_batch_size': 32,
(TaskRunner pid=1660693) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
(TaskRunner pid=1660693) 'all_ranks': False,
(TaskRunner pid=1660693) 'discrete': False,
(TaskRunner pid=1660693) 'ranks': []},
(TaskRunner pid=1660693) 'shuffle': False,
(TaskRunner pid=1660693) 'strategy': 'fsdp2',
(TaskRunner pid=1660693) 'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=1660693) 'use_dynamic_bsz': False,
(TaskRunner pid=1660693) 'use_kl_loss': False,
(TaskRunner pid=1660693) 'use_torch_compile': True},
(TaskRunner pid=1660693) 'hybrid_engine': True,
(TaskRunner pid=1660693) 'model': {'custom_chat_template': None,
(TaskRunner pid=1660693) 'enable_activation_offload': True,
(TaskRunner pid=1660693) 'enable_gradient_checkpointing': True,
(TaskRunner pid=1660693) 'exclude_modules': None,
(TaskRunner pid=1660693) 'external_lib': None,
(TaskRunner pid=1660693) 'fused_kernel_options': {'impl_backend': 'torch'},
(TaskRunner pid=1660693) 'lora_alpha': 16,
(TaskRunner pid=1660693) 'lora_rank': 0,
(TaskRunner pid=1660693) 'override_config': {},
(TaskRunner pid=1660693) 'path': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft',
(TaskRunner pid=1660693) 'target_modules': 'all-linear',
(TaskRunner pid=1660693) 'trust_remote_code': True,
(TaskRunner pid=1660693) 'use_fused_kernels': False,
(TaskRunner pid=1660693) 'use_liger': False,
(TaskRunner pid=1660693) 'use_remove_padding': True,
(TaskRunner pid=1660693) 'use_shm': False},
(TaskRunner pid=1660693) 'ref': {'entropy_checkpointing': False,
(TaskRunner pid=1660693) 'entropy_from_logits_with_chunking': False,
(TaskRunner pid=1660693) 'fsdp_config': {'forward_prefetch': True,
(TaskRunner pid=1660693) 'param_offload': False,
(TaskRunner pid=1660693) 'reshard_after_forward': True,
(TaskRunner pid=1660693) 'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=1660693) 'log_prob_max_token_len_per_gpu': 16384,
(TaskRunner pid=1660693) 'log_prob_micro_batch_size': None,
(TaskRunner pid=1660693) 'log_prob_micro_batch_size_per_gpu': 2,
(TaskRunner pid=1660693) 'log_prob_use_dynamic_bsz': False,
(TaskRunner pid=1660693) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
(TaskRunner pid=1660693) 'all_ranks': False,
(TaskRunner pid=1660693) 'discrete': False,
(TaskRunner pid=1660693) 'ranks': []},
(TaskRunner pid=1660693) 'strategy': 'fsdp2',
(TaskRunner pid=1660693) 'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=1660693) 'use_torch_compile': True},
(TaskRunner pid=1660693) 'rollout': {'agent': {'custom_async_server': {'name': None,
(TaskRunner pid=1660693) 'path': None},
(TaskRunner pid=1660693) 'num_workers': 8},
(TaskRunner pid=1660693) 'calculate_log_probs': False,
(TaskRunner pid=1660693) 'disable_log_stats': True,
(TaskRunner pid=1660693) 'do_sample': True,
(TaskRunner pid=1660693) 'dtype': 'bfloat16',
(TaskRunner pid=1660693) 'enable_chunked_prefill': True,
(TaskRunner pid=1660693) 'enforce_eager': True,
(TaskRunner pid=1660693) 'engine_kwargs': {'sglang': {'attention_backend': None},
(TaskRunner pid=1660693) 'vllm': {'disable_mm_preprocessor_cache': False,
(TaskRunner pid=1660693) 'swap_space': None}},
(TaskRunner pid=1660693) 'free_cache_engine': True,
(TaskRunner pid=1660693) 'gpu_memory_utilization': 0.8,
(TaskRunner pid=1660693) 'ignore_eos': False,
(TaskRunner pid=1660693) 'layered_summon': False,
(TaskRunner pid=1660693) 'load_format': 'dummy_dtensor',
(TaskRunner pid=1660693) 'log_prob_max_token_len_per_gpu': 16384,
(TaskRunner pid=1660693) 'log_prob_micro_batch_size': None,
(TaskRunner pid=1660693) 'log_prob_micro_batch_size_per_gpu': 2,
(TaskRunner pid=1660693) 'log_prob_use_dynamic_bsz': False,
(TaskRunner pid=1660693) 'max_model_len': None,
(TaskRunner pid=1660693) 'max_num_batched_tokens': 16384,
(TaskRunner pid=1660693) 'max_num_seqs': 2048,
(TaskRunner pid=1660693) 'mode': 'sync',
(TaskRunner pid=1660693) 'multi_stage_wake_up': False,
(TaskRunner pid=1660693) 'multi_turn': {'completion_callback': None,
(TaskRunner pid=1660693) 'enable': False,
(TaskRunner pid=1660693) 'format': 'hermes',
(TaskRunner pid=1660693) 'interaction_config_path': None,
(TaskRunner pid=1660693) 'max_assistant_turns': None,
(TaskRunner pid=1660693) 'max_parallel_calls': 1,
(TaskRunner pid=1660693) 'max_tool_response_length': 256,
(TaskRunner pid=1660693) 'max_user_turns': None,
(TaskRunner pid=1660693) 'tokenization_sanity_check_mode': 'strict',
(TaskRunner pid=1660693) 'tool_config_path': None,
(TaskRunner pid=1660693) 'tool_response_truncate_side': 'middle',
(TaskRunner pid=1660693) 'use_inference_chat_template': False},
(TaskRunner pid=1660693) 'n': 16,
(TaskRunner pid=1660693) 'name': 'vllm',
(TaskRunner pid=1660693) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
(TaskRunner pid=1660693) 'all_ranks': False,
(TaskRunner pid=1660693) 'discrete': False,
(TaskRunner pid=1660693) 'ranks': []},
(TaskRunner pid=1660693) 'prompt_length': 512,
(TaskRunner pid=1660693) 'response_length': 4096,
(TaskRunner pid=1660693) 'temperature': 1.0,
(TaskRunner pid=1660693) 'tensor_model_parallel_size': 1,
(TaskRunner pid=1660693) 'top_k': -1,
(TaskRunner pid=1660693) 'top_p': 1,
(TaskRunner pid=1660693) 'val_kwargs': {'do_sample': False,
(TaskRunner pid=1660693) 'n': 1,
(TaskRunner pid=1660693) 'temperature': 0,
(TaskRunner pid=1660693) 'top_k': -1,
(TaskRunner pid=1660693) 'top_p': 1.0}}},
(TaskRunner pid=1660693) 'algorithm': {'_target_': 'verl.trainer.config.AlgoConfig',
(TaskRunner pid=1660693) 'adv_estimator': 'grpo',
(TaskRunner pid=1660693) 'gamma': 1.0,
(TaskRunner pid=1660693) 'kl_ctrl': {'_target_': 'verl.trainer.config.KLControlConfig',
(TaskRunner pid=1660693) 'horizon': 10000,
(TaskRunner pid=1660693) 'kl_coef': 0.001,
(TaskRunner pid=1660693) 'target_kl': 0.1,
(TaskRunner pid=1660693) 'type': 'fixed'},
(TaskRunner pid=1660693) 'kl_penalty': 'kl',
(TaskRunner pid=1660693) 'lam': 1.0,
(TaskRunner pid=1660693) 'norm_adv_by_std_in_grpo': True,
(TaskRunner pid=1660693) 'pf_ppo': {'_target_': 'verl.trainer.config.PFPPOConfig',
(TaskRunner pid=1660693) 'reweight_method': 'pow',
(TaskRunner pid=1660693) 'weight_pow': 2.0},
(TaskRunner pid=1660693) 'use_kl_in_reward': False,
(TaskRunner pid=1660693) 'use_pf_ppo': False},
(TaskRunner pid=1660693) 'critic': {'checkpoint': {'load_contents': ['model', 'optimizer', 'extra'],
(TaskRunner pid=1660693) 'save_contents': ['model', 'optimizer', 'extra']},
(TaskRunner pid=1660693) 'cliprange_value': 0.5,
(TaskRunner pid=1660693) 'forward_max_token_len_per_gpu': 32768,
(TaskRunner pid=1660693) 'forward_micro_batch_size': None,
(TaskRunner pid=1660693) 'forward_micro_batch_size_per_gpu': 1,
(TaskRunner pid=1660693) 'grad_clip': 1.0,
(TaskRunner pid=1660693) 'loss_agg_mode': 'token-mean',
(TaskRunner pid=1660693) 'model': {'enable_activation_offload': False,
(TaskRunner pid=1660693) 'enable_gradient_checkpointing': True,
(TaskRunner pid=1660693) 'external_lib': None,
(TaskRunner pid=1660693) 'fsdp_config': {'forward_prefetch': False,
(TaskRunner pid=1660693) 'fsdp_size': -1,
(TaskRunner pid=1660693) 'offload_policy': False,
(TaskRunner pid=1660693) 'optimizer_offload': False,
(TaskRunner pid=1660693) 'param_offload': False,
(TaskRunner pid=1660693) 'reshard_after_forward': True,
(TaskRunner pid=1660693) 'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=1660693) 'lora_alpha': 16,
(TaskRunner pid=1660693) 'lora_rank': 0,
(TaskRunner pid=1660693) 'override_config': {},
(TaskRunner pid=1660693) 'path': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft',
(TaskRunner pid=1660693) 'target_modules': 'all-linear',
(TaskRunner pid=1660693) 'tokenizer_path': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft',
(TaskRunner pid=1660693) 'trust_remote_code': True,
(TaskRunner pid=1660693) 'use_remove_padding': False,
(TaskRunner pid=1660693) 'use_shm': False},
(TaskRunner pid=1660693) 'optim': {'lr': 1e-05,
(TaskRunner pid=1660693) 'lr_warmup_steps_ratio': 0.0,
(TaskRunner pid=1660693) 'min_lr_ratio': None,
(TaskRunner pid=1660693) 'total_training_steps': -1,
(TaskRunner pid=1660693) 'warmup_style': 'constant',
(TaskRunner pid=1660693) 'weight_decay': 0.01},
(TaskRunner pid=1660693) 'ppo_epochs': 1,
(TaskRunner pid=1660693) 'ppo_max_token_len_per_gpu': 32768,
(TaskRunner pid=1660693) 'ppo_micro_batch_size': None,
(TaskRunner pid=1660693) 'ppo_micro_batch_size_per_gpu': 1,
(TaskRunner pid=1660693) 'ppo_mini_batch_size': 32,
(TaskRunner pid=1660693) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
(TaskRunner pid=1660693) 'all_ranks': False,
(TaskRunner pid=1660693) 'discrete': False,
(TaskRunner pid=1660693) 'ranks': []},
(TaskRunner pid=1660693) 'rollout_n': 16,
(TaskRunner pid=1660693) 'shuffle': False,
(TaskRunner pid=1660693) 'strategy': 'fsdp2',
(TaskRunner pid=1660693) 'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=1660693) 'use_dynamic_bsz': False},
(TaskRunner pid=1660693) 'custom_reward_function': {'name': 'compute_score_batch',
(TaskRunner pid=1660693) 'path': '/scratch/yl11330/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py',
(TaskRunner pid=1660693) 'reward_kwargs': {'complex_format_reward_weight': 0.0,
(TaskRunner pid=1660693) 'final_answer_in_samples_reward_weight': 0.0,
(TaskRunner pid=1660693) 'reflection_correctness_reward_weight': 0.0,
(TaskRunner pid=1660693) 'response_or_sample': 'sample',
(TaskRunner pid=1660693) 'reward_max': 10.0,
(TaskRunner pid=1660693) 'reward_min': 0.0,
(TaskRunner pid=1660693) 'sample_correctness_reward_weight': 0.0,
(TaskRunner pid=1660693) 'sample_count_penalty_weight': 0.0,
(TaskRunner pid=1660693) 'similarity_penalty_weight': 0.0,
(TaskRunner pid=1660693) 'simple_format_reward_weight': 0.0,
(TaskRunner pid=1660693) 'transition_penalty_weight': 0.0,
(TaskRunner pid=1660693) 'verdict_correctness_reward_weight': 0.0}},
(TaskRunner pid=1660693) 'data': {'custom_cls': {'name': None, 'path': None},
(TaskRunner pid=1660693) 'dataloader_num_workers': 8,
(TaskRunner pid=1660693) 'filter_overlong_prompts': False,
(TaskRunner pid=1660693) 'filter_overlong_prompts_workers': 1,
(TaskRunner pid=1660693) 'image_key': 'images',
(TaskRunner pid=1660693) 'max_prompt_length': 512,
(TaskRunner pid=1660693) 'max_response_length': 4096,
(TaskRunner pid=1660693) 'prompt_key': 'prompt',
(TaskRunner pid=1660693) 'return_full_prompt': False,
(TaskRunner pid=1660693) 'return_multi_modal_inputs': True,
(TaskRunner pid=1660693) 'return_raw_chat': False,
(TaskRunner pid=1660693) 'return_raw_input_ids': False,
(TaskRunner pid=1660693) 'reward_fn_key': 'data_source',
(TaskRunner pid=1660693) 'sampler': {'class_name': None, 'class_path': None},
(TaskRunner pid=1660693) 'shuffle': True,
(TaskRunner pid=1660693) 'tokenizer': None,
(TaskRunner pid=1660693) 'train_batch_size': 256,
(TaskRunner pid=1660693) 'train_files': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/data/train.parquet',
(TaskRunner pid=1660693) 'truncation': 'error',
(TaskRunner pid=1660693) 'trust_remote_code': False,
(TaskRunner pid=1660693) 'use_shm': False,
(TaskRunner pid=1660693) 'val_batch_size': None,
(TaskRunner pid=1660693) 'val_files': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/data/test.parquet',
(TaskRunner pid=1660693) 'validation_shuffle': False,
(TaskRunner pid=1660693) 'video_key': 'videos'},
(TaskRunner pid=1660693) 'ray_init': {'num_cpus': None, 'timeline_json_file': None},
(TaskRunner pid=1660693) 'reward_model': {'enable': False,
(TaskRunner pid=1660693) 'forward_max_token_len_per_gpu': 32768,
(TaskRunner pid=1660693) 'launch_reward_fn_async': True,
(TaskRunner pid=1660693) 'max_length': None,
(TaskRunner pid=1660693) 'micro_batch_size': None,
(TaskRunner pid=1660693) 'micro_batch_size_per_gpu': None,
(TaskRunner pid=1660693) 'model': {'external_lib': None,
(TaskRunner pid=1660693) 'fsdp_config': {'forward_prefetch': True,
(TaskRunner pid=1660693) 'fsdp_size': -1,
(TaskRunner pid=1660693) 'param_offload': False,
(TaskRunner pid=1660693) 'reshard_after_forward': True,
(TaskRunner pid=1660693) 'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=1660693) 'input_tokenizer': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft',
(TaskRunner pid=1660693) 'path': '~/models/FsfairX-LLaMA3-RM-v0.1',
(TaskRunner pid=1660693) 'trust_remote_code': False,
(TaskRunner pid=1660693) 'use_fused_kernels': False,
(TaskRunner pid=1660693) 'use_remove_padding': False,
(TaskRunner pid=1660693) 'use_shm': False},
(TaskRunner pid=1660693) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
(TaskRunner pid=1660693) 'all_ranks': False,
(TaskRunner pid=1660693) 'discrete': False,
(TaskRunner pid=1660693) 'ranks': []},
(TaskRunner pid=1660693) 'reward_manager': 'batch',
(TaskRunner pid=1660693) 'sandbox_fusion': {'max_concurrent': 64,
(TaskRunner pid=1660693) 'memory_limit_mb': 1024,
(TaskRunner pid=1660693) 'url': None},
(TaskRunner pid=1660693) 'strategy': 'fsdp2',
(TaskRunner pid=1660693) 'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=1660693) 'use_dynamic_bsz': False},
(TaskRunner pid=1660693) 'trainer': {'balance_batch': True,
(TaskRunner pid=1660693) 'controller_nsight_options': {'cuda-graph-trace': 'graph',
(TaskRunner pid=1660693) 'cuda-memory-usage': 'true',
(TaskRunner pid=1660693) 'trace': 'cuda,nvtx,cublas,ucx'},
(TaskRunner pid=1660693) 'critic_warmup': 0,
(TaskRunner pid=1660693) 'default_hdfs_dir': None,
(TaskRunner pid=1660693) 'default_local_dir': '/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/checkpoints',
(TaskRunner pid=1660693) 'del_local_ckpt_after_load': False,
(TaskRunner pid=1660693) 'device': 'cuda',
(TaskRunner pid=1660693) 'esi_redundant_time': 0,
(TaskRunner pid=1660693) 'experiment_name': '1123_newmodels__olmo7b_sft_ours_ct3arg_rl',
(TaskRunner pid=1660693) 'log_val_generations': 0,
(TaskRunner pid=1660693) 'logger': ['console', 'wandb'],
(TaskRunner pid=1660693) 'max_actor_ckpt_to_keep': None,
(TaskRunner pid=1660693) 'max_critic_ckpt_to_keep': None,
(TaskRunner pid=1660693) 'n_gpus_per_node': 4,
(TaskRunner pid=1660693) 'nnodes': 1,
(TaskRunner pid=1660693) 'profile_steps': None,
(TaskRunner pid=1660693) 'project_name': 'jackrl',
(TaskRunner pid=1660693) 'ray_wait_register_center_timeout': 300,
(TaskRunner pid=1660693) 'resume_from_path': None,
(TaskRunner pid=1660693) 'resume_mode': 'auto',
(TaskRunner pid=1660693) 'rollout_data_dir': None,
(TaskRunner pid=1660693) 'save_freq': 25,
(TaskRunner pid=1660693) 'test_freq': 25,
(TaskRunner pid=1660693) 'total_epochs': 50,
(TaskRunner pid=1660693) 'total_training_steps': None,
(TaskRunner pid=1660693) 'val_before_train': True,
(TaskRunner pid=1660693) 'val_only': False,
(TaskRunner pid=1660693) 'validation_data_dir': None,
(TaskRunner pid=1660693) 'worker_nsight_options': {'capture-range': 'cudaProfilerApi',
(TaskRunner pid=1660693) 'capture-range-end': None,
(TaskRunner pid=1660693) 'cuda-graph-trace': 'graph',
(TaskRunner pid=1660693) 'cuda-memory-usage': 'true',
(TaskRunner pid=1660693) 'kill': 'none',
(TaskRunner pid=1660693) 'trace': 'cuda,nvtx,cublas,ucx'}}}
(TaskRunner pid=1660693) /scratch/yl11330/skill-factory/thirdparty/verl/verl/utils/tokenizer.py:83: UserWarning: Failed to create processor: The checkpoint you are trying to load has model type `olmo3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
(TaskRunner pid=1660693)
(TaskRunner pid=1660693) You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`. This may affect multimodal processing
(TaskRunner pid=1660693) warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1)
[DEBUG] Found 0 global_step directories
(TaskRunner pid=1660693) Registered source: gpqa
(TaskRunner pid=1660693) Registered source: aime
(TaskRunner pid=1660693) Registered source: amc
(TaskRunner pid=1660693) Registered source: longmult
(TaskRunner pid=1660693) Registered source: countdown
(TaskRunner pid=1660693) Registered source: gsm8k
(TaskRunner pid=1660693) Registered source: arc
(TaskRunner pid=1660693) Registered source: arc_challenge
(TaskRunner pid=1660693) Registered source: arc_easy
(TaskRunner pid=1660693) Registered source: piqa
(TaskRunner pid=1660693) Registered source: mmlu
(TaskRunner pid=1660693) Registered source: mmlu_pro
(TaskRunner pid=1660693) Registered source: csqa
(TaskRunner pid=1660693) Registered source: social_iqa
(TaskRunner pid=1660693) Registered source: strategy_qa
(TaskRunner pid=1660693) Registered source: winogrande
(TaskRunner pid=1660693) Registered source: bbh
(TaskRunner pid=1660693) Registered source: openthoughts
(TaskRunner pid=1660693) Registered source: letter_countdown
(TaskRunner pid=1660693) Registered source: acronym
(TaskRunner pid=1660693) Registered source: math500
(TaskRunner pid=1660693) using customized reward function 'compute_score_batch' from '/scratch/yl11330/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py'
(TaskRunner pid=1660693) using customized reward function 'compute_score_batch' from '/scratch/yl11330/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py'
(TaskRunner pid=1660693) Using dataset class: RLHFDataset
(TaskRunner pid=1660693) Generating train split: 0 examples [00:00, ? examples/s]
(TaskRunner pid=1660693) Generating train split: 1000 examples [00:00, 2996.65 examples/s]
(TaskRunner pid=1660693) Generating train split: 1000 examples [00:00, 1481.58 examples/s]
(TaskRunner pid=1660693) dataset len: 1000
(TaskRunner pid=1660693) Using dataset class: RLHFDataset
(TaskRunner pid=1660693) Generating train split: 0 examples [00:00, ? examples/s]
(TaskRunner pid=1660693) dataset len: 3291
(TaskRunner pid=1660693) Using critic: False
(TaskRunner pid=1660693) [validate_config] All configuration checks passed successfully!
(TaskRunner pid=1660693) Generating train split: 3291 examples [00:00, 32136.71 examples/s]
(TaskRunner pid=1660693) Size of train dataloader: 3, Size of val dataloader: 1
(TaskRunner pid=1660693) Total training steps: 150
(TaskRunner pid=1660693) {'b0ce2fa33dfad2d75c6db9a36cbb9b793e520e69e4a136148d74d553': {'CPU': 127.0,
(TaskRunner pid=1660693) 'GPU': 4.0,
(TaskRunner pid=1660693) 'accelerator_type:H200': 1.0,
(TaskRunner pid=1660693) 'memory': 1938776821760.0,
(TaskRunner pid=1660693) 'node:10.32.37.27': 1.0,
(TaskRunner pid=1660693) 'node:__internal_head__': 1.0,
(TaskRunner pid=1660693) 'object_store_memory': 200000000000.0}}
(TaskRunner pid=1660693) ('Resource pool to cls: {<verl.single_controller.ray.base.RayResourcePool '
(TaskRunner pid=1660693) "object at 0x147e65de4190>: {'actor_rollout': "
(TaskRunner pid=1660693) '<verl.single_controller.ray.base.RayClassWithInitArgs object at '
(TaskRunner pid=1660693) '0x147e65de41c0>}}')
(TaskRunner pid=1660693) colocated worker base class <class 'verl.single_controller.base.worker.Worker'>
(TaskRunner pid=1660693) DeprecationWarning: `ray.state.available_resources_per_node` is a private attribute and access will be removed in a future Ray version.
(TaskRunner pid=1660693) WARNING:2025-11-23 19:38:00,839:Waiting for register center actor 5Qsfyc_register_center to be ready. Elapsed time: 0 seconds out of 300 seconds.
[DEBUG] Found 0 global_step directories
[DEBUG] Found 0 global_step directories
(WorkerDict pid=1665563) /scratch/yl11330/skill-factory/thirdparty/verl/verl/utils/tokenizer.py:83: UserWarning: Failed to create processor: The checkpoint you are trying to load has model type `olmo3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
(WorkerDict pid=1665563)
(WorkerDict pid=1665563) You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`. This may affect multimodal processing
(WorkerDict pid=1665563) warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1)
Error executing job with overrides: ['trainer.total_epochs=50', 'actor_rollout_ref.actor.optim.lr=1e-06', 'trainer.save_freq=25', 'trainer.test_freq=25', 'trainer.val_before_train=True', 'algorithm.adv_estimator=grpo', 'actor_rollout_ref.rollout.n=16', 'data.train_batch_size=256', 'actor_rollout_ref.actor.ppo_mini_batch_size=32', 'actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=1', 'actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=2', 'actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=2', 'custom_reward_function.reward_kwargs.response_or_sample=sample', 'custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0', 'custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0', 'custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0', 'custom_reward_function.reward_kwargs.transition_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.reward_min=0.0', 'custom_reward_function.reward_kwargs.reward_max=10.0', 'reward_model.reward_manager=batch', 'custom_reward_function.name=compute_score_batch', 'reward_model.launch_reward_fn_async=True', 'actor_rollout_ref.model.enable_gradient_checkpointing=True', 'actor_rollout_ref.model.enable_activation_offload=True', 'actor_rollout_ref.rollout.gpu_memory_utilization=0.8', 'actor_rollout_ref.model.use_remove_padding=True', 'actor_rollout_ref.actor.strategy=fsdp2', 'actor_rollout_ref.actor.fsdp_config.forward_prefetch=True', 'actor_rollout_ref.ref.fsdp_config.forward_prefetch=True', 'reward_model.model.fsdp_config.forward_prefetch=True', 
'actor_rollout_ref.rollout.max_num_batched_tokens=16384', 'actor_rollout_ref.rollout.max_num_seqs=2048', 'actor_rollout_ref.rollout.tensor_model_parallel_size=1', 'data.max_prompt_length=512', 'data.max_response_length=4096', 'actor_rollout_ref.model.path=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft', 'actor_rollout_ref.rollout.dtype=bfloat16', 'critic.optim.lr=1e-05', 'critic.model.path=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/prefetched_models/SkillFactory__M_Olmo_7B_3args_ours_sft_sft', 'critic.ppo_micro_batch_size_per_gpu=1', 'algorithm.kl_ctrl.kl_coef=0.001', 'trainer.logger=[console,wandb]', 'trainer.project_name=jackrl', 'trainer.experiment_name=1123_newmodels__olmo7b_sft_ours_ct3arg_rl', 'data.train_files=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/data/train.parquet', 'data.val_files=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/data/test.parquet', 'custom_reward_function.path=/scratch/yl11330/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py', 'trainer.default_local_dir=/scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/checkpoints', 'actor_rollout_ref.model.trust_remote_code=True', 'critic.model.trust_remote_code=True', 'trainer.nnodes=1', 'trainer.n_gpus_per_node=4']
Traceback (most recent call last):
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 39, in main
run_ppo(config)
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 69, in run_ppo
ray.get(runner.run.remote(config))
File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 22, in auto_init_wrapper
return fn(*args, **kwargs)
File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 104, in wrapper
return func(*args, **kwargs)
File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/ray/_private/worker.py", line 2858, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/ray/_private/worker.py", line 958, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::TaskRunner.run() (pid=1660693, ip=10.32.37.27, actor_id=60e3084f6625830b2a80b3df01000000, repr=<main_ppo.TaskRunner object at 0x147eb9b6b880>)
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 232, in run
trainer.init_workers()
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/trainer/ppo/ray_trainer.py", line 931, in init_workers
self.actor_rollout_wg.init_model()
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 51, in __call__
output = ray.get(output)
ray.exceptions.RayTaskError(ValueError): ray::WorkerDict.actor_rollout_init_model() (pid=1665563, ip=10.32.37.27, actor_id=217a0258165dbbbea5f788fd01000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x14bef12faa40>)
File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 872, in __getitem__
raise KeyError(key)
KeyError: 'olmo3'
During handling of the above exception, another exception occurred:
ray::WorkerDict.actor_rollout_init_model() (pid=1665563, ip=10.32.37.27, actor_id=217a0258165dbbbea5f788fd01000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x14bef12faa40>)
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 708, in func
return getattr(self.worker_dict[key], name)(*args, **kwargs)
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/single_controller/base/decorator.py", line 549, in inner
return func(*args, **kwargs)
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 594, in init_model
) = self._build_model_optimizer(
File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 257, in _build_model_optimizer
actor_model_config = AutoConfig.from_pretrained(
File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1172, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type `olmo3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
(TaskRunner pid=1660693) Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::WorkerDict.actor_rollout_init_model() (pid=1665564, ip=10.32.37.27, actor_id=8e3e6ffc1444114ccfc2b36101000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x14803fdb6bf0>)
(TaskRunner pid=1660693) File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 872, in __getitem__
(TaskRunner pid=1660693) raise KeyError(key)
(TaskRunner pid=1660693) KeyError: 'olmo3'
(TaskRunner pid=1660693)
(TaskRunner pid=1660693) During handling of the above exception, another exception occurred:
(TaskRunner pid=1660693)
(TaskRunner pid=1660693) ray::WorkerDict.actor_rollout_init_model() (pid=1665564, ip=10.32.37.27, actor_id=8e3e6ffc1444114ccfc2b36101000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x14803fdb6bf0>)
(TaskRunner pid=1660693) File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 708, in func
(TaskRunner pid=1660693) return getattr(self.worker_dict[key], name)(*args, **kwargs)
(TaskRunner pid=1660693) File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/single_controller/base/decorator.py", line 549, in inner
(TaskRunner pid=1660693) return func(*args, **kwargs)
(TaskRunner pid=1660693) File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 594, in init_model
(TaskRunner pid=1660693) ) = self._build_model_optimizer(
(TaskRunner pid=1660693) File "/scratch/yl11330/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 257, in _build_model_optimizer
(TaskRunner pid=1660693) actor_model_config = AutoConfig.from_pretrained(
(TaskRunner pid=1660693) File "/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1172, in from_pretrained
(TaskRunner pid=1660693) raise ValueError(
(TaskRunner pid=1660693) ValueError: The checkpoint you are trying to load has model type `olmo3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
(TaskRunner pid=1660693)
(TaskRunner pid=1660693) You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
(WorkerDict pid=1665564)
(WorkerDict pid=1665565)
(WorkerDict pid=1665565) /scratch/yl11330/skill-factory/thirdparty/verl/verl/utils/tokenizer.py:83: UserWarning: Failed to create processor: The checkpoint you are trying to load has model type `olmo3` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(WorkerDict pid=1665565) You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`. This may affect multimodal processing [repeated 2x across cluster]
(WorkerDict pid=1665565) warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1) [repeated 2x across cluster]
[INFO] Extracting model from VeRL checkpoint at /scratch/yl11330/skill-factory/workflow_out/1123_newmodels__olmo7b_sft_ours_ct3arg/verl/checkpoints
[ERROR] No global_step directories found
EXTRACT OUT: False
[ERROR] Stage error: RuntimeError: Model extraction failed
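The stage failed before any checkpoint was written because `AutoConfig.from_pretrained` dispatches on the `model_type` field in the checkpoint's config.json, and this transformers build has no entry for `olmo3`. A minimal stdlib-only sketch of inspecting that dispatch field (the helper name and the stand-in directory are hypothetical; the real checkpoint path is cluster-local):

```python
import json
import os
import tempfile

# Hypothetical helper: read the `model_type` field from a checkpoint's
# config.json -- the value AutoConfig dispatches on, and the key that
# raised KeyError: 'olmo3' in the traceback above.
def checkpoint_model_type(checkpoint_dir: str) -> str:
    with open(os.path.join(checkpoint_dir, "config.json")) as f:
        return json.load(f)["model_type"]

# Demo with a stand-in checkpoint directory.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"model_type": "olmo3"}, f)
    print(checkpoint_model_type(d))  # -> olmo3
```

If the printed model type is absent from the installed transformers' config registry, upgrading transformers (or installing from source) as the error message suggests is the usual remedy.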
Fetching 15 files: 0%| | 0/15 [00:00<?, ?it/s]/scratch/yl11330/skill-factory/penv/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`.
For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
warnings.warn(
Fetching 15 files: 100%|██████████| 15/15 [00:00<00:00, 170.21it/s]
Fetching 15 files: 0%| | 0/15 [00:00<?, ?it/s]
Fetching 15 files: 100%|██████████| 15/15 [00:00<00:00, 289.84it/s]
experiment_name: 1123_newmodels__olmo7b_sft_ours_ct3arg
elapsed_time_seconds: 221.563947
stage_complete: true