kie3-bs-plus-lora / running_log.txt
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file vocab.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/vocab.json
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file merges.txt from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/merges.txt
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/tokenizer.json
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file added_tokens.json from cache at None
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file special_tokens_map.json from cache at None
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/tokenizer_config.json
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file chat_template.jinja from cache at None
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2336 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2025-09-10 08:36:51] image_processing_base.py:378 >> loading configuration file preprocessor_config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/preprocessor_config.json
[INFO|2025-09-10 08:36:51] image_processing_base.py:378 >> loading configuration file preprocessor_config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/preprocessor_config.json
[INFO|2025-09-10 08:36:51] image_processing_base.py:423 >> Image processor Qwen2VLImageProcessorFast {
"crop_size": null,
"data_format": "channels_first",
"default_to_square": true,
"device": null,
"disable_grouping": null,
"do_center_crop": null,
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessorFast",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"input_data_format": null,
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"return_tensors": null,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
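For reference, the pixel bounds above work out to 56x56 (min_pixels = 3136) and 3584x3584 (max_pixels = 12845056), with image sides snapped to multiples of patch_size * merge_size = 28. Below is a minimal sketch of that resizing rule, assuming the usual Qwen2-VL smart-resize behaviour; the library's exact rounding may differ slightly.

import math

def approx_smart_resize(height, width, factor=28, min_pixels=3136, max_pixels=12845056):
    # Snap both sides to the 28-pixel grid implied by patch_size=14 and merge_size=2.
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    # Scale down if the image exceeds max_pixels, or up if it falls below min_pixels.
    if h * w > max_pixels:
        scale = math.sqrt(h * w / max_pixels)
        h = math.floor(h / scale / factor) * factor
        w = math.floor(w / scale / factor) * factor
    elif h * w < min_pixels:
        scale = math.sqrt(min_pixels / (h * w))
        h = math.ceil(h * scale / factor) * factor
        w = math.ceil(w * scale / factor) * factor
    return h, w

print(approx_smart_resize(1080, 1920))  # a 1080p document image lands well within the bounds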
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file vocab.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/vocab.json
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file merges.txt from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/merges.txt
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/tokenizer.json
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file added_tokens.json from cache at None
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file special_tokens_map.json from cache at None
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/tokenizer_config.json
[INFO|2025-09-10 08:36:51] tokenization_utils_base.py:2067 >> loading file chat_template.jinja from cache at None
[INFO|2025-09-10 08:36:52] tokenization_utils_base.py:2336 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[WARNING|2025-09-10 08:36:52] logging.py:328 >> You have a video processor config saved in the `preprocessor.json` file, which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file, or load and re-save the processor, which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
[INFO|2025-09-10 08:36:52] video_processing_utils.py:720 >> loading configuration file preprocessor_config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/preprocessor_config.json
[INFO|2025-09-10 08:36:52] video_processing_utils.py:764 >> Video processor Qwen2VLVideoProcessor {
"crop_size": null,
"data_format": "channels_first",
"default_to_square": true,
"device": null,
"do_center_crop": null,
"do_convert_rgb": true,
"do_normalize": true,
"do_pad": null,
"do_rescale": true,
"do_resize": true,
"do_sample_frames": false,
"fps": null,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"input_data_format": null,
"max_frames": 768,
"max_pixels": 12845056,
"merge_size": 2,
"min_frames": 4,
"min_pixels": 3136,
"num_frames": null,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"size_divisor": null,
"temporal_patch_size": 2,
"video_metadata": null,
"video_processor_type": "Qwen2VLVideoProcessor"
}
[INFO|2025-09-10 08:36:53] processing_utils.py:1117 >> Processor Qwen2_5_VLProcessor:
- image_processor: Qwen2VLImageProcessorFast {
"crop_size": null,
"data_format": "channels_first",
"default_to_square": true,
"device": null,
"disable_grouping": null,
"do_center_crop": null,
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "Qwen2VLImageProcessorFast",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"input_data_format": null,
"max_pixels": 12845056,
"merge_size": 2,
"min_pixels": 3136,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"return_tensors": null,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"temporal_patch_size": 2
}
- tokenizer: Qwen2TokenizerFast(name_or_path='Qwen/Qwen2.5-VL-7B-Instruct', vocab_size=151643, model_max_length=131072, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|im_end|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>', '<|object_ref_start|>', '<|object_ref_end|>', '<|box_start|>', '<|box_end|>', '<|quad_start|>', '<|quad_end|>', '<|vision_start|>', '<|vision_end|>', '<|vision_pad|>', '<|image_pad|>', '<|video_pad|>']}, clean_up_tokenization_spaces=False, added_tokens_decoder={
151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151646: AddedToken("<|object_ref_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151647: AddedToken("<|object_ref_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151648: AddedToken("<|box_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151649: AddedToken("<|box_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151650: AddedToken("<|quad_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151651: AddedToken("<|quad_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151652: AddedToken("<|vision_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151653: AddedToken("<|vision_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151654: AddedToken("<|vision_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151655: AddedToken("<|image_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151656: AddedToken("<|video_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
151657: AddedToken("<tool_call>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151658: AddedToken("</tool_call>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151659: AddedToken("<|fim_prefix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151660: AddedToken("<|fim_middle|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151661: AddedToken("<|fim_suffix|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151662: AddedToken("<|fim_pad|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151663: AddedToken("<|repo_name|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
151664: AddedToken("<|file_sep|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=False),
}
)
- video_processor: Qwen2VLVideoProcessor {
"crop_size": null,
"data_format": "channels_first",
"default_to_square": true,
"device": null,
"do_center_crop": null,
"do_convert_rgb": true,
"do_normalize": true,
"do_pad": null,
"do_rescale": true,
"do_resize": true,
"do_sample_frames": false,
"fps": null,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"input_data_format": null,
"max_frames": 768,
"max_pixels": 12845056,
"merge_size": 2,
"min_frames": 4,
"min_pixels": 3136,
"num_frames": null,
"patch_size": 14,
"processor_class": "Qwen2_5_VLProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 12845056,
"shortest_edge": 3136
},
"size_divisor": null,
"temporal_patch_size": 2,
"video_metadata": null,
"video_processor_type": "Qwen2VLVideoProcessor"
}
{
"processor_class": "Qwen2_5_VLProcessor"
}
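The assembled processor dumped above can be reproduced directly from the same checkpoint; a minimal sketch using the public transformers API:

from transformers import AutoProcessor

# Loads the Qwen2_5_VLProcessor (tokenizer plus image and video processors) shown above.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
print(type(processor).__name__)  # Qwen2_5_VLProcessor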
[INFO|2025-09-10 08:36:53] logging.py:143 >> Loading dataset BsKIE3.json...
[INFO|2025-09-10 08:40:28] configuration_utils.py:752 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/config.json
[INFO|2025-09-10 08:40:28] configuration_utils.py:817 >> Model config Qwen2_5_VLConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"text_config": {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": null,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl_text",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": null,
"torch_dtype": "bfloat16",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": null,
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
},
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.2",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"depth": 32,
"fullatt_block_indexes": [
7,
15,
23,
31
],
"hidden_act": "silu",
"hidden_size": 1280,
"in_channels": 3,
"in_chans": 3,
"initializer_range": 0.02,
"intermediate_size": 3420,
"model_type": "qwen2_5_vl",
"num_heads": 16,
"out_hidden_size": 3584,
"patch_size": 14,
"spatial_merge_size": 2,
"spatial_patch_size": 14,
"temporal_patch_size": 2,
"tokens_per_second": 2,
"window_size": 112
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
[INFO|2025-09-10 08:40:28] logging.py:143 >> KV cache is disabled during training.
[INFO|2025-09-10 08:40:28] modeling_utils.py:1309 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/model.safetensors.index.json
[INFO|2025-09-10 08:40:28] modeling_utils.py:2412 >> Instantiating Qwen2_5_VLForConditionalGeneration model under default dtype torch.bfloat16.
[INFO|2025-09-10 08:40:28] configuration_utils.py:1098 >> Generate config GenerationConfig {
"bos_token_id": 151643,
"eos_token_id": 151645,
"use_cache": false
}
[INFO|2025-09-10 08:40:28] modeling_utils.py:2412 >> Instantiating Qwen2_5_VisionTransformerPretrainedModel model under default dtype torch.bfloat16.
[INFO|2025-09-10 08:40:28] modeling_utils.py:2412 >> Instantiating Qwen2_5_VLTextModel model under default dtype torch.bfloat16.
[INFO|2025-09-10 08:40:39] modeling_utils.py:5614 >> All model checkpoint weights were used when initializing Qwen2_5_VLForConditionalGeneration.
[INFO|2025-09-10 08:40:39] modeling_utils.py:5622 >> All the weights of Qwen2_5_VLForConditionalGeneration were initialized from the model checkpoint at Qwen/Qwen2.5-VL-7B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2_5_VLForConditionalGeneration for predictions without further training.
[INFO|2025-09-10 08:40:39] configuration_utils.py:1053 >> loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/generation_config.json
[INFO|2025-09-10 08:40:39] configuration_utils.py:1098 >> Generate config GenerationConfig {
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 1e-06
}
[INFO|2025-09-10 08:40:39] logging.py:143 >> Gradient checkpointing enabled.
[INFO|2025-09-10 08:40:39] logging.py:143 >> Using torch SDPA for faster training and inference.
[INFO|2025-09-10 08:40:39] logging.py:143 >> Upcasting trainable params to float32.
[INFO|2025-09-10 08:40:39] logging.py:143 >> Fine-tuning method: LoRA
[INFO|2025-09-10 08:40:39] logging.py:143 >> Found linear modules: v_proj,o_proj,up_proj,k_proj,down_proj,q_proj,gate_proj
[INFO|2025-09-10 08:40:39] logging.py:143 >> Set vision model not trainable: ['visual.patch_embed', 'visual.blocks'].
[INFO|2025-09-10 08:40:39] logging.py:143 >> Set multimodal projector not trainable: visual.merger.
[INFO|2025-09-10 08:40:40] logging.py:143 >> trainable params: 20,185,088 || all params: 8,312,351,744 || trainable%: 0.2428
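The 20,185,088 trainable parameters are exactly what LoRA rank 8 over the seven listed projection modules yields across the 28 language-model layers (720,896 adapter parameters per layer), with the vision tower and merger frozen as logged above. A minimal PEFT sketch under that assumption follows; lora_alpha and lora_dropout are not recorded in this log and are placeholders, and the actual run was driven by LLaMA-Factory rather than this snippet.

import torch
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16
)
lora_config = LoraConfig(
    r=8,                      # rank 8 reproduces the 20,185,088 count over the text layers
    lora_alpha=16,            # assumption: not recorded in this log
    lora_dropout=0.0,         # assumption: not recorded in this log
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
# Note: LLaMA-Factory restricts the adapters to the language model; matching these module
# names on the full VL model may also wrap vision-block projections, so the exact count
# printed here can differ from the log.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()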
[INFO|2025-09-10 08:40:40] trainer.py:757 >> Using auto half precision backend
[INFO|2025-09-10 08:40:40] trainer.py:2433 >> ***** Running training *****
[INFO|2025-09-10 08:40:40] trainer.py:2434 >> Num examples = 1,495
[INFO|2025-09-10 08:40:40] trainer.py:2435 >> Num Epochs = 3
[INFO|2025-09-10 08:40:40] trainer.py:2436 >> Instantaneous batch size per device = 2
[INFO|2025-09-10 08:40:40] trainer.py:2439 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|2025-09-10 08:40:40] trainer.py:2440 >> Gradient Accumulation steps = 8
[INFO|2025-09-10 08:40:40] trainer.py:2441 >> Total optimization steps = 282
[INFO|2025-09-10 08:40:40] trainer.py:2442 >> Number of trainable parameters = 20,185,088
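The step count above follows directly from the logged sizes: 2 examples per device with 8 accumulation steps gives the effective batch of 16, and 1,495 examples over 3 epochs then need 282 optimizer updates. A quick check:

import math

num_examples, epochs = 1495, 3
per_device_batch, grad_accum = 2, 8

effective_batch = per_device_batch * grad_accum               # 16, the logged total train batch size
steps_per_epoch = math.ceil(num_examples / effective_batch)   # 94
print(steps_per_epoch * epochs)                               # 282, the logged total optimization steps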
[WARNING|2025-09-10 08:40:42] logging.py:328 >> `loss_type=None` was set in the config but it is unrecognised. Using the default loss: `ForCausalLMLoss`.
[INFO|2025-09-10 08:42:11] logging.py:143 >> {'loss': 0.2185, 'learning_rate': 4.9975e-05, 'epoch': 0.05, 'throughput': 3058.36}
[INFO|2025-09-10 08:43:45] logging.py:143 >> {'loss': 0.1329, 'learning_rate': 4.9874e-05, 'epoch': 0.11, 'throughput': 3055.20}
[INFO|2025-09-10 08:45:23] logging.py:143 >> {'loss': 0.1124, 'learning_rate': 4.9697e-05, 'epoch': 0.16, 'throughput': 3042.04}
[INFO|2025-09-10 08:46:59] logging.py:143 >> {'loss': 0.0841, 'learning_rate': 4.9442e-05, 'epoch': 0.21, 'throughput': 3046.79}
[INFO|2025-09-10 08:48:31] logging.py:143 >> {'loss': 0.0670, 'learning_rate': 4.9112e-05, 'epoch': 0.27, 'throughput': 3053.41}
[INFO|2025-09-10 08:50:03] logging.py:143 >> {'loss': 0.0633, 'learning_rate': 4.8707e-05, 'epoch': 0.32, 'throughput': 3059.39}
[INFO|2025-09-10 08:51:36] logging.py:143 >> {'loss': 0.0508, 'learning_rate': 4.8228e-05, 'epoch': 0.37, 'throughput': 3062.17}
[INFO|2025-09-10 08:53:08] logging.py:143 >> {'loss': 0.0452, 'learning_rate': 4.7677e-05, 'epoch': 0.43, 'throughput': 3064.58}
[INFO|2025-09-10 08:54:41] logging.py:143 >> {'loss': 0.0401, 'learning_rate': 4.7056e-05, 'epoch': 0.48, 'throughput': 3064.56}
[INFO|2025-09-10 08:56:14] logging.py:143 >> {'loss': 0.0418, 'learning_rate': 4.6367e-05, 'epoch': 0.53, 'throughput': 3066.16}
[INFO|2025-09-10 08:57:50] logging.py:143 >> {'loss': 0.0468, 'learning_rate': 4.5611e-05, 'epoch': 0.59, 'throughput': 3063.56}
[INFO|2025-09-10 08:59:22] logging.py:143 >> {'loss': 0.0421, 'learning_rate': 4.4791e-05, 'epoch': 0.64, 'throughput': 3065.33}
[INFO|2025-09-10 09:00:55] logging.py:143 >> {'loss': 0.0400, 'learning_rate': 4.3910e-05, 'epoch': 0.70, 'throughput': 3065.66}
[INFO|2025-09-10 09:02:28] logging.py:143 >> {'loss': 0.0279, 'learning_rate': 4.2971e-05, 'epoch': 0.75, 'throughput': 3066.01}
[INFO|2025-09-10 09:04:00] logging.py:143 >> {'loss': 0.0322, 'learning_rate': 4.1975e-05, 'epoch': 0.80, 'throughput': 3067.38}
[INFO|2025-09-10 09:05:32] logging.py:143 >> {'loss': 0.0340, 'learning_rate': 4.0927e-05, 'epoch': 0.86, 'throughput': 3068.16}
[INFO|2025-09-10 09:07:02] logging.py:143 >> {'loss': 0.0388, 'learning_rate': 3.9829e-05, 'epoch': 0.91, 'throughput': 3070.18}
[INFO|2025-09-10 09:08:29] logging.py:143 >> {'loss': 0.0395, 'learning_rate': 3.8686e-05, 'epoch': 0.96, 'throughput': 3073.05}
[INFO|2025-09-10 09:09:56] logging.py:143 >> {'loss': 0.0300, 'learning_rate': 3.7500e-05, 'epoch': 1.01, 'throughput': 3070.50}
[INFO|2025-09-10 09:11:31] logging.py:143 >> {'loss': 0.0282, 'learning_rate': 3.6275e-05, 'epoch': 1.06, 'throughput': 3070.00}
[INFO|2025-09-10 09:11:31] trainer.py:4074 >> Saving model checkpoint to saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100
[INFO|2025-09-10 09:11:31] configuration_utils.py:752 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/config.json
[INFO|2025-09-10 09:11:31] configuration_utils.py:817 >> Model config Qwen2_5_VLConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"text_config": {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": null,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl_text",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": null,
"torch_dtype": "bfloat16",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": null,
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
},
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.2",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"depth": 32,
"fullatt_block_indexes": [
7,
15,
23,
31
],
"hidden_act": "silu",
"hidden_size": 1280,
"in_channels": 3,
"in_chans": 3,
"initializer_range": 0.02,
"intermediate_size": 3420,
"model_type": "qwen2_5_vl",
"num_heads": 16,
"out_hidden_size": 3584,
"patch_size": 14,
"spatial_merge_size": 2,
"spatial_patch_size": 14,
"temporal_patch_size": 2,
"tokens_per_second": 2,
"window_size": 112
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
[INFO|2025-09-10 09:11:31] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/chat_template.jinja
[INFO|2025-09-10 09:11:31] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/tokenizer_config.json
[INFO|2025-09-10 09:11:31] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/special_tokens_map.json
[INFO|2025-09-10 09:11:32] image_processing_base.py:258 >> Image processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/preprocessor_config.json
[INFO|2025-09-10 09:11:32] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/chat_template.jinja
[INFO|2025-09-10 09:11:32] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/tokenizer_config.json
[INFO|2025-09-10 09:11:32] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/special_tokens_map.json
[INFO|2025-09-10 09:11:32] video_processing_utils.py:582 >> Video processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/video_preprocessor_config.json
[INFO|2025-09-10 09:11:32] processing_utils.py:745 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-100/chat_template.jinja
[INFO|2025-09-10 09:13:07] logging.py:143 >> {'loss': 0.0452, 'learning_rate': 3.5016e-05, 'epoch': 1.12, 'throughput': 3067.60}
[INFO|2025-09-10 09:14:42] logging.py:143 >> {'loss': 0.0266, 'learning_rate': 3.3725e-05, 'epoch': 1.17, 'throughput': 3066.95}
[INFO|2025-09-10 09:16:15] logging.py:143 >> {'loss': 0.0329, 'learning_rate': 3.2407e-05, 'epoch': 1.22, 'throughput': 3067.78}
[INFO|2025-09-10 09:17:50] logging.py:143 >> {'loss': 0.0232, 'learning_rate': 3.1066e-05, 'epoch': 1.28, 'throughput': 3067.40}
[INFO|2025-09-10 09:19:23] logging.py:143 >> {'loss': 0.0366, 'learning_rate': 2.9706e-05, 'epoch': 1.33, 'throughput': 3067.80}
[INFO|2025-09-10 09:20:58] logging.py:143 >> {'loss': 0.0301, 'learning_rate': 2.8332e-05, 'epoch': 1.39, 'throughput': 3067.34}
[INFO|2025-09-10 09:22:34] logging.py:143 >> {'loss': 0.0253, 'learning_rate': 2.6948e-05, 'epoch': 1.44, 'throughput': 3066.79}
[INFO|2025-09-10 09:24:03] logging.py:143 >> {'loss': 0.0291, 'learning_rate': 2.5557e-05, 'epoch': 1.49, 'throughput': 3067.92}
[INFO|2025-09-10 09:25:37] logging.py:143 >> {'loss': 0.0236, 'learning_rate': 2.4165e-05, 'epoch': 1.55, 'throughput': 3067.40}
[INFO|2025-09-10 09:27:11] logging.py:143 >> {'loss': 0.0288, 'learning_rate': 2.2775e-05, 'epoch': 1.60, 'throughput': 3067.26}
[INFO|2025-09-10 09:28:44] logging.py:143 >> {'loss': 0.0334, 'learning_rate': 2.1392e-05, 'epoch': 1.65, 'throughput': 3067.76}
[INFO|2025-09-10 09:30:11] logging.py:143 >> {'loss': 0.0273, 'learning_rate': 2.0020e-05, 'epoch': 1.71, 'throughput': 3069.37}
[INFO|2025-09-10 09:31:41] logging.py:143 >> {'loss': 0.0219, 'learning_rate': 1.8664e-05, 'epoch': 1.76, 'throughput': 3070.12}
[INFO|2025-09-10 09:33:12] logging.py:143 >> {'loss': 0.0246, 'learning_rate': 1.7328e-05, 'epoch': 1.81, 'throughput': 3070.76}
[INFO|2025-09-10 09:34:45] logging.py:143 >> {'loss': 0.0283, 'learning_rate': 1.6015e-05, 'epoch': 1.87, 'throughput': 3070.80}
[INFO|2025-09-10 09:36:15] logging.py:143 >> {'loss': 0.0196, 'learning_rate': 1.4730e-05, 'epoch': 1.92, 'throughput': 3071.62}
[INFO|2025-09-10 09:37:47] logging.py:143 >> {'loss': 0.0222, 'learning_rate': 1.3477e-05, 'epoch': 1.97, 'throughput': 3071.83}
[INFO|2025-09-10 09:39:11] logging.py:143 >> {'loss': 0.0228, 'learning_rate': 1.2260e-05, 'epoch': 2.02, 'throughput': 3071.66}
[INFO|2025-09-10 09:40:44] logging.py:143 >> {'loss': 0.0294, 'learning_rate': 1.1082e-05, 'epoch': 2.07, 'throughput': 3071.74}
[INFO|2025-09-10 09:42:18] logging.py:143 >> {'loss': 0.0171, 'learning_rate': 9.9472e-06, 'epoch': 2.13, 'throughput': 3071.46}
[INFO|2025-09-10 09:42:18] trainer.py:4074 >> Saving model checkpoint to saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200
[INFO|2025-09-10 09:42:18] configuration_utils.py:752 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/config.json
[INFO|2025-09-10 09:42:18] configuration_utils.py:817 >> Model config Qwen2_5_VLConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"text_config": {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": null,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl_text",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": null,
"torch_dtype": "bfloat16",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": null,
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
},
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.2",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"depth": 32,
"fullatt_block_indexes": [
7,
15,
23,
31
],
"hidden_act": "silu",
"hidden_size": 1280,
"in_channels": 3,
"in_chans": 3,
"initializer_range": 0.02,
"intermediate_size": 3420,
"model_type": "qwen2_5_vl",
"num_heads": 16,
"out_hidden_size": 3584,
"patch_size": 14,
"spatial_merge_size": 2,
"spatial_patch_size": 14,
"temporal_patch_size": 2,
"tokens_per_second": 2,
"window_size": 112
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
[INFO|2025-09-10 09:42:18] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/chat_template.jinja
[INFO|2025-09-10 09:42:18] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/tokenizer_config.json
[INFO|2025-09-10 09:42:18] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/special_tokens_map.json
[INFO|2025-09-10 09:42:18] image_processing_base.py:258 >> Image processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/preprocessor_config.json
[INFO|2025-09-10 09:42:18] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/chat_template.jinja
[INFO|2025-09-10 09:42:18] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/tokenizer_config.json
[INFO|2025-09-10 09:42:18] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/special_tokens_map.json
[INFO|2025-09-10 09:42:18] video_processing_utils.py:582 >> Video processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/video_preprocessor_config.json
[INFO|2025-09-10 09:42:19] processing_utils.py:745 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-200/chat_template.jinja
[INFO|2025-09-10 09:43:49] logging.py:143 >> {'loss': 0.0199, 'learning_rate': 8.8593e-06, 'epoch': 2.18, 'throughput': 3071.28}
[INFO|2025-09-10 09:45:21] logging.py:143 >> {'loss': 0.0250, 'learning_rate': 7.8215e-06, 'epoch': 2.24, 'throughput': 3071.79}
[INFO|2025-09-10 09:46:55] logging.py:143 >> {'loss': 0.0191, 'learning_rate': 6.8369e-06, 'epoch': 2.29, 'throughput': 3071.27}
[INFO|2025-09-10 09:48:30] logging.py:143 >> {'loss': 0.0276, 'learning_rate': 5.9087e-06, 'epoch': 2.34, 'throughput': 3070.96}
[INFO|2025-09-10 09:50:02] logging.py:143 >> {'loss': 0.0299, 'learning_rate': 5.0397e-06, 'epoch': 2.40, 'throughput': 3071.27}
[INFO|2025-09-10 09:51:36] logging.py:143 >> {'loss': 0.0230, 'learning_rate': 4.2326e-06, 'epoch': 2.45, 'throughput': 3071.07}
[INFO|2025-09-10 09:53:10] logging.py:143 >> {'loss': 0.0224, 'learning_rate': 3.4900e-06, 'epoch': 2.50, 'throughput': 3071.01}
[INFO|2025-09-10 09:54:41] logging.py:143 >> {'loss': 0.0246, 'learning_rate': 2.8140e-06, 'epoch': 2.56, 'throughput': 3071.67}
[INFO|2025-09-10 09:56:10] logging.py:143 >> {'loss': 0.0249, 'learning_rate': 2.2069e-06, 'epoch': 2.61, 'throughput': 3072.55}
[INFO|2025-09-10 09:57:42] logging.py:143 >> {'loss': 0.0226, 'learning_rate': 1.6705e-06, 'epoch': 2.66, 'throughput': 3072.70}
[INFO|2025-09-10 09:59:15] logging.py:143 >> {'loss': 0.0188, 'learning_rate': 1.2064e-06, 'epoch': 2.72, 'throughput': 3072.67}
[INFO|2025-09-10 10:00:47] logging.py:143 >> {'loss': 0.0222, 'learning_rate': 8.1619e-07, 'epoch': 2.77, 'throughput': 3072.99}
[INFO|2025-09-10 10:02:19] logging.py:143 >> {'loss': 0.0230, 'learning_rate': 5.0096e-07, 'epoch': 2.82, 'throughput': 3073.21}
[INFO|2025-09-10 10:03:57] logging.py:143 >> {'loss': 0.0211, 'learning_rate': 2.6172e-07, 'epoch': 2.88, 'throughput': 3072.52}
[INFO|2025-09-10 10:05:32] logging.py:143 >> {'loss': 0.0306, 'learning_rate': 9.9221e-08, 'epoch': 2.93, 'throughput': 3072.24}
[INFO|2025-09-10 10:07:04] logging.py:143 >> {'loss': 0.0195, 'learning_rate': 1.3961e-08, 'epoch': 2.98, 'throughput': 3072.67}
[INFO|2025-09-10 10:07:30] trainer.py:4074 >> Saving model checkpoint to saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282
[INFO|2025-09-10 10:07:30] configuration_utils.py:752 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/config.json
[INFO|2025-09-10 10:07:30] configuration_utils.py:817 >> Model config Qwen2_5_VLConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"text_config": {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": null,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl_text",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": null,
"torch_dtype": "bfloat16",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": null,
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
},
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.2",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"depth": 32,
"fullatt_block_indexes": [
7,
15,
23,
31
],
"hidden_act": "silu",
"hidden_size": 1280,
"in_channels": 3,
"in_chans": 3,
"initializer_range": 0.02,
"intermediate_size": 3420,
"model_type": "qwen2_5_vl",
"num_heads": 16,
"out_hidden_size": 3584,
"patch_size": 14,
"spatial_merge_size": 2,
"spatial_patch_size": 14,
"temporal_patch_size": 2,
"tokens_per_second": 2,
"window_size": 112
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
[INFO|2025-09-10 10:07:30] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/chat_template.jinja
[INFO|2025-09-10 10:07:30] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/tokenizer_config.json
[INFO|2025-09-10 10:07:30] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/special_tokens_map.json
[INFO|2025-09-10 10:07:30] image_processing_base.py:258 >> Image processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/preprocessor_config.json
[INFO|2025-09-10 10:07:30] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/chat_template.jinja
[INFO|2025-09-10 10:07:30] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/tokenizer_config.json
[INFO|2025-09-10 10:07:30] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/special_tokens_map.json
[INFO|2025-09-10 10:07:30] video_processing_utils.py:582 >> Video processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/video_preprocessor_config.json
[INFO|2025-09-10 10:07:31] processing_utils.py:745 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/checkpoint-282/chat_template.jinja
[INFO|2025-09-10 10:07:31] trainer.py:2718 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|2025-09-10 10:07:31] image_processing_base.py:258 >> Image processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/preprocessor_config.json
[INFO|2025-09-10 10:07:31] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/chat_template.jinja
[INFO|2025-09-10 10:07:31] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/tokenizer_config.json
[INFO|2025-09-10 10:07:31] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/special_tokens_map.json
[INFO|2025-09-10 10:07:31] video_processing_utils.py:582 >> Video processor saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/video_preprocessor_config.json
[INFO|2025-09-10 10:07:31] processing_utils.py:745 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/chat_template.jinja
[INFO|2025-09-10 10:07:31] trainer.py:4074 >> Saving model checkpoint to saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54
[INFO|2025-09-10 10:07:31] configuration_utils.py:752 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-VL-7B-Instruct/snapshots/cc594898137f460bfe9f0759e9844b3ce807cfb5/config.json
[INFO|2025-09-10 10:07:31] configuration_utils.py:817 >> Model config Qwen2_5_VLConfig {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": 151655,
"initializer_range": 0.02,
"intermediate_size": 18944,
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": 32768,
"text_config": {
"architectures": [
"Qwen2_5_VLForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 3584,
"image_token_id": null,
"initializer_range": 0.02,
"intermediate_size": 18944,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 128000,
"max_window_layers": 28,
"model_type": "qwen2_5_vl_text",
"num_attention_heads": 28,
"num_hidden_layers": 28,
"num_key_value_heads": 4,
"rms_norm_eps": 1e-06,
"rope_scaling": {
"mrope_section": [
16,
24,
24
],
"rope_type": "default",
"type": "default"
},
"rope_theta": 1000000.0,
"sliding_window": null,
"torch_dtype": "bfloat16",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": null,
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
},
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.55.2",
"use_cache": true,
"use_sliding_window": false,
"video_token_id": 151656,
"vision_config": {
"depth": 32,
"fullatt_block_indexes": [
7,
15,
23,
31
],
"hidden_act": "silu",
"hidden_size": 1280,
"in_channels": 3,
"in_chans": 3,
"initializer_range": 0.02,
"intermediate_size": 3420,
"model_type": "qwen2_5_vl",
"num_heads": 16,
"out_hidden_size": 3584,
"patch_size": 14,
"spatial_merge_size": 2,
"spatial_patch_size": 14,
"temporal_patch_size": 2,
"tokens_per_second": 2,
"window_size": 112
},
"vision_end_token_id": 151653,
"vision_start_token_id": 151652,
"vision_token_id": 151654,
"vocab_size": 152064
}
[INFO|2025-09-10 10:07:31] tokenization_utils_base.py:2393 >> chat template saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/chat_template.jinja
[INFO|2025-09-10 10:07:31] tokenization_utils_base.py:2562 >> tokenizer config file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/tokenizer_config.json
[INFO|2025-09-10 10:07:31] tokenization_utils_base.py:2571 >> Special tokens file saved in saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54/special_tokens_map.json
[WARNING|2025-09-10 10:07:32] logging.py:148 >> No metric eval_loss to plot.
[WARNING|2025-09-10 10:07:32] logging.py:148 >> No metric eval_accuracy to plot.
[INFO|2025-09-10 10:07:32] modelcard.py:456 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
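After training, the adapter and processor files written to the run directory can be loaded back for inference; a minimal sketch with PEFT, using the paths saved above:

import torch
from peft import PeftModel
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

adapter_dir = "saves/Qwen2.5-VL-7B-Instruct/lora/train_2025-09-10-08-35-54"

base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_dir)
processor = AutoProcessor.from_pretrained(adapter_dir)  # tokenizer, chat template and preprocessor configs were saved here

# model.merge_and_unload() would fold the adapter into the base weights for standalone export.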