Create custom processor for easier inference

#11 · opened by pcuenq (HF Staff)

This PR improves the custom transformers processor so that it handles all of the input pre-processing logic, with the following benefits:

  • Inference with transformers is much simpler, as it can be done in exactly the same way as with native transformers models (except for the use of trust_remote_code to download the processor from the Hub). The user no longer has to prepare the inputs manually. See the updates to the README for an example snippet, and the illustrative sketch after this list.
  • Fixes #10.
  • Unlocks downstream use cases like conversion to MLX.
  • Could potentially help with #4 (not tested yet).
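For reference, a minimal sketch of the intended usage, assuming a standard transformers image-text flow. The checkpoint id, prompt format, and exact argument names here are illustrative; the snippet in the updated README is the authoritative one.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "apple/FastVLM-0.5B"  # illustrative checkpoint id

# trust_remote_code is needed to download the custom processor and model code from the Hub
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Build the prompt with the processor's chat template
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# The processor handles both image and text pre-processing in one call
image = Image.open("example.jpg")
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```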

Notes:

  • I created a chat template that applies only to the processor, while preserving the existing chat template configured in the tokenizer (see the sketch after these notes). This may be confusing for some users; my recommendation would be to use the same definition for both, unless the tokenizer template is used elsewhere. If backwards compatibility is not a concern, I can update this PR to use the same template in both places.
  • I can open PRs to the other FastVLM models if this one is accepted.
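A quick way to see the distinction described in the first note; the repo id is an assumption for illustration only:

```python
from transformers import AutoProcessor, AutoTokenizer

repo_id = "apple/FastVLM-0.5B"  # assumed checkpoint id, for illustration only

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)

# The processor carries its own chat template (typically saved as chat_template.json),
# while the tokenizer keeps the template it already had in its own config.
print(processor.chat_template == tokenizer.chat_template)
```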

Hello, thank you for this improvement! Could you also fix the forward method to be compatible with the new preprocessor?

When calling forward() I get:
TypeError: LlavaQwen2ForCausalLM.forward() got an unexpected keyword argument 'pixel_values'
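Until the model code is updated, a possible user-side workaround is sketched below. It assumes the custom LLaVA-style code in this repo still expects the image tensor under the images keyword and that its generate override accepts it; both names are assumptions, not verified against this repository.

```python
# Sketch of a workaround, continuing the inference snippet above;
# not verified against this repo's custom modeling code.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

# Hand the image tensor to the model under the name the original
# LLaVA-style forward/generate is assumed to expect (`images`).
pixel_values = inputs.pop("pixel_values")
outputs = model.generate(**inputs, images=pixel_values, max_new_tokens=128)
```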

are you guys talking to me?

