MultiTalk Handler for Hugging Face Inference Endpoints

This is a custom handler for deploying the MeiGen-AI/MeiGen-MultiTalk model on Hugging Face Inference Endpoints.

Model Description

This handler wraps the MeiGen-AI/MeiGen-MultiTalk model for audio-driven multi-person conversational video generation.

Usage

This model is intended for deployment on Hugging Face Inference Endpoints with the following configuration (a request example follows the list):

  • GPU: A100 (80GB recommended)
  • Framework: Custom
  • Task: Custom
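
Once an endpoint is running, a request against it typically looks like the sketch below. The endpoint URL, token, and the payload fields (`prompt`, `audio`) are illustrative assumptions; the actual input schema is whatever handler.py expects.

```python
import base64
import requests

# Hypothetical values; replace with your own deployment URL and access token.
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

# Example payload; the field names ("inputs", "prompt", "audio") are assumed,
# the real schema is defined by handler.py.
with open("speaker1.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": {
        "prompt": "Two people having a conversation in a cafe",
        "audio": audio_b64,
    }
}

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=600,  # video generation can take several minutes
)
response.raise_for_status()
result = response.json()  # e.g. a base64-encoded video, depending on the handler
```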

Requirements

  • PyTorch 2.4.1
  • CUDA 12.1
  • Various additional dependencies listed in requirements.txt (a quick environment check is sketched after this list)
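
As a sanity check, the snippet below verifies that the installed PyTorch and CUDA builds match the versions listed above; it is a minimal sketch and assumes nothing beyond those requirements.

```python
import torch

# Verify the versions listed in the requirements above.
print("torch:", torch.__version__)   # expected to start with 2.4.1
print("cuda:", torch.version.cuda)   # expected 12.1
assert torch.cuda.is_available(), "MultiTalk needs a CUDA-capable GPU (A100 80GB recommended)"
print("gpu:", torch.cuda.get_device_name(0))
```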

Handler Details

The custom handler (handler.py) implements the necessary interface for Hugging Face Inference Endpoints to run the MultiTalk model.
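
Inference Endpoints custom handlers expose an `EndpointHandler` class whose `__init__` receives the repository path and whose `__call__` receives the parsed request payload. The skeleton below illustrates only that interface; the MultiTalk-specific loading and generation code in handler.py is not reproduced here, and `load_multitalk_pipeline` is a placeholder name.

```python
from typing import Any, Dict

import torch


class EndpointHandler:
    """Minimal sketch of the interface Inference Endpoints expects.

    The MultiTalk-specific logic is only hinted at; `load_multitalk_pipeline`
    and its `generate` method are hypothetical names, not the real API.
    """

    def __init__(self, path: str = ""):
        # `path` points at the repository contents (weights, configs) on the endpoint.
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        # self.pipeline = load_multitalk_pipeline(path, device=self.device)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # Inference Endpoints passes the parsed request body as `data`;
        # by convention the payload lives under the "inputs" key.
        inputs = data.get("inputs", data)
        prompt = inputs.get("prompt", "")
        audio = inputs.get("audio")  # e.g. a base64-encoded waveform

        # video = self.pipeline.generate(prompt=prompt, audio=audio)
        # return {"video": video}
        return {"prompt": prompt, "received_audio": audio is not None}
```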
