---
title: June2025ProjectMCP
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: gradio
app_file: app.py
pinned: false
hf_oauth: true
tags: [mcp-server-track, agent-demo-track, video-processing]
---

# June 2025 MCP Project

This is a Gradio application demonstrating the Model Context Protocol (MCP) with video processing tools, powered by local Ollama models or a remote Hugging Face model.

For a little context: this is a very simple project, built to learn a bit more about Gradio, since I'm more used to working directly with React, React Native, and even ComfyUI on the frontend. I also wanted to try out MCP, get used to Hugging Face's API, play around with LLM tool calling, and more. And on top of it all, there's some vibe coding going on. All in all, it's been a great experience, even though this is a very basic project compared to what else is going to be submitted during this Hackathon.

One last thing: this was built primarily for running locally rather than on HF itself, since I don't want to tie anything to my HF API key without fully understanding the implications. But it's a straightforward project that should be easy to run locally, if anyone's inclined to go that far with it. Thanks for reading all this, I really didn't expect anyone to, ha ha.

## Features

- **FFmpeg Check:** Verifies that FFmpeg is installed.
- **Video Uploader:** Upload and validate MP4 files.
- **Manual Tools:** Extract the first and last frames of a video.
- **LLM Integration:** Connect to Ollama or Hugging Face.
- **Tool-Calling:** Use natural language to command the LLM to run the video tools: first frame, last frame, and convert-to-GIF with max-resolution and FPS settings (50 fps max, 100 px min; the resolution value is in pixels and is applied to the larger of width or height, with the other dimension scaled proportionally).
- **Hugging Face OAuth:** Users can log in with their own HF accounts to use the remote LLM.

## Usage Notes

Run the repo locally and drag in an MP4. Set up your Ollama configuration on the LLM Configuration tab, or log in to Hugging Face locally. In either case, you'll ideally want to target something like llama3.2:3b-instruct, which is what I developed on. Your preferred model and configuration will be saved locally.

From there, go to the LLM Video Commands tab, type in your prompt ("Get me the last frame", "Get me the first frame", "Make this mp4 into a gif with a max resolution of 300 and an FPS of 50") and hit the button. It should get you what you asked for.
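The frame-extraction tools listed under Features could be wired up along these lines. This is a minimal sketch, not the actual `app.py` code: the helper names (`ffmpeg_available`, `first_frame_cmd`, `last_frame_cmd`, `run_tool`) are hypothetical, and it assumes the FFmpeg CLI is used via `subprocess`.

```python
import shutil
import subprocess

def ffmpeg_available() -> bool:
    """FFmpeg check: is the binary on PATH?"""
    return shutil.which("ffmpeg") is not None

def first_frame_cmd(video: str, out_png: str) -> list[str]:
    # One frame from the very start of the video.
    return ["ffmpeg", "-y", "-i", video, "-frames:v", "1", out_png]

def last_frame_cmd(video: str, out_png: str) -> list[str]:
    # Seek to 0.1 s before end-of-file, then grab one frame.
    return ["ffmpeg", "-y", "-sseof", "-0.1", "-i", video, "-frames:v", "1", out_png]

def run_tool(cmd: list[str]) -> None:
    # Only attempt the call when FFmpeg is actually installed.
    if ffmpeg_available():
        subprocess.run(cmd, check=True, capture_output=True)
```

`run_tool(first_frame_cmd("clip.mp4", "first.png"))` would then produce the first-frame image, assuming a valid MP4 and a working FFmpeg install.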
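The GIF sizing rule described in the Tool-Calling bullet (the max-resolution value bounds the larger of width or height, with a 100 px floor and a 50 fps cap, scaling proportionally) can be sketched as a small pure function. The function name is hypothetical, not taken from the repo:

```python
def clamp_gif_settings(width: int, height: int, max_res: int, fps: int) -> tuple[int, int, int]:
    """Apply the GIF rules: max_res bounds the larger side (100 px minimum),
    fps is capped at 50, and aspect ratio is preserved."""
    max_res = max(100, max_res)   # enforce the 100 px floor
    fps = min(50, fps)            # enforce the 50 fps ceiling
    longer = max(width, height)   # the dimension the target applies to
    scale = max_res / longer      # proportional scale factor
    return round(width * scale), round(height * scale), fps
```

For a 1920×1080 input with a max resolution of 300 and a requested 60 fps, this yields 300×169 at 50 fps; those values could then feed FFmpeg's `scale` and `fps` filters.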