Deployment Scripts for Medguide (Built with Gradio)
This document provides instructions for deploying the Medguide model for inference using Gradio.
1. Set up the Conda environment: Follow the instructions in the `PKU-Alignment/align-anything` repository to configure your Conda environment.
2. Configure the model path: After setting up the environment, update the `MODEL_PATH` variable in `deploy_medguide.sh` to point to your local Medguide model directory.
3. Verify inference script parameters: Check the following parameters in `text_inference.py` (a fuller illustrative configuration is sketched after this list):

   ```python
   # NOTE: Replace with your own model path if not loaded via the API base
   model = ''
   ```
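The snippet above only shows the `model` placeholder. When the script talks to the locally hosted server rather than loading the model directly, a configuration along the following lines is typical; only port 8231 and the `model` variable come from this repository, while the `api_base` and `api_key` names are illustrative assumptions.

```python
# Illustrative configuration for text_inference.py, not the repository's exact code.
# Only the port (8231) and the `model` placeholder appear in the original scripts;
# the remaining names are assumptions made for this sketch.
api_base = "http://localhost:8231/v1"  # URL where deploy_medguide.sh exposes the model
api_key = "EMPTY"                      # local OpenAI-compatible servers typically ignore the key
# NOTE: Replace with your own model path if not loaded via the API base
model = ""                             # e.g. the local Medguide checkpoint directory
```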
These scripts utilize an OpenAI-compatible server approach. The `deploy_medguide.sh` script launches the Medguide model locally and exposes it on port 8231 for external access via the specified API base URL.
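Before running inference, you can check that the server is reachable. The sketch below assumes the server exposes the standard OpenAI-compatible `/v1/models` route and that the `openai` (v1+) Python package is installed; neither detail is stated in the scripts themselves.

```python
# Reachability check against the locally deployed server (illustrative; assumes the
# standard OpenAI-compatible /v1/models route on port 8231).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8231/v1", api_key="EMPTY")
print([m.id for m in client.models.list().data])  # should list the served Medguide model
```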
Running Inference:

- Streamed Output:

  ```bash
  bash deploy_medguide.sh
  python text_inference.py
  ```
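For illustration, a minimal streamed call against the local server might look like the following. It assumes the `openai` (v1+) Python package and that the server exposes the standard chat-completions endpoint; the model name `"medguide"` is a hypothetical placeholder.

```python
# Minimal streaming sketch against the locally deployed Medguide server.
# Assumptions: the `openai` v1 client is installed and the server exposes the
# standard OpenAI-compatible chat-completions endpoint on port 8231.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8231/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="medguide",  # hypothetical name; use whatever the server actually registers
    messages=[{"role": "user", "content": "What are common symptoms of anemia?"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental piece of the assistant's reply.
    print(chunk.choices[0].delta.content or "", end="", flush=True)
print()
```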