import os  # For reading environment variables
import shutil  # For directory cleanup
import zipfile  # For extracting model archives
import pathlib  # For path manipulations
import tempfile  # For creating temporary files/directories
import requests  # For downloading images
import gradio  # For the interactive UI
import pandas  # For tabular data handling
import PIL.Image  # For image I/O
import huggingface_hub  # For downloading model assets
import autogluon.multimodal  # For loading the AutoGluon image classifier

# Hardcoded Hub model (native zip).
# Note: in a real deployment you might download the unzipped model directory
# directly from the repo where you uploaded it in the previous step, rather
# than re-downloading the zip. For this example we keep the zip download logic
# for simplicity, assuming the zip lives in the same model repo (or another
# designated location).
MODEL_REPO_ID = "yusenthebot/sign-identification-autogluon"
ZIP_FILENAME = "autogluon_sign_predictor_dir.zip"

# Read the token from the environment (set it in the Hugging Face Space settings).
HF_TOKEN = os.getenv("HF_TOKEN", None)

# Local cache/extract dirs, relative to app.py.
CACHE_DIR = pathlib.Path("./hf_assets")
EXTRACT_DIR = CACHE_DIR / "predictor_native"
EXAMPLES_DIR = CACHE_DIR / "examples"


def _resolve_predictor_root() -> pathlib.Path:
    """If the zip wrapped everything in a single sub-directory, descend into it."""
    contents = list(EXTRACT_DIR.iterdir())
    if len(contents) == 1 and contents[0].is_dir():
        return contents[0]
    return EXTRACT_DIR


# Download & extract the native predictor directory.
def _prepare_predictor_dir() -> str:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    # Skip the download/extract step if the directory is already populated.
    if EXTRACT_DIR.exists() and any(EXTRACT_DIR.iterdir()):
        print("Model directory already exists, skipping download and extraction.")
        return str(_resolve_predictor_root())

    print("Downloading and extracting model zip...")
    try:
        local_zip = huggingface_hub.hf_hub_download(
            repo_id=MODEL_REPO_ID,
            filename=ZIP_FILENAME,
            repo_type="model",
            token=HF_TOKEN,
            local_dir=str(CACHE_DIR),
        )
        if EXTRACT_DIR.exists():
            shutil.rmtree(EXTRACT_DIR)
        EXTRACT_DIR.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(local_zip, "r") as zf:
            zf.extractall(str(EXTRACT_DIR))
        return str(_resolve_predictor_root())
    except Exception as e:
        print(f"Error preparing predictor directory: {e}")
        raise


PREDICTOR_DIR = _prepare_predictor_dir()
PREDICTOR = autogluon.multimodal.MultiModalPredictor.load(PREDICTOR_DIR)

# Explicit class labels (edit the copy as desired).
CLASS_LABELS = {0: "no stop sign", 1: "stop sign"}


def _human_label(c):
    """Map a model class (int or str) to a human-readable label."""
    try:
        return CLASS_LABELS.get(int(c), str(c))
    except (TypeError, ValueError):
        return CLASS_LABELS.get(c, str(c))


# Do the prediction!
def do_predict(pil_img_input):
    # Return empty results if no image was provided.
    if pil_img_input is None:
        return {label: 0.0 for label in CLASS_LABELS.values()}

    # Handle the tuple that some webcam configurations produce.
    if isinstance(pil_img_input, tuple):
        print(f"Received tuple input from webcam: {pil_img_input}")  # Debugging
        if pil_img_input and isinstance(pil_img_input[0], PIL.Image.Image):
            pil_img = pil_img_input[0]
            print("Successfully extracted PIL Image from tuple.")
        else:
            # We couldn't extract a PIL Image from the tuple. gr.Label also
            # accepts a plain string, which it displays as the label.
            print("Could not extract PIL Image from tuple input.")
            return "Error: could not process webcam input format."
    elif isinstance(pil_img_input, PIL.Image.Image):
        pil_img = pil_img_input
    else:
        print(f"Received unexpected input type: {type(pil_img_input)}")
        return f"Error: unexpected input format: {type(pil_img_input)}"

    try:
        # Save the image to a temporary file; the context manager removes the
        # directory even if prediction fails.
        with tempfile.TemporaryDirectory() as tmpdir:
            img_path = pathlib.Path(tmpdir) / "input.png"
            pil_img.save(img_path)
            # AutoGluon expects a DataFrame with a column of image paths.
            df = pandas.DataFrame({"image": [str(img_path)]})
            proba_df = PREDICTOR.predict_proba(df)
        # Build the ranked dict expected by gr.Label, mapping each model class
        # column (e.g. 0/1) to its human-readable label.
        row = proba_df.iloc[0]
        return {_human_label(c): float(row[c]) for c in proba_df.columns}
    except Exception as e:
        print(f"Error processing image: {e}")
        return f"Error: processing failed: {e}"


# Representative example images. These can be local paths or links.
EXAMPLES_URLS = [
    "https://www.portland.gov/sites/default/files/styles/2_1_1600w/public/2022/stop-sign1.jpg?itok=dU7UMFnn",
    "https://driving-tests.org/wp-content/uploads/2020/09/shutterstock_65862670.jpg",
    "https://signaturestreetscapes.com/cdn/shop/files/Pedestrian-Symbol-Sign---installation-photo_1080x.jpg?v=1695149743",
]

# Downloading examples might be better handled outside app.py when running on
# Spaces (by committing the images to the repo), but the download logic is kept
# here for self-containedness, with error handling.
EXAMPLES_DIR.mkdir(parents=True, exist_ok=True)
EXAMPLES = []
for i, url in enumerate(EXAMPLES_URLS):
    try:
        # Skip the download if the example file already exists.
        file_path = EXAMPLES_DIR / f"example_{i}.jpg"
        if not file_path.exists():
            print(f"Downloading example image from {url}...")
            response = requests.get(url, stream=True, timeout=30)
            response.raise_for_status()  # Raise HTTPError for 4xx/5xx responses
            with open(file_path, "wb") as f:
                shutil.copyfileobj(response.raw, f)
            print(f"Downloaded to {file_path}")
        else:
            print(f"Example image {file_path} already exists, skipping download.")
        EXAMPLES.append([str(file_path)])
    except requests.exceptions.RequestException as e:
        print(f"Error downloading example image from {url}: {e}")
        # If the download fails, the example is simply not added to the list.
    except Exception as e:
        print(f"An unexpected error occurred processing example {url}: {e}")

# Gradio UI
with gradio.Blocks() as demo:
    # Introduction
    gradio.Markdown("# Stop Sign Detection")
    gradio.Markdown("Upload an image to check whether it contains a stop sign.")

    # Input image (upload or webcam)
    image_in = gradio.Image(type="pil", label="Input image", sources=["upload", "webcam"])

    # Output: ranked class probabilities
    proba_pretty = gradio.Label(num_top_classes=2, label="Class probabilities")

    # Whenever a new image is provided, update the result.
    image_in.change(fn=do_predict, inputs=[image_in], outputs=[proba_pretty])

    # Clickable example images (only if any were successfully downloaded)
    if EXAMPLES:
        gradio.Examples(
            examples=EXAMPLES,
            inputs=[image_in],
            label="Representative examples",
            examples_per_page=8,
            cache_examples=False,  # Re-run the function on each example click
        )
    else:
        gradio.Markdown("Could not load example images.")

if __name__ == "__main__":
    # On Hugging Face Spaces the HF_TOKEN environment variable is available
    # automatically if configured in the Space settings; it does not need to
    # be passed to launch().
    demo.launch()