# Contributing to ViT Auditing Toolkit

First off, thank you for considering contributing to the ViT Auditing Toolkit! It's people like you that make this tool better for everyone.

## 🌟 Ways to Contribute

### 1. Reporting Bugs 🐛

Before creating bug reports, please check existing issues to avoid duplicates. When creating a bug report, include:

- **Clear title and description**
- **Steps to reproduce** the behavior
- **Expected vs actual behavior**
- **Screenshots** if applicable
- **Environment details** (OS, Python version, etc.)

**Example:**

```markdown
**Bug**: GradCAM visualization fails with ViT-Large model

**Steps to reproduce:**
1. Select ViT-Large from dropdown
2. Upload any image
3. Select GradCAM method
4. Click "Analyze Image"

**Expected:** GradCAM heatmap visualization
**Actual:** Error message "AttributeError: ..."

**Environment:**
- OS: Ubuntu 22.04
- Python: 3.10.12
- PyTorch: 2.2.0
```

### 2. Suggesting Features ✨

Feature requests are welcome! Please provide:

- **Clear use case**: Why is this feature needed?
- **Proposed solution**: How should it work?
- **Alternatives considered**: Other approaches you've thought about
- **Additional context**: Screenshots, mockups, references

### 3. Contributing Code 💻

#### Development Setup

```bash
# 1. Fork the repository on GitHub

# 2. Clone your fork
git clone https://github.com/YOUR-USERNAME/ViT-XAI-Dashboard.git
cd ViT-XAI-Dashboard

# 3. Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# 4. Install dependencies
pip install -r requirements.txt

# 5. Install development dependencies
pip install pytest black flake8 mypy

# 6. Create a feature branch
git checkout -b feature/amazing-feature
```

#### Code Style Guidelines

**Python Style:**

- Follow [PEP 8](https://pep8.org/)
- Use 4 spaces for indentation
- Maximum line length: 100 characters
- Use meaningful variable names

**Formatting:**

```bash
# Format code with Black
black src/ tests/ app.py

# Check style with flake8
flake8 src/ tests/ app.py --max-line-length=100

# Type checking with mypy
mypy src/ --ignore-missing-imports
```

**Documentation:**

- Add docstrings to all functions and classes
- Use Google-style docstrings
- Update README.md if adding new features

**Example:**

```python
def explain_attention(model, processor, image, layer_index=6, head_index=0):
    """
    Extract and visualize attention weights from a specific layer and head.

    Args:
        model: Pre-trained ViT model with attention outputs enabled.
        processor: Image processor for model input preparation.
        image (PIL.Image): Input image to analyze.
        layer_index (int): Transformer layer index (0-11 for base model).
        head_index (int): Attention head index (0-11 for base model).

    Returns:
        matplotlib.figure.Figure: Visualization of attention patterns.

    Raises:
        ValueError: If layer_index or head_index is out of range.
        RuntimeError: If attention weights cannot be extracted.

    Example:
        >>> from PIL import Image
        >>> image = Image.open("cat.jpg")
        >>> fig = explain_attention(model, processor, image, layer_index=6)
        >>> fig.savefig("attention.png")
    """
    # Implementation...
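    # A minimal sketch of one possible body (illustrative only, not the
    # project's actual code), assuming a Hugging Face `transformers` ViT
    # model created with `output_attentions=True` and a 14x14 patch grid:
    import matplotlib.pyplot as plt
    import torch

    num_layers = model.config.num_hidden_layers
    num_heads = model.config.num_attention_heads
    if not (0 <= layer_index < num_layers and 0 <= head_index < num_heads):
        raise ValueError("layer_index or head_index out of range")

    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_attentions=True)
    if outputs.attentions is None:
        raise RuntimeError("attention weights could not be extracted")

    # outputs.attentions: tuple of (batch, num_heads, tokens, tokens) tensors
    attn = outputs.attentions[layer_index][0, head_index]
    # Attention from the [CLS] token to the 196 image patches
    cls_attn = attn[0, 1:].reshape(14, 14)

    fig, ax = plt.subplots()
    ax.imshow(cls_attn.numpy(), cmap="viridis")
    ax.set_title(f"Layer {layer_index}, head {head_index}")
    return fig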
```

#### Testing

All new features must include tests:

```bash
# Run all tests
pytest tests/

# Run specific test file
pytest tests/test_explainer.py

# Run with coverage
pytest --cov=src tests/
```

**Writing Tests:**

```python
import pytest

from src.explainer import explain_attention


def test_attention_visualization():
    """Test attention visualization with valid inputs."""
    # Setup
    model, processor = load_test_model()
    image = create_test_image()

    # Execute
    fig = explain_attention(model, processor, image, layer_index=6)

    # Assert
    assert fig is not None
    assert len(fig.axes) > 0


def test_attention_invalid_layer():
    """Test attention visualization with invalid layer index."""
    model, processor = load_test_model()
    image = create_test_image()

    with pytest.raises(ValueError):
        explain_attention(model, processor, image, layer_index=99)
```

#### Commit Messages

Follow the [Conventional Commits](https://www.conventionalcommits.org/) specification:

```
<type>(<scope>): <description>
```
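For example, a commit that adds a new explanation method might look like this (the scope name here is illustrative):

```
feat(explainer): add attention rollout visualization
```

Common types include `feat`, `fix`, `docs`, `test`, and `refactor`.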