---
datasets:
- Sweaterdog/Andy-4-base
- Sweaterdog/Andy-4-ft
language:
- en
base_model:
- unsloth/Qwen3-8B-bnb-4bit
tags:
- gaming
- minecraft
- mindcraft
---

# 🧠 Andy‑4 ⛏️

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66960602f0ffd8e3a381106a/raWYEDo2An1biTLXd5PfN.png)

**Andy‑4** is an 8‑billion‑parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework. Trained on a single RTX 3090 over **three weeks**, Andy‑4 delivers advanced reasoning, multi‑step planning, and robust in‑game decision‑making.

> ⚠️ **Certification:**
> Andy‑4 is **not yet certified** by the Mindcraft developers. Use it in production at your own discretion.

---

## 🔍 Model Specifications

- **Parameters:** 8 B
- **Training Hardware:** 1 × NVIDIA RTX 3090
- **Duration:** ~3 weeks total
- **Data Volumes:**
  - **Messages:** 179,384
  - **Tokens:** 425,535,198
  - **Conversations:** 62,149
- **Base Architecture:** Qwen 3 8B
- **License:** [Andy 1.1 License](LICENSE)
- **Repository:** https://huggingface.co/Sweaterdog/Andy-4

---

## 📊 Training Regimen

1. **Andy‑4‑base‑1** dataset
   - **Epochs:** 3
   - **Learning Rate:** 7e-5
   - **Dataset Size:** 47.4k
2. **Fine‑tune (FT)** dataset
   - **Epochs:** 2.5
   - **Learning Rate:** 2e-5
   - **Dataset Size:** 4.12k

- **Optimizer:** AdamW_8bit with cosine decay
- **Quantization:** 4‑bit (`bnb-4bit`) for inference
- **Warm‑Up Steps:** 0.1% of each dataset

---

## 🚀 Installation

First, choose a quantization. The VRAM estimates below assume the default context window of `8192` tokens.

| Quantization | VRAM Required  |
|--------------|----------------|
| F16          | 16 GB+         |
| Q5_K_M       | 8 GB+          |
| Q4_K_M      | 6–8 GB         |
| Q3_K_M       | 6 GB (low)     |
| Q2_K         | 4–6 GB (ultra) |

### 1. Installation directly on Ollama

1. Visit [Andy-4 on Ollama](https://ollama.com/Sweaterdog/Andy-4).
2. Choose a model type / quantization and copy the command shown.
3. Run the command in your terminal.
4.
   Set the profile's model to the model you installed, such as `ollama/sweaterdog/andy-4:latest`.

### 2. Manual Download & Modelfile

1. **Download**
   - From the HF **Files** tab, grab your chosen `.GGUF` quant weights (e.g. `Andy-4.Q4_K_M.gguf`).
   - Download the provided `Modelfile`.
2. **Edit**

   Change

   ```text
   FROM YOUR/PATH/HERE
   ```

   to

   ```text
   FROM /path/to/Andy-4.Q4_K_M.gguf
   ```

   *Optional*: Increase the `num_ctx` parameter to a higher value for longer conversations if you:

   **A.** Have extra VRAM
   **B.** Quantized the context window
   **C.** Can use a smaller model
3. **Create**

   ```bash
   ollama create andy-4 -f Modelfile
   ```

   This registers the **Andy‑4** model locally.

---

If you lack a GPU, check the [Mindcraft Discord guide](https://ptb.discord.com/channels/1303399789995626667/1347027684768878644/1347027684768878644) for free cloud setups.

## 🔧 Context‑Window Quantization

To lower the VRAM used by the context window:

#### **Windows**

1. Close Ollama.
2. In **System Properties → Environment Variables**, add:

   ```text
   OLLAMA_FLASH_ATTENTION=1
   OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, but far less stable
   ```

3. Restart Ollama.

#### **Linux/macOS**

```bash
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"  # or "q4_0", but far less stable
ollama serve
```

---

## 📌 Acknowledgments
- **Data & Models by:** @Sweaterdog
- **Framework:** Mindcraft (https://github.com/kolbytn/mindcraft)
- **LoRA Weights:** https://huggingface.co/Sweaterdog/Andy-4-LoRA
---

## ⚖️ License

See the [Andy 1.1 License](LICENSE).

*This work uses data and models created by @Sweaterdog.*
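
---

## 💬 Example: Querying Andy‑4 Programmatically

Once the model is registered with `ollama create`, it can be queried over Ollama's local HTTP API (`/api/generate`) instead of the interactive CLI. The sketch below is illustrative, not part of the official Mindcraft setup: it assumes a default Ollama server at `http://localhost:11434` and a model named `andy-4` as created above, and the helper function name is our own.

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str, num_ctx: int = 8192) -> bytes:
    """Build a JSON request body for Ollama's /api/generate endpoint.

    (Hypothetical helper; `num_ctx` matches the context-window setting
    discussed in the Modelfile section above.)
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,                  # ask for one complete response
        "options": {"num_ctx": num_ctx},  # context window size
    }).encode("utf-8")

payload = build_generate_payload("andy-4", "How do I craft a furnace in Minecraft?")

# Uncomment to actually send the request (requires `ollama serve` running locally):
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=payload,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Keeping the payload construction separate from the network call makes it easy to adjust `num_ctx` per conversation without editing the Modelfile.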