---
license: bigscience-openrail-m
language:
- en
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
tags:
- tensor-rt
- image-generation
- xxx
- uncensored
- nasty-stuff
- stable-diffusion-xl
- fast
- fast-inference
---
# ABSYNTH-SDXXXL-TENSOR-RT Engine 🚀

## Model Description
Meet the caffeinated cousin of GurilaMash XXX SDXL! This model has been put through the TensorRT blender and came out the other side running faster than a cheetah on Red Bull.
**Base Model:** GurilaMash XXX SDXL - because why start with boring when you can start with... interesting? 😏

**What's New:** I took an already spicy model and gave it the TensorRT treatment - think of it as putting racing stripes on a sports car, except the racing stripes actually make it go faster!

## 🔥 Why TensorRT + NVIDIA = Match Made in Silicon Heaven
TensorRT is NVIDIA's secret sauce for making AI models go BRRRRR on their GPUs. Under the hood it compiles the model into an engine tuned for your specific GPU - fusing layers, picking the fastest kernels, and running in lower precision like FP16. Think of it as a personal trainer for your neural networks - it takes your chubby, slow model and turns it into a lean, mean inference machine.
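Curious what the TensorRT treatment actually looks like? Here's a minimal sketch using the TensorRT Python API, assuming you already have an ONNX export of the SDXL UNet. The `unet.onnx` filename and the FP16 flag are illustrative, not necessarily how this particular engine was built - the ComfyUI TensorRT nodes handle all of this for you:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)

# Explicit-batch network definition (a no-op on recent TensorRT versions,
# required on older ones)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse the ONNX export of the UNet (hypothetical filename)
parser = trt.OnnxParser(network, logger)
with open("unet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

# Build an FP16 engine - this is where the layer fusion and kernel selection happen
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)

with open("unet.engine", "wb") as f:
    f.write(engine_bytes)
```

A real SDXL engine also needs an optimization profile for dynamic shapes (batch size, resolution, prompt length); that's the part the ComfyUI TensorRT nodes take care of for you.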
## 🎯 Intended Use
Perfect for when you need your diffusion models to have the artistic sensibilities of GurilaMash but the speed of a caffeinated hummingbird. Ideal for:
- Fast prototyping (when "fast" actually means fast)
- Production environments where waiting is not an option
- Impressing friends with unnecessarily quick generation times
- Testing the limits of your cooling system

## 🚨 Limitations & Warnings
- May cause your other models to feel inadequate about their generation speeds
- Side effects include: addiction to fast inference, chronic impatience with non-TensorRT models
- Not responsible for any existential crises your GPU might experience
- Still bound by the laws of physics (unfortunately)
- Hands? It's still SDXL, so... you can fix it
## 🚀 Quick Start Guide (Because Nobody Reads Manuals)
1. **Download the TensorRT Engine File** 📥 (take V3 for better quality)
   - Grab the `.engine` file and drop it in `\ComfyUI\output\tensorrt`
   - No unzipping, no fuss, just drag and drop like it's hot
2. **Get the Original Model Too** 🎭
   - Download the base GurilaMash XXX SDXL model
   - Put it in the `models\checkpoints` folder
   - This is for the FaceDetailer nodes (because faces need extra love)
3. **Download My Workflow** ⚙️
   - Grab the workflow file (probably a `.json`)
   - Import it into ComfyUI
   - Install any missing custom nodes (ComfyUI will yell at you about these)
4. **Profit!**
   - Enjoy TensorRT speeds that'll make your old workflows cry
   - Generate faster than your GPU can say "thermal throttling"
   - Bask in the glory of optimized inference

**Pro Tip:** If something breaks, you probably skipped step 2. The FaceDetailer needs the original model to work its magic! ✨ (Prefer scripting the file placement over drag and drop? There's a sketch right below.)
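If you'd rather script steps 1 and 2 than do them by hand, here's a minimal sketch. The install path and file names are placeholders - swap in wherever your ComfyUI lives and whatever the downloaded files are actually called:

```python
from pathlib import Path
import shutil

# Placeholders - point these at your own ComfyUI install and downloads folder
comfyui_root = Path(r"C:\ComfyUI")
downloads = Path.home() / "Downloads"
engine_file = downloads / "absynth_sdxxxl_v3.engine"             # hypothetical filename
checkpoint_file = downloads / "gurilamash_xxx_sdxl.safetensors"  # hypothetical filename

# Step 1: the .engine file goes into output\tensorrt (no unzipping needed)
engine_dir = comfyui_root / "output" / "tensorrt"
engine_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(engine_file, engine_dir / engine_file.name)

# Step 2: the original checkpoint goes into models\checkpoints (for FaceDetailer)
ckpt_dir = comfyui_root / "models" / "checkpoints"
ckpt_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(checkpoint_file, ckpt_dir / checkpoint_file.name)

print("Files in place - now import the workflow .json in ComfyUI and hit Queue.")
```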

## Capabilities
This model inherited all the... ahem... "artistic versatility" of its GurilaMash parent, but now with TensorRT turbo boost! It can generate:
- All the spicy content you'd expect (and probably some you didn't)
- The full spectrum of "artistic expression" at breakneck speeds
- Content that would make your grandmother clutch her pearls AND your GPU clutch its cooling fans
- Basically everything the original could do, but faster than you can say "NSFW"

## Subscribe if you like this

- https://www.youtube.com/@Electric-Dreamz
- https://github.com/Absynth-Vibe-Coding