This repository is a fork of the original almatkai/ingredientExtractor-Mistral-7b with custom GGUF quantizations tailored specifically to NeurochainAI's inference network. The models provided here are a core part of NeurochainAI's AI inference solutions.
NeurochainAI leverages these models to optimize and run inference across distributed networks, enabling efficient and robust language model processing across various platforms and devices.
Additionally, this repository includes custom LoRA adapters developed specifically for the Darkfrontiers and ImaginaryOnes game chatbots, enhancing AI interactions within those gaming environments.
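For illustration, below is a minimal sketch of running one of the GGUF quantizations locally with llama-cpp-python. The file names, prompt wording, and generation parameters are assumptions rather than values taken from this repository, and the commented-out lora_path line only indicates how a LoRA adapter could be applied in the same call.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file from
# this repository has been downloaded locally. File names below are hypothetical.
from llama_cpp import Llama

# Load an 8-bit GGUF quantization of the ingredient-extractor model.
llm = Llama(
    model_path="ingredient-extractor-mistral-7b-instruct-v0.1.Q8_0.gguf",  # hypothetical file name
    n_ctx=2048,  # context window size
    # lora_path="darkfrontiers-chatbot-lora.gguf",  # optionally apply a game-chatbot LoRA adapter (hypothetical file name)
)

# Mistral-Instruct models expect the [INST] ... [/INST] prompt format.
prompt = (
    "[INST] Extract the ingredients from the following recipe text as a list:\n"
    "Mix 2 cups of flour with a pinch of salt, then add 3 eggs and 250 ml of milk. [/INST]"
)

output = llm(prompt, max_tokens=256, temperature=0.0)
print(output["choices"][0]["text"])
```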
Quantization: 8-bit (GGUF)
Model tree for neurochainai/ingredient-extractor-mistral-7b-instruct-v0.1
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned: mistralai/Mistral-7B-Instruct-v0.1