saishshinde15 committed
Commit 0cb7aaf · verified · Parent(s): 01b886d

Update README.md

Files changed (1):
1. README.md (+5, -8)

README.md CHANGED

@@ -14,15 +14,15 @@ language:
 - en
 ---
 
-# TethysAI Base Reasoning (GGUF - Q4)
+# TBH.AI Base Reasoning (GGUF - Q4)
 
-- **Developed by:** TethysAI
+- **Developed by:** TBH.AI
 - **License:** apache-2.0
 - **Fine-tuned from:** Qwen/Qwen2.5-3B-Instruct
 - **GGUF Format:** 4-bit quantized (Q4) for optimized inference
 
 ## **Model Description**
-TethysAI Base Reasoning (GGUF - Q4) is a **4-bit GGUF quantized** version of `saishshinde15/TethysAI_Base_Reasoning`, a fine-tuned model based on **Qwen 2.5**. This version is designed for **high-efficiency inference on CPU/GPU with minimal memory usage**, making it ideal for on-device applications and low-latency AI systems.
+TBH.AI Base Reasoning (GGUF - Q4) is a **4-bit GGUF quantized** version of `saishshinde15/TBH.AI_Base_Reasoning`, a fine-tuned model based on **Qwen 2.5**. This version is designed for **high-efficiency inference on CPU/GPU with minimal memory usage**, making it ideal for on-device applications and low-latency AI systems.
 
 Trained using **GRPO (General Reinforcement with Policy Optimization)**, the model excels in **self-reasoning, logical deduction, and structured problem-solving**, comparable to **DeepSeek-R1**. The **Q4 quantization** ensures significantly lower memory requirements while maintaining strong reasoning performance.

@@ -43,7 +43,7 @@ Trained using **GRPO (General Reinforcement with Policy Optimization)**, the mod
 # Use this prompt for more detailed and personalized results. This is the recommended prompt as the model was tuned on it.
 
 ```python
-You are a reasoning model made by researcher at TethysAI and your role is to respond in the following format only and in detail :
+You are a reasoning model made by researcher at TBH.AI and your role is to respond in the following format only and in detail :
 
 <reasoning>
 ...

@@ -64,7 +64,4 @@ Respond in the following format:
 <answer>
 ...
 </answer>
-"""
-
-
-
+"""