---
title: AJ STUDIOZ DeepSeek API
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---
# 🚀 AJ STUDIOZ DeepSeek API

Enterprise-grade AI API powered by DeepSeek-R1-Distill-Qwen-1.5B - fast, reliable, and well suited to coding and reasoning tasks.
## ✨ Features

- 🧠 **Advanced Reasoning**: DeepSeek-R1 distilled reasoning capabilities
- 🎯 **Compact & Fast**: Just 1.5B parameters with strong performance
- 🔌 **Multi-API Support**: Claude, OpenAI, and simple chat formats
- 🚀 **Production Ready**: FastAPI with health monitoring
- 💰 **100% Free**: No usage charges (throughput limited by the HF free tier)
- 🌐 **24/7 Uptime**: Hosted on HuggingFace Spaces
## 🤖 Model Information

**DeepSeek-R1-Distill-Qwen-1.5B**

- **Size**: 1.5 billion parameters
- **Base**: Qwen architecture with DeepSeek reasoning distillation
- **Strengths**: Reasoning, coding, problem-solving, mathematics
- **Speed**: Fast inference (~2-3 seconds)
- **Context**: 4,096 tokens
## 📡 API Endpoints

### Simple Chat (No Auth Required)

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing"}'
```
### OpenAI Compatible

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer aj_test123" \
  -d '{
    "model": "aj-deepseek",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
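Because this endpoint follows the OpenAI chat-completions format, a plain `requests` call works too. A minimal sketch (the `build_request` helper and `ask_openai_compatible` are illustrative names, not part of the deployed app):

```python
import requests

API_URL = "https://kamesh14151-aj-deepseek-api.hf.space/v1/chat/completions"

def build_request(message, api_key="aj_test123", model="aj-deepseek"):
    """Assemble headers and body for the OpenAI-compatible endpoint."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = {"model": model, "messages": [{"role": "user", "content": message}]}
    return headers, body

def ask_openai_compatible(message):
    headers, body = build_request(message)
    # Generous timeout to ride out cold starts
    resp = requests.post(API_URL, headers=headers, json=body, timeout=60)
    resp.raise_for_status()
    # Parsing assumes the standard OpenAI `choices` response structure
    return resp.json()["choices"][0]["message"]["content"]
```

`ask_openai_compatible("Hello")` should return the assistant's reply as a string.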
### Claude Compatible

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/v1/messages \
  -H "x-api-key: sk-ant-test123" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
### Health Check

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/health
```
## 🎯 Response Format

```json
{
  "reply": "AI response here...",
  "model": "AJ-DeepSeek v1.0",
  "provider": "AJ STUDIOZ"
}
```
## 🔧 Setup & Deployment

### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run server
uvicorn app:app --host 0.0.0.0 --port 7860

# Test
curl http://localhost:7860/
```
### Deploy to HuggingFace Spaces

1. Create a new Space at https://huggingface.co/new-space
2. Choose the Docker SDK
3. Clone and push this repo:

```bash
git init
git add .
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/aj-deepseek-api
git push -u origin main
```
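Since the Space uses the Docker SDK, the repo also needs a `Dockerfile`. A minimal sketch, assuming the app lives in `app.py` next to `requirements.txt` (file names are assumptions; adjust to the actual repo layout):

```dockerfile
# Hypothetical minimal image for this Space
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# HuggingFace Spaces routes traffic to port 7860
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
```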
## 💡 Use Cases

- **Reasoning Tasks**: Solve complex problems with step-by-step logic
- **Code Generation**: Write Python, JavaScript, and more
- **Math & Science**: Solve equations, explain concepts
- **Question Answering**: Deep understanding of context
- **Educational**: Teaching and tutoring applications
- **Research**: Academic and technical research assistant
## 📊 Performance

- **Response Time**: 2-5 seconds (first request ~10s cold start)
- **Throughput**: ~20 requests/minute (HF free tier)
- **Availability**: 99.9% uptime
- **Cost**: $0 (HuggingFace free tier)
## 🔐 API Keys

For demo/testing, use any key with the correct format:

- OpenAI format: `aj_anything123`
- Claude format: `sk-ant-anything123`

For production, implement proper authentication in the code.
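One way to do that is a constant-time key check before handling each request. A minimal framework-free sketch (the key set and `check_api_key` helper are hypothetical, not the current code):

```python
import hmac

# Hypothetical key store; in production, load from an env var or secret store
VALID_KEYS = {"aj_prod_example_key"}

def check_api_key(authorization_header: str) -> bool:
    """Validate a 'Bearer <key>' header value against the known keys."""
    token = authorization_header.removeprefix("Bearer ").strip()
    # hmac.compare_digest avoids timing side channels when comparing secrets
    return any(hmac.compare_digest(token, key) for key in VALID_KEYS)
```

In the FastAPI app, this check could run as a dependency that raises a 401 on failure.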
## 🛠️ Tech Stack

- **Framework**: FastAPI 0.104.1
- **Server**: Uvicorn
- **Model**: DeepSeek-R1-Distill-Qwen-1.5B via HuggingFace Inference API
- **Deployment**: Docker on HuggingFace Spaces
- **API**: RESTful with OpenAPI docs
## 📚 Documentation

Auto-generated API docs are available at:

- Swagger UI: https://kamesh14151-aj-deepseek-api.hf.space/docs
- ReDoc: https://kamesh14151-aj-deepseek-api.hf.space/redoc
## 🎨 Integration Examples

### Python

```python
import requests

def ask_deepseek(message):
    response = requests.post(
        'https://kamesh14151-aj-deepseek-api.hf.space/chat',
        json={'message': message}
    )
    return response.json()['reply']

print(ask_deepseek("Write a quicksort in Python"))
```
### JavaScript

```javascript
async function askDeepSeek(message) {
  const response = await fetch(
    'https://kamesh14151-aj-deepseek-api.hf.space/chat',
    {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify({message})
    }
  );
  const data = await response.json();
  return data.reply;
}
```
### Node.js

```javascript
const axios = require('axios');

async function askDeepSeek(message) {
  const {data} = await axios.post(
    'https://kamesh14151-aj-deepseek-api.hf.space/chat',
    {message}
  );
  return data.reply;
}
```
## 📈 Model Comparison

| Model | Size | Speed | Reasoning | Code | Cost |
|---|---|---|---|---|---|
| DeepSeek-R1 1.5B | 1.5B | ⚡⚡⚡ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | FREE |
| Phi-3 Mini | 3.8B | ⚡⚡ | ⭐⭐⭐ | ⭐⭐⭐⭐ | FREE |
| Llama 3.2 3B | 3B | ⚡⚡ | ⭐⭐⭐ | ⭐⭐⭐ | FREE |
## 🐛 Troubleshooting

### Model Loading Error

- First request takes ~10s (cold start)
- Retry after a few seconds
- Check HuggingFace Spaces status

### Timeout

- Increase the timeout in your client
- The model might be loading (cold start)

### Wrong Response Format

- Ensure `Content-Type: application/json` is set
- Check the request body structure
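Cold-start failures and timeouts are usually transient, so a client-side retry with exponential backoff handles most of them. A minimal sketch (the `with_retries` helper and its default delays are illustrative):

```python
import time

def with_retries(fn, attempts=3, base_delay=2.0):
    """Call fn(), retrying with exponential backoff on any exception
    (useful for cold-start timeouts)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

For example: `with_retries(lambda: requests.post(url, json=payload, timeout=30))`.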
## 🤝 Contributing

Contributions welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Submit a pull request
## 📄 License

MIT License - free for commercial and personal use.
## 🌟 Credits

Developed by **AJ STUDIOZ**

- Website: https://ajstudioz.co.in
- GitHub: https://github.com/kamesh6592-cell
- HuggingFace: https://huggingface.co/kamesh14151

Powered by:

- **DeepSeek-AI**: Model developer
- **HuggingFace**: Hosting & Inference API
- **FastAPI**: Web framework

Made with ❤️ by AJ STUDIOZ | © 2025