---
title: AJ STUDIOZ DeepSeek API
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---

# 🚀 AJ STUDIOZ DeepSeek API

Enterprise-grade AI API powered by **DeepSeek-R1-Distill-Qwen-1.5B** - fast, reliable, and excellent for coding and reasoning tasks.

![Status](https://img.shields.io/badge/Status-Online-success?style=for-the-badge)
![Model](https://img.shields.io/badge/Model-DeepSeek--R1-blue?style=for-the-badge)
![Free](https://img.shields.io/badge/Price-FREE-green?style=for-the-badge)

## ✨ Features

- 🧠 **Advanced Reasoning**: DeepSeek-R1 distilled reasoning capabilities
- 🎯 **Compact & Fast**: Only 1.5B parameters, yet strong performance
- 🔄 **Multi-API Support**: Claude, OpenAI, and simple chat formats
- 🚀 **Production Ready**: FastAPI with health monitoring
- 💰 **100% FREE**: No API fees (throughput bounded only by the HF Free tier)
- 🌐 **24/7 Uptime**: Hosted on HuggingFace Spaces

## 🤖 Model Information

**DeepSeek-R1-Distill-Qwen-1.5B**

- Size: 1.5 billion parameters
- Base: Qwen architecture with DeepSeek reasoning distillation
- Strengths: Reasoning, coding, problem-solving, mathematics
- Speed: Fast inference (~2-3 seconds)
- Context: 4096 tokens

## 📡 API Endpoints

### Simple Chat (No Auth Required)

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing"}'
```

### OpenAI Compatible

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer aj_test123" \
  -d '{
    "model": "aj-deepseek",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

### Claude Compatible

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/v1/messages \
  -H "x-api-key: sk-ant-test123" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

### Health Check

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/health
```

## 🎯 Response Format

```json
{
  "reply": "AI response here...",
  "model": "AJ-DeepSeek v1.0",
  "provider": "AJ STUDIOZ"
}
```

## 🔧 Setup & Deployment

### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run server
uvicorn app:app --host 0.0.0.0 --port 7860

# Test
curl http://localhost:7860/
```

### Deploy to HuggingFace Spaces

1. Create a new Space at https://huggingface.co/new-space
2. Choose the Docker SDK
3. Clone and push this repo:

```bash
git init
git add .
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/aj-deepseek-api
git push -u origin main
```

## 💡 Use Cases

- **Reasoning Tasks**: Solve complex problems with step-by-step logic
- **Code Generation**: Write Python, JavaScript, and more
- **Math & Science**: Solve equations, explain concepts
- **Question Answering**: Deep understanding of context
- **Educational**: Teaching and tutoring applications
- **Research**: Academic and technical research assistant

## 📊 Performance

- **Response Time**: 2-5 seconds (first request ~10s due to cold start)
- **Throughput**: ~20 requests/minute (HF Free tier)
- **Availability**: 99.9% uptime
- **Cost**: $0 forever

## 🔐 API Keys

For demo/testing, use any key with the correct format:

- OpenAI format: `aj_anything123`
- Claude format: `sk-ant-anything123`

For production, implement proper authentication in the code.
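
The demo keys above are only checked for their prefix, not their value. If you harden the Space for production, key validation might look like the following sketch (names such as `AJ_API_KEYS`, `is_demo_format`, and `is_valid_key` are illustrative, not taken from the actual `app.py`):

```python
# Hypothetical production key check - NOT part of the shipped app.py.
# Keys are loaded from an environment variable (e.g. a Space secret)
# and compared in constant time to avoid timing attacks.
import hmac
import os

# e.g. set AJ_API_KEYS="aj_prodkey1,aj_prodkey2" as a Space secret
VALID_KEYS = set(filter(None, os.environ.get("AJ_API_KEYS", "").split(",")))

def is_demo_format(key: str) -> bool:
    """Demo mode: accept any key with one of the documented prefixes."""
    return key.startswith("aj_") or key.startswith("sk-ant-")

def is_valid_key(key: str) -> bool:
    """Production mode: constant-time comparison against known keys."""
    return any(hmac.compare_digest(key, k) for k in VALID_KEYS)
```

In the FastAPI app this would typically be wired in as a dependency that reads the `Authorization` or `x-api-key` header and raises `HTTPException(status_code=401)` when the check fails.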
## ๐Ÿ› ๏ธ Tech Stack - **Framework**: FastAPI 0.104.1 - **Server**: Uvicorn - **Model**: DeepSeek-R1-Distill-Qwen-1.5B via HuggingFace Inference API - **Deployment**: Docker on HuggingFace Spaces - **API**: RESTful with OpenAPI docs ## ๐Ÿ“š Documentation Auto-generated API docs available at: - Swagger UI: `https://kamesh14151-aj-deepseek-api.hf.space/docs` - ReDoc: `https://kamesh14151-aj-deepseek-api.hf.space/redoc` ## ๐ŸŽจ Integration Examples ### Python ```python import requests def ask_deepseek(message): response = requests.post( 'https://kamesh14151-aj-deepseek-api.hf.space/chat', json={'message': message} ) return response.json()['reply'] print(ask_deepseek("Write a quicksort in Python")) ``` ### JavaScript ```javascript async function askDeepSeek(message) { const response = await fetch( 'https://kamesh14151-aj-deepseek-api.hf.space/chat', { method: 'POST', headers: {'Content-Type': 'application/json'}, body: JSON.stringify({message}) } ); const data = await response.json(); return data.reply; } ``` ### Node.js ```javascript const axios = require('axios'); async function askDeepSeek(message) { const {data} = await axios.post( 'https://kamesh14151-aj-deepseek-api.hf.space/chat', {message} ); return data.reply; } ``` ## ๐Ÿ”„ Model Comparison | Model | Size | Speed | Reasoning | Code | Cost | |-------|------|-------|-----------|------|------| | **DeepSeek-R1 1.5B** | 1.5B | โšกโšกโšก | โญโญโญโญ | โญโญโญโญ | FREE | | Phi-3 Mini | 3.8B | โšกโšก | โญโญโญ | โญโญโญโญ | FREE | | Llama 3.2 3B | 3B | โšกโšก | โญโญโญ | โญโญโญ | FREE | ## ๐Ÿ› Troubleshooting ### Model Loading Error - First request takes ~10s (cold start) - Retry after a few seconds - Check HuggingFace Spaces status ### Timeout - Increase timeout in your client - Model might be loading (cold start) ### Wrong Response Format - Ensure Content-Type: application/json - Check request body structure ## ๐Ÿค Contributing Contributions welcome! Please: 1. Fork the repository 2. 
Create feature branch 3. Submit pull request ## ๐Ÿ“„ License MIT License - Free for commercial and personal use ## ๐ŸŽ‰ Credits **Developed by AJ STUDIOZ** - Website: https://ajstudioz.co.in - GitHub: https://github.com/kamesh6592-cell - HuggingFace: https://huggingface.co/kamesh14151 **Powered by:** - DeepSeek-AI: Model developer - HuggingFace: Hosting & Inference API - FastAPI: Web framework --- **Made with โค๏ธ by AJ STUDIOZ | ยฉ 2025**