---
title: AJ STUDIOZ DeepSeek API
emoji: πŸ€–
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---

# πŸš€ AJ STUDIOZ DeepSeek API

Enterprise-grade AI API powered by DeepSeek-R1-Distill-Qwen-1.5B: fast, reliable, and excellent for reasoning and coding tasks.


## ✨ Features

- 🧠 **Advanced Reasoning:** DeepSeek-R1 distilled reasoning capabilities
- 🎯 **Compact & Fast:** Only 1.5B parameters but powerful performance
- πŸ”„ **Multi-API Support:** Claude, OpenAI, and simple chat formats
- πŸš€ **Production Ready:** FastAPI with health monitoring
- πŸ’° **100% FREE:** Unlimited usage, no rate limits
- 🌐 **24/7 Uptime:** Hosted on HuggingFace Spaces

## πŸ€– Model Information

**DeepSeek-R1-Distill-Qwen-1.5B**

- **Size:** 1.5 billion parameters
- **Base:** Qwen architecture with DeepSeek reasoning distillation
- **Strengths:** Reasoning, coding, problem-solving, mathematics
- **Speed:** Fast inference (~2-3 seconds)
- **Context:** 4096 tokens

## πŸ“‘ API Endpoints

### Simple Chat (No Auth Required)

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing"}'
```

### OpenAI Compatible

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer aj_test123" \
  -d '{
    "model": "aj-deepseek",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

### Claude Compatible

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/v1/messages \
  -H "x-api-key: sk-ant-test123" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

### Health Check

```bash
curl https://kamesh14151-aj-deepseek-api.hf.space/health
```

## 🎯 Response Format

```json
{
  "reply": "AI response here...",
  "model": "AJ-DeepSeek v1.0",
  "provider": "AJ STUDIOZ"
}
```

## πŸ”§ Setup & Deployment

### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run server
uvicorn app:app --host 0.0.0.0 --port 7860

# Test
curl http://localhost:7860/
```

### Deploy to HuggingFace Spaces

1. Create a new Space at https://huggingface.co/new-space
2. Choose the Docker SDK
3. Clone and push this repo:

```bash
git init
git add .
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/aj-deepseek-api
git push -u origin main
```

## πŸ’‘ Use Cases

- **Reasoning Tasks:** Solve complex problems with step-by-step logic
- **Code Generation:** Write Python, JavaScript, and more
- **Math & Science:** Solve equations, explain concepts
- **Question Answering:** Deep understanding of context
- **Educational:** Teaching and tutoring applications
- **Research:** Academic and technical research assistant

## πŸ“Š Performance

- **Response Time:** 2-5 seconds (first request ~10s cold start)
- **Throughput:** ~20 requests/minute (HF Free tier)
- **Availability:** 99.9% uptime
- **Cost:** $0 forever

πŸ” API Keys

For demo/testing, use any key with correct format:

  • OpenAI format: aj_anything123
  • Claude format: sk-ant-anything123

For production, implement proper authentication in the code.
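The demo behavior described above (any key with the right prefix is accepted) can be expressed as a small helper; this is a minimal sketch, and the function name and prefix-only check are illustrative rather than part of the deployed app:

```python
def is_valid_demo_key(key: str, scheme: str) -> bool:
    """Prefix-only check matching the demo key formats above.

    In production, replace this with a real lookup against provisioned
    keys (e.g. a secrets store) and reject anything not explicitly issued.
    """
    prefixes = {"openai": "aj_", "claude": "sk-ant-"}
    prefix = prefixes.get(scheme)
    # Require the prefix plus at least one extra character after it.
    return prefix is not None and key.startswith(prefix) and len(key) > len(prefix)
```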

πŸ› οΈ Tech Stack

  • Framework: FastAPI 0.104.1
  • Server: Uvicorn
  • Model: DeepSeek-R1-Distill-Qwen-1.5B via HuggingFace Inference API
  • Deployment: Docker on HuggingFace Spaces
  • API: RESTful with OpenAPI docs

## πŸ“š Documentation

Auto-generated API docs are available at:

- **Swagger UI:** https://kamesh14151-aj-deepseek-api.hf.space/docs
- **ReDoc:** https://kamesh14151-aj-deepseek-api.hf.space/redoc

## 🎨 Integration Examples

### Python

```python
import requests

def ask_deepseek(message):
    response = requests.post(
        'https://kamesh14151-aj-deepseek-api.hf.space/chat',
        json={'message': message},
        timeout=60,  # allow for cold starts
    )
    response.raise_for_status()
    return response.json()['reply']

print(ask_deepseek("Write a quicksort in Python"))
```

### JavaScript

```javascript
async function askDeepSeek(message) {
  const response = await fetch(
    'https://kamesh14151-aj-deepseek-api.hf.space/chat',
    {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify({message})
    }
  );
  const data = await response.json();
  return data.reply;
}
```

### Node.js

```javascript
const axios = require('axios');

async function askDeepSeek(message) {
  const {data} = await axios.post(
    'https://kamesh14151-aj-deepseek-api.hf.space/chat',
    {message}
  );
  return data.reply;
}
```

## πŸ”„ Model Comparison

| Model | Size | Speed | Reasoning | Code | Cost |
|---|---|---|---|---|---|
| DeepSeek-R1 1.5B | 1.5B | ⚑⚑⚑ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | FREE |
| Phi-3 Mini | 3.8B | ⚑⚑ | ⭐⭐⭐ | ⭐⭐⭐⭐ | FREE |
| Llama 3.2 3B | 3B | ⚑⚑ | ⭐⭐⭐ | ⭐⭐⭐ | FREE |

πŸ› Troubleshooting

Model Loading Error

  • First request takes ~10s (cold start)
  • Retry after a few seconds
  • Check HuggingFace Spaces status

Timeout

  • Increase timeout in your client
  • Model might be loading (cold start)
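For example, a client can ride out cold starts by combining a generous timeout with a few retries. A minimal sketch, assuming the `/chat` endpoint from the examples above (the retry counts and delays are arbitrary starting points):

```python
import time
import requests

API_URL = "https://kamesh14151-aj-deepseek-api.hf.space/chat"

def ask_with_retry(message, attempts=3, timeout=60):
    """POST with a long timeout, retrying on timeouts and HTTP errors."""
    for i in range(attempts):
        try:
            r = requests.post(API_URL, json={"message": message}, timeout=timeout)
            r.raise_for_status()
            return r.json()["reply"]
        except (requests.Timeout, requests.HTTPError):
            if i == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(5 * (i + 1))  # back off before the next attempt
```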

### Wrong Response Format

- Ensure `Content-Type: application/json`
- Check the request body structure

## 🀝 Contributing

Contributions welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Submit a pull request

## πŸ“„ License

MIT License - free for commercial and personal use.

## πŸŽ‰ Credits

Developed by **AJ STUDIOZ**

Powered by:

- **DeepSeek-AI:** Model developer
- **HuggingFace:** Hosting & Inference API
- **FastAPI:** Web framework
Made with ❀️ by AJ STUDIOZ | © 2025