Rakshit Aralimatti
RakshitAralimatti
AI & ML interests
Nvidia
Recent Activity
commented on their article · about 3 hours ago
I Built a RAG System That Listens to Live BBC News and Answers Questions About "What Happened 10 Minutes Ago"
replied to their post · about 3 hours ago
I built something crazy you've never seen before.
Please check: https://huggingface.co/blog/RakshitAralimatti/streaming-data-rag
A real-time streaming-data-to-RAG system that listens to live radio, transcribes it on the fly, and lets you query across TIME.
Not just "what was discussed", but "what happened in the last 10 minutes on channel 0?" or "at 9 AM, what was the breaking news?" This is RAG that understands temporal context.
reacted to their post with 🔥 · about 5 hours ago
commented on an article · about 3 hours ago
I Built a RAG System That Listens to Live BBC News and Answers Questions About "What Happened 10 Minutes Ago"
replied to their post · about 3 hours ago
Code on GitHub: https://github.com/rakshit2020/Live-Streaming-Data-RAG
upvoted an article · about 5 hours ago
Article
I Built a RAG System That Listens to Live BBC News and Answers Questions About "What Happened 10 Minutes Ago"
• 2
posted an update · about 8 hours ago
published an article · about 8 hours ago
Article
I Built a RAG System That Listens to Live BBC News and Answers Questions About "What Happened 10 Minutes Ago"
• 2
reacted to ovi054's post with 🔥 · 8 days ago
Post
6060
Introducing Anim Lab AI ⚡
My submission for the MCP 1st Birthday Hackathon
Turn any math concept or logic into a clear video explanation instantly using AI.
Try it now: MCP-1st-Birthday/anim-lab-ai
Demo outputs are attached.
upvoted an article · 12 days ago
Article
Continuous batching from first principles
• 259
replied to their post · 23 days ago
Modern OCR in healthcare is extremely reliable when implemented correctly. I've personally built OCR + RAG systems for healthcare clients, and the results have been remarkable.
posted an update · 26 days ago
Post
1369
OCR has absolutely blown up in 2025, and honestly, my perspective on document processing has completely changed.
This year has been wild. Vision Language Models like Nanonets OCR2-3B hit the scene, and suddenly we're getting accuracy on complex forms well beyond what traditional OCR could manage. We're talking handwritten checkboxes, watermarked documents, multi-column layouts, even LaTeX equations, all handled in a single pass.
The market numbers say it all: OCR accuracy passed 98% for printed text, AI integration is everywhere, and real-time processing is now standard. The entire OCR market is hitting $25.13 billion in 2025 because this tech actually works now.
I wrote a detailed Medium article walking through:
1. Why vision LMs changed the game
2. NVIDIA NeMo Retriever architecture
3. Complete code breakdown
4. Real government/healthcare use cases
5. Production deployment guide
Article: https://medium.com/@rakshitaralimatti2001/nvidia-nemo-retriever-ocr-building-document-intelligence-systems-for-enterprise-and-government-42a6684c37a1
Try It Yourself
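As a quick starting point (a hedged sketch, not the article's code): the transformers image-text-to-text pipeline can drive an OCR-style VLM with a chat-format prompt. The model id, image URL, and prompt below are assumptions; check the model card for the checkpoint you actually use and the prompt it expects.

```python
# Hedged sketch: prompting a vision-language OCR model via transformers.
# Model id and image URL are placeholders; swap in the checkpoint you want to test.
from transformers import pipeline

ocr = pipeline("image-text-to-text", model="nanonets/Nanonets-OCR2-3B")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/scanned_form.png"},  # placeholder
        {"type": "text", "text": "Extract all text from this document as markdown, "
                                 "preserving tables, checkboxes, and equations."},
    ],
}]

# return_full_text=False keeps only the model's reply (the extracted document text)
out = ocr(text=messages, max_new_tokens=1024, return_full_text=False)
print(out[0]["generated_text"])
```

The point of the chat-style prompt is that the same single pass can be steered toward markdown tables, checkbox states, or LaTeX, which is exactly where classical OCR pipelines fall apart.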
upvoted an article · about 2 months ago
Article
Supercharge your OCR Pipelines with Open Models
• 275
Article
SOTA OCR with Core ML and dots.ocr
• 62
Article
Deploy NVIDIA Riva ASR on Kubernetes GPU Cluster in Just 5 Minutes
• 1
reacted to prithivMLmods's post with 🔥 · 3 months ago
Post
3143
I'm a Hugging Face Fellow now, guys! 🤗❤️
With the same passion, trust, and momentum to contribute to the community, I'm excited to do some amazing things to wrap up Q3 and Q4 of 2025. And importantly, I've been lucky enough to receive some knowledge and guidance from @merve to build open-source demos and stuff. Thank you for the belief.
Thank you, much love.
Long live open source!
- Prithiv
published an article · 3 months ago
Article
Deploy NVIDIA Riva ASR on Kubernetes GPU Cluster in Just 5 Minutes
• 1
replied to andywu-kby's post · 3 months ago
I tried it, it's very COOL.
posted an update · 3 months ago
Post
261
Have you ever wanted to easily deploy a cutting-edge speech recognition system that actually works in real time? How about one powered by NVIDIA GPUs on Kubernetes, but without the headache of complicated installs?
Well, your wait is over! My latest blog shows how to deploy NVIDIA Riva ASR in just 5 minutes using Helm charts. From validating GPU readiness in Kubernetes to customizing your ASR models and spinning up the service, this guide covers it all.
Read it here - https://medium.com/@rakshitaralimatti2001/deploy-nvidia-riva-asr-on-kubernetes-gpu-ready-in-minutes-30955d6ed7b8
BONUS: I even built simple Streamlit apps so you can test with your mic or upload audio files to see the magic live.
✨ Bookmark this post and the blog for your next voice AI project or production-ready speech application!
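If you want a feel for the kind of test client mentioned above, here is a minimal sketch of a Streamlit upload-and-transcribe app. It assumes the nvidia-riva-client package and a Riva server reachable at localhost:50051; the config values follow the Riva Python client conventions, but double-check the client docs for your version.

```python
# streamlit_asr_test.py - illustrative upload-and-transcribe client, not the exact
# app from the blog. Assumes `pip install streamlit nvidia-riva-client` and a Riva
# ASR server reachable at localhost:50051 (e.g. the Helm deployment from the post).
import streamlit as st
import riva.client

RIVA_URI = "localhost:50051"  # adjust to your Service / Ingress address

st.title("Riva ASR quick test")
uploaded = st.file_uploader("Upload a 16 kHz mono WAV file", type=["wav"])

if uploaded is not None:
    audio_bytes = uploaded.read()
    st.audio(audio_bytes, format="audio/wav")

    auth = riva.client.Auth(uri=RIVA_URI)
    asr = riva.client.ASRService(auth)
    config = riva.client.RecognitionConfig(
        encoding=riva.client.AudioEncoding.LINEAR_PCM,
        sample_rate_hertz=16000,              # match the uploaded file
        language_code="en-US",
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )
    response = asr.offline_recognize(audio_bytes, config)
    transcript = " ".join(
        result.alternatives[0].transcript
        for result in response.results if result.alternatives
    )
    st.subheader("Transcript")
    st.write(transcript or "(no speech detected)")
```

Run it with `streamlit run streamlit_asr_test.py`; the mic-input variant mentioned in the post works the same way, just with streamed audio instead of a file upload.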
reacted to ACloudCenter's post with 🔥 · 3 months ago
Post
1848
I've really been into testing the various ASR, TTS, and other audio-related models. This space showcases the Nvidia Canary-Qwen 2.5B model. The model is able to transcribe incredibly fast and combine Qwen for queries about the transcript.
All audio example files were generated with my adjacent VibeVoice Conference Generator Space. Another really cool model!!
ACloudCenter/canary-qwen-transcriber-2.5b
reacted to codelion's post with 🔥 · 3 months ago
Post
6182
I recently worked on a LoRA that improves tool use in LLMs. Thought the approach might interest folks here.
The issue I have had when trying to use some of the local LLMs with coding agents is this:
Me: "Find all API endpoints with authentication in this codebase"
LLM: "You should look for @app .route decorators and check if they have auth middleware..."
But I often want it to search the files and show me but the LLM doesn't trigger a tool use call.
To fine-tune it for tool use I combined two data sources:
1. Magpie scenarios - 5000+ diverse tasks (bug hunting, refactoring, security audits)
2. Real execution - Ran these on actual repos (FastAPI, Django, React) to get authentic tool responses
This ensures the model learns both breadth (many scenarios) and depth (real tool behavior).
Tools We Taught:
- read_file - Actually read file contents
- search_files - Regex/pattern search across codebases
- find_definition - Locate classes/functions
- analyze_imports - Dependency tracking
- list_directory - Explore structure
- run_tests - Execute test suites
Improvements:
- Tool calling accuracy: 12% → 80%
- Correct parameters: 8% → 87%
- Multi-step tasks: 3% → 78%
- End-to-end completion: 5% → 80%
- Tools per task: 0.2 → 3.8
The LoRA really improves intentional tool calling. As an example, consider the query: "Find ValueError in payment module"
The response proceeds as follows:
1. Calls search_files with pattern "ValueError"
2. Gets 4 matches across 3 files
3. Calls read_file on each match
4. Analyzes context
5. Reports: "Found 3 ValueError instances: payment/processor.py:47 for invalid amount, payment/validator.py:23 for unsupported currency..."
Resources:
- Colab notebook https://colab.research.google.com/github/codelion/ellora/blob/main/Ellora_Recipe_3_Enhanced_Tool_Calling_and_Code_Understanding.ipynb
- Model - codelion/Llama-3.2-1B-Instruct-tool-calling-lora
- GitHub - https://github.com/codelion/ellora
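As a hedged illustration of trying the adapter (not the author's exact setup; the Colab above is the authoritative recipe), loading it with peft on what I assume is its Llama-3.2-1B-Instruct base looks roughly like this:

```python
# Sketch: load the tool-calling LoRA on top of its (assumed) base checkpoint.
# In real use you would wire this into a loop that parses the emitted tool call,
# executes the tool, and feeds the result back to the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"   # assumed base model
adapter_id = "codelion/Llama-3.2-1B-Instruct-tool-calling-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Find all API endpoints with authentication in this codebase"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
)
out = model.generate(input_ids, max_new_tokens=256)
# Print only the newly generated tokens (the model's tool-call / answer)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```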
reacted to codelion's post with 🔥 · 3 months ago
Post
5276
I wanted to share a technique that's been working really well for recovering performance after INT4 quantization.
Typically, quantizing the LLM to INT4 (unlike say INT8) for inference can incur some accuracy loss. Instead of accepting the quality loss, we used the FP16 model as a teacher to train a tiny LoRA adapter (rank=16) for the quantized model. The cool part: the model generates its own training data using the Magpie technique so no external datasets needed. This is critical because we want to remain as much as possible in the distribution of the model's natural responses.
Last year Apple's foundational models paper (https://arxiv.org/pdf/2407.21075) proposed a similar technique and found "By using accuracy-recovery LoRA adapters with only rank 16, Alpaca win rate can be improved by 7-18%, GSM8K accuracy is boosted by 5-10%." (page 47).
We saw similar results on Qwen3-0.6B:
Perplexity: 2.40 → 2.09 (only 5.7% degradation from FP16 baseline)
Memory: Only 0.28GB vs 1.0GB for FP16 (75% reduction)
Speed: 3.0x faster inference than FP16
Quality: Generates correct, optimized code solutions
- Pre-trained adapter: codelion/Qwen3-0.6B-accuracy-recovery-lora
- GitHub repo: https://github.com/codelion/ellora
Happy to answer questions about the implementation or help anyone trying to replicate this. The key insight is that quantization errors are systematic and learnable - a small adapter can bridge the gap without negating the benefits of quantization.
Has anyone else experimented with self-distillation for quantization recovery? Would love to hear about different approaches!
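For anyone wanting to try it, here is a minimal sketch of pairing the recovery adapter with a 4-bit base. The bitsandbytes NF4 settings are my assumption about the quantization scheme; the repo's README is the source of truth.

```python
# Sketch: apply the accuracy-recovery LoRA to a 4-bit quantized Qwen3-0.6B.
# Quantization config is illustrative (bitsandbytes NF4); the repo may use a
# different INT4 scheme, so check the GitHub README before relying on this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Qwen/Qwen3-0.6B"
adapter_id = "codelion/Qwen3-0.6B-accuracy-recovery-lora"

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
# Rank-16 LoRA trained via self-distillation from the FP16 teacher
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Write a function that reverses a linked list.",
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```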