Recent Activity

KingNish 
posted an update 5 months ago
What's currently the biggest gap in open-source datasets?
not-lain 
posted an update 10 months ago
We now have more than 2,000 public AI models using ModelHubMixin 🤗
not-lain 
posted an update 10 months ago
Published a new blogpost 📖
In this blogpost I walk through the transformer architecture, emphasizing how tensor shapes propagate through each layer.
🔗 https://huggingface.co/blog/not-lain/tensor-dims
Some interesting takeaways:
not-lain 
posted an update 12 months ago
Ever wondered how you can make an API call to a visual-question-answering model without sending an image URL? 👀

You can do that by converting your local image to base64 and sending that to the API.

Recently I made some changes to my library "loadimg" that make converting images to base64 a breeze.
🔗 https://github.com/not-lain/loadimg

API request example 🛠️:
from loadimg import load_img
from huggingface_hub import InferenceClient

# load an image (local path, URL, Pillow image, or numpy array) as base64
my_b64_img = load_img(imgPath_url_pillow_or_numpy, output_type="base64")

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
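For reference, the conversion step itself can be approximated with the standard library alone. A rough stdlib-only sketch (not loadimg's actual implementation) of encoding a local file as a base64 data URL, the form accepted in the image_url field above:

```python
import base64
from pathlib import Path

def to_base64_data_url(path: str, mime: str = "image/png") -> str:
    """Encode a local image file as a base64 data URL."""
    encoded = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

loadimg saves you from handling the MIME type and the different input formats (path, URL, Pillow, numpy) yourself.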
KingNish 
posted an update about 1 year ago
Exciting news! Introducing a super-fast AI video assistant, currently in beta, with a minimum latency under 500 ms and an average latency of just 600 ms.

DEMO LINK:
KingNish/Live-Video-Chat
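If you want to sanity-check latency figures like these yourself, a minimal sketch of timing repeated calls (the workload here is a stand-in, not the actual assistant):

```python
import statistics
import time

def measure_latency_ms(fn, runs: int = 20):
    """Call fn repeatedly and return (min, mean) latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return min(samples), statistics.mean(samples)

# stand-in workload; replace with a real request to the demo's endpoint
low, avg = measure_latency_ms(lambda: sum(range(10_000)))
```

Reporting both the minimum and the mean, as the post does, is more informative than a single number since network jitter skews individual samples.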
KingNish 
posted an update about 1 year ago
Mistral Nemo is better than many models at first-grade-level reasoning.
KingNish 
posted an update about 1 year ago
I am experimenting with Flux and trying to push it to its limits without training (as I am GPU-poor 😅).
I found some flaws in the pipeline, which I resolved, and I can now generate an image of roughly the same quality as Flux Schnell's 4-step output in just 1 step.
Demo Link:
KingNish/Realtime-FLUX

KingNish 
posted an update about 1 year ago
I am excited to announce a major speed update in Voicee, a super-fast voice assistant.

It has now achieved latency under 250 ms, with an average latency of about 500 ms.
KingNish/Voicee

This became possible thanks to the newly launched @sambanovasystems cloud.

You can also use your own API key to get the fastest speed.
You can get one here: https://cloud.sambanova.ai/apis

For optimal performance, use Google Chrome.

Please try Voicee and share your valuable feedback to help me further improve its performance and usability.
Thank you!
KingNish 
posted an update about 1 year ago
Introducing Voicee, a super-fast voice assistant.
KingNish/Voicee
It achieves latency under 500 ms, with an average latency of 700 ms.
It works best in Google Chrome.
Please try it and share your feedback.
Thank you. 🤗
not-lain 
posted an update about 1 year ago