Yesterday, we dropped a new conversational viewer for datasets on the Hub!
Being able to actually view and inspect your data is extremely important. This is a big step toward making data more accessible and actionable for everyone.
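If you'd rather poke at the same structure in code, here's a minimal sketch using the `datasets` library. The dataset name is just an example of a chat-formatted dataset; the viewer renders the same role/content turns as a conversation.

```python
# Minimal sketch: peek at a chat-formatted dataset locally.
# The dataset name is only an example of a conversational dataset.
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft", streaming=True)

row = next(iter(ds))
for message in row["messages"]:
    # Each turn is a dict with a "role" ("user"/"assistant") and "content".
    print(f'{message["role"]}: {message["content"][:80]}')
```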
hey hey @mradermacher - VB from Hugging Face here. We'd love to onboard you onto our optimised Xet backend!
As you know, we're in the process of upgrading our storage backend to Xet, which helps us scale and offer blazingly fast upload/download speeds: https://huggingface.co/blog/xet-on-the-hub. Now that we're certain the backend can scale even with big models like Llama 4 and Qwen 3, we're moving to the next phase of inviting impactful orgs and users on the Hub over. You're a big part of the open source ML community, so we'd love to onboard you next and create some excitement about it in the community too!
In terms of actual steps, it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.
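To give a sense of what "we'll take care of the rest" means in practice, here's a sketch of the usual upload flow once an org is on Xet: nothing changes in your code, since `huggingface_hub` routes transfers through Xet when the `hf_xet` package is installed. The repo ID and filename below are placeholders.

```python
# pip install -U huggingface_hub hf_xet
# Same upload code as before; transfers go through the Xet backend transparently.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="model-00001-of-00002.safetensors",
    path_in_repo="model-00001-of-00002.safetensors",
    repo_id="your-org/your-model",  # placeholder repo
    repo_type="model",
)
```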
If you haven't seen it yet, we just released Inference Providers
> 4 new serverless inference providers on the Hub
> Use your HF API key or personal key with all providers
> Chat with Deepseek R1, V3, and more on HF Hub
> We support Sambanova, TogetherAI, Replicate, and Fal.ai
Best of all, we don't charge any markup on top of the provider's pricing. Have you tried it out yet? HF Pro accounts get $2 of free usage for provider inference.
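Here's a rough sketch of what a call looks like through `huggingface_hub`, routed via the Hub with a regular HF token. The provider and model ID are examples; swap in any supported provider.

```python
# Chat with DeepSeek-R1 through an Inference Provider, billed via HF (no markup).
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="together",             # e.g. "sambanova", "replicate", "fal-ai"
    api_key=os.environ["HF_TOKEN"],  # a regular Hugging Face token works
)

completion = client.chat_completion(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Explain KV caching in one paragraph."}],
    max_tokens=512,
)
print(completion.choices[0].message.content)
```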
All the responses get saved in the cfahlgren1/react-code-instructions dataset. Hopefully we can build one of the biggest, highest-quality frontend datasets on the Hub.
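If you want to explore what's been collected so far, a quick way is to pull the dataset and inspect a row. The exact column names are an assumption here, so check the printed schema before relying on them.

```python
# Quick look at the collected responses; print the schema first,
# since the column layout is an assumption in this sketch.
from datasets import load_dataset

ds = load_dataset("cfahlgren1/react-code-instructions", split="train")
print(ds)     # shows features/columns and row count
print(ds[0])  # inspect one generated instruction/response pair
```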