
Tonic posted an update 1 day ago
🙋🏻‍♂️ Hey there folks,

Since everyone liked my previous announcement post ( https://huggingface.co/posts/Tonic/338509028435394 ) so much, I'm back with more high-quality procedural datasets in the geospatial domain for SFT training!

Check this one out:
NuTonic/sat-bbox-metadata-sft-v1

The goal is to train vision models on multiple images for one-shot remote sensing analysis.

Hope you like it! 🚀
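For readers wondering what a one-shot, multi-image SFT record for this kind of task might look like, here is a minimal sketch. The field names and bbox layout below are my assumptions for illustration, not the dataset's actual schema; check the dataset card for the real format.

```python
# Hypothetical multi-image SFT sample for remote sensing analysis.
# "images", "messages", and the bbox layout are illustrative assumptions.
sample = {
    "images": ["tile_0.png", "tile_1.png", "tile_2.png"],  # one scene, several tiles
    "messages": [
        {"role": "user",
         "content": "Locate all storage tanks across the attached tiles."},
        {"role": "assistant",
         "content": "Tile 1 contains two storage tanks at [120, 80, 210, 170] "
                    "and [300, 64, 380, 150]; tiles 0 and 2 contain none."},
    ],
}

# A trainer would render each such record into one chat-formatted sequence,
# so the model learns to answer from multiple images in a single shot.
assert len(sample["images"]) == 3
```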
Tonic posted an update 7 days ago
🙋🏻‍♂️ Hey there folks,

I'm sharing Hugging Face's largest dataset of annotated satellite images today.

Check it out here: NuTonic/sat-image-boundingbox-sft-full

I hope you like it! The idea is to use this with small vision models 🚀
prithivMLmods posted an update 7 days ago
A collection of various compression schemes for Qwen3.6, along with the abliterated v1 dense models, is now available on the Hub. Check it out via the links below. 👇

🔗 Qwen3.6-MoE: https://huggingface.co/collections/prithivMLmods/qwen36-35b-a3b-compressions
🔗 Qwen3.6-27B Compressions: https://huggingface.co/collections/prithivMLmods/qwen36-27b-compressions

🤗 To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 12 days ago
HY-World-2.0, a multi-modal world model for reconstructing, generating, and simulating 3D worlds, is now available on Spaces. It works both as native Gradio components and in Gradio server mode.

> HY-World-2.0-Demo: prithivMLmods/HY-World-2.0-Demo
> HY-World-2.0 [Server Mode]: prithivMLmods/HY-World-2.0-Demo
> Featuring 3D reconstruction and Gaussian splats with the Rerun viewer, along with camera poses, depth maps, and surface normals.
> In Server Mode, Gradio is served via FastAPI, with FastAPI remaining the top-level server.
> Model: tencent/HY-World-2.0
> GitHub: https://github.com/PRITHIVSAKTHIUR/HY-World-2.0-Demo

🤗 To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 17 days ago
A new comparator on Spaces showcases Standard FLUX.2 Decoder vs. FLUX.2 Small Decoder. The Small Decoder is ~1.4× faster, uses ~1.4× less VRAM, and maintains near-identical image quality. It has ~28M parameters with narrower channels [96, 192, 384, 384] vs. [128, 256, 512, 512], and the demo supports sequence generation by running both decoders simultaneously and comparing the results side by side.
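As a back-of-envelope check on why narrower channels shrink the decoder (an illustration, not the official architecture): a 3×3 conv between adjacent stages holds roughly 9 × c_in × c_out weights, so each such layer shrinks by the product of the two width ratios.

```python
# Rough weight-count estimate for convs bridging adjacent decoder stages.
# This is a simplification of a real VAE decoder, used only to show how
# channel widths drive parameter count.
small = [96, 192, 384, 384]
standard = [128, 256, 512, 512]

def conv_params(channels, kernel=3):
    """Approximate weights of one kxk conv per pair of adjacent stages."""
    return sum(kernel * kernel * cin * cout
               for cin, cout in zip(channels, channels[1:]))

ratio = conv_params(standard) / conv_params(small)
print(f"standard/small per-layer weight ratio ~ {ratio:.2f}x")  # ~1.78x
```

Note that this ~1.78× weight ratio is larger than the reported ~1.4× speed/VRAM gains; runtime and memory also depend on activations, attention blocks, and other layers that don't scale the same way.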

🤗 Comparator: prithivMLmods/Flux.2-4B-Decoder-Comparator
🔗 FLUX.2-small-decoder: black-forest-labs/FLUX.2-small-decoder
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/Flux.2-4B-Encoder-Comparator
🚁 Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

🤗 App built on the Gradio SDK. To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 18 days ago
A collection of various compression schemes for Gemma 4, along with the abliterated v1 dense models, is now available on the Hub. Check it out via the links below. 👇

🔗 Gemma 4 Compression(s): https://huggingface.co/collections/prithivMLmods/gemma-4-compressions
🔗 Gemma 4 Uncensored [MAX] + Compression(s) [β]: https://huggingface.co/collections/prithivMLmods/gemma-4-uncensored-max-compressions
🔗 Gemma 4 Compression(s), MoE: https://huggingface.co/collections/prithivMLmods/gemma-4-compressions-moe
🔗 Gemma-4 F32 GGUF: https://huggingface.co/collections/prithivMLmods/gemma-4-f32-gguf

🤗 To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 21 days ago
Now the demo for image detection based on SAM3 and Gemma-4 (*Filter) is available on Spaces, using full-fledged Transformers inference with multimodal reasoning for processed images. It also supports video segmentation (mask), video segmentation (annotation), and image click segmentation.

🤗 Demo Space: prithivMLmods/SAM3-Gemma4-CUDA
🥽 SAM3: facebook/sam3
🔗 gemma-4-E2B-it: google/gemma-4-E2B-it

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 24 days ago
The demo for image detection (*Filter) based on SAM3 and Qwen-3.5 is now available on Hugging Face Spaces using Transformers inference, with multimodal reasoning for processed images. It also supports video segmentation (mask), video segmentation (annotation), and image click segmentation.

🤗 Demo Space: prithivMLmods/SAM3-Plus-Qwen3.5
🥽 SAM3: facebook/sam3
🔗 Qwen-3.5: Qwen/Qwen3.5-2B

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 1 month ago
Flux-Klein-KV-Edit-Consistency demo is now available on Spaces. It preserves character identity and delivers high-quality, realistic results after edits. No special prompts are needed: just upload the image, type your prompt, and get the resulting image blazing fast.

🔥 Demo Space: prithivMLmods/flux-klein-kv-edit-consistency
🤗 Model: black-forest-labs/FLUX.2-klein-9b-kv
🤗 Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
🔗 Gradio Server Mode: https://www.gradio.app/main/guides/server-mode

➔ Built with Headless Gradio, an alternative to using gr.Blocks for creating the frontend and triggering events, powered by FastAPI + Gradio. You can now design the frontend however you want, with continued support for APIs, MCP, and ZeroGPU.

➔ Gradio Server Mode is now available from gradio@v6.10.0.

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 1 month ago
Map-Anything v1 (Universal Feed-Forward Metric 3D Reconstruction) demo is now available on Hugging Face Spaces. Built with Gradio and integrated with Rerun, it performs multi-image and video-based 3D reconstruction, depth and normal map estimation, and interactive measurements.

🤗 Demo: prithivMLmods/Map-Anything-v1
🤗 Model: facebook/map-anything-v1
🤗 HF Paper: MapAnything: Universal Feed-Forward Metric 3D Reconstruction (2509.13414)
prithivMLmods posted an update about 1 month ago
Introducing QIE-Bbox-Studio! 🔥🤗

The QIE-Bbox-Studio demo is now live: more precise and packed with more options. Users can manipulate images with object removal, design addition, and even move objects from one place to another, all with fast 4-step inference.

🤗 Demo: prithivMLmods/QIE-Bbox-Studio
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/QIE-Bbox-Studio

🚀 Models [LoRA]:

● QIE-2511-Object-Mover-Bbox: prithivMLmods/QIE-2511-Object-Mover-Bbox
● QIE-2511-Object-Remover-Bbox-v3: prithivMLmods/QIE-2511-Object-Remover-Bbox-v3
● QIE-2511-Outfit-Design-Layout: prithivMLmods/QIE-2511-Outfit-Design-Layout
● QIE-2509-Object-Remover-Bbox-v3: prithivMLmods/QIE-2509-Object-Remover-Bbox-v3
● QIE-2509-Object-Mover-Bbox: prithivMLmods/QIE-2509-Object-Mover-Bbox

🚀 Collection:

● Qwen Image Edit [Layout Bbox]: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 2 months ago
QIE-2509-Object-Remover-Bbox-v3 is a more stable version of the Qwen Image Edit visual grounding-based object removal model. The app was previously featured in HF Spaces of the Week and is now updated with the latest Bbox-v3 LoRA adapter.

🤗 Demo: prithivMLmods/QIE-Object-Remover-Bbox
🤗 LoRA: prithivMLmods/QIE-2509-Object-Remover-Bbox-v3
🤗 Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox

To learn more, visit the app page or the respective model pages.
sweatSmile posted an update about 2 months ago
Just published a hands-on guide on building a Kubernetes cluster from scratch on AWS EC2 using kubeadm: no managed services, no shortcuts.

If you want to truly understand how the control plane and workers communicate, how pod networking works with Flannel, and how to lock down access with security groups, then this is the kind of exercise that makes it click.

The guide covers a full 3-node setup (1 control plane + 2 workers) on Amazon Linux 2023, from instance provisioning all the way to deploying your first workload.

Read it here 👉 https://www.amitchoubey.dev/posts/kubernetes-cluster-aws-ec2-kubeadm/
prithivMLmods posted an update about 2 months ago
The Qwen3.5 Multimodal Understanding Demo, powered by Qwen3.5-2B, is now available on HF Spaces! It is a lightweight model designed for fast image and video reasoning. Built with Gradio, the demo showcases Image QA, Video QA, object detection, and 2D point tracking, along with real-time token streaming.

🤗 Demo: prithivMLmods/Qwen-3.5-HF-Demo
✅ Collection: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
🔗 Qwen3.5-2B: Qwen/Qwen3.5-2B

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update about 2 months ago
QIE-Object-Remover-Bbox Demo removes objects and artifacts from selected regions using bounding box grounding. Built on Qwen-Image-Edit-2509 with Rapid Diffusers acceleration, it delivers fast 4-step inference via the QIE-2509 adapter. 🤗🔥
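To make "bounding box grounding" concrete, here is a hypothetical helper that embeds a normalized pixel-space box into an edit instruction. The exact token format the adapter expects is an assumption for illustration; check the model card for the real prompt layout.

```python
# Hypothetical bbox-grounding helper (the "<x1,y1,x2,y2>" format is an
# assumption, not the adapter's documented API): normalize a pixel box
# to [0, 1] and embed it in the edit instruction.
def bbox_grounded_prompt(instruction, box, width, height):
    x1, y1, x2, y2 = box
    norm = [round(x1 / width, 3), round(y1 / height, 3),
            round(x2 / width, 3), round(y2 / height, 3)]
    return f"{instruction} in region <{norm[0]},{norm[1]},{norm[2]},{norm[3]}>"

prompt = bbox_grounded_prompt("remove the object", (128, 64, 512, 448), 1024, 1024)
print(prompt)  # remove the object in region <0.125,0.062,0.5,0.438>
```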

🔗 Demo Space: prithivMLmods/QIE-Object-Remover-Bbox
🔗 Qwen-Image-Edit-Rapid-AIO: prithivMLmods/Qwen-Image-Edit-Rapid-AIO-V4
🔗 Adapter (LoRA): prithivMLmods/QIE-2509-Object-Remover-Bbox

🔗 Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox

To learn more, visit the app page or the respective model pages.
prithivMLmods posted an update 2 months ago
FireRed-Image-Edit-1.0 (Rapid) Fast Experimental Demo is Out! 🚀🤗

Demo: prithivMLmods/FireRed-Image-Edit-1.0-Fast

-> Paired the EditPlusPipeline with the Diffusers-compatible transformer weights of Rapid AIO from Qwen-Image-Edit. (experimental)
-> This fusion delivers more accurate instruction following, higher image quality, and consistent visual coherence at fast 4-step inference.
-> Better maintains text styles with high fidelity, along with high-quality old photo restoration, enhancement, and best-in-class virtual try-on.

Tonic posted an update 2 months ago
🤔 Who would win?

- a fully subsidized AI lab
OR
- 3 random students named
kurakurai
?

Demo: Tonic/fr-on-device

If you like it, give the demo a little star and send a shoutout to @MaxLSB, @jddqd, and @GAD-cell for absolutely obliterating the Pareto frontier of French language understanding.
prithivMLmods posted an update 2 months ago
Tonic posted an update 2 months ago
🙋🏻‍♂️ Hello my lovelies,

It is with great pleasure that I present my working one-click-deploy, 16 GB RAM, completely free Hugging Face Spaces deployment.

Repo: Tonic/hugging-claw (use git clone to inspect)
Literally the one-click link: Tonic/hugging-claw

You can also run it locally and see for yourself:

docker run -it -p 7860:7860 --platform=linux/amd64 \
-e HF_TOKEN="YOUR_VALUE_HERE" \
-e OPENCLAW_GATEWAY_TRUSTED_PROXIES="YOUR_VALUE_HERE" \
-e OPENCLAW_GATEWAY_PASSWORD="YOUR_VALUE_HERE" \
-e OPENCLAW_CONTROL_UI_ALLOWED_ORIGINS="YOUR_VALUE_HERE" \
registry.hf.space/tonic-hugging-claw:latest


There are just a few minor details I'll take care of, but I wanted to share here first.
prithivMLmods posted an update 2 months ago