All HF Hub posts

SeanLee97 posted an update 3 days ago
Our lab recently released a paper where we introduce ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios.

Unlike traditional approaches such as LoRA and its variants, which inject trainable parameters directly into the Transformer's weights and are therefore tightly coupled with the backbone, ShadowPEFT enhances the frozen large base model by adding a lightweight, centralized, pretrainable, and detachable Shadow network.

This shadow network operates in parallel with the base model, delivering learned corrections to each decoder layer. Because the shadow module is architecturally decoupled from the backbone, it can be independently trained, stored, and deployed, which benefits edge computing and edge-cloud collaborative computing.
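As a rough sketch of the idea (the module names, shapes, and zero-initialization below are illustrative assumptions, not the paper's actual design): a frozen backbone whose decoder layers each receive an additive correction from a separately stored shadow module.

```python
# Hypothetical sketch of the ShadowPEFT idea described above: a frozen
# backbone plus a small, detachable "shadow" module that adds a learned
# correction to each decoder layer's output. Names and shapes are
# illustrative, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_shadow = 8, 2  # tiny dimensions for illustration

class FrozenDecoderLayer:
    """Stands in for one frozen Transformer decoder layer."""
    def __init__(self):
        self.W = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    def __call__(self, x):
        return np.tanh(x @ self.W)  # weights are never updated

class ShadowModule:
    """Trainable low-dimensional correction, decoupled from the backbone."""
    def __init__(self):
        self.down = np.zeros((d_model, d_shadow))  # zero-init => no-op at start
        self.up = rng.standard_normal((d_shadow, d_model)) * 0.01
    def __call__(self, x):
        return (x @ self.down) @ self.up

layers = [FrozenDecoderLayer() for _ in range(3)]
shadows = [ShadowModule() for _ in range(3)]  # can be stored/deployed separately

def forward(x, use_shadow=True):
    for layer, shadow in zip(layers, shadows):
        h = layer(x)
        x = h + shadow(x) if use_shadow else h  # parallel per-layer correction
    return x

x = rng.standard_normal(d_model)
```

Because the shadow modules start as a no-op, they can be attached or detached without disturbing the frozen base model's behavior.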

- HF Paper: ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning (2604.19254)
- GitHub: https://github.com/ShadowLLM/shadow-peft
- HF Collection: https://huggingface.co/collections/shadow-llm/shadow-peft-models

imnotkitty posted an update 1 day ago
tencent/Hy3-preview is out: an open-weights MoE reasoning model.

✅ 295B total / 21B active / 256K context
✅ Fused fast-and-slow thinking in a single model
✅ First model trained on Hunyuan's rebuilt pretraining + RL infra (Feb → Apr)

Benchmarks:
👉 SWE-Bench Verified, Terminal-Bench 2.0, BrowseComp, WideSearch: competitive results, particularly strong on agentic tool use
👉 Top score on Tsinghua's 2026 Spring math PhD qualifying exam
👉 Strong context learning and instruction following on Tencent's CL-bench / CL-bench-Life

More details can be found in my article: https://huggingface.co/blog/imnotkitty/hy3-preview

SeaWolf-AI posted an update about 4 hours ago
🧬 Introducing Darwin-9B-NEG: the first model with Native Entropy Gating (NEG)

🔗 Try it now: FINAL-Bench/Darwin-9B-NEG

We're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model that embeds an architecturally internalised sense of self-confidence directly into the transformer: our proprietary Native Entropy Gating (NEG) technology.

📊 GPQA Diamond (198 PhD-level questions):

▸ Baseline Darwin-9B (no NEG) → 51.01%
▸ Pure NEG (greedy · 1× cost) → 63.64% 🔥 +12.63 %p
▸ + Permutation (4× cost) → 76.26%
▸ + Ensemble Refinement (~20×) → 84.34% 🏆

With only 9 billion parameters and 1× inference cost, Pure NEG jumps +12.63 %p over the same model without NEG. Going all-in with ensemble refinement pushes it to 84.34%, surpassing the published Qwen3.5-9B leaderboard score (81.7%) by +2.64 %p.

🔬 What makes NEG different from Multi-Turn Iteration (MTI)?

Classical MTI needs 3-8× extra inference passes. NEG instead lives INSIDE the single decoding loop. Two tiny modules ride with the transformer: NEG-Head predicts per-token entropy from the last hidden state, and NEG-Gate conditionally restricts the top-k choice when confidence is low. The gate activates on only 4.36% of tokens, so it is essentially free at inference time.
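The gating idea described above can be sketched roughly as follows. This is an illustrative reconstruction, not SeaWolf-AI's actual NEG code: the entropy threshold, k values, and function names are invented for the example, and entropy is computed from the logits rather than predicted by a learned head.

```python
# Illustrative sketch of entropy-based top-k gating (not the actual NEG
# implementation): estimate per-token entropy of the next-token distribution,
# and shrink the top-k candidate set when confidence is low.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def neg_gated_topk(logits, k=40, gated_k=5, threshold=1.0):
    """Return candidate token ids plus the measured entropy: the full top-k
    when the model is confident, a restricted top-k when entropy is high."""
    probs = softmax(logits)
    h = entropy(probs)
    effective_k = gated_k if h > threshold else k  # gate fires on low confidence
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    return order[:effective_k], h

# A peaked distribution: low entropy, the gate stays open (full top-k).
ids, h = neg_gated_topk([9.0, 1.0, 0.5, 0.2, 0.1, 0.0], k=6, gated_k=2)
assert len(ids) == 6 and h < 1.0

# A flat distribution: high entropy (ln 6 ≈ 1.79), the gate restricts choices.
ids, h = neg_gated_topk([1.0] * 6, k=6, gated_k=2)
assert len(ids) == 2 and h > 1.0
```

In a real decoder the gate would run once per generated token, which is why its cost is negligible relative to the forward pass.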

✨ Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3-8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers, no extra engine
• +12.63 %p reasoning gain at zero latency overhead
• Single-file deployment, Apache 2.0 licensed

🧬 Lineage
Qwen/Qwen3.5-9B → Darwin-9B-Opus (V7 evolutionary merge) → Darwin-9B-NEG (V8 + NEG training)

#Darwin #NEG #NativeEntropyGating #GPQA #Reasoning #LLM #OpenSource #Apache2

Benedictat posted an update 1 day ago
Built a WeChat Mini Program in 20 minutes flat with Hy3 Preview + WorkBuddy…

and I didn't type a single line of code. Not even a semicolon.

This Coding Agent is on steroids. Its comprehension in long back-and-forths is night and day better, and that 256K context window swallows the entire project structure whole.

Tell it what you want, and it actually gets the full picture: no confused blank stares from the AI.

And we're not messing around with dinky little code snippets here. It spits out a fully functional project:

app.json, every page's wxml/wxss/js/json, even mock data pre-packed. Import it into WeChat Dev Tools and it runs on the first try.

Only one tiny visual nitpick, zero logic bugs. Point out the flaw and it fixes it instantly: no new bugs, no passive-aggressive code breaks, no headaches.

The entire vibe: tell it your idea → get a complete working project → mention a tiny flaw → the AI polishes it.

No coding, no endless edits, no soul-crushing debugging that makes you want to throw your laptop. Absolute game-changer.

Ujjwal-Tyagi posted an update 3 days ago
We are hiring at Shirova AI. We need AI researchers and engineers for our research lab. Shirova AI is a research lab based in India; we can help our researchers relocate to nearby workspaces or let them work from home without ever coming to the lab. We're building our founding team, so the pay will be good and there is plenty to learn. Don't hesitate to mail us at: careers@shirova.com

kelsend posted an update 1 day ago
The rebuilt Hunyuan HY3 Preview is here!

I tested it on all the tricky scenarios where most LLMs usually face-plant, and guess what? It didn't flop.

295B total params, 21B active params, 256K context window. Built on MoE architecture, it delivers trillion-parameter-level performance with a much smaller footprint. Long-context capabilities get a massive upgrade.

Agent abilities stand out this time: tool calling, workflow orchestration, and autonomous planning are far more stable in real business scenarios. AI PPT generation in Tencent Docs is also significantly smoother and more reliable.

Real-world tests on WorkBuddy show first-token latency down 54%, success rate over 99.99%, and an Agent workflow that ran continuously for 495 steps.

Its Coding Agent achieved top-tier results on both SWE-Bench Verified and Terminal-Bench 2.0.

Now open-sourced on GitHub, HuggingFace, and ModelScope. Available on TokenHub at just 1.2 RMB per million tokens.

wangbuer999 posted an update 1 day ago
Testing AI controlling AI with Hy3 Preview: I barely lifted a finger the whole time.

One-click deployment of Hermes on WorkBuddy took some time and a few rounds of adjustments, but I finally got it up and running smoothly.

The only minor issue was setting up Supermemory; it was a bit slow on the uptake. I had to go over simple steps several times, guiding it patiently like teaching a kid.

The experience of AI orchestrating AI is absolutely incredible. I started running Agents with Hunyuan right after its release, and it actually works perfectly.

295B parameters, 21B active parameters, with direct access to TokenHub now. Great cost-performance ratio too.

Honestly, I used to get stuck on all kinds of environment configurations when deploying Agents locally. Using Hy3 to take command made the whole process way more streamlined.

Tonic posted an update about 24 hours ago
🙋🏻‍♂️ Hey there folks,

I'm sharing Hugging Face's largest dataset of annotated satellite images today.

Check it out here: NuTonic/sat-image-boundingbox-sft-full

I hope you like it. The idea is to be able to use this with small vision models 🚀
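As a hedged illustration of the "use this with small vision models" idea, one common pattern is to serialize each bounding box into a normalized text target for SFT. The field names below (`annotations`, `label`, `bbox`) and the `<box>` tag convention are assumptions for the sketch, not necessarily this dataset's actual schema; check the dataset card.

```python
# Hypothetical sketch: turning a bounding-box annotation into an SFT text
# target for a small vision-language model. Field names are illustrative.
def boxes_to_target(annotations, width, height):
    """Serialize boxes as <box> tags with coordinates normalized to 0-1000,
    a convention several VLMs use for grounding targets."""
    parts = []
    for ann in annotations:
        x0, y0, x1, y1 = ann["bbox"]
        norm = [round(1000 * x0 / width), round(1000 * y0 / height),
                round(1000 * x1 / width), round(1000 * y1 / height)]
        parts.append(f"{ann['label']}<box>({norm[0]},{norm[1]}),({norm[2]},{norm[3]})</box>")
    return " ".join(parts)

# Tiny made-up sample in the assumed schema.
sample = {"width": 512, "height": 512,
          "annotations": [{"label": "building", "bbox": [64, 64, 192, 256]}]}
target = boxes_to_target(sample["annotations"], sample["width"], sample["height"])
assert target == "building<box>(125,125),(375,500)</box>"
```

Normalizing to a fixed range keeps the text targets resolution-independent, which matters when mixing satellite tiles of different sizes.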

anakin87 posted an update 1 day ago
A small model that struggled against a random opponent now beats GPT-5-mini at tic-tac-toe.

I took LiquidAI/LFM2-2.6B and trained it through play.

🧑‍🍳 Here's how:

1๏ธโƒฃ Build a solid RL env with Verifiers (Prime Intellect)
2๏ธโƒฃ Generate synthetic data: <200 games sampled from GPT-5-mini playing in the env
3๏ธโƒฃ SFT warm-up to teach format
4๏ธโƒฃ Group-based RL (CISPO) against opponents making 20-70% random moves
5๏ธโƒฃ RL again with stronger opponents (0-25% random moves) + 1.25 temperature to push exploration and shake off suboptimal strategies

Done! Beats GPT-5-mini ๐Ÿ†
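The staged opponents in steps 4-5 can be sketched as an epsilon-mixed policy: play a scripted move most of the time, a random legal move with probability eps, and anneal eps down across RL stages. This is a hypothetical illustration, not the actual Verifiers environment, and `scripted_move` is an invented heuristic.

```python
# Hypothetical sketch of the staged tic-tac-toe opponents described above:
# a policy that plays a scripted "good" move with probability (1 - eps) and
# a uniformly random legal move otherwise. eps is annealed across RL stages
# (0.2-0.7 in the first stage, 0.0-0.25 in the second).
import random

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def scripted_move(board, mark):
    """Simple heuristic: win if possible, block the opponent if needed,
    otherwise take the first legal square."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    other = "O" if mark == "X" else "X"
    for target in (mark, other):          # first try to win, then to block
        for a, b, c in lines:
            trio = [board[a], board[b], board[c]]
            if trio.count(target) == 2 and trio.count(" ") == 1:
                return (a, b, c)[trio.index(" ")]
    return legal_moves(board)[0]

def mixed_opponent(board, mark, eps, rng):
    """Play randomly with probability eps, else follow the scripted policy."""
    if rng.random() < eps:
        return rng.choice(legal_moves(board))
    return scripted_move(board, mark)

rng = random.Random(0)
board = list("XX OO    ")  # X threatens 0-1-2; O threatens 3-4-5
# eps=0: opponent O completes its own winning line at index 5.
assert mixed_opponent(board, "O", eps=0.0, rng=rng) == 5
# eps=1: any legal square may come back.
assert mixed_opponent(board, "O", eps=1.0, rng=rng) in legal_moves(board)
```

Lowering eps between stages raises the opponent's strength without changing the environment's interface, which keeps the RL reward signal stable.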

---

🎮 Play against the model: anakin87/LFM2-2.6B-mr-tictactoe

🤗 Model: anakin87/LFM2-2.6B-mr-tictactoe

📚 Walkthrough/course: https://github.com/anakin87/llm-rl-environments-lil-course

🤗 Dataset and checkpoints: https://huggingface.co/collections/anakin87/lfm2-26b-mr-tic-tac-toe