Hugging Face
Tyler Williams (PRO)
unmodeled-tyler
102 followers · 43 following
https://quantaintellect.com
unmodeledtyler
unmodeled-tyler
unmodeledtyler
AI & ML interests
AI research engineer & solo operator of VANTA Research/Quanta Intellect
Recent Activity
updated a collection 4 days ago: My Current Open Source Daily Drivers
replied to RakshitAralimatti's post 5 days ago:
🔥 GLM-5.1 (zai-org/GLM-5.1): quietly one of the best flagship models for agentic engineering and coding tasks right now. I threw some LangGraph agent code at it, a messy RAG pipeline, and some async Python, and it just handled it. No drama, no hallucinated methods, actually usable output on the first try. Open source closing the gap this fast is genuinely exciting. Go check zai-org/GLM-5.1 on HF if you haven't already. Good work @zai-org-3
reacted to anakin87's post with ❤️ 5 days ago:
📣 I just published a free course on Reinforcement Learning Environments for Language Models!

📌 COURSE: https://github.com/anakin87/llm-rl-environments-lil-course

Over the past year, we've seen a shift in LLM Post-Training. Previously, Supervised Fine-Tuning was the most important part: making models imitate curated Question-Answer pairs. Now we also have Reinforcement Learning with Verifiable Rewards. With techniques like GRPO, models can learn through trial and error in dynamic environments. They can climb to new heights without relying on expensively prepared data.

But what actually are these environments in practice❓ And how do you build them effectively❓

Fascinated by these concepts, I spent time exploring this space through experiments, post-training Small Language Models. I've packaged everything I learned into this short course.

What you'll learn
🔹 Agents, Environments, and LLMs: how to map Reinforcement Learning concepts to the LLM domain
🔹 How to use Verifiers (open-source library by Prime Intellect) to build RL environments as software artifacts
🔹 Common patterns: how to build single-turn, multi-turn, and tool-use environments
🔹 Hands-on: turn a small language model (LFM2-2.6B by LiquidAI) into a Tic Tac Toe master
🔸 Build the game Environment
🔸 Use it to generate synthetic data for SFT warm-up
🔸 Group-based Reinforcement Learning

If you're interested in building "little worlds" where LLMs can learn, this course is for you.

🤗🕹️ Play against the trained model: https://huggingface.co/spaces/anakin87/LFM2-2.6B-mr-tictactoe
📚 HF collection (datasets + models): https://huggingface.co/collections/anakin87/lfm2-26b-mr-tic-tac-toe
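The "environment with a verifiable reward" idea behind the course can be sketched generically. The snippet below is a minimal, self-contained tic-tac-toe environment; the class name `TicTacToeEnv` and the `step`/`legal_moves` interface are illustrative assumptions, not the actual API of the Verifiers library or the course code.

```python
import random

# Winning triples on a 3x3 board stored row-major in cells 0-8.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

class TicTacToeEnv:
    """Toy environment: every reward is mechanically checkable."""

    def __init__(self):
        self.board = [" "] * 9

    def legal_moves(self):
        return [i for i, c in enumerate(self.board) if c == " "]

    def winner(self):
        for a, b, c in WIN_LINES:
            line = self.board[a] + self.board[b] + self.board[c]
            if line in ("XXX", "OOO"):
                return line[0]
        return None

    def step(self, move, mark):
        """Apply `mark`'s move; return (reward_for_mark, episode_done)."""
        if move not in self.legal_moves():
            return -1.0, True          # illegal move: penalize and stop
        self.board[move] = mark
        if self.winner() == mark:
            return 1.0, True           # verifiable win
        if not self.legal_moves():
            return 0.0, True           # draw
        return 0.0, False              # game continues

# Usage: a random policy stands in for the language model's move choice.
env = TicTacToeEnv()
done, mark = False, "X"
while not done:
    reward, done = env.step(random.choice(env.legal_moves()), mark)
    mark = "O" if mark == "X" else "X"
```

The point of the pattern is that the reward requires no human labels: legality, wins, and draws are all computed from the board state, which is what makes group-based RL (e.g. GRPO) trainable without curated data.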
Organizations
unmodeled-tyler's datasets
1
unmodeled-tyler/vessel-browser-tool-loop (Viewer) • Updated 23 days ago • 1 • 18