Collections
Discover the best community collections!
Collections including paper arxiv:2404.19756
- Be-Your-Outpainter: Mastering Video Outpainting through Input-Specific Adaptation
  Paper • 2403.13745 • Published • 11
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
  Paper • 2405.01434 • Published • 56
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 116

- Just How Flexible are Neural Networks in Practice?
  Paper • 2406.11463 • Published • 7
- Not All Language Model Features Are Linear
  Paper • 2405.14860 • Published • 40
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 116
- An Interactive Agent Foundation Model
  Paper • 2402.05929 • Published • 30

- Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
  Paper • 2405.21060 • Published • 68
- Your Transformer is Secretly Linear
  Paper • 2405.12250 • Published • 157
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 116
- Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography
  Paper • 2501.08970 • Published • 6

- mDPO: Conditional Preference Optimization for Multimodal Large Language Models
  Paper • 2406.11839 • Published • 40
- Pandora: Towards General World Model with Natural Language Actions and Video States
  Paper • 2406.09455 • Published • 16
- WPO: Enhancing RLHF with Weighted Preference Optimization
  Paper • 2406.11827 • Published • 17
- In-Context Editing: Learning Knowledge from Self-Induced Distributions
  Paper • 2406.11194 • Published • 20