Michal Valko - Research Papers Collection
This dataset contains references to research papers by Michal Valko and collaborators.
Papers
- RL-finetuning LLMs from on- and off-policy data with a single algorithm
- Preference optimization with multi-sample comparisons
- Optimal design for reward modeling in RLHF
- A new bound on the cumulant generating function of Dirichlet processes
- Understanding the performance gap between online and offline alignment algorithms
- Sharp deviations bounds for Dirichlet weighted sums with application to analysis of Bayesian algorithms
- KL-entropy-regularized RL with a generative model is minimax optimal
- Accelerating Nash learning from human feedback via Mirror Prox
- The Llama 3 herd of models
- Metacognitive capabilities of LLMs: An exploration in mathematical problem solving
- Local and adaptive mirror descents in extensive-form games
- Nash learning from human feedback
- Human alignment of large language models through online preference optimisation
- Generalized preference optimization: A unified approach to offline alignment
- Decoding-time realignment of language models
- A general theoretical paradigm to understand learning from human preferences
- Unlocking the power of representations in long-term novelty-based exploration
- Demonstration-regularized RL
- Model-free posterior sampling via learning rate randomization
- Curiosity in hindsight: Intrinsic exploration in stochastic environments
- VA-learning as a more efficient alternative to Q-learning
- Fast rates for maximum entropy exploration
- Adapting to game trees in zero-sum imperfect information games
- Understanding self-predictive learning for reinforcement learning
- DoMo-AC: Doubly multi-step off-policy actor-critic algorithm
- Regularization and variance-weighted regression achieves minimax optimality in linear MDPs: Theory and practice
- Quantile credit assignment
- Half-Hop: A graph upsampling approach for slowing down message passing
- BYOL-Explore: Exploration by bootstrapped prediction
- Optimistic posterior sampling for reinforcement learning with few samples and tight guarantees
- From Dirichlet to Rubin: Optimistic exploration in RL without bonuses
- Retrieval-augmented reinforcement learning
- Scaling Gaussian process optimization by evaluating a few unique candidates multiple times
- Large-scale representation learning on graphs via bootstrapping
- Adaptive multi-goal exploration
- Marginalized operators for off-policy reinforcement learning
- Drop, Swap, and Generate: A self-supervised approach for generating neural activity
- Stochastic shortest path: minimax, parameter-free and towards horizon-free regret
- A provably efficient sample collection strategy for reinforcement learning
- Model-free learning for two-player zero-sum partially observable Markov games with perfect recall
- Unifying gradient estimators for meta-reinforcement learning via off-policy evaluation
- Broaden your views for self-supervised video learning
- UCB Momentum Q-learning: Correcting the bias without forgetting
- Fast active learning for pure exploration in reinforcement learning
- Revisiting Peng's Q(λ) for modern reinforcement learning
- Taylor expansion of discount factors
- Online A-optimal design and active linear regression
- Kernel-based reinforcement learning: A finite-time analysis
- Game plan: What AI can do for football, and what football can do for AI
- A kernel-based approach to non-stationary reinforcement learning in metric spaces
- Episodic reinforcement learning in finite MDPs: Minimax lower bounds revisited
- Adaptive reward-free exploration
- Fast sampling from β-ensembles
- Mine Your Own vieW: Self-supervised learning through across-sample prediction
- Bootstrap Your Own Latent: A new approach to self-supervised learning
- BYOL works even without batch statistics
- Improved sample complexity for incremental autonomous exploration in MDPs
- Sampling from a k-DPP without looking at all items
- Statistical efficiency of Thompson sampling for combinatorial semi-bandits
- Planning in Markov decision processes with gap-dependent sample complexity
- Monte-Carlo tree search as regularized policy optimization
- Taylor expansion policy optimization
- Gamification of pure exploration for linear bandits
- No-regret exploration in goal-oriented reinforcement learning
- Improved sleeping bandits with stochastic action sets and adversarial rewards
- Stochastic bandits with arm-dependent delays
- Near-linear time Gaussian process optimization with adaptive batching and resparsification
- Fixed-confidence guarantees for Bayesian best-arm identification
- Multiagent evaluation under incomplete information
- Exact sampling of determinantal point processes with sublinear time preprocessing
- Exploiting structure of uncertainty for efficient matroid semi-bandits
- DPPy: Sampling determinantal point processes with Python
- Rotting bandits are not harder than stochastic ones
- Finding the bandit in a graph: Sequential search-and-stop
- Optimistic optimization of a Brownian
- Second-order kernel online convex optimization with adaptive sketching
- Zonotope hit-and-run for efficient sampling from projection DPPs
- Distributed adaptive sampling for kernel matrix approximation
- Simple regret for infinitely many armed bandits
- Cheap Bandits
- Geometric entropic exploration
- On the approximation relationship between optimizing ratio of submodular (RS) and difference of submodular (DS) functions
- Learning to Act Greedily: Polymatroid Semi-Bandits
Citation
If you use any of these papers, please cite the original work.
Contact