KMap
Profile
Kwang-Sung Jun
Assistant Professor, Computer Science | Member of the Graduate Faculty
Collaboration (19)
- Hao Zhang (mutual work: 3 proposals)
- Clayton Morrison (mutual work: 2 proposals)
- Larry Head (mutual work: 1 proposal)
- Michael Chertkov (mutual work: 1 proposal)
- Mihai Surdeanu (mutual work: 2 proposals)
Grants (1)
- CIF: Small: Theory and Algorithms for Efficient and Large-Scale Monte Carlo Tree Search
  Active · 2023 · $599.2K · External · Principal Investigator (PI)
  Keywords: monte carlo, algorithms, efficiency, theory, large-scale
Publications (39)

Recent
- Tighter PAC-Bayes Bounds Through Coin-Betting (2023)
  Keywords: bayesian learning, pac learning, statistical bounds, online learning, probability theory
- Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards (2023)
  Keywords: bandit algorithms, machine learning, reinforcement learning, probability theory, statistics
- Revisiting Simple Regret: Fast Rates for Returning a Good Arm (2023)
  Keywords: multi-armed bandits, regret minimization, machine learning, online learning, algorithm analysis
- Popart: Efficient sparse regression and experimental design for optimal sparse linear bandits (2022)
  Keywords: sparse regression, experimental design, linear bandits, efficient algorithms, optimization
- Jointly efficient and optimal algorithms for logistic bandits (2022)
  Keywords: efficient algorithms, optimal algorithms, logistic bandits, joint optimization
- An experimental design approach for regret minimization in logistic bandits (2022)
  Keywords: experimental design, regret minimization, logistic bandits, optimization algorithms, decision-making strategies
- Norm-Agnostic Linear Bandits (2022)
  Keywords: bandit algorithms, online learning, statistical inference, reinforcement learning, optimization
- Revisiting Simple Regret Minimization in Multi-Armed Bandits (2022)
  Keywords: reinforcement learning, optimization, decision making, machine learning, algorithm design
- Maillard sampling: Boltzmann exploration done optimally (2022)
  Keywords: multi-armed bandits, exploration algorithms, optimal sampling
- Improved regret analysis for variance-adaptive linear bandits and horizon-free linear mixture MDPs (2022)
  Keywords: bandit algorithms, reinforcement learning, regret analysis, linear models, mdps