Ondrej Bohdal
I'm a senior machine learning researcher at Samsung Research, where I primarily focus on large language models. Before joining Samsung, I was a postdoctoral researcher at the
University of Edinburgh, working on topics such as multimodal large language models, diffusion models, fairness, uncertainty calibration and out-of-distribution generalization.
I did my PhD on Meta-Learning Algorithms and Applications at the
University of Edinburgh, advised by Timothy Hospedales.
I was a research intern at Samsung AI Center Cambridge and at Amazon Web Services in Berlin, and I also spent part of my PhD studies at the Alan Turing Institute in London.
Email / Google Scholar / Twitter / Github / LinkedIn
London, UK
|
|
Research
I've worked on diverse topics within deep learning, including meta-learning,
data efficiency, domain adaptation, out-of-distribution generalization, uncertainty calibration, fairness,
multimodal large language models, diffusion models and hyperparameter optimization.
I work with images (computer vision) and text (natural language processing).
|
MemControl: Mitigating Memorization in Medical Diffusion Models via Automated Parameter Selection
Raman Dutt, Ondrej Bohdal, Pedro Sanchez, Sotirios A. Tsaftaris, Timothy Hospedales
WACV, 2025
paper
|
On the Limitations of General Purpose Domain Generalisation Methods
Henry Gouk, Ondrej Bohdal, Da Li, Timothy Hospedales
Under review, 2024
paper
|
Memorized Images in Diffusion Models share a Subspace that can be Located and Deleted
Ruchika Chavhan, Ondrej Bohdal, Yongshuo Zong, Da Li, Timothy Hospedales
ICML GenLaw workshop and under review, 2024
paper
|
VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning
Yongshuo Zong*, Ondrej Bohdal*, Timothy Hospedales
* Joint first authors
Under review, 2024
paper / project page / code / data
|
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models
Yongshuo Zong, Ondrej Bohdal, Tingyang Yu, Yongxin Yang, Timothy Hospedales
ICML, 2024
paper / project page / code / data
|
Navigating Noise: A Study of How Noise Influences Generalisation and Calibration of Neural Networks
Martin Ferianc*, Ondrej Bohdal*, Timothy Hospedales, Miguel Rodrigues           
* Joint first authors
TMLR, 2024
paper / code / video
|
FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis
Raman Dutt, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy Hospedales
ICLR, 2024
paper / code
|
Feed-Forward Latent Domain Adaptation
Ondrej Bohdal, Da Li, Shell Xu Hu, Timothy Hospedales
WACV, 2024
paper / project page / video / slides
|
Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error
Ondrej Bohdal, Yongxin Yang, Timothy Hospedales
TMLR, 2023
paper / code
|
Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn
Ondrej Bohdal*, Yinbing Tian*, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales
* Joint first authors
CVPR, 2023
paper / project page / code / video / slides / poster
|
PASHA: Efficient HPO and NAS with Progressive Resource Allocation
Ondrej Bohdal, Lukas Balles, Martin Wistuba, Beyza Ermis, Cédric Archambeau, Giovanni Zappella
ICLR, 2023
HPO: hyperparameter optimization, NAS: neural architecture search
paper / code / tutorial / video / slides / poster
|
Label Calibration for Semantic Segmentation Under Domain Shift
Ondrej Bohdal, Da Li, Timothy Hospedales
ICLR Trustworthy ML workshop, 2023
paper
|
Fairness in AI and Its Long-Term Implications on Society
Ondrej Bohdal, Timothy Hospedales, Philip H.S. Torr, Fazl Barez
Stanford Existential Risks Conference, 2023
paper
|
Feed-Forward Source-Free Domain Adaptation via Class Prototypes
Ondrej Bohdal, Da Li, Timothy Hospedales
ECCV OOD-CV workshop, 2022
paper
|
EvoGrad: Efficient Gradient‑Based Meta‑Learning and Hyperparameter Optimization
Ondrej Bohdal, Yongxin Yang, Timothy Hospedales
NeurIPS, 2021
paper / code / blog / video / slides / poster
|
A Channel Coding Benchmark for Meta‑Learning
Rui Li, Ondrej Bohdal, Rajesh Mishra, Hyeji Kim, Da Li, Nicholas Lane, Timothy Hospedales
NeurIPS (datasets and benchmarks track), 2021
paper / code / blog / video
|
Flexible Dataset Distillation: Learn Labels Instead of Images
Ondrej Bohdal, Yongxin Yang, Timothy Hospedales
NeurIPS MetaLearn workshop, 2020
paper / code / video
|
Semantic Segmentation of 3D Point Clouds
Data Study Group at the Alan Turing Institute, 2020
report
|
|
Senior Researcher
Samsung Research, London (Staines-upon-Thames), UK
May 2024 - Present
Research in large language models
Managers: Umberto Michieli and Mete Ozay
|
|
Postdoctoral Research Associate
The University of Edinburgh, Edinburgh, UK
May 2023 - May 2024
Research in multimodal large language models, diffusion models, fairness and uncertainty calibration
Supervised by Timothy Hospedales
|
|
Research Intern (Part-Time)
Samsung AI Center, Cambridge, UK
Nov 2021 - Apr 2022
Research in source-free domain adaptation
Hosted by Da Li
|
|
Enrichment Scheme PhD Student
Alan Turing Institute, London, UK
Jan 2022 - Mar 2022
Enrichment scheme placement at the Alan Turing Institute
Also participated in the online Engage @ Turing scheme from 2020
|
|
Applied Scientist Intern
Amazon Web Services, Berlin, Germany
Jul 2021 - Oct 2021
Research in hyperparameter optimization and neural architecture search
Hosted by Giovanni Zappella
|
|
Teaching Fellow
Cambridge Spark, UK
Jul 2020 - Jun 2024
Content development, teaching and technical mentoring
|
|
Teaching Support Provider
The University of Edinburgh, Edinburgh, UK
Oct 2018 - May 2023
Introductory Applied Machine Learning: Tutor, lab demonstrator and marker
Machine Learning Practical: Tutor and lab demonstrator
|
|
Software Development Engineer Intern
Amazon, Edinburgh, UK
Apr 2018 - Aug 2018
|
|
Technology Summer Analyst
JPMorgan Chase & Co., Glasgow, UK
Jun 2017 - Aug 2017
|
|
Software Engineering Intern
Metaswitch (now part of Microsoft), Edinburgh, UK
May 2016 - Aug 2016
|
|
PhD & MSc(R) in Data Science
The University of Edinburgh, Edinburgh, UK
Sep 2018 - Feb 2024
|
|
BSc (Hons) Artificial Intelligence and Mathematics
The University of Edinburgh, Edinburgh, UK
Sep 2015 - May 2018
- Final result: First-Class Honours (90%)
- Awarded the Howe Prize for top performance in UG4 Artificial Intelligence and the Class Prize for top performance in BSc (Hons) AI and Mathematics
- Honours project: Penalizing Confident Neural Networks (supervised by Prof. Steve Renals)
- Tuition fees fully funded by SAAS; also received a scholarship from the Jan Hus Educational Foundation
- Direct entry to the second year
|
|
International Baccalaureate Diploma Programme
Jur Hronec Grammar School, Bratislava, Slovakia
Sep 2013 - May 2015
- Final result: 44/45 (among the top 1% worldwide)
- Courses: Mathematics, Computer Science, Physics, English, Economics, Slovak Literature
- Extended essay: Prime Generating Polynomials (Mathematics)
|
Misc
I have competed in and won prizes at various hackathons, including Algothon (a quantitative finance hackathon), QuHackEd (a quantum computing hackathon), the Data Open (a datathon organized by Citadel, in which I also took part in the Championship), Hack the Burgh and many others. An overview of some of the projects I worked on is
available on my Devpost profile, and
there are also articles about my teams here and
here.
During high school, I took part very successfully in many competitions in Mathematics, Physics and Informatics, in particular subject Olympiads, correspondence seminars and various team competitions. Most notably, I represented Slovakia at the Middle European Mathematical Olympiad in Dresden, Germany, in 2014.