Nurbek Tastan
PhD Candidate in Machine Learning, MBZUAI.
Abu Dhabi, UAE
I am Nurbek Tastan, a PhD candidate in Machine Learning at MBZUAI, affiliated with the SPriNT-AI lab. I conduct my research under the guidance of Dr. Karthik Nandakumar and Dr. Samuel Horvath.
My research focuses on trustworthy and efficient machine learning, with particular emphasis on incentivization, privacy, robustness, and efficiency in collaborative settings. I also develop methods that make large-scale models, including large language models, more efficient to fine-tune while preserving utility and confidentiality.
I am currently expanding my research toward agentic AI and LLMs, with a focus on the trustworthiness and efficiency of such systems.
I am passionate about building machine learning systems that are not only performant but also equitable, privacy-aware, and computationally viable for real-world deployment.
news
| Date | News |
|---|---|
| Feb 2026 | Our paper SPDMark was accepted at CVPR 2026. |
| Jan 2026 | Three of our papers were accepted at ICLR 2026: SelfOrg, LoFT, and MOLM. |
| Oct 2025 | Our paper A Framework for Double-Blind Federated Adaptation of Foundation Models was accepted at ICCV 2025. |
| Jul 2025 | Our paper Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks was accepted at ICML 2025. |
| Jul 2025 | Our paper CYCle: Choosing Your Collaborators Wisely to Enhance Collaborative Fairness in Decentralized Learning was accepted at TMLR. |
selected publications
- Stochastic Self-Organization in Multi-Agent Systems. In The Fourteenth International Conference on Learning Representations (ICLR), 2026.
- LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning. In The Fourteenth International Conference on Learning Representations (ICLR), 2026.
- CYCle: Choosing Your Collaborators Wisely to Enhance Collaborative Fairness in Decentralized Learning. Transactions on Machine Learning Research (TMLR), 2025.
- Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks. In Proceedings of the Forty-second International Conference on Machine Learning (ICML), 2025.