Cansu Sancaktar
I’m a PhD student working on intrinsically motivated RL and open-endedness, advised by Georg Martius. My work focuses on building agents that decide what’s worth learning and explore their environment efficiently and autonomously, much like children at play.
I'm excited about the continual acquisition of novel skills with minimal external supervision, progressing toward scalable, self-improving systems.
You can reach out to me at: firstname [dot] lastname [at] gmail [dot] com.
CV / Google Scholar / Twitter / Github
News
- 05/2025 - Our paper "SENSEI: Semantic Exploration Guided by Foundation Models to Learn Versatile World Models" has been accepted to ICML 2025!
- 04/2025 - Started my research internship on the CodeGen team at Meta FAIR Paris, advised by Taco Cohen.
- 07/2024 - IMOL Workshop has been accepted to NeurIPS 2024 in Vancouver!
- 06/2024 - Moving to Amsterdam for my internship on the Embodied AI team at Qualcomm 🤖
- 09/2023 - Our paper "Regularity as Intrinsic Reward" has been accepted to NeurIPS 2023!
- 07/2023 - IMOL Workshop has been accepted to NeurIPS 2023 in New Orleans 🎷
Publications & Preprints
Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World
Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas Gavin Cangan, Bernhard Schölkopf, Georg Martius
NeurIPS 2022 Competition Track
paper