About Me

"Where there is a will, there is a way." (To those who have dreams)

I am a CS PhD student at Boston College, advised by Prof. Momchil Tomov (undergraduate studies at Princeton University; PhD and postdoctoral research at Harvard University). My research focuses on the computational principles of model learning and decision-making in naturalistic environments. My primary interests lie at the intersection of cognitive neuroscience and artificial intelligence: how the brain supports adaptive decision-making in complex, dynamic tasks such as video games. Specifically, I investigate the neural architecture of theory-based reinforcement learning, the dynamic interplay between model-based planning and model-free function approximation, and how mental models of the world are represented and learned.

Prior to joining Boston College, I completed my MSc and BSc in Artificial Intelligence at the University of Sussex, graduating with Distinction and First-class honors, respectively. Along the way, I took cross-disciplinary modules that gave me insight into neuroscience, brain and behavior, and artificial life. I previously worked as an AI research engineer at A*STAR in Singapore, focusing on computational modeling of the nervous systems of C. elegans and zebrafish. Before that, I interned at hyperTunnel, an AI startup, where I gained practical experience in multi-agent optimization and solution architecture. I was also a research intern at the University of Rochester, working on cross-talker accented speech generalization. I am passionate about using state-of-the-art AI technologies to understand the mechanisms of complex human behaviors. Please feel free to contact me if you would like to discuss my research in cognitive science, neuroscience, or artificial intelligence.

What I Have Learned

Artificial Intelligence: Neural Networks, Deep Learning, Global Optimization, Computer Vision, Reinforcement Learning, Language Models, Bio-inspired AI, Reasoning and Knowledge, Free Energy Principle

Data Science and Algorithms: Multivariate Adaptive Regression Splines, LOESS/LOWESS, Gradient Boosting Machines (XGBoost, LightGBM, CatBoost), Evolutionary Algorithms, Bayesian Networks, Ensemble Methods (Bagging…), Variational Autoencoders, Causal Impacts, Graph Neural Networks

Engineering: Autonomous Vehicles, Adaptive Control Systems, Image Processing, Industrial Automation, Robotics Optimization, Swarm Intelligence

MISCELLANEOUS

Programming Languages: Python/PyTorch/TensorFlow, MATLAB/Simulink, HTML/CSS/JavaScript, C/C++/C#/.NET/Unity3D, Git/Docker/Bash/Shell, Java/Spring Boot

Hobbies: Baking, Bartending, Tennis, Swimming, Cello, Gaming

My usual haunts: Labs, Gardens, Kitchen, Rehearsal Studio, Tennis Court, Seaside

For more info

Want more details on my past projects or research experience? You can find them at Showcase. Want to get in touch? You can reach me via Contact Me.