Luis M. Pimentel

I am currently a first-year graduate student at the Georgia Institute of Technology working towards my Master's in Electrical and Computer Engineering. I am also a Graduate Research & Development Intern at Sandia National Laboratories, working on robotics technologies and research applications in Sandia's AutonomyNM laboratory [Summer 2019, Summer 2020-present].

I received my Bachelor's in Computer Engineering with a Minor in Robotics from Georgia Tech in 2021. In my undergraduate career, I specialized in robotics coursework related to control systems, autonomy, perception, and machine learning/deep learning, and worked on multiple robotics platforms including autonomous racecars, surface vehicles, and multi-copters. I also had the opportunity to contribute to multiple robotics research projects in laboratories including AutoRally [Summer 2016, Fall 2017-Spring 2018], VIP: Active Safety for Autonomous and Semi-Autonomous Vehicles [Fall 2018-Spring 2019], and The DREAM Lab [Fall 2019]. I am a former Undergraduate Research Intern at the Georgia Tech Research Institute [Summer 2018].

In my free time I enjoy reading, listening to podcasts, and occasionally getting punched in the face.

Email  /  CV  /  LinkedIn  /  Scholar  /  GitHub



  • [August 2022] Excited to join the Cognitive Optimization and Relational (CORE) Robotics Lab as a Graduate Research Assistant.
  • [June 2022] Our paper Scaling Multi-Agent Reinforcement Learning via State Upsampling has been accepted to the Robotics: Science and Systems Workshop on Scaling Robot Learning (RSS22-SRL).
  • [January 2022] Began pursuing my Master's degree in Electrical and Computer Engineering at Georgia Tech.
  • [December 2021] Received my Bachelor's degree in Computer Engineering with a Minor in Robotics from Georgia Tech.


My current research interests are in the areas of robotics and reinforcement learning. My aspiration is to build autonomous robots that, through intelligence, can understand and act within their real-world, complex environments.


Scaling Multi-Agent Reinforcement Learning via State Upsampling

Luis Pimentel*, Rohan Paleja*, Zheyuan Wang, Esmaeil Seraj, James Pagan, and Matthew Gombolay
In Proc. RSS 2022 Workshop on Scaling Robot Learning (RSS22-SRL).
paper / poster

We consider the problem of scaling Multi-Agent Reinforcement Learning (MARL) algorithms toward larger environments and team sizes. While it is possible to learn a MARL-synthesized policy on these larger problems from scratch, training is difficult as the joint state-action space is much larger. In this paper, we propose a transfer learning method that accelerates the training performance in such high-dimensional tasks with increased complexity. Our method upsamples an agent’s state representation in a smaller, less challenging, source task in order to pre-train a target policy for a larger, more challenging, target task. By transferring the policy after pre-training and continuing MARL in the target domain, the information learned within the source task enables higher performance within the target task in significantly less time than training from scratch.
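The core transfer step described above can be illustrated with a minimal, hypothetical sketch (not the paper's actual implementation): a grid-based observation from a smaller source task is upsampled, here via simple nearest-neighbor repetition with NumPy, so that a policy pre-trained on the source task can consume observations at the larger target task's resolution.

```python
import numpy as np

def upsample_state(state: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upsample a 2D grid observation by an integer factor.

    Illustrative only; the paper's upsampling scheme may differ.
    """
    return np.repeat(np.repeat(state, factor, axis=0), factor, axis=1)

# A 3x3 source-task observation mapped to a 6x6 target-task resolution.
source_obs = np.arange(9).reshape(3, 3)
target_obs = upsample_state(source_obs, 2)
print(target_obs.shape)  # (6, 6)
```

In this sketch, each source cell simply covers a 2x2 block of the target grid, so a policy pre-trained on source observations sees spatially consistent inputs when transferred to the larger domain.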

Design and source code for this site can be found here.