It's a pleasure to "e-meet" you. I am currently pursuing my Master's in Robotics at the Robotics Institute @ Carnegie Mellon University. So far, my work at CMU has spanned off-board autonomy using vehicle-to-infrastructure sensors in GPS-denied environments, funded by Nissan's Innovation Lab (Silicon Valley); monocular 6D pose estimation for road scenes in urban settings as independent research (beating SOTA benchmarks on the KITTI dataset by ~3% mAP@R40); and a method for optimal grasp pose estimation in cluttered environments using keypoint estimation on a Franka arm. I am also working on two independent year-long projects: an iPhone app that maps grocery stores (with dead reckoning) for accurate inventory counting, leveraging onboard sensors and toolkits for imaging and SLAM; and leveraging Vision Language Models (VLMs) to build Vision Language Manipulation and Navigation agents for dynamic environments.

Previously, I worked as a Lead Research Engineer @ Retrocausal, developing state-of-the-art vision systems for visually mistake-proofing assembly lines and increasing the productivity of frontline assembly workers. Although my work at Retrocausal centered on Human Activity Recognition using semi-supervised and unsupervised methods (funded by NASA's Human Research Program and NASA's Exploration Medical Capability (ExMC) team), my primary research interest is in applying robotics, computer vision, and machine learning, with a focus on embodied intelligence, to emulate (and ultimately exceed) human-level dexterity in non-prehensile, multi-step manipulation within unstructured, cluttered workspaces, along with methods for distilling experience from learned models and transferring it to new tasks (analogous to lifelong learning in human cognition).

Despite scarce resources and no robotic platforms to work with as an undergraduate, I did not let that deter my pursuit of this passion. As an undergraduate research student at the Autonomous Systems Lab (now part of the National Center for Robotics and Automation @ NEDUET), I developed three robotic platforms: iForce (Pakistan's first humanoid robot), a multipurpose humanoid for grasping and manipulation (and a testbed for experimenting with monocular 6D pose estimation for robust grasping); Sylvester, a non-anthropomorphic, socially interactive and assistive mobile robot (an educational aid for students in remote tribal/rural areas of Pakistan and students diagnosed with autism); and STR-1, an Unmanned Ground Vehicle (UGV) for farm monitoring. For this work, I secured funding from the Higher Education Commission, Pakistan and the Ministry of IT, Pakistan.

I have co-authored research papers and technical articles (google scholar link) in leading international research conferences and peer-reviewed journals, including CVPR and ISMAR, and I am a co-inventor on three (3) USPTO patent applications with Retrocausal. I have also won several national and international competitions in the domains of robotics, computer vision, and reinforcement learning.