Position:
Robotic Software Research & Development Engineer
Company:
Creative Algorithms and Sensor Evolution Laboratory
Duration:
May 2021 - November 2023
From May 2021 to November 2023, I worked as a Robotic Software Research and Development Engineer at Creative Algorithms and Sensor Evolution Laboratory (CASELab), South Korea. My responsibilities were three-fold:
Developing AI vision-based perception models for robotic applications using deep learning,
Optimizing models for edge deployment, particularly on the NVIDIA Jetson Xavier NX platform,
Publishing research outcomes in SCIE-indexed journals, as well as international and national conferences and workshops.
Over the course of 2 years and 7 months at CASELab, I gained extensive industrial experience in deep learning-based perception for human-robot interaction (HRI). My work included projects such as:
Semantic perception for service robots,
Multi-floor robot navigation using elevator button recognition,
A human-following robot.
During this time, I contributed to a wide range of computer vision and AI tasks—including object detection, classification, recognition, tracking, and segmentation—as well as face detection and recognition, gesture recognition, and voice command recognition, with a primary focus on robotic applications. I also worked on deploying these models efficiently on edge devices for real-time robotic perception.
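To make the edge-deployment step concrete, here is a minimal sketch of exporting a PyTorch model to ONNX, which is a common first stage before building a TensorRT engine on a Jetson Xavier NX. The TinyDetector model, tensor shapes, and file names are illustrative placeholders, not the actual CASELab models.

```python
# Minimal sketch: export a PyTorch perception model to ONNX so it can later be
# compiled into a TensorRT engine on a Jetson device.
# The model below is a placeholder, not one of the actual CASELab networks.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for a vision backbone; the real models were detection/segmentation nets."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.head = nn.Conv2d(16, 4, kernel_size=1)  # 4 output channels, arbitrary here

    def forward(self, x):
        return self.head(torch.relu(self.conv(x)))

model = TinyDetector().eval()
dummy = torch.randn(1, 3, 480, 640)  # batch, channels, height, width (placeholder size)

# Export to ONNX; on the Jetson this file would typically be compiled to a
# TensorRT engine, e.g.: trtexec --onnx=detector.onnx --fp16
torch.onnx.export(
    model, dummy, "detector.onnx",
    input_names=["image"], output_names=["logits"],
    opset_version=13,
)
```

On the device itself, the exported file would typically be built with trtexec and run in FP16 to meet real-time latency budgets.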
Position:
Team Leader
(Infra Team / Autonomous Driving)
Company:
Youngshin Co., Ltd.
Duration:
December 2023 - Present
Since December 2023, I have been working as the Team Leader for the Autonomous Driving project at Youngshin Corporation, South Korea. My key responsibilities include:
Developing AI-based vision and perception models for autonomous driving using deep learning,
Working on multi-sensor fusion,
Designing and developing AI solutions in line with the company’s patent applications.
At Youngshin, I have further expanded my expertise in sensor fusion, camera and LiDAR calibration, projection, and 3D vision. These skills have been applied to real-world autonomous driving systems for object recognition and tracking, as well as lane generation and vehicle position estimation, enabling robust perception for intelligent navigation.
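To make the calibration-and-projection work concrete, below is a minimal sketch of projecting LiDAR points into a camera image with a pinhole model. The intrinsics K and extrinsics (R, t) are illustrative placeholders, not values from an actual calibration.

```python
# Minimal sketch of LiDAR-to-camera projection, assuming a pinhole camera with
# intrinsics K and a known extrinsic transform (R, t) from calibration.
# All matrix values below are illustrative placeholders.
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 700.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # LiDAR-to-camera rotation (placeholder)
t = np.array([0.1, -0.05, 0.0])        # LiDAR-to-camera translation in meters (placeholder)

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points into pixel coordinates, keeping points in front of the camera."""
    cam = points_lidar @ R.T + t        # transform points into the camera frame
    cam = cam[cam[:, 2] > 0]            # drop points behind the image plane
    uvw = cam @ K.T                     # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (u, v) pixels

pixels = project_lidar_to_image(np.random.rand(100, 3) * 10.0)
```

The projected (u, v) points can then be associated with image detections, which is the basic mechanism behind fusion-based recognition and tracking.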
During my time at Youngshin, I also began writing blog posts about the new technologies I have worked with, covering LiDAR and different types of cameras for sensor-fusion-based object recognition and tracking in ADAS:
Sensor Fusion (LiDAR and Camera) based Marker Recognition and Tracking
Lane Line Generation & Vehicle Position Estimation using Sensor Fusion based Marker Recognition and Tracking
[Demo clips: vehicle position estimation shown for the first lane, lane crossing, and the second lane]
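For illustration, the position labels in these demos can be reduced to a small classifier over the vehicle's lateral offset from the tracked lane boundary. The lane width and crossing tolerance below are assumptions, not the production parameters.

```python
# Minimal sketch of the lane-position logic behind the demos above: given the
# vehicle's lateral offset from the tracked lane divider (from sensor-fusion
# marker recognition), classify it as first lane, lane crossing, or second lane.
# LANE_WIDTH_M and CROSSING_TOL_M are illustrative assumptions.
LANE_WIDTH_M = 3.5       # typical lane width in meters (assumption)
CROSSING_TOL_M = 0.3     # offset band around the divider treated as "crossing" (assumption)

def lane_position(lateral_offset_m: float) -> str:
    """Classify vehicle position from the signed offset of the vehicle center to the divider."""
    if abs(lateral_offset_m) <= CROSSING_TOL_M:
        return "Lane Crossing"
    return "First Lane" if lateral_offset_m < 0 else "Second Lane"

print(lane_position(-1.2))  # -> First Lane
print(lane_position(0.1))   # -> Lane Crossing
print(lane_position(1.4))   # -> Second Lane
```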