My research interests center on motion optimization and kinematic calibration for robot arms in manufacturing contexts, drawing on concepts from robotics, optimization, and control. I also have experience with semi-autonomous vehicles for search and exploration, using SLAM, deep learning, and reinforcement learning.
Developed a kinematic calibration framework for robot arms that leverages motion capture systems to achieve sub-millimeter accuracy in TCP position estimation, a sixfold improvement over the baseline.
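As a rough illustration of the calibration idea (a toy sketch, not the actual framework), the snippet below fits the link lengths of a planar two-link arm to simulated "mocap" TCP measurements via nonlinear least squares; the two-link model and all numbers are assumptions for the demo.

```python
# Toy kinematic calibration sketch: recover link-length errors of a
# hypothetical planar 2-link arm from measured TCP positions.
import numpy as np
from scipy.optimize import least_squares

def fk(link_lengths, joints):
    """Forward kinematics of a planar 2-link arm -> TCP (x, y)."""
    l1, l2 = link_lengths
    q1, q2 = joints[:, 0], joints[:, 1]
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(0)
true_lengths = np.array([0.503, 0.398])     # "actual" arm (unknown in practice)
nominal_lengths = np.array([0.500, 0.400])  # nominal CAD / datasheet values
joints = rng.uniform(-np.pi / 2, np.pi / 2, size=(50, 2))
measured = fk(true_lengths, joints)         # stands in for mocap TCP readings

def residuals(params):
    # Mismatch between model-predicted and "measured" TCP positions.
    return (fk(params, joints) - measured).ravel()

sol = least_squares(residuals, nominal_lengths)
print("calibrated link lengths:", sol.x)    # recovers ~[0.503, 0.398]
```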
Developed a scan-n-print framework for wire arc additive manufacturing that achieves 66% smoother deposition than the baseline, and integrated multiple sensor interfaces to extend the manufacturing cell's capabilities.
Developed a full-stack robot arm motion optimization framework that tracks desired curves in space with sub-millimeter accuracy while running up to 7.8 times faster than the baseline. Created a user-friendly Python interface for FANUC robots that makes programming them straightforward. Implemented and demonstrated the system on GE's dual-arm testbed, highlighting its practicality and versatility.
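For flavor, here is what using such an interface might look like; the module, class, and method names below are hypothetical placeholders, not the actual API of the interface I built.

```python
# Hypothetical usage sketch only: fanuc_client, FANUCClient, and every
# method name here are placeholders, not the real interface.
from fanuc_client import FANUCClient  # hypothetical module

robot = FANUCClient(ip="192.168.0.1")            # connect to the controller
robot.move_joint([0, 0, 0, 0, -90, 0])           # joint target in degrees
waypoints = [[0.5, 0.0, 0.3], [0.5, 0.1, 0.3]]   # Cartesian path in meters
robot.move_cartesian(waypoints, speed=0.2)       # track the desired curve
print(robot.get_tcp_pose())                      # query the current TCP pose
```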
Designed a control strategy for a 10-degree-of-freedom mobile manipulator to express internal states, with a focus on emotions, through motion behaviors. Used crowdsourcing to learn the parameter combinations that best express different emotions.
Developed an assistive guiding robot that helps individuals with visual impairments navigate virtual trails using UWB technology. Incorporated semantic sound feedback to intuitively inform users about points of interest, improving the overall navigation experience for the visually impaired.
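As a side illustration of the UWB positioning idea (not the robot's actual pipeline), the sketch below performs linearized least-squares trilateration from fixed anchors; the anchor layout and ranges are made-up demo values.

```python
# Linearized least-squares trilateration from UWB range measurements.
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2D position from anchor positions and measured ranges."""
    a0, r0 = anchors[0], ranges[0]
    # Subtracting the first range equation from the rest cancels |p|^2,
    # leaving a linear system A @ p = b.
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noiseless for the demo
print(trilaterate(anchors, ranges))  # ~[3.0, 4.0]
```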
I have been the Team Lead of Team NCTU since January 2019. Team NCTU has competed in the Tunnel and Urban Circuits of the SubT Challenge, where we placed 7th and 8th among all teams and 2nd among self-funded teams.
Recent advances in autonomous robot technology show its impact on blind navigation. Our lab has long been developing assistive technologies, from wearable devices to robotic guide dogs. In this position paper, we aim to share some of our experience with blind navigation technologies built on autonomous robots. The position paper was accepted to the Hacking Blind Navigation Workshop at CHI 2019.
Duckietown has been a great platform for learning and conducting research in robotics and autonomous driving. To keep advancing the platform, the ability to localize Duckiebots is crucial. Here, we present the watchtower solution. A watchtower is a piece of tower-like infrastructure inside Duckietown: watchtowers survey the whole city and localize Duckiebots using the AprilTag on top of each one. Every watchtower is equipped with a Raspberry Pi 3B+; it detects AprilTags and sends the localization results back to the server computer, where the optimization is performed, as sketched below. The reason we chose watchtowers over overhead cameras is that we would like these little guys to be a part of the town.
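Below is a minimal sketch of the per-watchtower loop, assuming the pupil-apriltags Python package and a plain UDP link to the server; the actual system runs on ROS, and the server address, camera intrinsics, and tag size are placeholder values.

```python
# Per-watchtower loop: grab a frame, detect AprilTags, send results upstream.
import json
import socket

import cv2
from pupil_apriltags import Detector

SERVER = ("192.168.1.10", 5005)  # placeholder server address
detector = Detector(families="tag36h11")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)        # the watchtower's camera

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    tags = detector.detect(
        gray,
        estimate_tag_pose=True,
        camera_params=(600.0, 600.0, 320.0, 240.0),  # fx, fy, cx, cy (example)
        tag_size=0.065,  # tag side length in meters (example)
    )
    for tag in tags:
        # Hand each detection off to the server, where the
        # multi-watchtower optimization is performed.
        msg = {"id": tag.tag_id, "t": tag.pose_t.ravel().tolist()}
        sock.sendto(json.dumps(msg).encode(), SERVER)
```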
I have studied deep learning and deep reinforcement learning, and have also built side projects with them.
Some of my work is related to SLAM, where I use libraries like iSAM and GTSAM (see the sketch below).
I work with ROS on a daily basis.
I mostly code in Python, and I also have plenty of experience with C++.
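Relating to the SLAM work mentioned above, here is a minimal pose-graph sketch using GTSAM's Python bindings: three 2D poses chained by odometry factors, anchored by a prior, and optimized with Levenberg-Marquardt. The poses and noise values are made up for the demo.

```python
# Minimal 2D pose-graph example with GTSAM.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor pose 1 at the origin, then chain two unit forward motions.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0, 0, 0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1, 0, 0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(1, 0, 0), odom_noise))

# Deliberately perturbed initial guesses for the optimizer to correct.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(2, gtsam.Pose2(1.2, 0.1, -0.05))
initial.insert(3, gtsam.Pose2(2.1, -0.1, 0.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)  # optimized poses near (0,0,0), (1,0,0), (2,0,0)
```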