Technical Name: Intelligent Assistive Robotic Arm for Electric Wheelchairs
Project Operator: Institute of Electrical and Control Engineering, NYCU
Project Host: Hsien-I Lin (林顯易)
Summary
This intelligent assistive robotic arm, mounted on an electric wheelchair, integrates multimodal AI: speech recognition, LLM-based semantic analysis, image recognition, and grasp-point prediction. It understands natural-language commands to pick up convenience-store items such as rice balls and drinks. Because all models run locally, the system achieves low latency and strong privacy protection. It also supports scenario-specific fine-tuning, making it adaptable to healthcare, industrial, and other environments.
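As a rough illustration of the command-to-grasp flow described above, the sketch below wires the stages together in Python. The function names (parse_command, detect_object, handle_request), the item list, and the coordinates are hypothetical placeholders standing in for the actual speech, LLM, vision, and grasp-prediction models, which are not specified in this listing.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GraspPlan:
    object_label: str
    grasp_point: Tuple[float, float, float]   # 3-D point in the arm's base frame

def parse_command(transcript: str, known_items: List[str]) -> str:
    """LLM stand-in: map a natural-language request to a target item label."""
    for item in known_items:
        if item in transcript.lower():
            return item
    raise ValueError(f"No known item mentioned in: {transcript!r}")

def detect_object(label: str) -> Tuple[float, float, float]:
    """Vision stand-in: return a (fixed, toy) grasp point for the requested label."""
    demo_scene = {"rice ball": (0.42, -0.10, 0.15), "drink": (0.38, 0.05, 0.20)}
    return demo_scene[label]

def handle_request(transcript: str) -> GraspPlan:
    """End-to-end flow: speech text -> semantic parsing -> detection -> grasp plan."""
    items = ["rice ball", "drink"]
    target = parse_command(transcript, items)
    return GraspPlan(object_label=target, grasp_point=detect_object(target))

if __name__ == "__main__":
    print(handle_request("Please grab the rice ball on the shelf"))
```

In the actual system, each stand-in function would be replaced by the locally running speech-recognition, LLM, and vision models, keeping the same command-to-plan interface.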
Scientific Breakthrough
The breakthrough of this technology lies in integrating a language model that understands natural language and automatically plans actions with an image recognition model capable of few-shot prompt learning, which allows new objects to be recognized from a single labeled image. The system also incorporates a highly generalizable grasp prediction model that performs stable grasping even on previously unseen objects.
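To illustrate the one-labeled-image idea, the following sketch registers each new object as a single feature-embedding "prototype" and classifies queries by cosine similarity to the nearest prototype. The OneShotRecognizer class, its methods, and the toy embeddings are hypothetical; the project's actual few-shot prompt-learning model is not described in this listing.

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class OneShotRecognizer:
    """Register a new class from one labeled example (its feature embedding),
    then classify query embeddings by the most similar prototype."""

    def __init__(self) -> None:
        self.prototypes: Dict[str, List[float]] = {}

    def register(self, label: str, embedding: List[float]) -> None:
        self.prototypes[label] = embedding        # one labeled image -> one prototype

    def classify(self, embedding: List[float]) -> str:
        return max(self.prototypes, key=lambda k: cosine(self.prototypes[k], embedding))

if __name__ == "__main__":
    rec = OneShotRecognizer()
    rec.register("rice ball", [0.9, 0.1, 0.2])    # toy embeddings stand in for a
    rec.register("drink", [0.1, 0.8, 0.3])        # vision backbone's features
    print(rec.classify([0.85, 0.15, 0.25]))       # -> "rice ball"
```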
Industrial Applicability
This technology holds strong industrial application potential and is suitable for fields such as smart healthcare, assistive medical devices, retail services, smart manufacturing, and logistics. It enables individuals with mobility impairments to retrieve objects independently, enhancing daily autonomy. It can also be deployed in factories for human-robot collaboration, object recognition, and automated grasping, effectively improving operational efficiency, reducing labor costs, and ensuring operator safety.
Contact: Hsien-I Lin