Talk-03: Duckiepond 3.0: An Intuitive Human-Robot Interface for MOOS-IvP Applications Using Mixed Reality and Large Language Models (DemoDay 2024)
Jie-Xin Liu, Yu-Wei Zhang, Tzu-Chi Chen, Cheng-Yu Lai, Tien-Chia Chang, and Hsueh-Cheng Wang
All authors are with the Department of Electrical and Computer Engineering, National Yang Ming Chiao Tung University (NYCU), Taiwan
Duckiepond 3.0 builds on our previous USV mission experience to develop a cross-platform system that integrates a Unity frontend with a MOOS-IvP backend. The system connects operators to maritime MOOS-IvP missions through an immersive mixed reality interface, specifically the Quest Pro headset, a consumer-grade device chosen so the setup is easy to replicate. Advances in large language models (LLMs) now allow human operators to incorporate them into high-level task planning, and we explored how prompt engineering affects MOOS behavior generation for the pHelmIvP application.
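The abstract does not describe how the Unity frontend exchanges data with the MOOS-IvP backend, so the following is only a minimal sketch of one way an external frontend could talk to a MOOS community, using the pymoos bindings; the client name, port, and variable names (DEPLOY, NAV_X, NAV_Y) are placeholder assumptions, not part of Duckiepond 3.0.

```python
# Illustrative sketch only: bridging a non-C++ frontend to a MOOS community
# with pymoos. Variable names and connection details are assumptions.
import time
import pymoos

comms = pymoos.comms()

def on_connect():
    # Subscribe to vehicle pose so a frontend could visualize it.
    comms.register('NAV_X', 0.0)
    comms.register('NAV_Y', 0.0)
    return True

def on_mail():
    # Print incoming pose updates; a real bridge would forward them to Unity.
    for msg in comms.fetch():
        print(msg.key(), msg.double())
    return True

comms.set_on_connect_callback(on_connect)
comms.set_on_mail_callback(on_mail)
comms.run('localhost', 9000, 'uFrontendBridge')  # MOOSDB host, port, client name

time.sleep(1.0)  # allow the connection to come up before publishing

# Relay a high-level command (e.g., issued from a headset UI) to the helm.
comms.notify('DEPLOY', 'true', pymoos.time())
```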
Our key findings are as follows:
- Few-shot examples improve LLM performance in behavior generation, reducing syntactic error rates.
- Expressing intermediate tasks in a purpose-designed format such as JSON, or using LangChain, a framework that abstracts complex tasks, significantly improves both the syntactic and semantic quality of LLM-generated behaviors.
- Across the LLM backbones and temperature settings we tested on complex tasks, the ChatGPT-4o backbone at zero temperature outperformed all other configurations, including those with non-zero temperatures.
- The proposed interface and the LLM-generated behaviors executed successfully in both virtual and real environments, demonstrating the effectiveness of our approach.
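To make the prompting setup concrete, the sketch below shows one plausible way to combine a few-shot example, a JSON intermediate task, and zero temperature when asking a chat model to emit a pHelmIvP behavior block. The prompts, JSON schema, behavior skeleton, and model choice are illustrative assumptions for demonstration, not the authors' actual pipeline.

```python
# Illustrative sketch only: few-shot, JSON-to-.bhv generation at temperature 0.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One few-shot pair: an intermediate JSON task and its .bhv rendering.
FEW_SHOT_TASK = {"behavior": "BHV_Waypoint", "speed": 1.5,
                 "points": [[60, -40], [120, -80]]}
FEW_SHOT_BHV = """Behavior = BHV_Waypoint
{
  name           = waypt_survey
  priority       = 100
  speed          = 1.5
  points         = 60,-40 : 120,-80
  capture_radius = 5.0
}"""

def generate_behavior(task: dict) -> str:
    """Translate an intermediate JSON task into a MOOS-IvP behavior block."""
    messages = [
        {"role": "system",
         "content": "You translate JSON task descriptions into MOOS-IvP "
                    "behavior blocks for pHelmIvP. Output only the .bhv text."},
        # Few-shot example: JSON in, behavior block out.
        {"role": "user", "content": json.dumps(FEW_SHOT_TASK)},
        {"role": "assistant", "content": FEW_SHOT_BHV},
        # The new task to translate.
        {"role": "user", "content": json.dumps(task)},
    ]
    resp = client.chat.completions.create(
        model="gpt-4o",   # backbone assumed here; any chat model can be swapped in
        temperature=0,    # zero temperature for more deterministic syntax
        messages=messages,
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    new_task = {"behavior": "BHV_Loiter", "speed": 1.2,
                "center": [0, -75], "radius": 20}
    print(generate_behavior(new_task))
```

The generated text would still need syntactic and semantic checking (for example, attempting to load it with pHelmIvP in simulation) before deployment, which is the kind of validation the error-rate comparisons above address.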
Categories:
- Simulation
- USVs