Robotic control and interaction is often a complex and lengthy process, relying on many different tools that can help, but frequently hinder, a human user's ability to communicate commands to a robotic system.
Recent research has been directed towards the development of autonomous systems capable of navigating vehicles through different environments. One of the major problems in developing such self-navigating systems is establishing positional information for the system.
The goal of this research is to develop an extensible yet intuitive interaction method that combines mobile device visual localization and the generation of simple robotic instructions (for the Lego NXT platform) into a single step, allowing users to focus on the task rather than the tool.