The Great Robot Project: Planning the Software

An initial high-level overview of the project from the software side: what will get done, what choices I’ve made, and why I made them.

Things are starting to feel like they’re happening, as I’ve got enough milling around in my head now to start thinking about the software that’ll make all of the hardware do… stuff.

As far as software goes, I’m looking at a stack running on the BeagleBone Black.  This limits me to whatever can happily run within 512MB of RAM, but considering there’s no GUI or other associated rubbish to worry about, that shouldn’t be a huge problem.

From the bottom up, we’re starting with an Ubuntu Linux OS to sit everything on.  The BBB ships with Angstrom, but I don’t want to waste time compiling drivers and working out how to patch the kernel with various things, so I’m going with what’s familiar – at least to start with.  Most of the standard Linux daemons will be disabled, with the exception of an SSH server for remote access.

Sitting atop that will be two main pieces of software.  The first is a lightweight webserver – I’m thinking lighttpd for its small memory footprint, although building the webserver into the Java app mentioned next is another possibility later on – to let me view a status page (and possibly issue commands, view webcam output, etc.) through a web browser.  The second is the main “brain” of the robot, which I’ll likely think up a cool codename for later on.
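On that embedded-webserver option: the JDK actually ships with a small HTTP server (com.sun.net.httpserver), which would avoid adding another process to the 512MB budget.  A minimal sketch – the port number and the placeholder status text are just assumptions for illustration:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class StatusServer {
        public static void main(String[] args) throws Exception {
            // Bind to port 8080 on all interfaces; the default executor
            // is plenty for the occasional status request.
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/status", exchange -> {
                byte[] body = "Robot status: OK".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }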

That “brain” will be a Java application running a number of concurrent threads, communicating through a shared message queue (there’s a minimal sketch of this after the list below).  It probably doesn’t need threading, but it’ll make a nice learning experience.

  • The main, or “decision” thread.  This is basically a loop that looks at everything the robot currently knows, and uses it to decide what’s happening next.  It looks at the message queue for input and writes out commands to the motivation thread.
  • The “motivation” thread.  This looks for directional commands in the message queue and feeds them into the attached Arduino as required.  If I attach lights at some point, these would probably go here too.
  • The “sensory” thread.  This one monitors all of the various attached sensors other than the webcam, and feeds their input into the message queue.  Will likely also be responsible for monitoring the touchpad input for direct commands.
  • The “vision” thread.  Takes webcam input and writes the current image out as a JPEG for the webserver to serve up.  If there’s any image analysis later on, it will happen here.  I’m keeping this separate from the main sensory thread, as image analysis could be rather hefty and I’d rather the robot didn’t walk into walls because it was too busy looking where it was going to read the ultrasonic sensor…
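
To make the threading plan a little more concrete, here’s a minimal sketch of two of those threads talking through a shared queue – java.util.concurrent’s LinkedBlockingQueue does the locking for us.  The ultrasonic read, the 20cm threshold and the motivation hand-off are all stubbed-out assumptions, not real hardware code:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class Brain {
        public static void main(String[] args) {
            // Shared queue: sensory threads put readings in, the
            // decision thread takes them out.
            final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

            Thread sensory = new Thread(() -> {
                try {
                    while (true) {
                        int distanceCm = readUltrasonic(); // hypothetical sensor read
                        queue.put("ultrasonic:" + distanceCm);
                        Thread.sleep(100); // poll at roughly 10Hz
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "sensory");

            Thread decision = new Thread(() -> {
                try {
                    while (true) {
                        String msg = queue.take(); // blocks until input arrives
                        if (msg.startsWith("ultrasonic:")
                                && Integer.parseInt(msg.substring(11)) < 20) {
                            sendToMotivation("STOP"); // hypothetical hand-off
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "decision");

            sensory.start();
            decision.start();
        }

        // Stubs standing in for real hardware access.
        private static int readUltrasonic() { return 100; }
        private static void sendToMotivation(String cmd) { System.out.println(cmd); }
    }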

Other thoughts:

  • If there’s enough processing power to handle it, streaming the webcam output would be nice.
  • Audio output.  Streaming an audio file (yes, this thing could easily end up as a self-propelled audio player…) is probably best handled by executing a media player command and tracking the pid for termination if/when required (there’s a sketch of this after the list).  If things go really well, then playing video on the LCD panel might be possible, too…
    On the other hand, robot-specific audio such as spoken output would be better handled in an additional speech thread.  It appears FreeTTS will do most of what I’m looking for there (rough sketch below, too).
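
On the media player idea: in Java, ProcessBuilder hands back a Process object when you launch the command, so rather than tracking the pid by hand you can just hold on to that handle and destroy() it when needed.  A rough sketch – mpg123 is an assumed example player, substitute whatever’s actually installed:

    import java.io.IOException;

    public class AudioPlayer {
        private Process player; // handle to the running media player, if any

        // Launch a command-line player and keep the Process handle.
        public synchronized void play(String file) throws IOException {
            stop(); // only one track at a time
            player = new ProcessBuilder("mpg123", file)
                    .inheritIO()
                    .start();
        }

        // Kill the player if it's still running.
        public synchronized void stop() {
            if (player != null && player.isAlive()) {
                player.destroy();
            }
            player = null;
        }
    }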
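And for the speech thread itself, FreeTTS usage looks to be roughly along these lines – a sketch assuming the “kevin16” voice that ships with the standard FreeTTS distribution:

    import com.sun.speech.freetts.Voice;
    import com.sun.speech.freetts.VoiceManager;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SpeechThread implements Runnable {
        private final BlockingQueue<String> phrases = new LinkedBlockingQueue<>();

        // Queue a phrase for speaking; returns immediately.
        public void say(String text) { phrases.offer(text); }

        @Override
        public void run() {
            Voice voice = VoiceManager.getInstance().getVoice("kevin16");
            voice.allocate();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    voice.speak(phrases.take()); // blocks until there's something to say
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                voice.deallocate();
            }
        }
    }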

