Travis Rich -- HTMAA

Week 12 -- Interfaces

11/27/12

Introduction

This week we're learning to write applications and user interfaces. For hardware, I'm reusing the light sensor board I built earlier in the semester. The goal is a set of visualizations that depict the light intensity measured by the board in a friendlier way than a simple bar graph, with on-screen buttons to switch between them, which also gives me an excuse to play with making buttons.

Serial Interface

I began this week's assignment in Python, using Neil's light input code as a baseline. Tkinter felt a bit too widgety for what I wanted to build, so I switched to Processing for both parsing the serial port input and drawing the visualization. The trickiest part was porting Neil's code (or rather, the algorithm it steps through) for reading serial data into Processing. Processing adds a layer of abstraction that tripped me up for a while: I kept expecting raw binary values, when in reality Processing's read() returns decimal byte values by default. Once I figured this out, it was relatively straightforward to detect the framing header and grab the bytes representing the light intensity from the board. I had one other misstep that made the program run very slowly: I forgot to clear the serial port buffer after each read, so the code got backed up with stale serial data and the application appeared laggy.
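As a rough sketch of what that read loop can look like in Processing (the framing bytes 1, 2, 3, 4 and the low-then-high byte order follow my reading of Neil's serial convention; the port index, baud rate, and ADC range are placeholders to adjust for your own setup):

    import processing.serial.*;

    Serial port;
    float light = 0;  // latest reading from the board

    void setup() {
      size(400, 400);
      // Serial.list()[0] and 9600 baud are assumptions -- pick your own port/rate
      port = new Serial(this, Serial.list()[0], 9600);
    }

    void draw() {
      background(255);
      readLight();
      // map() assumes a 10-bit ADC value; tune the range to your board
      float level = map(light, 0, 1023, 0, height);
      rect(0, height - level, width, level);
    }

    void readLight() {
      // hunt for the framing header (assumed here to be the byte sequence
      // 1, 2, 3, 4), then take the next two bytes as the low and high
      // halves of the light reading
      while (port.available() >= 6) {
        if (port.read() == 1 && port.read() == 2 &&
            port.read() == 3 && port.read() == 4) {
          int low  = port.read();
          int high = port.read();
          light = 256 * high + low;
        }
      }
      // discard anything still queued so we always draw the freshest sample;
      // forgetting this step is what made my first version lag
      port.clear();
    }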

UI

I wrote two visualizations: a tree and Timmy. The tree's growth (how full it is) is proportional to the amount of incident light; more light means a healthier, fuller tree. The second visualization is Timmy. Timmy is terrified of the dark, and his reaction changes in response to how much ambient light he (the hardware board) can sense. Implementing the buttons in Processing is a bit of a trick. Rather than using a button element, you define the region of each button and check the position of the mouse on each update. If the mouse is within the defined coordinates when a click is detected, you run the actions that would be expected of that button. In this case, each click changes which display mode the interface is in and updates the buttons' colors to reflect the state.
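A minimal sketch of that hand-rolled button pattern (the coordinates, mode numbers, and helper names here are placeholders of my own, not the exact ones from my sketch):

    int mode = 0;  // 0 = tree, 1 = Timmy

    void setup() {
      size(400, 400);
    }

    void draw() {
      background(255);
      // shade each button according to whether its mode is active
      drawButton(10, 10, 60, 25, 0);
      drawButton(80, 10, 60, 25, 1);
      // ... draw the visualization selected by mode ...
    }

    void drawButton(int x, int y, int w, int h, int id) {
      fill(mode == id ? 150 : 220);
      rect(x, y, w, h);
    }

    boolean over(int x, int y, int w, int h) {
      return mouseX >= x && mouseX <= x + w &&
             mouseY >= y && mouseY <= y + h;
    }

    void mousePressed() {
      // on each click, test the mouse against every button's region
      if (over(10, 10, 60, 25)) mode = 0;
      if (over(80, 10, 60, 25)) mode = 1;
    }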

Note that the video below has the monitor's colors inverted to make it easier to see (the black-on-white background was a bit washed out on camera).