// Cambridge, MA

FINAL PROJECT

1) what does it do?

it uses a camera to capture your facial expression, a model to predict your mood as happy, sad, or neutral, and then sets the LED strip to the corresponding color (green, red, or white, respectively)
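the mood-to-color mapping could be sketched like this (a hypothetical python sketch for illustration; the names and the default are my assumptions, and the actual logic lives in the firmware):

```python
# mood -> LED color mapping as described above (illustrative only)
MOOD_COLORS = {
    "happy": (0, 255, 0),        # green
    "sad": (255, 0, 0),          # red
    "neutral": (255, 255, 255),  # white
}

def color_for_mood(mood: str) -> tuple:
    """Return an (R, G, B) tuple for a predicted mood, defaulting to white."""
    return MOOD_COLORS.get(mood, MOOD_COLORS["neutral"])
```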

2) who's done what beforehand?

the only thing i didn't design or modify myself was the training dataset for emotion recognition. that can be found here: https://www.kaggle.com/datasets/msambare/fer2013

3) what did you design?

everything else! on the physical side, i made the pcb, the label, and the lamp structure myself from scratch. on the software side, i synthesized some of the top-upvoted work on this dataset on kaggle, modified it heavily for my purposes, and then wrote the code to integrate it with the microcontroller's camera and the LEDs.

4) what materials and components were used?

the pcb is made from the standard copper stock we had in inventory. on it sits a xiao esp32s3 board with its default camera module, plus a connector and two wires (also from inventory) leading to the LEDs. power comes from a battery pack i found in the inventory.
the leds are 1 meter of adafruit neopixel strip LEDs (part number 1138).
the lamp structure is made of black PLA. the label is made of acrylic.
everything is affixed together by some combination of hot glue / tape.

5) where did they come from?

i tried to use what we had on hand in the lab to the maximum extent possible. the only item i didn't find either in inventory or lying around the lab (with permission to use it) was the adafruit LEDs, which i got from Amazon.

6) how much did they cost?

the LEDs cost $25 for 1m.

7) what parts and systems were made?

the distinct parts that were made for this project are: the lamp structure; the label; the pcb; and the software module.

8) what processes were used?

the lamp structure was designed in Fusion 360 and 3D printed on a Prusa. the label was designed in Illustrator and laser cut on the harvard lab's fusion laser cutter. the pcb was designed in Eagle, milled on a Roland SRM-20, and soldered together. i attempted to write the software module using tinyML and Edge Impulse (see Wild Card Week), but ultimately could not get a small enough model to run well enough, so i ended up using Python for the image-analysis ML and Arduino for the board firmware (image capture and LED logic).
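since the FER2013 dataset labels seven emotions but the lamp only uses three, the python side has to collapse the model's prediction somehow. here's a hedged sketch of one way to do it; the class order and the bucketing of the extra emotions are my assumptions, not the exact code i used:

```python
# fer2013-style models typically output probabilities over seven classes;
# this collapses an argmax over those classes down to happy / sad / neutral.
FER_CLASSES = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
NEGATIVE = {"angry", "disgust", "fear", "sad"}

def mood_from_probs(probs):
    """Map a 7-way probability vector to 'happy', 'sad', or 'neutral'."""
    label = FER_CLASSES[max(range(len(probs)), key=lambda i: probs[i])]
    if label == "happy":
        return "happy"
    if label in NEGATIVE:
        return "sad"
    return "neutral"  # neutral and surprise both fall through to neutral
```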

9) what questions were answered?

the biggest question i was answering with this project was a personal one: could i do this?
every single one of the processes and methods i used in this project was foreign to me before this course. i have learned a ton and am really proud to have a completed project.

10) what worked? what didn't?

everything ultimately worked except my attempts to make the system run on embedded AI. i think given another couple of days to familiarize myself with tinyML and model-making, i could have gotten it done.

11) how was it evaluated?

i had two key metrics with this project. first, did i work as hard as i could to get to my goal? and second, did my project make my classmates smile?
on the first one, i give myself a 10/10. as for the second, i will have to await class tomorrow to collect my data.

12) what are the implications?

i am better at this stuff than i thought! the biggest implication of this project is that it reignited my dormant yet primal urge to TINKER. excited to see what shenanigans i create in the future with my newfound skills.
--
being exhaustive to ensure that i met project requirements...

additive and subtractive fabrication?

additive = 3D printing, subtractive = laser cutting (and milling the pcb on the roland)

electronics design and production?

pcb was designed in Eagle and produced on Roland

embedded microcontroller design, interfacing, and programming?

it uses a xiao esp32s3 interfacing with my computer over pySerial: the board writes camera images to the serial port, reads back the resulting mood analysis, and then colors the LEDs accordingly
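the computer-side half of that exchange could look roughly like this. it's a hedged sketch, not my actual script: the 4-byte length prefix, the single-byte mood codes, and the function names are all assumptions made for illustration, and `port` stands in for a pySerial `Serial` object (anything with `read()`/`write()`):

```python
import struct

# illustrative one-byte reply codes for the firmware's LED logic
MOOD_CODES = {"happy": b"H", "sad": b"S", "neutral": b"N"}

def serve_one_frame(port, classify):
    """Read one length-prefixed image frame from `port`, classify it, reply.

    `port` is anything with pySerial-style read()/write() methods;
    `classify` maps raw image bytes to 'happy' / 'sad' / 'neutral'.
    """
    (length,) = struct.unpack("<I", port.read(4))  # little-endian length prefix
    frame = port.read(length)                      # raw image bytes from the camera
    mood = classify(frame)                         # run the python-side ML model
    port.write(MOOD_CODES.get(mood, MOOD_CODES["neutral"]))
    return mood
```

in real use this would sit in a loop with `serial.Serial("/dev/ttyACM0", 115200)` (port name is machine-specific) and the trained classifier plugged in as `classify`.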

system integration and packaging?

it's not the prettiest, but it is integrated and packaged all together!