Intro

Hey, it's Sun. Welcome to my course blog for How To Make (Almost) Anything (HTMAA), fall 2025.

I'll publish weekly articles on the things I've learned and report progress towards the final project.

I joined the Tangible Media Group at the MIT Media Lab in 2025. My research interest is in the social, cultural, and philosophical aspects of generative AI. I've previously worked as a software engineer at Microsoft and enjoy building tools for the open source community.

Weekly updates

Final project

I want to combine my background in AI application development with TMG's focus on Tangible Interfaces to build a voice-driven AI programming system inspired by telephone switchboard operators.

The inspiration

I’m motivated by how AI-routed phone systems have eroded the empathy and connection once provided by human operators. I want to revive that craft by building a switchboard that puts a person back in the loop as a thoughtful listener and connector.

Jersey Telecom Switchboard and Operator (source)

The idea

A physical AI agent network implemented as a hardware grid with voice-based interaction and programming capabilities. The system combines push-to-talk interfaces with node-based generative AI computation, allowing users to dynamically program and interact with AI agents through voice commands. I want to call this system Field Programmable Generative AI (FPGAI).

My initial concept sketch

Next, I want to visualize the idea with generative AI. I'm entirely new to 3D modeling and rendering, so the fastest route to gaining intuition about the form of the design is, naturally, using AI.

I crafted the prompt based on what I was imagining. The latest Gemini model got this for me in one shot.

Device base (prompt)

Next, let's visualize the hand-held device. I want to model it after a CB radio speaker mic, inspired by this project.

Hand unit (prompt)

Finally, let's put them together and add some context. I haven't decided the exact size of each component yet; I think that will have to wait until I figure out the electronics.

In use (prompt)

The implementation

While it's still too early to fully specify the project, I have the following high-level design.

  - Main Board
  - Speaker-Microphone Units
  - Operating Modes
    - Interaction Mode
    - Programming Mode

After the conceptual exploration in week 1, I switched focus to the electronics. I hope the electronics design can help inform the exterior of the system.

I started off with off-the-shelf components and iterated on the idea to build more from scratch.

Proof of concept with off-the-shelf components

I can prototype almost the entire experience with cheap off-the-shelf products:

  1. Push-to-talk with a secondhand CB radio hand unit
  2. Audio cable adapters to 3.5mm TRRS
  3. USB hub for multiple inputs

Prototype 1: using consumer electronics

What's missing:

  1. No effort involved, which will result in a failing grade; it's only good for prototyping.
  2. Can't guarantee the compatibility of the hand unit with the 3.5mm TRRS jack.
  3. Can't prototype the visual feedback feature, where the 3.5mm jack shows a "ready" state to the user via an LED.

Bring intelligence to the main body

Iterating on the idea, I could use a Raspberry Pi with a primitive USB hub as the main processor. The Pi may still rely on a nearby laptop for the LLM, speech-to-text, and text-to-speech, but it's also possible to bring the entire AI/ML stack onto the device, reducing the need for networking.

Prototype 2: moving compute to Raspberry Pi

I still need to figure out how the Pi can use the LEDs to display system state. I also need to program a microcontroller to meet the requirements of this class. Can we go one level deeper?

Move audio processing to hand unit

To make the project more challenging, I can use an ESP32-based audio system to pick up speech and play back AI voice. We can wirelessly connect the ESP32 to a nearby laptop, where the voice-driven AI interactions will take place.

The main body still needs a controller that talks to the nearby laptop for two tasks:

  1. Detecting which socket is plugged in
  2. Controlling the LED status lights

Prototype 3: audio processing in the hand unit

The audio cable in this design does not actually carry audio; it is used solely to detect the plugged/unplugged state. I need to figure out how to rig the 3.5mm jack to achieve this.

Build my own speaker/microphone

The next level is replacing the ESP32-based audio kit with a custom PCB, with speaker and microphone manually soldered. This will probably be the upper bound of the level of complexity I can handle.

Prototype 4: microphone and speaker on a custom PCB

My next step is taking the idea to a TA for advice. This is my first time designing with electronics, so I do anticipate big revisions. Stay tuned.

Networking

Learning about embedded programming validated the design above. I now feel confident that I can relay data between the ESP32 hand unit and a nearby laptop using either a Wi-Fi or a serial connection, and I can explore several things in parallel from here. The confidence comes from hands-on experience building an echo server with the ESP32.
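That echo server boils down to something like the following sketch (a minimal reconstruction with placeholder Wi-Fi credentials, not my exact code):

#include <WiFi.h>

WiFiServer server(3333);  // arbitrary TCP port for the echo test

void setup() {
  Serial.begin(115200);
  WiFi.begin("SSID", "PASSWORD");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) {
    delay(100);
  }
  Serial.println(WiFi.localIP());  // connect to this address from the laptop
  server.begin();
}

void loop() {
  WiFiClient client = server.available();
  if (!client) {
    return;
  }
  while (client.connected()) {
    if (client.available()) {
      client.write(client.read());  // echo every byte back to the sender
    }
  }
}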

Electronics design update

I consulted with our TA Quentin Bolsee regarding electronics design and received valuable help on input/output devices. I also conducted additional research using YouTube tutorials from atomic14, which enabled me to fully spec out the electronics for both components.

The hand unit will be built around a Xiao ESP32-C3 microcontroller, with upgrade options to the ESP32-C6 if WiFi performance becomes a bottleneck, or to the WROOM-32E if more GPIO pins are needed. The rest of the spec:

  - Audio input: ICS-43434 I2S MEMS microphone
  - Audio output: MAX98357A I2S Class D amplifier paired with an 8-ohm speaker. The amplifier's specified frequency response of 600 Hz to 4000 Hz may not be ideal for voice applications, so I might need to find an alternative.
  - Physical interface: two buttons (a single button for push-to-talk, both buttons for broadcast) and two switches (power on/off and a mode switch for interaction/programming)
  - Power: a 3.7V LiPo battery, with a potential upgrade to a 3AA battery pack plus a voltage regulator for easier replacement and a more vintage feel, though I need to consult an electronics expert about the implementation details
  - Connectivity: a 3.5mm TRRS jack

The main unit uses a simpler design: a Xiao ESP32-C3 microcontroller controlling four LEDs and four 3.5mm TRRS jacks in a 2x2 grid configuration.

High-level design for the electronic components

For connection detection, I want to eventually support multiple hand units speaking simultaneously, which requires tracking which hand unit is plugged into which jack. Traditional physical TRS plug detection doesn't differentiate between different plugs, so I propose using TRRS jacks as a clever hack. By treating high/low voltage as 1/0 bits and using the sleeve as ground while the other 3 connections serve as signal lines, I can create 2^3 = 8 unique values. This allows each jack in the 2x2 grid to be uniquely identified by a 3-bit code. The main unit will be responsible for pulling up/down the 3 signal lines on the jacks, while the hand unit decodes the 3-bit code and sends it to the laptop along with its own unique ID.

A TRRS socket has 4 pins
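To make the scheme concrete, here is a rough sketch of both sides; the pin assignments and the example code value are hypothetical:

// Main unit: statically drive one jack's three signal lines with its 3-bit code.
// Each jack in the grid gets its own trio of GPIOs.
void writeJackCode(const uint8_t pins[3], uint8_t code) {
  for (int b = 0; b < 3; b++) {
    pinMode(pins[b], OUTPUT);
    digitalWrite(pins[b], (code >> b) & 1);  // bit b of the 3-bit code
  }
}

// Hand unit: read the tip and both rings to recover the jack's code.
uint8_t readJackCode(const uint8_t pins[3]) {
  uint8_t code = 0;
  for (int b = 0; b < 3; b++) {
    pinMode(pins[b], INPUT);  // lines are actively driven by the main unit
    code |= digitalRead(pins[b]) << b;
  }
  return code;
}

// Example: label jack 0 of the 2x2 grid with code 0b001 (hypothetical wiring).
const uint8_t JACK0_PINS[3] = {D0, D1, D2};
// writeJackCode(JACK0_PINS, 0b001);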

This design enables all necessary communication between the PC, hand unit, and main unit:

  1. Hand unit plug-in messages carrying the 3-bit code and the wireless ID
  2. Audio streaming from hand units to the PC, addressed by wireless ID
  3. Audio streaming from the PC to hand units, addressed by wireless ID
  4. LED state updates from the PC to the main unit, using 3-bit codes to identify specific jacks
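One way to picture the wire formats for these messages (illustrative guesses, not a final protocol):

#include <stdint.h>

// Hand unit -> PC when a plug event is decoded: which jack, and who I am.
struct PlugEvent {
  uint8_t  jackCode;    // 3-bit code read from the TRRS signal lines
  uint32_t handUnitId;  // the hand unit's unique wireless ID
};

// PC -> main unit to update a status light.
struct LedUpdate {
  uint8_t jackCode;  // identifies the jack in the 2x2 grid
  uint8_t state;     // e.g. 0 = off, 1 = ready, 2 = speaking
};

// Audio frames in either direction carry the wireless ID for routing.
struct AudioHeader {
  uint32_t handUnitId;
  uint16_t numSamples;  // 16-bit PCM samples following this header
};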

Parts list

| Component | Quantity | Availability | Notes |
|---|---|---|---|
| Xiao ESP32-C3 | 2 | Out of stock | 1* for hand unit, 1 for main unit |
| ICS-43434 I2S MEMS Microphone | 1* | Stocked | |
| MAX98357A I2S Class D Amplifier | 1* | Stocked | |
| PSR-57N08A01-AQ 8-ohm speaker | 1* | Stocked | |
| 3.5mm TRRS jack | 5 | Need to order | 1* for hand unit + 4 for main unit |
| TRRS audio cable | 1* | Need to order | |
| 3.7V LiPo battery | 1* | Need to order | |
| Button | 2* | Stocked | |
| Slide switch | 2* | Stocked | |
| LED | 4 | Stocked | |
| 3AA battery pack + voltage regulator | 1 | Optional | Alternative power solution |

*Quantity for a single hand unit; additional units need more.

With this design update, it became clear that the main unit is essentially a "dumb" device: it encodes the TRRS sockets and displays which AI agent is speaking, and doesn't care about audio processing at all.

I have also gained insight into the physical constraints for the housing. The hand unit mainly needs to account for the battery and speaker size; the PCB size and shape can be more flexible. The main unit needs to account for the four TRRS jacks.

Here are the new and remaining questions, which I plan to resolve by going to the TAs and attending future lectures.

  1. PCB design. atomic14's design is a good reference, but I don't know how to design my own.
  2. Packaging design. How do I hold the components in place, especially the 3.5mm TRRS jacks, which will receive physical stress?
  3. Physical interaction. How do I mount the buttons and slide switch on the hand unit? I want a good tactile feel.
  4. LED lighting. How do I make a ring that lights up around the TRRS socket?
  5. The CBA electronics shop inventory doesn't match what the website says. For example, the ESP32s are out of stock, but the website didn't reflect that.

And here are the things I can prototype now:

  1. Play voice from the ESP32 over WiFi
  2. Capture sound from the ESP32 over WiFi
  3. Address and light up 4 LEDs with the ESP32
  4. Encode and decode TRRS identities between two ESP32 boards
  5. Design a case roughly based on atomic14's PCB footprint

Sound output

With help from our TA Quentin Bolsee, I set up the official ESP32 board support following its documentation, then installed the specific board package for the Arduino Nano ESP32 from the Boards Manager.

I used the official example code to play a square-wave tone, modifying a few lines to set the right output pins. Here is the full source.

#define I2S_BCLK D7
#define I2S_LRC  D8
#define I2S_DIN  D9
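For context, the rest of the example boils down to something like this condensed sketch using the pin defines above (an approximation of the official Simple_tone example, assuming the arduino-esp32 3.x ESP_I2S API, not my exact code):

#include <ESP_I2S.h>

const int sampleRate = 8000;  // samples per second
const int frequency = 440;    // tone frequency in Hz (roughly, after integer division)
int16_t sample = 500;         // square wave amplitude
int count = 0;

I2SClass i2s;

void setup() {
  // Route the I2S peripheral to the pins wired to the MAX98357A
  i2s.setPins(I2S_BCLK, I2S_LRC, I2S_DIN);
  if (!i2s.begin(I2S_MODE_STD, sampleRate, I2S_DATA_BIT_WIDTH_16BIT, I2S_SLOT_MODE_STEREO)) {
    while (true) {}  // halt if the peripheral fails to start
  }
}

void loop() {
  // Flip the sample every half period to produce a square wave
  if (count % (sampleRate / frequency / 2) == 0) {
    sample = -sample;
  }
  int16_t frame[2] = {sample, sample};  // left and right channels
  i2s.write((uint8_t *)frame, sizeof(frame));
  count++;
}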

Sound output from ESP32 using MAX98357A amplifier

I found a powerful audio-processing library by Phil Schatzmann, called Arduino Audio Tools. After studying his examples, I was able to get my computer to send live microphone audio to the ESP32 over WiFi and play it back immediately. The latency is about 1 second, which concerns me but isn't a deal breaker.

This POC validated the idea that we can shift all the computation to a nearby PC and let the ESP32 handle audio input/output.

Latency test result: 1 second delay
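On the ESP32 side, the receive-and-play path looks roughly like this sketch built on the library's I2SStream and StreamCopy classes (the port, credentials, and 16 kHz mono 16-bit PCM format are my placeholder assumptions):

#include <WiFi.h>
#include "AudioTools.h"  // Phil Schatzmann's arduino-audio-tools

AudioInfo info(16000, 1, 16);    // assumed format: 16 kHz, mono, 16-bit PCM
I2SStream i2s;                   // I2S output to the MAX98357A
WiFiServer server(8000);         // the laptop streams raw PCM to this port
WiFiClient client;
StreamCopy copier(i2s, client);  // pumps bytes from TCP into I2S

void setup() {
  WiFi.begin("SSID", "PASSWORD");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);

  auto cfg = i2s.defaultConfig(TX_MODE);
  cfg.copyFrom(info);
  cfg.pin_bck = D7;  // same wiring as the tone test
  cfg.pin_ws = D8;
  cfg.pin_data = D9;
  i2s.begin(cfg);

  server.begin();
}

void loop() {
  if (!client.connected()) {
    client = server.available();  // wait for the laptop to connect
    return;
  }
  copier.copy();  // move the next chunk of PCM to the speaker
}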

PCB Design

I designed both the hand-held device (Operator) and the main body (Switchboard) as part of this week's PCB design exercise. See details in the weekly post.

PCB Production

I milled boards for both the Operator and the Switchboard using the Carvera Desktop CNC Machine. See details in the weekly post.