So intent and actuality were quite different this week. Again, at this stage I'm only working toward my final project. I did want one version with a dashboard on the laptop (or maybe even the phone?) where a user could see a couple of things:
Out of pure curiosity, I had Claude generate what code for this would look like, with the intent to implement it if everything else was working perfectly. However, spiral development didn't let me get to that stage. You can still see the code at the end of this page.
In the end, I both ran out of time and found myself quite satisfied with the lack of a computer interface. Music practice is best done without distraction (I'm already easily distracted just because my pitch setter is on my phone). That said, I do miss the data-logging aspect.
Eventually, I decided that for this assignment, the “application” is the embedded program running on the XIAO RP2040 that interfaces the user with both the input and output devices I designed. The application continuously listens for live audio input from the microphone, interprets the signal in the context of a chosen raga, and communicates feedback to the user through LEDs and a buzzer. Rather than relying on screens or numerical displays, the interface is intentionally embodied and intuitive: the user interacts with the system by singing, and the system responds through color, timing, and sound. This creates a closed feedback loop suitable for musical practice, where attention should remain on listening rather than reading.
Mic input
↓
Voice activity detection (RMS threshold)
↓
Pitch detection + confidence check
↓
Note classification (correct / near / incorrect)
↓
State smoothing & timing logic
↓
LED + buzzer output
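The first two stages of this pipeline (voice activity detection via an RMS threshold) can be sketched as plain C++, testable off the board. The `VOICE_THRESHOLD` value here is a hypothetical placeholder; the real value would be tuned to the microphone and the practice room.

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical threshold on normalized audio level; tune to the mic and room.
const float VOICE_THRESHOLD = 0.05f;

// Root-mean-square level of a normalized audio buffer (samples in [-1, 1]).
float getAudioRMS(const float* samples, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        sum += samples[i] * samples[i];
    }
    return std::sqrt(sum / n);
}

// Voice activity detection: is anyone singing right now?
bool voiceDetected(const float* samples, size_t n) {
    return getAudioRMS(samples, n) > VOICE_THRESHOLD;
}
```

Gating on RMS first means the (more expensive) pitch detector only runs when there is actually something to analyze, and the outputs can be cleared cleanly during silence.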
// High-level application loop (simplified)
void loop() {
  float rms = getAudioRMS();
  if (rms > VOICE_THRESHOLD) {
    PitchResult pitch = detectPitch();
    if (pitch.isValid) {
      NoteState state = classifyNote(pitch.frequency);
      updateOutputs(state);
    }
  } else {
    clearOutputs();
  }
}
This code represents the intended structure of the application, integrating audio input, classification logic, and output feedback. Individual components were tested independently during earlier weeks and brought together during the final project.
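One way the classification stage could work is to measure the detected frequency's distance, in cents, to the nearest note allowed in the chosen raga. This is a sketch under assumptions, not the project's actual implementation: the frequency table below is illustrative (Sa at 220 Hz plus a few just-intonation intervals), and the tolerance bands are hypothetical.

```cpp
#include <cmath>

enum NoteState { NOTE_CORRECT, NOTE_NEAR, NOTE_INCORRECT };

// Hypothetical tolerance bands, in cents (100 cents = one equal-tempered semitone).
const float CORRECT_CENTS = 20.0f;
const float NEAR_CENTS    = 50.0f;

// Illustrative swara targets only; the real table depends on the chosen raga.
const float RAGA_FREQS[] = {220.0f, 247.5f, 275.0f, 293.33f, 330.0f};
const int   RAGA_COUNT   = 5;

// Classify a detected frequency by its distance to the nearest raga note.
NoteState classifyNote(float frequency) {
    float best = 1e9f;
    for (int i = 0; i < RAGA_COUNT; ++i) {
        float cents = std::fabs(1200.0f * std::log2(frequency / RAGA_FREQS[i]));
        if (cents < best) best = cents;
    }
    if (best <= CORRECT_CENTS) return NOTE_CORRECT;
    if (best <= NEAR_CENTS)    return NOTE_NEAR;
    return NOTE_INCORRECT;
}
```

Working in cents rather than raw hertz makes the tolerance perceptually uniform across the singer's range, since pitch perception is logarithmic in frequency.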
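The "state smoothing & timing logic" stage of the pipeline could be handled with a simple debounce: only commit to a new state after it has been observed for several consecutive loop iterations, so the LEDs don't flicker on momentary pitch wobbles. This is one possible approach, with a hypothetical `STABLE_FRAMES` count.

```cpp
enum NoteState { NOTE_CORRECT, NOTE_NEAR, NOTE_INCORRECT };

// Hypothetical: frames a new state must persist before the outputs change.
const int STABLE_FRAMES = 5;

struct StateSmoother {
    NoteState current   = NOTE_INCORRECT;  // state currently shown on the LEDs
    NoteState candidate = NOTE_INCORRECT;  // state we are considering switching to
    int count = 0;                         // consecutive frames of the candidate

    NoteState update(NoteState observed) {
        if (observed == current) {
            count = 0;                     // already showing this state
        } else if (observed == candidate) {
            if (++count >= STABLE_FRAMES) {
                current = observed;        // held long enough; commit
                count = 0;
            }
        } else {
            candidate = observed;          // start tracking a new candidate
            count = 1;
        }
        return current;
    }
};
```

The same structure can carry the timing logic, since `count` already measures how long a state has been held.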