# Update week 7 assignment guidelines

_Exported on 10/29/2025 at 06:39:40 EDT from Cursor (1.7.54)_

---

**User**

General guidelines:

- For each image or video, add a caption and description.
- Make all the text professional and succinct without losing key details.
- Make the information as organized and as aesthetically pleasing as possible.
- Don't just take my notes from the prompt; refine them before putting them on the page.

Tasks: update week7.html's group assignment section with three subsections.

1. Load cell direct analog output and amplifier board digital output
   a. Direct analog output subsubsection: show the load cell Wheatstone bridge photo; we're probing between V1 and V2 on button press (sense lines green and white), then show loadcell-analog-output-before-amplifier.mp4. Explain how everything works succinctly.
   b. Amplifier board digital output
      b1. Clock signal: show loadcell-clock-signal-hx711-sck.mp4. Comment that the clock-line amplitude is low. Explain how everything works succinctly. We're probing SCK off the amplifier board to amplifier board ground.
      b2. Data line: show loadcell-digital-output-hx711-dt.mp4. Comment that it's shown at 5 V due to the noise. Explain how everything works succinctly. We're probing DT off the amplifier board to amplifier board ground.
2. Microphone digital output with 16 kHz clock signal: show microphoe-probe.jpeg, then microphone-clockline.jpeg, then microphone-digitaloutput.jpeg, and include microphone-clockline.mp4. Explain how everything works succinctly.
3. IR phototransistor analog output: show irphototransistor-probes.jpeg, then irdiode-powersupply.jpeg and irdiode-power.jpeg side by side, then irphototransisotr-analog-output.mp4. Explain how everything works succinctly.

---

**Cursor**

I'll first scan `week7.html` to find the group assignment section structure, and in parallel check that all referenced media files exist and note their exact paths. Then I'll add the three subsections with captions and succinct descriptions.

I'll add the Group Assignment section with the requested three subsections, captions, and succinct explanations into `week7.html`, keeping styles consistent with existing cards.
Input Devices
Exploring various input devices and sensors for microcontroller projects, including button interfaces and camera-based edge AI inference systems.
Button LED system and camera edge AI inference
We characterized raw analog outputs and post-amplification digital signals across three sensors, capturing scope traces and annotated probes for clarity. The three investigations cover the load cell (raw analog versus amplified digital), a digital microphone with a 16 kHz clock, and an IR phototransistor (analog); each includes the probe setup, scope captures, and a brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface. With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2^N; load-cell bridge outputs are only millivolts, so low-resolution ADCs can't resolve the changes well. At minimum, a 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller, and a dedicated amplifier plus ADC (e.g., the HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Load cell Wheatstone bridge; probing differential sense nodes V1 (green) and V2 (white) on button press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity. The raw bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load, as mechanical force perturbs the bridge imbalance and produces a differential voltage proportional to the applied weight.
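To make the resolution argument concrete, here are the numbers, assuming a typical full-bridge sensitivity of about 1 mV/V (an assumed figure, not a measured one):

```latex
% Full-scale bridge output at 3.3 V excitation (assumed ~1 mV/V sensitivity)
V_{\mathrm{FS}} \approx S \cdot V_{\mathrm{exc}} = 1\,\mathrm{mV/V} \times 3.3\,\mathrm{V} \approx 3.3\,\mathrm{mV}

% Step size of a bare 12-bit ADC with a 3.3 V reference
\mathrm{LSB}_{12} = \frac{3.3\,\mathrm{V}}{2^{12}} \approx 0.81\,\mathrm{mV}
```

A bare 12-bit converter therefore resolves only a few steps across the bridge's full-scale swing, which is why the HX711's on-chip gain (128 on channel A) and 24-bit conversion matter.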
We observe the HX711 SCK clock and DT data lines relative to the amplifier board ground.
Clock (SCK) captured at the HX711 amplifier board, probed SCK to board ground
Clock amplitude appears low due to probe loading and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout, probed DT to board ground
Displayed amplitude skews high (~5 V) due to scope noise and scale, but the bit transitions align to SCK and represent the ADC output stream; the bit frame repeats for each sample in accordance with the HX711 protocol.
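A minimal Arduino-style sketch of the read sequence those traces correspond to. Pin numbers are hypothetical placeholders, and in practice a ready-made HX711 library would normally handle this; the sketch just makes the clocking explicit:

```cpp
// DT goes low when a conversion is ready; 24 SCK pulses clock out the 24-bit value,
// and a 25th pulse selects channel A with gain 128 for the next conversion.
const int PIN_DT  = 4;   // hypothetical data pin
const int PIN_SCK = 5;   // hypothetical clock pin

long readHX711() {
  while (digitalRead(PIN_DT) == HIGH) { }      // wait for conversion ready (DT low)
  long value = 0;
  for (int i = 0; i < 24; i++) {               // 24 data bits, MSB first
    digitalWrite(PIN_SCK, HIGH);
    delayMicroseconds(1);
    value = (value << 1) | digitalRead(PIN_DT);
    digitalWrite(PIN_SCK, LOW);
    delayMicroseconds(1);
  }
  digitalWrite(PIN_SCK, HIGH);                 // 25th pulse: channel A, gain 128
  delayMicroseconds(1);
  digitalWrite(PIN_SCK, LOW);
  if (value & 0x800000L) value -= 0x1000000L;  // convert 24-bit two's complement to signed
  return value;
}

void setup() {
  Serial.begin(115200);
  pinMode(PIN_DT, INPUT);
  pinMode(PIN_SCK, OUTPUT);
  digitalWrite(PIN_SCK, LOW);
}

void loop() {
  Serial.println(readHX711());
  delay(100);
}
```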
We capture the microphone probe points, the ~16 kHz clock, and the synchronous data output, with clock and data referenced to the module ground.
Probe points for clock and data relative to module ground
We identify a stable ground and tap the clock and data lines to assess timing integrity before decoding; proper referencing minimizes measurement artifacts and preserves true edge timing.
Clock line near 16 kHz, observed as a clean, stable square wave
The clock provides the sampling reference for synchronous data capture; its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles aligned to clock edges enable reliable sampling and decoding, confirming synchronous serial behavior suitable for I²S/PDM-style acquisition.
The combined clock-and-data capture demonstrates synchronous digital microphone behavior and verifies the timing relationship and signal integrity for downstream processing.
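For illustration only (this is not the module's actual interface driver), a bit-banged capture that mirrors what the scope shows by sampling the data pin on each rising edge of the ~16 kHz clock; pin numbers are hypothetical:

```cpp
const int PIN_CLK = 6;   // hypothetical clock input
const int PIN_DAT = 7;   // hypothetical data input

void setup() {
  Serial.begin(115200);
  pinMode(PIN_CLK, INPUT);
  pinMode(PIN_DAT, INPUT);
}

void loop() {
  static int last = LOW;
  int clk = digitalRead(PIN_CLK);
  if (clk == HIGH && last == LOW) {   // rising edge of the clock
    int bit = digitalRead(PIN_DAT);   // data is valid on the defined edge
    Serial.print(bit);                // stream raw bits for inspection
  }
  last = clk;
}
```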
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across the phototransistor sense node and reference
Illumination from the diode modulates collector current, changing the sensed voltage across the load; this configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup and the diode powered on
The emitter is powered and positioned to control the incident IR on the phototransistor, enabling a repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor; the response is smooth and monotonic within the tested range, enabling thresholding or analog filtering.
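A minimal sketch for logging that analog response on the ESP32-S3's 12-bit ADC; the sense pin is a hypothetical placeholder:

```cpp
const int PIN_SENSE = 1;   // hypothetical ADC-capable pin on the sense node

void setup() {
  Serial.begin(115200);
  analogReadResolution(12);            // ESP32-S3 ADC is 12-bit
}

void loop() {
  int raw = analogRead(PIN_SENSE);     // 0..4095 counts
  float volts = raw * 3.3f / 4095.0f;  // convert counts to volts at a 3.3 V reference
  Serial.println(volts, 3);            // more IR light -> more conduction -> lower voltage here
  delay(50);
}
```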
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Input devices training video
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a pre-soldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
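The check itself reduces to a few lines; a minimal sketch along the lines of the Week 5 code, with hypothetical pin numbers:

```cpp
const int PIN_BUTTON = 2;  // hypothetical button pin, pulled to ground when pressed
const int PIN_LED    = 3;  // hypothetical LED pin

void setup() {
  pinMode(PIN_BUTTON, INPUT_PULLUP);   // internal pull-up; pressed reads LOW
  pinMode(PIN_LED, OUTPUT);
}

void loop() {
  bool pressed = (digitalRead(PIN_BUTTON) == LOW);
  digitalWrite(PIN_LED, pressed ? HIGH : LOW);   // LED mirrors the button state
}
```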
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master: I2C transaction unexpected nack detected
18:42:41.622 -> E (158991) i2c.master: I2C transaction failed
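Because the log shows a NACK from the display, a generic I²C bus scan is a useful next step to confirm whether the OLED ever acknowledges its address (a standard sketch, assuming the board's default Wire pins):

```cpp
#include <Wire.h>

// Generic I2C bus scanner: prints every address that ACKs.
void setup() {
  Serial.begin(115200);
  Wire.begin();                          // default SDA/SCL; pass explicit pins if the board differs
  delay(500);
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {   // 0 means the device ACKed
      Serial.print("Found device at 0x");
      Serial.println(addr, HEX);
    }
  }
}

void loop() { }
```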
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
    Initialize Serial communication
    Configure LED and OLED display
    Initialize camera module with grayscale format (160x120)
    Load Edge AI inference library
END Setup

BEGIN Loop
    Check for touch trigger OR 60-second timeout
    IF trigger detected THEN
        Turn LED ON to indicate capture
        Capture image frame from camera
        IF capture successful THEN
            // Display on OLED with dithering
            Scale image to 128x64
            Apply contrast enhancement
            Apply Floyd-Steinberg dithering
            Display dithered image on OLED
            // Process for Edge AI inference
            Convert grayscale to RGB888
            Prepare input buffer for neural network
            Run Edge Impulse classifier
            Get inference result (TTD value)
            // Display results
            Display TTD overlay on OLED
            Print TTD value to Serial Monitor
        ELSE
            Show error message on OLED
        ENDIF
        Turn LED OFF
        Return frame buffer to camera driver
    ENDIF
    Small delay (30ms)
END Loop
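For reference, the "Run Edge Impulse classifier" step typically maps onto the exported Arduino library roughly as sketched below; the header name, feature buffer handling, and output indexing are placeholders rather than this project's actual firmware:

```cpp
#include <string.h>
#include <project_inferencing.h>   // hypothetical name of the exported Edge Impulse library

// One value per pixel, prepared from the RGB888 conversion step in the pseudocode above
static float features[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

// Callback the SDK uses to pull slices of the input buffer
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

float runInference() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return -1.0f;                          // inference failed
  }
  return result.classification[0].value;   // single output used here as the TTD estimate
}
```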
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Input Devices
Exploring various input devices and sensors for microcontroller projects, including button interfaces and camera-based edge AI inference systems.
Button LED system and camera edge AI inference
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verification of edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2^N for an N-bit converter. Load-cell bridge outputs are only millivolts, so low-resolution ADCs cannot resolve the changes well.
At minimum, a 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier and ADC (e.g., the HX711) adds gain and effective resolution, improving SNR and measurement fidelity.
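To make the resolution argument concrete, here is a minimal standalone C++ sketch of the step-size arithmetic. The 10 mV full-scale bridge figure (a typical 2 mV/V cell at 5 V excitation) is an illustrative assumption, not a measurement from our setup.

```cpp
// Compare ADC step size against a millivolt-level bridge signal to show why
// amplification (or a high-resolution ADC) is needed. Values are illustrative.
#include <cstdio>
#include <cmath>

int main() {
    const double vref = 3.3;                  // ADC reference voltage [V]
    const double bridgeFullScale = 0.010;     // assumed ~10 mV full-scale bridge output
                                              // (e.g., 2 mV/V sensitivity at 5 V excitation)
    const int bitDepths[] = {10, 12, 16, 24};
    for (int bits : bitDepths) {
        double lsb = vref / std::pow(2.0, bits);      // volts per count
        double counts = bridgeFullScale / lsb;        // counts spanning the 10 mV signal
        std::printf("%2d-bit ADC: LSB = %9.2f uV, ~%8.0f counts over 10 mV\n",
                    bits, lsb * 1e6, counts);
    }
    return 0;
}
```

At 10 or 12 bits the whole bridge swing spans only a handful of counts, which is the motivation for the HX711 in the next subsection.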
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
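For reference, a hedged sketch of the read sequence these traces correspond to, assuming an Arduino-compatible core; the pin numbers are placeholders, and the sequence follows the HX711's documented pattern of a data-ready DT low, 24 clocked data bits, and a 25th pulse selecting channel A at gain 128.

```cpp
// Hedged sketch of an HX711 read (DT_PIN and SCK_PIN are placeholder pins).
// DT goes low when a conversion is ready; 24 SCK pulses shift the sample out
// MSB-first; a 25th pulse sets channel A / gain 128 for the next conversion.
const int DT_PIN  = 4;   // assumption: data line from the amplifier board
const int SCK_PIN = 5;   // assumption: clock line to the amplifier board

void setup() {
  Serial.begin(115200);
  pinMode(DT_PIN, INPUT);
  pinMode(SCK_PIN, OUTPUT);
  digitalWrite(SCK_PIN, LOW);
}

long readHx711() {
  while (digitalRead(DT_PIN) == HIGH) { }        // wait for data-ready (DT low)
  long value = 0;
  for (int i = 0; i < 24; i++) {                 // clock out 24 data bits, MSB first
    digitalWrite(SCK_PIN, HIGH);
    delayMicroseconds(1);
    value = (value << 1) | digitalRead(DT_PIN);
    digitalWrite(SCK_PIN, LOW);
    delayMicroseconds(1);
  }
  digitalWrite(SCK_PIN, HIGH);                   // 25th pulse: channel A, gain 128
  delayMicroseconds(1);
  digitalWrite(SCK_PIN, LOW);
  if (value & 0x800000L) value |= 0xFF000000L;   // sign-extend the 24-bit result
  return value;
}

void loop() {
  Serial.println(readHx711());
  delay(100);
}
```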
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined clock edges, enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
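As a rough bench check of the clock, a minimal Arduino-style sketch like the following could estimate frequency and duty cycle with pulseIn(); the pin choice is an assumption, and the measurement is approximate at these short half-periods.

```cpp
// Rough frequency and duty-cycle check for the microphone clock line
// (CLK_PIN is a placeholder). pulseIn() measures the HIGH and LOW half-periods;
// their sum gives the clock period.
const int CLK_PIN = 6;   // assumption: microphone clock line

void setup() {
  Serial.begin(115200);
  pinMode(CLK_PIN, INPUT);
}

void loop() {
  unsigned long highUs = pulseIn(CLK_PIN, HIGH, 20000); // 20 ms timeout
  unsigned long lowUs  = pulseIn(CLK_PIN, LOW, 20000);
  if (highUs > 0 && lowUs > 0) {
    float periodUs = highUs + lowUs;
    Serial.print("Clock ~");
    Serial.print(1e6 / periodUs / 1000.0, 1);    // kHz
    Serial.print(" kHz, duty ");
    Serial.print(100.0 * highUs / periodUs, 0);  // percent HIGH
    Serial.println(" %");
  }
  delay(500);
}
```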
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases phototransistor conduction, so more current flows through the load resistor and the voltage at the sensed node drops. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
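A minimal sketch of how that analog node could be read and thresholded on an Arduino-compatible board; the pin and threshold values are uncalibrated placeholders rather than numbers from this setup.

```cpp
// Read the phototransistor sense node and apply a simple threshold.
// SENSE_PIN and THRESHOLD are placeholder assumptions, not calibrated values.
const int SENSE_PIN = 1;     // assumption: ADC pin wired to the sense node
const int THRESHOLD = 2000;  // assumption: raw counts separating dark vs. lit

void setup() {
  Serial.begin(115200);
}

void loop() {
  int raw = analogRead(SENSE_PIN);        // more IR -> more conduction -> lower node voltage
  bool irDetected = (raw < THRESHOLD);    // below threshold means strong illumination
  Serial.print(raw);
  Serial.println(irDetected ? "  IR detected" : "  dark");
  delay(100);
}
```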
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
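For context, a minimal version of the button-to-LED pattern looks like the following; the pin assignments are assumptions rather than this board's actual mapping.

```cpp
// Minimal button-to-LED check: the LED mirrors the button state to confirm
// GPIO wiring and signal handling. Pin numbers are placeholder assumptions.
const int BUTTON_PIN = 2;   // assumption: button to ground, internal pull-up enabled
const int LED_PIN    = 3;   // assumption: LED output pin

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);  // active-low with pull-up
  digitalWrite(LED_PIN, pressed ? HIGH : LOW);      // LED on while pressed
}
```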
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master: I2C transaction unexpected nack detected
18:42:41.622 -> E (158991) i2c.master: I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
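A hedged skeleton of the capture-and-release portion of this loop, using the ESP32 camera driver; it assumes the camera was already initialized in grayscale 160x120 as in the setup step, and leaves the OLED and Edge Impulse steps as placeholders rather than guessed API calls.

```cpp
// Capture one frame with the ESP32 camera driver and hand the buffer back.
// Assumes esp_camera_init() already ran with a 160x120 grayscale configuration.
#include "esp_camera.h"

void captureOnce() {
  camera_fb_t *fb = esp_camera_fb_get();      // grab one frame from the driver
  if (fb == nullptr) {
    Serial.println("Capture failed");         // corresponds to the error branch above
    return;
  }
  Serial.printf("Captured %ux%u (%u bytes)\n",
                (unsigned)fb->width, (unsigned)fb->height, (unsigned)fb->len);

  // Placeholder: scale/dither fb->buf for the OLED and run the Edge Impulse
  // classifier here, then display/print the TTD result.

  esp_camera_fb_return(fb);                   // always return the buffer to the driver
}
```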
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Exploring various input devices and sensors for microcontroller projects, including button interfaces and camera-based edge AI inference systems.
Button LED system and camera edge AI inference
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
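For reference, the button-to-LED readback takes the following form; the pin numbers are placeholders, and the actual assignments follow the Week 5 firmware.

// Button-to-LED readback of the kind used here (pins are placeholders).
const int PIN_BTN = 2;   // hypothetical button pin, wired to ground when pressed
const int PIN_LED = 3;   // hypothetical LED pin

void setup() {
  pinMode(PIN_BTN, INPUT_PULLUP);   // internal pull-up: reads LOW when pressed
  pinMode(PIN_LED, OUTPUT);
}

void loop() {
  bool pressed = (digitalRead(PIN_BTN) == LOW);
  digitalWrite(PIN_LED, pressed ? HIGH : LOW);   // mirror button state on the LED
}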
Attaching the camera module to the ESP32S3 normally requires soldering a small pin header under the microcontroller with solder paste and hot air, so we purchased a pre-soldered ESP32S3 camera module to avoid header issues. The camera integration enables real-time image capture and on-device edge AI inference.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on the embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on the ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master: I2C transaction unexpected nack detected
18:42:41.622 -> E (158991) i2c.master: I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
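While investigating the NACK, a standard I²C bus scan helps confirm whether the display acknowledges at its expected address; a generic Arduino Wire scanner is sketched below (default SDA/SCL pins assumed; pass the board's pins to Wire.begin if they differ).

// Generic I2C bus scan used while debugging the display NACK (illustrative).
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin();                           // default SDA/SCL; pass pins explicitly if needed
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {    // 0 = device ACKed its address
      Serial.printf("Device found at 0x%02X\n", addr);
    }
  }
}

void loop() { }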
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
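The overview above condenses to a sketch along the following lines; it is illustrative only: the inferencing header name, the grayscale-to-RGB packing, and the result field are assumptions tied to the exported Edge Impulse project, camera configuration and OLED handling are omitted, and the complete firmware is provided in the design files.

// Condensed capture-and-infer sketch (illustrative). Names marked hypothetical
// depend on the exported Edge Impulse project; see the design files for the
// full firmware including camera configuration and OLED output.
#include "esp_camera.h"
#include <camera_ttd_inferencing.h>        // hypothetical Edge Impulse Arduino library header

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback through which the Edge Impulse SDK pulls input features.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void captureAndInfer() {
  camera_fb_t *fb = esp_camera_fb_get();               // grab one grayscale frame
  if (!fb) { Serial.println("Capture failed"); return; }

  // Pack grayscale pixels as 0xRRGGBB floats, the layout image impulses expect.
  // Assumes the capture size matches the model input; the real firmware rescales.
  size_t n = fb->len;
  if (n > EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) n = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  for (size_t i = 0; i < n; i++) {
    uint8_t g = fb->buf[i];
    features[i] = (float)((g << 16) | (g << 8) | g);
  }
  esp_camera_fb_return(fb);                             // hand the buffer back to the driver

  signal_t signal;                                      // wrap the feature buffer for the SDK
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // For this regression-style model the value is read from the first result slot.
    Serial.printf("TTD: %.3f years\n", result.classification[0].value);
  }
}

void setup() {
  Serial.begin(115200);
  // esp_camera_init(&config) with the board's pin map goes here; the config
  // struct is omitted in this sketch and provided in the full firmware.
}

void loop() {
  captureAndInfer();
  delay(60000);                                         // matches the ~60 s cadence in the overview
}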
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Exploring various input devices and sensors for microcontroller projects, including button interfaces and camera-based edge AI inference systems.
Button LED system and camera edge AI inference
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master:
I2C transaction unexpected
nack detected
18:42:41.622 -> E (158991) i2c.master:
I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Key captures from the group and individual assignments with deep links to their sections.
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master:
I2C transaction unexpected
nack detected
18:42:41.622 -> E (158991) i2c.master:
I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Key captures from the group and individual assignments with deep links to their sections.
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master:
I2C transaction unexpected
nack detected
18:42:41.622 -> E (158991) i2c.master:
I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Key captures from the group and individual assignments with deep links to their sections.
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master:
I2C transaction unexpected
nack detected
18:42:41.622 -> E (158991) i2c.master:
I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Key captures from the group and individual assignments with deep links to their sections.
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master:
I2C transaction unexpected
nack detected
18:42:41.622 -> E (158991) i2c.master:
I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Key captures from the group and individual assignments with deep links to their sections.
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2N. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a presoldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Plugging in the camera module to the ESP32S3 requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased and used a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master: I2C transaction unexpected nack detected
18:42:41.622 -> E (158991) i2c.master: I2C transaction failed
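The repeated NACK indicates the display never acknowledges at the expected address, so the first diagnostic is an I²C bus scan. A standard scanner sketch (the SDA/SCL pin numbers below are placeholders for the custom board):

```cpp
// I2C bus scan to check whether the OLED acknowledges at any address.
// The SDA/SCL pin numbers are placeholders for the custom board.
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin(8, 9);                     // Wire.begin(SDA, SCL) on the ESP32 Arduino core
  delay(500);
  Serial.println("Scanning I2C bus...");
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {  // 0 = device acknowledged
      Serial.printf("Device found at 0x%02X\n", addr);
    }
  }
  Serial.println("Scan complete.");
}

void loop() {}
```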
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
    Initialize Serial communication
    Configure LED and OLED display
    Initialize camera module with grayscale format (160x120)
    Load Edge AI inference library
END Setup

BEGIN Loop
    Check for touch trigger OR 60-second timeout
    IF trigger detected THEN
        Turn LED ON to indicate capture
        Capture image frame from camera
        IF capture successful THEN
            // Display on OLED with dithering
            Scale image to 128x64
            Apply contrast enhancement
            Apply Floyd-Steinberg dithering
            Display dithered image on OLED
            // Process for Edge AI inference
            Convert grayscale to RGB888
            Prepare input buffer for neural network
            Run Edge Impulse classifier
            Get inference result (TTD value)
            // Display results
            Display TTD overlay on OLED
            Print TTD value to Serial Monitor
        ELSE
            Show error message on OLED
        ENDIF
        Turn LED OFF
        Return frame buffer to camera driver
    ENDIF
    Small delay (30ms)
END Loop
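In Arduino C++ the core of this loop maps onto the esp_camera driver and the Edge Impulse SDK roughly as sketched below. This is a trimmed illustration rather than the full script in the design files: the OLED/dithering path and the board-specific camera configuration are omitted, the exported Edge Impulse header name is a placeholder, and resizing from 160x120 to the model's input size is assumed to happen elsewhere.

```cpp
// Trimmed illustration of the capture -> inference path from the pseudocode above.
// Assumptions: placeholder Edge Impulse header name, camera configuration omitted,
// and the capture-to-model-input resize handled elsewhere.
#include <project_inferencing.h>   // placeholder name for the exported Edge Impulse library
#include "esp_camera.h"

static float ei_buf[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

// Edge Impulse pulls model input through this callback.
static int get_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, ei_buf + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
  // Camera configuration and esp_camera_init() are omitted here; the pin map is
  // board-specific and lives in the full script in the design files.
}

void loop() {
  camera_fb_t *fb = esp_camera_fb_get();            // grab one grayscale frame
  if (!fb) { Serial.println("Capture failed"); delay(30); return; }
  Serial.printf("Captured %dx%d (%u bytes)\n", (int)fb->width, (int)fb->height, (unsigned)fb->len);

  // Pack each grayscale pixel as an RGB888 value (r = g = b), the form the
  // Edge Impulse image block expects in its float input buffer.
  const size_t n = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
  for (size_t i = 0; i < n; i++) {
    uint8_t px = fb->buf[i];
    ei_buf[i] = (float)((px << 16) | (px << 8) | px);
  }
  esp_camera_fb_return(fb);                         // return the frame buffer to the driver

  signal_t signal;
  signal.total_length = n;
  signal.get_data = &get_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // With a regression model the output is a single labeled value (here, TTD in years).
    Serial.printf("TTD: %.3f years\n", result.classification[0].value);
  }
  delay(30);                                        // matches the small delay in the pseudocode
}
```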
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices projects.
Transparent documentation of AI assistance used in this week's input devices work.
This week's page was significantly enhanced by Cursor AI to create comprehensive professional documentation for input devices work. The AI assisted with creating week7.html from a template, adding the Edge AI camera inference design files documentation, documenting the individual assignment with button-LED system and camera with Edge AI inference, refining text to be professional and succinct while maintaining key details, and formatting the content with dark background cards and improved aesthetics similar to week6.html.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License