# Outline project plan and weekly schedule

_Exported on 12/10/2025 at 08:52:12 EST from Cursor (2.1.39)_

---

**User**

General guidelines:

- For each image or video, add a caption and description.
- Make all the text professional and succinct without losing key details.
- Make the information as organized and as aesthetically pleasing as possible.
- Don't just take my notes from the prompt; refine them before putting them on the page.

Specific tasks:

1. In week14.html, add a new section under 'Remaining Tasks (Snapshot)' with the following information.

**Week 14 day-by-day system integration plan**

Theme of the week: FINITIATIVE (http://wordnet-online.com/finitiative.shtml)

Daily schedule:

- Wednesday morning
  - Put this plan on the website
  - Update the weekly assignment sections on the final project page (Week 10–Week 13)
  - Link final project design files
  - Update reflections and learnings
  - Update picture of prism holder
  - Update library for ATtiny board
  - Add final pictures to Slack canvases
  - Schematic between boards (wired and wireless): updated system diagram on the PPT-like page
  - Start CAD model of system
  - Place final order
- Wednesday afternoon and evening
  - Design boards: combine the OLED screen/accelerometer with the pulse oximeter board in a new board (maybe upgrade to a small TFT as a minor); speaker board with real-time amplifier and TFT (on the fixed load-cell board)
  - Band: mold
  - Cases: 3D-print mountable cases for the boards; integrate designs into the final CAD document
  - Design: finish CAD model of system
- Thursday
  - Fabricate: mill boards; print mold and cast band; print cases
  - Solder: solder components on milled boards; connect boards with wired (and wireless) connection codes
  - Finishing: mirror on screen (easy way with the film)
  - Document fabrication
- Friday
  - Programming: board-level code; server-level code
  - Document code
  - Finish anything else
- Saturday
  - Demo the integrated system
  - Test the integrated system
  - Document testing and evaluation
  - Review and complete documentation (list of questions)
  - Make the video by collaging documentation
- Sunday
  - Fill any gaps
  - Prepare demo on the tensegrity table
  - Finalize 1-slide
  - Work on minors; if done, work on "if there's time" items
- Monday morning
  - Transport demo on the tensegrity table
  - Fill any gaps
  - Work on minors

Specific tasks to do this week:

- Major tasks
  - CAD model of system
  - Speaker board with real-time amplifier
  - Combine screen/accelerometer with pulse oximeter board in a new board
  - Mold and cast band
  - Design and fabricate casing (print)
  - Schematic between boards (wired and wireless): updated system diagram on the PPT-like page
  - Serial connection between pulse oximeter and tiny blinking heart for BPM (BPM from IR, SpO2 from delta)
  - Combine multiple boards on the same WiFi (switching tabs is the easy way; either the board hosting the webpage queries and updates, or a Python-based server somewhere that everything posts data to, e.g., a Raspberry Pi)
  - Put and program everything together according to the above (with WiFi for now)
  - Mirror on screen (easy way with the film)
  - Document evaluations and costs (plus the rest of the list here: https://academy.cba.mit.edu/classes/project_presentation/index.html)
  - Summary slide and one-minute video for documentation: conception, construction, operation
- Minor tasks
  - Program microphone/speaker
  - Fix reaction-time delay code
  - Program LoRa connection
  - Fix OLED-plus-WiFi issue
  - Upgrade to TFT (SPI is very straightforward; design the board with either SPI or OLED connection)
  - Fix heart engraving to center
  - Engrave K9 glass mirror if it arrives
  - RD and IRD isolation slit (maybe wick and then cut)
  - Do the calibration curve for the load cell
  - Finish cardboard laser-cut origami big mirror frame
- If there's time
  - Moving base of mirror
- Orders
  - Raspberry Pi Zero (or server host; do some research)
- Presentation
  - Aim for 2.5 minutes because Neil will ask questions
  - Generally no slides except for 1 open summary slide (have backup slides in case questions come up!)

2. At the top of the plan, embed the finalproject/motivation/positive_possum.jpg image under the title.

3. Add a link to the system integration plan in the table of contents.

---

**Cursor**

Reviewing week14.html to understand its structure. Checking week13.html for the template, then searching for "Remaining Tasks" and the table-of-contents structure in finalproject.html. Creating week14.html based on week13.html and adding the system integration plan section:
Final Project
Final integration week focused on bringing all subsystems together into a cohesive MirrorAge system: completing hardware fabrication, firmware integration, and documentation, and preparing the final presentation.
Related: See the Development Timeline on the final project page for the overall project schedule.
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
Reflections and key learnings from the final project integration week will be documented here.
Acknowledgments for contributions and collaborations during the final project week.
Documentation of AI-assisted tasks and tools used during the final project integration week.
MIT CEE PhD @ AbuGoot and Gladyshev Labs
HTMAA 2025 · Making + AI for Aging & Longevity
This is my home base for How to Make (Almost) Anything. I’m documenting weekly assignments, fabrication experiments, and a final project that bridges making with aging & health analytics.
Edge AI mirror that combines SenseCraft vision, ReactionAge latency testing, grip-force sensing, wearable streaming, and molded packaging into one self-contained assessment system.
Minimal templates for week docs (Markdown → HTML), BOM tables, parametric CAD files, and reproducible build recipes.
Fabrication, sensing, and health tech; plus a few works-in-progress from my broader research.
sdajani [at] mit.edu · Google Scholar
I am a PhD student in the Department of Civil and Environmental Engineering at the Massachusetts Institute of Technology (MIT), with joint affiliations at Brigham and Women's Hospital (BWH, part of Mass General Brigham) and Beth Israel Deaconess Medical Center (BIDMC, part of Beth Israel Lahey Health) at Harvard Medical School (HMS), where I am a member of the Abudayyeh-Gootenberg Lab and the Gladyshev Lab. I am also affiliated with the Mesoscale Nuclear Materials (MNM) Group in the Short Lab within the Department of Nuclear Science and Engineering at MIT. My research integrates the mechanics of materials under extreme conditions with machine learning to advance health, aging, and longevity science.
Session transcripts where ChatGPT helped refine designs, code, and plans for this course.
Initial setup and homepage development
Created a Python script to automatically convert Cursor AI markdown transcripts to styled HTML files for better browser viewing.
Usage: `python3 scripts/md_to_html_converter.py input.md output.html "Title" "Description"`
Developed systematic prompts for consistent footer updates and content management across all pages in the HTMAA documentation workflow.
Focus: Systematic content updates, footer standardization, and documentation workflow optimization
Created comprehensive prompts for making the entire web repository fully responsive on phones while maintaining desktop-friendly design. Applied mobile-first responsive design principles, responsive tables, flexible layouts, and touch-friendly interactions.
Focus: Mobile-first responsive design, viewport optimization, responsive tables, flexible layouts, and touch-friendly UI
Systematic update of all week pages with helpful documentation links, creation of week7.html for input devices, and comprehensive resource integration across the entire course documentation structure.
Focus: Week 7 creation, helpful documentation links integration, course resource organization, and systematic page updates across all weeks
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation.
View the complete integration plan:
Week 14 Day-by-Day Integration Plan →

MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
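The packet logging mentioned above can be as simple as one CSV row per fused sample over USB serial, which the analytics notebook then ingests. A minimal sketch, assuming an illustrative packet layout (field names and the helper are not the project's actual code):

```cpp
// Hypothetical packet layout; the real MirrorAge fields may differ.
struct IcPacket {
  uint32_t ms;        // millis() timestamp
  float grip_kg;      // grip-strength reading
  float reaction_ms;  // latest ReactionAge latency
  float face_ttd;     // FaceTTD inference output
};

// Emit one CSV row per sample; the Python notebook reads the serial
// stream (or a captured log file) straight into a DataFrame.
void logPacket(const IcPacket& p) {
  Serial.printf("%lu,%.2f,%.1f,%.3f\n",
                (unsigned long)p.ms, p.grip_kg, p.reaction_ms, p.face_ttd);
}
```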
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
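As a concrete reading of that fusion step, the sketch below assumes each modality contributes an embedding plus a learned relevance score; softmax turns the scores into attention weights and the fused vector is their weighted sum. This is an assumption about the intended mechanism, not the project's implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// emb[i] = embedding for modality i; score[i] = its learned relevance score.
std::vector<float> attentionFuse(const std::vector<std::vector<float>>& emb,
                                 const std::vector<float>& score) {
  // Softmax over per-modality scores (max-subtracted for numerical stability).
  std::vector<float> w(score.size());
  float mx = *std::max_element(score.begin(), score.end());
  float sum = 0.f;
  for (size_t i = 0; i < score.size(); ++i) {
    w[i] = std::exp(score[i] - mx);
    sum += w[i];
  }
  for (float& x : w) x /= sum;

  // Fused feature vector = attention-weighted sum of modality embeddings.
  std::vector<float> fused(emb[0].size(), 0.f);
  for (size_t i = 0; i < emb.size(); ++i)
    for (size_t j = 0; j < fused.size(); ++j)
      fused[j] += w[i] * emb[i][j];
  return fused;
}
```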
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
```
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
   a. Check if 60 seconds have elapsed
   b. Capture image from camera
   c. Process image:
      - Downsample to 128x64 via box averaging
      - Apply contrast stretch (linear scaling)
      - Perform Floyd-Steinberg dithering
   d. Display processed bitmap on OLED
   e. Release frame buffer
5. Repeat process
```
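The three processing steps in step 4c map to short routines. The sketch below assumes a grayscale source frame and a generically packed 1-bit output buffer; sizes, names, and the OLED driver's exact bit layout are illustrative rather than the project's actual code:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>

constexpr int DST_W = 128, DST_H = 64;  // SSD1306 resolution

// 1) Box-average downsample: each output pixel averages its source block.
void downsample(const uint8_t* src, int sw, int sh, uint8_t* dst, int dw, int dh) {
  for (int y = 0; y < dh; ++y) {
    int y0 = y * sh / dh, y1 = (y + 1) * sh / dh;
    for (int x = 0; x < dw; ++x) {
      int x0 = x * sw / dw, x1 = (x + 1) * sw / dw;
      long sum = 0; int n = 0;
      for (int sy = y0; sy < y1; ++sy)
        for (int sx = x0; sx < x1; ++sx) { sum += src[sy * sw + sx]; ++n; }
      dst[y * dw + x] = static_cast<uint8_t>(sum / (n ? n : 1));
    }
  }
}

// 2) Linear contrast stretch: rescale [min, max] to [0, 255].
void stretchContrast(uint8_t* img, int n) {
  uint8_t lo = 255, hi = 0;
  for (int i = 0; i < n; ++i) { lo = std::min(lo, img[i]); hi = std::max(hi, img[i]); }
  if (hi == lo) return;
  for (int i = 0; i < n; ++i)
    img[i] = static_cast<uint8_t>((img[i] - lo) * 255 / (hi - lo));
}

// 3) Floyd-Steinberg dithering to 1 bit: quantize each pixel and diffuse the
//    error right (7/16) and to the next row (3/16, 5/16, 1/16).
void ditherFS(const uint8_t* img, uint8_t* bits /* DST_W*DST_H/8, MSB-first */) {
  static int16_t work[DST_W * DST_H];  // signed so error can over/undershoot
  for (int i = 0; i < DST_W * DST_H; ++i) work[i] = img[i];
  std::memset(bits, 0, DST_W * DST_H / 8);
  for (int y = 0; y < DST_H; ++y) {
    for (int x = 0; x < DST_W; ++x) {
      int i = y * DST_W + x;
      int out = work[i] < 128 ? 0 : 255;
      int err = work[i] - out;
      if (out) bits[i >> 3] |= 0x80 >> (i & 7);  // set "white" pixel
      if (x + 1 < DST_W)     work[i + 1]         += err * 7 / 16;
      if (y + 1 < DST_H) {
        if (x > 0)           work[i + DST_W - 1] += err * 3 / 16;
                             work[i + DST_W]     += err * 5 / 16;
        if (x + 1 < DST_W)   work[i + DST_W + 1] += err * 1 / 16;
      }
    }
  }
}
```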
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
```
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
   a. Update touch sensor readings
   b. Detect touch press events (justPressed)
   c. Check for capture trigger:
      - Touch press OR 60-second timer elapsed
   d. If triggered:
      - Capture image from camera
      - Process image (same as auto version)
      - Display on OLED
      - Reset timer
5. Continue monitoring for next trigger
```
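A minimal sketch of the trigger logic above, using the Arduino core's touchRead() on the ESP32-S3 (GPIO1/GPIO2 are touch channels T1/T2); the threshold value and the captureAndDisplay() helper are assumptions to tune and fill in:

```cpp
const uint32_t INTERVAL_MS = 60000;      // 60-second auto-capture timer
const uint32_t TOUCH_THRESHOLD = 40000;  // ESP32-S3 touch readings rise when touched

bool wasTouched = false;
uint32_t lastCapture = 0;

void loop() {
  bool touched = touchRead(T1) > TOUCH_THRESHOLD ||
                 touchRead(T2) > TOUCH_THRESHOLD;
  bool justPressed = touched && !wasTouched;  // rising edge = a new press
  wasTouched = touched;

  if (justPressed || millis() - lastCapture >= INTERVAL_MS) {
    captureAndDisplay();     // capture, process, display (same as the auto version)
    lastCapture = millis();  // reset the 60-second timer
  }
}
```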
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
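For the export-a-library route (as opposed to the SenseCraft one-click flow), an Edge Impulse Arduino export is typically driven as below; the header name comes from the exported project and the feature-filling code depends on the model's input, so treat this as a sketch:

```cpp
#include <cstring>
// Hypothetical header name; Edge Impulse generates <project>_inferencing.h.
#include <mirrorage_inferencing.h>

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];  // filled from camera/mic

static int get_feature_data(size_t offset, size_t length, float* out) {
  std::memcpy(out, features + offset, length * sizeof(float));
  return 0;
}

void classifyOnce() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Print per-class probabilities (e.g., emotion labels) over serial.
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; ++i)
    Serial.printf("%s: %.2f\n", result.classification[i].label,
                  result.classification[i].value);
}
```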
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
[Week 10 progress and contributions to final project placeholder]
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
[Week 11 progress and contributions to final project placeholder]
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
[Week 12 progress and contributions to final project placeholder]
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
[Week 13 progress and contributions to final project placeholder]
[Links to CAD/SVG/assets placeholder.]
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with the updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Final Project
Final project integration week focused on bringing all subsystems together into a cohesive MirrorAge system: completing hardware fabrication, firmware integration, and documentation, and preparing for the final presentation.
Related: See the Development Timeline on the final project page for the overall project schedule.
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
Reflections and key learnings from the final project integration week will be documented here.
Acknowledgments for contributions and collaborations during the final project week.
Documentation of AI-assisted tasks and tools used during the final project integration week.
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the Gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
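To make the logging pathway concrete, here is a minimal sketch of what a synchronized biomarker packet and its CSV serial log could look like; the struct fields, units, and the logPacket helper are illustrative assumptions, not the firmware's actual format.

```cpp
// Hypothetical packet layout and CSV logger; field names, units, and
// rates are assumptions for illustration, not the firmware's actual format.
#include <Arduino.h>

struct BiomarkerPacket {
  uint32_t ms;          // millis() timestamp used to align modalities
  float gripKg;         // grip strength sample
  float reactionMs;     // latest ReactionAge latency
  float faceTtdScore;   // FaceTTD inference output (0..1)
  float voiceAgeYears;  // VoiceAge regression output
  float accelRms;       // wearable accelerometer RMS magnitude
};

// One CSV row per packet, so the Python analytics notebook can ingest the stream.
void logPacket(const BiomarkerPacket &p) {
  Serial.printf("%lu,%.2f,%.1f,%.3f,%.2f,%.3f\n",
                (unsigned long)p.ms, p.gripKg, p.reactionMs,
                p.faceTtdScore, p.voiceAgeYears, p.accelRms);
}
```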
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched the Blomkvist et al. regression within 4.6 ms RMSE; a verification sketch follows this list.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
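The reaction-time comparison above reduces to a root-mean-square error between measured latencies and a published regression. A minimal sketch, assuming a linear age model whose coefficients are placeholders (the published Blomkvist et al. values would be substituted):

```cpp
// Minimal RMSE computation; the linear model coefficients below are
// placeholders, not the published Blomkvist et al. values.
#include <math.h>

float predictedLatencyMs(float ageYears) {
  const float slope = 1.5f, intercept = 190.0f;  // placeholder coefficients
  return slope * ageYears + intercept;
}

float rmseAgainstModel(const float *measuredMs, const float *ages, int n) {
  float sumSq = 0.0f;
  for (int i = 0; i < n; i++) {
    float err = measuredMs[i] - predictedLatencyMs(ages[i]);
    sumSq += err * err;
  }
  return sqrtf(sumSq / n);  // e.g. the reported 4.6 ms figure
}
```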
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
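For intuition, attention-based fusion can be reduced to a softmax over per-modality scores followed by a weighted sum of modality embeddings. The sketch below is a toy illustration only; the real pipeline's dimensions, scoring functions, and training are not specified here.

```cpp
// Toy attention fusion: softmax over per-modality scores, then a weighted
// sum of modality embeddings. Dimensions and scores are illustrative only.
#include <math.h>

const int N_MODALITIES = 5;  // grip, voice, face, motion, reaction time
const int DIM = 8;           // shared embedding size (assumption)

void fuseModalities(const float feats[N_MODALITIES][DIM],
                    const float scores[N_MODALITIES],
                    float fused[DIM]) {
  float w[N_MODALITIES], sum = 0.0f;
  for (int m = 0; m < N_MODALITIES; m++) { w[m] = expf(scores[m]); sum += w[m]; }
  for (int m = 0; m < N_MODALITIES; m++) w[m] /= sum;  // attention weights

  for (int d = 0; d < DIM; d++) {
    fused[d] = 0.0f;
    for (int m = 0; m < N_MODALITIES; m++) fused[d] += w[m] * feats[m][d];
  }
}
```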
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
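Step 4c is the heart of the outline above. A minimal sketch of the Floyd-Steinberg pass, assuming an 8-bit grayscale buffer already box-averaged down to the OLED's 128x64 resolution and the Arduino constrain() helper; this illustrates the algorithm rather than reproducing the project's exact code.

```cpp
// Floyd-Steinberg pass for step 4c, assuming an 8-bit grayscale buffer
// already box-averaged to 128x64 and the Arduino constrain() helper.
#include <Arduino.h>

const int W = 128, H = 64;

// Threshold each pixel to black/white and diffuse the quantization error
// to the unvisited right and lower neighbors (7/16, 3/16, 5/16, 1/16).
void floydSteinberg(uint8_t *gray) {
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int i = y * W + x;
      int oldPx = gray[i];
      int newPx = (oldPx < 128) ? 0 : 255;
      gray[i] = (uint8_t)newPx;
      int err = oldPx - newPx;
      if (x + 1 < W) gray[i + 1] = constrain(gray[i + 1] + err * 7 / 16, 0, 255);
      if (y + 1 < H) {
        if (x > 0)     gray[i + W - 1] = constrain(gray[i + W - 1] + err * 3 / 16, 0, 255);
        gray[i + W] = constrain(gray[i + W] + err * 5 / 16, 0, 255);
        if (x + 1 < W) gray[i + W + 1] = constrain(gray[i + W + 1] + err * 1 / 16, 0, 255);
      }
    }
  }
}
```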
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
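The trigger logic in step 4 combines an edge-detected touch read with a non-blocking millis() timer. A minimal sketch assuming the ESP32 Arduino core's touchRead(); the pin mapping and threshold are placeholders to tune per board, and on the ESP32-S3 the raw touch count rises when the pad is touched.

```cpp
// Trigger logic for step 4: edge-detected touch OR a 60 s non-blocking
// millis() timer. Pin and threshold are placeholders to tune per board.
#include <Arduino.h>

const int TOUCH_PIN = T1;                 // maps to GPIO1 on the ESP32-S3
const uint32_t TOUCH_THRESHOLD = 30000;   // tune empirically per pad/finger
const unsigned long INTERVAL_MS = 60000;  // auto-capture period

bool wasTouched = false;
unsigned long lastCaptureMs = 0;

// True only on the transition into the touched state (justPressed behavior).
bool justPressed() {
  bool touched = touchRead(TOUCH_PIN) > TOUCH_THRESHOLD;  // S3 counts rise on touch
  bool pressed = touched && !wasTouched;
  wasTouched = touched;
  return pressed;
}

void setup() { Serial.begin(115200); }

void loop() {
  bool timerElapsed = (millis() - lastCaptureMs) >= INTERVAL_MS;
  if (justPressed() || timerElapsed) {
    // captureAndDisplay();  // hypothetical helper: capture, process, show on OLED
    lastCaptureMs = millis();
  }
}
```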
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
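For the Edge Impulse route (as opposed to SenseCraft's one-click flashing), an exported Arduino library is typically invoked as below. The header name mirrorage_inferencing.h and the feature-filling step are placeholders for the project-specific export; run_classifier and the EI_CLASSIFIER_* macros come from the Edge Impulse SDK.

```cpp
// Invoking an Edge Impulse Arduino-library export; "mirrorage_inferencing.h"
// is a placeholder for the project-specific header, and the feature buffer
// is assumed to be filled from the camera or microphone elsewhere.
#include <Arduino.h>
#include <mirrorage_inferencing.h>

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int getFeatureData(size_t offset, size_t length, float *out) {
  memcpy(out, features + offset, length * sizeof(float));
  return 0;
}

void classifyFrame() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &getFeatureData;

  ei_impulse_result_t result = {0};
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.printf("%s: %.2f\n", result.classification[i].label,
                    result.classification[i].value);
    }
  }
}
```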
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
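For readers reproducing the timing pipeline, a single reaction-time trial reduces to a randomized stimulus and a micros()-timed response. A minimal sketch; the stimulus/button pins and trial pacing are placeholder assumptions rather than the actual ReactionAge firmware.

```cpp
// Minimal reaction-time trial, assuming an LED stimulus and an active-low
// button; pin numbers and delays are placeholders, not the ReactionAge pinout.
#include <Arduino.h>

const int LED_PIN = 3, BUTTON_PIN = 4;

void setup() {
  Serial.begin(115200);
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);
}

void loop() {
  digitalWrite(LED_PIN, LOW);
  delay(random(1500, 4000));                  // randomized fore-period defeats anticipation
  digitalWrite(LED_PIN, HIGH);                // stimulus on
  unsigned long t0 = micros();
  while (digitalRead(BUTTON_PIN) == HIGH) {}  // busy-wait keeps timing jitter sub-millisecond
  Serial.printf("reaction_ms=%.3f\n", (micros() - t0) / 1000.0);
  delay(1000);                                // inter-trial pause
}
```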
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
[Week 11 progress and contributions to final project placeholder]
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
[Week 12 progress and contributions to final project placeholder]
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
[Week 13 progress and contributions to final project placeholder]
[Links to CAD/SVG/assets placeholder.]
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
[Week 11 progress and contributions to final project placeholder]
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
[Week 12 progress and contributions to final project placeholder]
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
[Week 13 progress and contributions to final project placeholder]
[Links to CAD/SVG/assets placeholder.]
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera captures and displays images on the OLED screen using Floyd–Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for the final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
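Where the packet format matters for replication, a minimal sketch follows. The field names, units, and CSV layout are illustrative assumptions, not the actual firmware schema.

```cpp
// Hypothetical sketch of the synchronized biomarker packet that the
// XIAO ESP32S3 logs for the Python analytics notebook. Field names and
// units are placeholders for illustration only.
#include <Arduino.h>

struct BiomarkerPacket {
  uint32_t timestampMs;   // millis() at capture
  float    gripKg;        // grip strength from the force sensor
  float    reactionMs;    // latest ReactionAge latency
  float    faceTtdScore;  // FaceTTD inference output (0..1)
  float    voiceAgeYears; // VoiceAge regression estimate
  float    accelRms;      // wearable accelerometer RMS magnitude
};

// Emit one CSV row per packet so the notebook can ingest the serial
// log directly (e.g., with pandas.read_csv).
void logPacket(const BiomarkerPacket &p) {
  Serial.printf("%lu,%.2f,%.1f,%.3f,%.2f,%.3f\n",
                (unsigned long)p.timestampMs, p.gripKg, p.reactionMs,
                p.faceTtdScore, p.voiceAgeYears, p.accelRms);
}
```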
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
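As a concrete illustration of that fusion step, here is a minimal sketch of attention-weighted fusion over per-modality feature vectors, assuming each biomarker stream has already been embedded into a shared d-dimensional space. The scaled dot-product scoring is an assumption for clarity; the production pipeline may use learned multi-head attention instead.

```cpp
// Attention-weighted fusion: score each modality embedding against a
// query vector, softmax the scores, and return the weighted sum.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;

static float dot(const Vec &a, const Vec &b) {
  float s = 0.0f;
  for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

Vec attentionFuse(const std::vector<Vec> &modalities, const Vec &query) {
  std::vector<float> score(modalities.size());
  float maxScore = -1e30f;
  for (std::size_t m = 0; m < modalities.size(); ++m) {
    // Scaled dot-product attention score for modality m
    score[m] = dot(modalities[m], query) / std::sqrt(float(query.size()));
    maxScore = std::max(maxScore, score[m]);
  }
  // Numerically stable softmax over the modality scores
  float norm = 0.0f;
  for (float &s : score) { s = std::exp(s - maxScore); norm += s; }
  // Weighted sum of modality embeddings yields the fused feature vector
  Vec fused(query.size(), 0.0f);
  for (std::size_t m = 0; m < modalities.size(); ++m)
    for (std::size_t i = 0; i < fused.size(); ++i)
      fused[i] += (score[m] / norm) * modalities[m][i];
  return fused;
}
```

The fused vector would then feed the intrinsic capacity scoring head; the softmax weights themselves are useful to log, since they show which modality dominates each assessment.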
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Camera subsystem built on the XIAO ESP32S3 Sense with real-time image processing, edge-AI integration, and interactive selfie capture.
The camera system captures images, processes them into bitmaps, and displays them on an OLED screen using Floyd–Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays a new image every 60 seconds using box-average downsampling, contrast stretching, and Floyd–Steinberg dithering. A hedged firmware sketch follows the step outline below.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
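To make the outline concrete, here is a hedged Arduino-style sketch of the processing core, assuming the camera is configured for PIXFORMAT_GRAYSCALE (e.g., 320x240 from the OV2640). Camera and OLED initialization and the capture loop are elided; buffer sizes follow the 128x64 SSD1306.

```cpp
// Downsample -> contrast stretch -> Floyd-Steinberg dither -> OLED.
#include <Arduino.h>
#include <Wire.h>
#include <Adafruit_SSD1306.h>
#include "esp_camera.h"

constexpr int W = 128, H = 64;
Adafruit_SSD1306 oled(W, H, &Wire, -1);
static uint8_t gray[W * H];  // working buffer for the downsampled frame

void downsampleAndStretch(const camera_fb_t *fb) {
  const int bx = fb->width / W, by = fb->height / H;
  uint8_t lo = 255, hi = 0;
  for (int y = 0; y < H; ++y)
    for (int x = 0; x < W; ++x) {
      uint32_t sum = 0;  // box average over a bx-by-by block
      for (int j = 0; j < by; ++j)
        for (int i = 0; i < bx; ++i)
          sum += fb->buf[(y * by + j) * fb->width + (x * bx + i)];
      uint8_t v = sum / (bx * by);
      gray[y * W + x] = v;
      lo = min(lo, v); hi = max(hi, v);
    }
  if (hi > lo)  // linear contrast stretch to the full 0..255 range
    for (int k = 0; k < W * H; ++k)
      gray[k] = (uint8_t)(((gray[k] - lo) * 255) / (hi - lo));
}

void ditherToOled() {
  // Floyd-Steinberg: threshold each pixel, diffuse the error forward.
  static int16_t err[W * H];
  for (int k = 0; k < W * H; ++k) err[k] = gray[k];
  oled.clearDisplay();
  for (int y = 0; y < H; ++y)
    for (int x = 0; x < W; ++x) {
      int16_t old = err[y * W + x];
      int16_t nw = old < 128 ? 0 : 255;
      if (nw) oled.drawPixel(x, y, SSD1306_WHITE);
      int16_t e = old - nw;
      if (x + 1 < W)              err[y * W + x + 1]       += e * 7 / 16;
      if (y + 1 < H && x > 0)     err[(y + 1) * W + x - 1] += e * 3 / 16;
      if (y + 1 < H)              err[(y + 1) * W + x]     += e * 5 / 16;
      if (y + 1 < H && x + 1 < W) err[(y + 1) * W + x + 1] += e * 1 / 16;
    }
  oled.display();
}
```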
Interactive camera system with touch controls that allow manual capture triggers in addition to the automatic timer. A minimal trigger sketch follows the steps below.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
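A minimal sketch of the touch-or-timer trigger logic, using the GPIO1 touch pad from the notes; the threshold value is a placeholder to tune per board. Note that on the ESP32S3, touchRead() values rise when the pad is touched.

```cpp
#include <Arduino.h>

constexpr uint8_t  TOUCH_PIN = 1;                // capture trigger pad
constexpr uint32_t TOUCH_THRESHOLD = 40000;      // placeholder; tune empirically
constexpr uint32_t CAPTURE_INTERVAL_MS = 60000;  // 60-second auto capture

bool wasTouched = false;
uint32_t lastCaptureMs = 0;

void captureAndDisplay() {
  // Capture, process, and display one frame, as in the pipeline sketch above.
}

void setup() { Serial.begin(115200); }

void loop() {
  bool touched = touchRead(TOUCH_PIN) > TOUCH_THRESHOLD;
  bool justPressed = touched && !wasTouched;  // rising-edge detection
  wasTouched = touched;

  if (justPressed || millis() - lastCaptureMs >= CAPTURE_INTERVAL_MS) {
    captureAndDisplay();
    lastCaptureMs = millis();  // reset the 60-second timer
  }
}
```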
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I adapted the code to Quentin's QPAD PCB design, which carries a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps with the downsampling and dithering pipeline described above, and displays them on the OLED screen. I am also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters exported from Python into C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
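The Python-to-C++ path amounts to baking trained parameters into constant arrays. A toy sketch of the idea follows; the feature count, weight values, and regression head are all placeholders, not a trained model.

```cpp
// Illustrative "embed model parameters in firmware" pattern: a Python
// export script would emit these arrays as a generated header so the
// notebook and the firmware stay in sync.
#include <Arduino.h>

constexpr int N_FEATURES = 4;                 // hypothetical pooled features
const float kWeights[N_FEATURES] = {0.12f, -0.53f, 0.88f, 0.07f};
const float kBias = 41.6f;                    // placeholder values

float predictAge(const float features[N_FEATURES]) {
  float y = kBias;
  for (int i = 0; i < N_FEATURES; ++i) y += kWeights[i] * features[i];
  return y;  // e.g., estimated biological age in years
}
```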
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Plug in the XIAO ESP32S3, click "Deploy Model" to flash the firmware, and the emotion-classification demo starts running immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems, including tapping and swiping mechanisms for phone interaction; integrated a camera with edge-AI face detection and Wi-Fi livestreaming; designed and 3D-printed a phone holder with an amplifier; and prepared comprehensive midterm review documentation with system diagrams and a timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking for device-to-device communication (a minimal send sketch follows), designed and milled PCBs for the networking components, and explored Meshtastic for long-range communication.
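A hedged sketch of an ESP-NOW broadcast sender using the Arduino-ESP32 core 2.x API; the Reading payload is illustrative, not the project's actual packet format.

```cpp
#include <WiFi.h>
#include <esp_now.h>

struct Reading { float bpm; float spo2; };  // placeholder payload

uint8_t broadcastMac[6] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);             // ESP-NOW runs on the station interface
  if (esp_now_init() != ESP_OK) {
    Serial.println("ESP-NOW init failed");
    return;
  }
  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, broadcastMac, 6);
  peer.channel = 0;                // use the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void loop() {
  Reading r = {72.0f, 98.5f};      // stubbed sensor values
  esp_now_send(broadcastMac, reinterpret_cast<uint8_t *>(&r), sizeof(r));
  delay(1000);                     // one reading per second
}
```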
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created a Wi-Fi web interface for real-time pulse oximeter readings with live plotting (a minimal server sketch follows), developed a grip strength measurement device with a load cell and an integrated GUI, implemented an OLED button user interface for device control, and established video streaming for sensor data visualization.
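A minimal sketch of the web-interface pattern: the ESP32 serves a JSON endpoint that a browser polls to update a live chart. The credentials and the readPulseOxBpm() helper are placeholders, not the project's actual code.

```cpp
#include <WiFi.h>
#include <WebServer.h>

WebServer server(80);

float readPulseOxBpm();  // placeholder for the pulse-oximeter driver call

void handleReading() {
  // The browser polls this endpoint and appends points to a live chart.
  server.send(200, "application/json",
              String("{\"bpm\":") + readPulseOxBpm() + "}");
}

void setup() {
  WiFi.begin("SSID", "PASSWORD");  // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(250);
  server.on("/reading", handleReading);
  server.begin();
}

void loop() { server.handleClient(); }

float readPulseOxBpm() { return 72.0f; }  // stubbed sensor value
```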
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
[Week 13 progress and contributions to final project placeholder]
[Links to CAD/SVG/assets placeholder.]
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
[Links to CAD/SVG/assets placeholder.]
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
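On the host side, packet capture can be as simple as appending serial lines to a CSV file. A minimal sketch using pyserial; the port name and the packet fields shown in the comment are assumptions for illustration, not the project's documented protocol:

```python
# Hedged sketch of the host-side logger: read newline-delimited packets from
# the ESP32S3 over USB serial and append them to a CSV log for the notebook.
import serial  # pyserial

with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as port, \
     open("mirrorage_log.csv", "a") as log:
    for _ in range(1000):  # bounded capture session
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:  # e.g. a hypothetical "ts,grip_kg,rt_ms,spo2,bpm" packet
            log.write(line + "\n")
```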
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
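Where these comparisons reduce to a single number, the computation is a plain RMSE against the reference curve. A minimal Python/NumPy sketch with synthetic stand-in data; the regression coefficients are illustrative assumptions, not Blomkvist et al.'s published values:

```python
# Validation arithmetic: RMSE between device reaction times and a reference
# age regression, using synthetic data in place of a real device log.
import numpy as np

rng = np.random.default_rng(1)
age = rng.uniform(20, 80, size=200)                  # participant ages (years)
measured = 0.55 * age + 190 + rng.normal(0, 5, 200)  # simulated device RT (ms)

reference = 0.55 * age + 190                         # stand-in published curve
rmse = np.sqrt(np.mean((measured - reference) ** 2))
print(f"RMSE vs. reference curve: {rmse:.1f} ms")
```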
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
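As a rough illustration of this fusion step, a minimal NumPy sketch of attention-weighted pooling over per-modality embeddings; the shapes, the fixed query vector, and the random values are illustrative assumptions, since the real fusion layer would be learned:

```python
# Attention-weighted fusion of modality embeddings (illustrative only).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(embeddings, query):
    """embeddings: (n_modalities, d); query: (d,). Returns a fused (d,) vector."""
    scores = embeddings @ query / np.sqrt(embeddings.shape[1])  # scaled dot product
    weights = softmax(scores)                                   # one weight per modality
    return weights @ embeddings                                 # attention-weighted sum

rng = np.random.default_rng(0)
modalities = rng.normal(size=(6, 32))  # grip, voice, face, video, reaction, wearable
fused = fuse(modalities, query=rng.normal(size=32))
```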
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
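For reference, a desktop Python/NumPy sketch of the same processing chain (box-average downsample, linear contrast stretch, Floyd-Steinberg dithering). It mirrors the firmware steps for off-device validation and is not the C++ implementation itself:

```python
# Desktop mirror of the firmware image pipeline. Assumes a 2D grayscale
# array at least 128x64 in size.
import numpy as np

def to_oled_bitmap(gray, out_h=64, out_w=128):
    h, w = gray.shape
    bh, bw = h // out_h, w // out_w
    # Box-average downsample to the OLED resolution
    small = gray[:bh*out_h, :bw*out_w].reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))
    # Linear contrast stretch to the full 0..255 range
    lo, hi = small.min(), small.max()
    img = (small - lo) * (255.0 / max(hi - lo, 1e-6))
    # Floyd-Steinberg error diffusion down to 1 bit per pixel
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(out_h):
        for x in range(out_w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new else 0
            err = old - new
            if x + 1 < out_w:
                img[y, x+1] += err * 7/16
            if y + 1 < out_h:
                if x > 0:
                    img[y+1, x-1] += err * 3/16
                img[y+1, x] += err * 5/16
                if x + 1 < out_w:
                    img[y+1, x+1] += err * 1/16
    return out  # 1 = lit pixel on the OLED
```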
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
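The trigger logic itself is a small state machine. A sketch of the control flow in Python for clarity (the firmware equivalent runs inside loop()); the touch read is a random stand-in, not the actual capacitive threshold code:

```python
# Capture trigger: fire on a touch press edge OR when the 60 s timer elapses.
import random
import time

CAPTURE_INTERVAL = 60.0  # seconds between automatic captures

def touched() -> bool:
    # Stand-in for the firmware's capacitive-touch threshold read (GPIO1/GPIO2)
    return random.random() < 0.01

last_capture = time.monotonic()
prev_touched = False
for _ in range(10_000):  # bounded here; firmware loops forever
    now = time.monotonic()
    cur = touched()
    just_pressed = cur and not prev_touched  # rising edge = a new press
    prev_touched = cur
    if just_pressed or (now - last_capture) >= CAPTURE_INTERVAL:
        # capture + process + display would run here (same pipeline as above)
        last_capture = now                   # reset the timer after any trigger
    time.sleep(0.05)
```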
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I adapted the code to Quentin's QPAD PCB design, which uses a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Principles & Cutting · HTMAA 2025
Successfully fabricated and assembled a parametric, modular construction kit based on laser-cut origami crease patterns on cardboard.
Applied vinyl stickers and completed the take-home assignment.
📁 Design Files Available
Download CAD files, view SVG patterns, and access all design assets from this week's assignments
📄 View Design Files
This week focuses on the fundamentals of digital fabrication: computer-aided design (CAD), computer-aided manufacturing (CAM), laser cutting, and vinyl cutting. We'll learn the complete workflow from design to finished parts.
Comprehensive training notes and media from laser cutter and vinyl cutter sessions.
Additional resources and guides from Anthony Pennes for laser cutting, vinyl cutting, and image compression workflows.
Comprehensive guides covering essential digital fabrication techniques, parameter optimization, and workflow best practices.
Essential techniques for optimizing image file sizes, setting up batch processing tools, and maintaining repository efficiency.
📖 View Guide
Complete laser cutting workflow including safety, parameter optimization, joint design, and troubleshooting techniques.
📖 View Guide
Detailed vinyl cutting techniques, software setup, machine operation, and weeding processes for professional results.
📖 View Guide
Note: These guides were created by Anthony Pennes and are maintained as supplementary resources for the HTMAA course. View original Slack message for context.
Official course resources for project management, computer cutting, and Git recitation materials.
Project Management - MIT Academy
Comprehensive guide to project management including file synchronization, version control (Git, GitHub, GitLab), web development, and documentation workflows. Covers essential tools for managing digital fabrication projects.
Computer Cutting - MIT Academy
Complete resource for computer-controlled cutting including laser cutting, vinyl cutting, and other cutting technologies. Covers safety, machine operation, material selection, and design considerations.
Hands-on Git tutorial covering version control fundamentals, repository management, collaboration workflows, and best practices for digital fabrication projects.
Systematic testing and documentation of laser cutter parameters for optimal cutting performance. Assignment started: September 11, 2025. Assignment completed: September 16, 2025.
Workflow Note: The Universal Control Software provides precise control over laser parameters, while Inkscape serves as the design interface for creating and sending cutting jobs. The color-coded line system allows for efficient batch processing of both cutting and engraving operations in a single job.
Finger joint - interlocking design for structural connections
Size of the slot that provides a tight enough fit for assembly
Material choice significantly impacts joint quality and assembly precision
Comprehensive analysis of kerf and clearance measurements from systematic laser cutter testing.
View and interact with the complete dataset in Google Sheets:
Systematic measurement approach for characterizing laser cutter kerf and joint clearance using precision calipers.
Peg and hole measurement setup
Top measurement view
Bottom measurement view
Kerf Definition: The width of material removed by the laser beam during cutting. Computed here per edge, as half the difference between the average actual and prescribed cut dimensions, since the beam widens the cut on both sides.
| Prescribed Cut (in) | Actual Cut #1 (in) | Actual Cut #2 (in) | Actual Cut #3 (in) | Average (in) | Std Dev (in) | Avg Kerf (in) |
|---|---|---|---|---|---|---|
| 0.136 | 0.142 | 0.1405 | 0.1445 | 0.1423 | 0.0020 | 0.0032 |
| 0.137 | 0.142 | 0.1475 | 0.149 | 0.1462 | 0.0037 | 0.0046 |
| 0.138 | 0.148 | 0.1495 | 0.147 | 0.1482 | 0.0013 | 0.0051 |
| 0.139 | 0.146 | 0.146 | 0.1475 | 0.1465 | 0.0009 | 0.0038 |
| 0.143 | 0.155 | 0.1555 | 0.1545 | 0.1550 | 0.0005 | 0.0060 |
| 0.144 | 0.1535 | 0.153 | 0.15 | 0.1522 | 0.0019 | 0.0041 |
| 0.152 | 0.163 | 0.1655 | 0.165 | 0.1645 | 0.0013 | 0.0063 |
| 0.153 | 0.166 | 0.169 | 0.168 | 0.1677 | 0.0015 | 0.0073 |
Clearance Definition: The dimensional difference between hole and peg sizes. Negative values indicate interference fit (hole smaller than peg), while positive values indicate clearance fit.
| Measurement | Sample #1 (in) | Sample #2 (in) | Sample #3 (in) | Average (in) | Std Dev (in) |
|---|---|---|---|---|---|
| Peg Size | 0.1505 | 0.1535 | 0.1505 | 0.1515 | 0.0017 |
| Hole Size | 0.156 | 0.1575 | 0.1545 | 0.1560 | 0.0015 |
| Clearance | -0.0055 | -0.004 | -0.004 | -0.0045 | 0.0009 |
Average kerf: 0.0049 inches (4.9 thousandths)
Average clearance: -0.0045 inches (interference fit)
Peg size repeatability: ±0.0017 inches (high precision)
Clearance repeatability: ±0.0009 inches (very consistent)
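These summary statistics follow directly from the tables. A small NumPy sketch that reproduces them, with the raw values copied from the measurement tables above (the page reports 0.0049 in average kerf; the small difference in the computed value likely comes from rounding in the per-row averages):

```python
# Reproduce the kerf and clearance summary statistics from the raw data.
import numpy as np

prescribed = np.array([0.136, 0.137, 0.138, 0.139, 0.143, 0.144, 0.152, 0.153])
actual = np.array([
    [0.142, 0.1405, 0.1445], [0.142, 0.1475, 0.149], [0.148, 0.1495, 0.147],
    [0.146, 0.146, 0.1475], [0.155, 0.1555, 0.1545], [0.1535, 0.153, 0.15],
    [0.163, 0.1655, 0.165], [0.166, 0.169, 0.168],
])
kerf_per_edge = (actual.mean(axis=1) - prescribed) / 2  # per-edge convention
print(f"average kerf: {kerf_per_edge.mean():.4f} in")

peg = np.array([0.1505, 0.1535, 0.1505])
hole = np.array([0.156, 0.1575, 0.1545])
clearance = peg - hole  # sign convention follows the table above
print(f"average clearance: {clearance.mean():.4f} in (std {clearance.std(ddof=1):.4f})")
```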
Design and fabricate a modular construction kit using laser cutting and vinyl cutting techniques. Assignment started: September 10, 2025. Assignment completed: September 15, 2025
Initial research and clarification discussions with instructors and classmates to understand assignment requirements and technical constraints.
Special thanks to Erik Demaine for guidance on vinyl cutting crease patterns and fold differentiation techniques, and Anthony Pennes for clarifying assignment requirements, file formats, and material options. Their expertise was invaluable in understanding the technical constraints and design possibilities for this project.
I hand‑folded the crease pattern to validate the assembly. The parts do come together when the folds are reasonably close to the intended lines. A second, refolded gadget produced a noticeably cleaner assembly.
We tried cutting the full crease pattern on one side. It failed when layers with opposite cuts overlapped and had to be push‑folded through during the final folding step — the piece broke.
First assembly of two gadgets. Folds were smooth after separating mountain and valley cuts onto opposite sides to avoid breaking during push‑folds over multiple layers. The shop cardboard was tight in the assembly pocket; thinner Amazon box cardboard folded by hand fit better. Next step: make the laser‑cut pattern more exact.
Left to right: clear mountain cut, flip and cut the opposite side, clear valley cut.
Measurements confirming mountain vs valley sides.
Side profile and insertion step demonstrating clean folds without tearing.
Exploring double‑part strategies: sequential cuts, flips between sides, and opposite‑side cuts.
Temporary rubber band used to hold alignment while completing final folds.
Switched to thinner Amazon cardboard because the shop cardboard was too thick for the assembly pocket. The Amazon box folded by hand fit perfectly and assembled smoothly. Three gadgets worked with the thinner material. Rubber bands are just for compression — they assemble without them, though the bands make it easier to keep them as one unit during insertion into the next gadget.
Scaling up to quadruple cuts: sequential cuts, flips between sides, and opposite-side cutting strategies.
Three views of the double gadget assembled with Amazon cardboard, showing clean folds and proper fit.
Assembly sequence showing compression, support positioning, and insertion process for stable construction.
The culmination: triple gadget assembly demonstrating successful scaling with Amazon cardboard and support techniques.
Applying thick‑origami concepts to avoid layer collisions and enable folding with real material thickness.
From a discussion with Erik Demaine: double each crease into parallel offsets and open vertex regions as polygons to prevent collisions. A uniform spacing works for a first pass, but for correctness the spacing between the doubled creases should depend on how many layers lie between faces in the final folded state. I plan to estimate per‑crease layer counts from an unfolded model and adjust offsets accordingly.
Citation: Jason S. Ku and Erik D. Demaine, “Folding Flat Crease Patterns With Thick Materials”, Journal of Mechanisms and Robotics, 8(3), June 2016, pp. 031003‑1–6. View paper
Input: crease segments with mountain/valley labels; sheet size S; target thickness t
offset = t / S (or a chosen fraction)
for each crease c:
draw two lines parallel to c at ±offset
for each vertex (intersection of original creases):
place a small regular polygon (e.g., octagon) centered at the vertex
Output: doubled creases + vertex holes
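A minimal NumPy sketch of this estimate variant, assuming creases arrive as endpoint pairs in sheet coordinates (the input format and function names are assumptions for illustration):

```python
# "Estimate" variant: double each crease into two parallel offsets at +/- t/S
# and open a small regular polygon at each vertex to avoid layer collisions.
import numpy as np

def double_creases(creases, sheet_size, thickness):
    """creases: list of ((x1,y1),(x2,y2)) pairs. Returns offset segment pairs."""
    offset = thickness / sheet_size
    doubled = []
    for p, q in creases:
        p, q = np.asarray(p, float), np.asarray(q, float)
        d = q - p
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to the crease
        doubled.append((p + offset * n, q + offset * n))
        doubled.append((p - offset * n, q - offset * n))
    return doubled

def vertex_polygon(center, radius, sides=8):
    """Regular polygon (e.g., octagon) opened at a crease vertex."""
    a = np.linspace(0, 2 * np.pi, sides, endpoint=False)
    return np.stack([center[0] + radius * np.cos(a),
                     center[1] + radius * np.sin(a)], axis=1)

# The planned layer-aware refinement would replace the constant `offset`
# with k * thickness * layers(c) computed per crease c.
```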
Input: crease lines; sheet size S; target thickness t
offset = t / S
for each crease c:
compute unit direction v and normals n1,n2
define two infinite offset lines at ±offset along n1/n2
for each vertex:
collect incident creases; sort by angle into cyclic order
for each adjacent pair (i, j):
pick the offset side that lies inside the angular sector
intersect those two offset lines → one polygon vertex
connect all vertices in order → convex vertex polygon
trim faces/creases by polygons as needed
Output: doubled creases + exact vertex polygons
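The geometric core of the exact variant is intersecting two adjacent offset lines to obtain each corner of the vertex polygon. A small sketch of that step in point-direction form (names are illustrative):

```python
# Intersect two offset lines, each given as a point and a unit direction.
import numpy as np

def intersect_lines(a, u, b, v):
    """Solve a + s*u = b + t*v for the 2D intersection point.
    Assumes the lines are not parallel (the matrix would be singular)."""
    A = np.column_stack([u, -v])
    s, _ = np.linalg.solve(A, b - a)
    return a + s * u
```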
Following Erik's advice, offsets should scale with the number of layers between faces in the folded state. Plan: estimate per‑crease layer counts and set per‑crease offset = k × thickness × layers(c), while keeping vertex polygons valid to avoid collisions.
Design and cut vinyl stickers and stencils using the vinyl cutter. Assignment started: September 15, 2025 Assignment completed: September 15, 2025
I initially planned to vinyl-cut lab logo stickers and collected a .png or .svg of each logo for the vinyl cutter software, intending to execute the complete workflow from cutting to final application. I hope to come back to the lab logos when there is less traffic on the vinyl cutter; for a first run, I was more excited to test the machine's fine detail with the 2D parametric design!
Used a parametric sticker design that can be infinitely customized through an interactive web application. The design demonstrates the power of parametric modeling in creating personalized vinyl stickers with adjustable parameters for size, complexity, and visual elements. The assignment demonstrated both successful execution and troubleshooting when equipment issues arose.
This web-based generator was created as a final project in 6.5310 last semester and was used to download a PNG of the default design shown above. Generate unlimited variations of this parametric design by adjusting parameters in real-time to create custom stickers for any application.
🎨 Generate Custom Designs
Complete workflow from vinyl cutting through final sticker application, demonstrating the full process from design to finished product.
Vinyl cutter in action, cutting the Gladyshev Lab logo design with precise blade control and proper tension settings.
Clean vinyl cut showing precise cuts with proper registration and no tearing or incomplete cuts.
Systematic weeding process: starting from center, progressing through unwanted sections, and finishing with clean weeded vinyl ready for transfer.
Transfer paper application process: carefully applying transfer paper to hold vinyl design, then removing it to prepare for final application.
Final application steps: positioning vinyl on target surface, using tweezers for precise removal of the broken thin vinyl strip, and achieving clean final application.
Final result: applied laptop vinyl sticker and takehome vinyl sticker ready for use, demonstrating successful completion of the vinyl cutting workflow.
Encountered and resolved a motor failure during the vinyl cutting process, demonstrating troubleshooting skills and equipment maintenance knowledge.
Vinyl cutter displaying motor error, indicating a mechanical issue that prevented normal operation.
Side-by-side comparison showing the vinyl roll before and after the failure. The roll ran out of material and then fell, causing the vinyl to kink under the cutting wheel.
Simple fix process:
This common issue occurs when the vinyl roll runs out or becomes misaligned, causing the cutting mechanism to fail. The solution involves proper material handling and machine setup.
Download or view design files from this week's assignments.
Key insights and challenges encountered during this week.
Acknowledgements for the help I received this week from my classmates and instructors.
This week's training was well-documented by Katrina Li, thank you!
Camron reviewed the website and suggested adding a table of contents to make it easier for visitors to navigate the weekly documentation. Her feedback led to the sleek, organized table of contents that now appears on both the Week 0 and Week 1 pages, significantly improving navigation and accessibility.
Photos and videos of this week's training were taken by Mariam Fitaihi, thank you!
Anthony Pennes provided the kerf and clearance design files (HTMAJointTest v0.f3d and HTMAJointTest.dxf) as part of the laser cutter training. These files were essential for understanding joint tolerances and clearance requirements when working with laser-cut materials.
Erik reviewed my approach to folding thick materials and suggested using the Ku & Demaine thick‑origami method. He pointed me to the paper and emphasized that doubled creases should be spaced according to the number of layers between faces in the final fold. This feedback led me to implement two variants (estimate and exact) of thickened crease patterns and to plan a next step for layer‑aware offsets.
Jesse helped me with laser cutting while figuring out how to do it, providing valuable guidance and support during the learning process. His assistance was crucial for understanding the laser cutting workflow and techniques.
I took notes on the group assignment for characterizing the laser cutter. :-)
Transparent documentation of AI assistance used in this week's work, following course guidelines for ethical AI usage.
This week's webpage development, training documentation, and assignment preparation were assisted by Cursor AI. The AI helped with HTML/CSS implementation, content organization, and structuring technical documentation.
Cursor AI assisted with organizing webpage sections, cleaning up HTML structure, and updating content for both week 0 and week 1 pages. The AI helped with code refactoring, section reorganization, and maintaining consistent styling across pages.
Cursor AI assisted with adding Anthony's HTMA guides to the Week 1 page, creating a new "Useful Documentation" section, and integrating external resource links with proper styling and organization.
Cursor AI assisted with creating a comprehensive Results section for the Laser Cutter Characterization Group Assignment, including kerf and clearance analysis tables, measurement methodology documentation, concept diagrams, and downloadable resources. The AI helped integrate Google Sheets data, create professional data tables, and include the complete measurement analysis workflow.
Cursor AI assisted with creating an interactive highlights section for Week 1, including visual improvements to the results display, clickable navigation links to detailed sections, table of contents integration, and enhanced user experience features. The AI helped implement hover effects, proper section linking, and visual hierarchy improvements for better navigation.
A separate chat was used to convert a flat crease pattern into a thick‑origami version following Ku & Demaine.
The AI doubled each crease into parallel offsets and generated vertex polygons (estimate = regular polygons; exact = sector‑aware polygons) and explained how to map desired thickness by setting OFFSET_FRAC = thickness / sheet_size.
Cursor AI assisted with repository size management by breaking down large commits into smaller, manageable chunks under 10MB each. The AI helped organize media files into logical commits, added the Vinyl Cutter Individual Assignment section with proper styling and image sizing, and provided guidance on git workflow optimization to work within platform size limits.
Cursor AI assisted with updating the vinyl cutter individual assignment section with comprehensive documentation including successful cut and transfer workflow, motor failure troubleshooting, and 2D parametric design integration. The AI helped structure the content with proper media organization, descriptive captions, and consistent styling throughout the section.
Cursor AI assisted with adding a comprehensive Design Files section to the Week 1 page, including table of contents integration, organized file listings with download and view links, and contextual hyperlinks throughout the page for easy access to design assets. The AI helped structure the files by category (laser cutter CAD files, SVG patterns, vinyl cutter images) with appropriate styling.
Cursor AI assisted with adding kerf and clearance design files from Anthony Pennes to the Week 1 Design Files section, creating a new subsubsection for joint test files, and properly attributing the contribution in the contributions section with links to both the Slack message and design files section.
Cursor AI assisted with fixing layout and margin issues in week1.html, including resolving width inconsistencies and ensuring proper section structure. The AI helped identify and fix extra closing div tags that were causing content to appear outside their intended sections. The issue was resolved by manually removing the duplicate closing tags.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Embedded Programming · HTMAA 2025
Key achievements and insights from the embedded programming week, including successful microcontroller programming, sensor integration, and the development of the ReactionAge reaction time measurement system.
Comprehensive analysis of critical design decisions in embedded system development
💻 Design Files Available
Download Arduino code, view circuit diagrams, and access all embedded programming assets from this week's assignments
📄 View Design Files
This week focuses on embedded programming fundamentals, exploring how to program microcontrollers and create interactive electronic systems. We'll dive into the world of digital electronics, sensors, and real-time programming to build responsive devices that can sense, process, and react to their environment.
Essential training materials and resources for embedded programming fundamentals, microcontroller setup, and sensor integration techniques.
Comprehensive guide to soldering techniques for embedded electronics, covering both wire soldering and solder paste methods.
Safety First:
Always wear safety glasses and use proper ventilation. Lead-free solder requires extra caution due to flux crystallization at high temperatures.
Step-by-step video demonstrations of essential soldering techniques for embedded electronics, covering both wire soldering and surface-mount component assembly.
Learn the proper technique for soldering wires to components: tack one edge first to hold the part in place, then solder the rest by keeping your wrist stable, resting the iron tip on the pad to heat both pad and part, and slowly feeding in solder.
Wire soldering demonstration showing proper technique and wrist stability
Complete process for surface-mount component assembly: apply solder paste, place the component, then reflow with a heat gun to create strong, reliable connections.
Applying solder paste to PCB pads
Placing microcontroller on prepared solder paste
Using heat gun to reflow solder paste and complete the connection
Pro Tips:
For detailed soldering guides and advanced techniques, refer to the Electronics & Soldering Training section above for comprehensive documentation and resources.
Complete programming setup and workflow for QPAD development boards, covering bootloader configuration, IDE setup, and code deployment for different microcontroller architectures.
The QPAD-XIAO features a Raspberry Pi RP2040 microcontroller with built-in bootloader support. No bootloader burning required - simply configure your IDE and start programming!
The QPAD21 uses an ATSAMD21 microcontroller that requires bootloader programming using a XIAO programmer chip. Follow these steps to get it ready for development.
The QPAD-XIAO-ESP32S3 features an ESP32-S3 microcontroller with built-in WiFi and Bluetooth capabilities. No bootloader burning required - just configure your IDE and start programming!
Comprehensive documentation covering embedded programming fundamentals, electronics theory, and practical implementation guides.
Official course slides covering microcontroller architecture, I/O techniques, sensor integration, and real-time system design.
Essential resources for designing custom PCBs and development boards for embedded programming projects, covering schematic design, layout principles, and manufacturing considerations.
USB-C board design showing proper connector placement, power routing, and signal integrity considerations for embedded development boards
Design Files Available:
Complete KiCad project files, schematics, and PCB layouts are available in the Design Files section below for download and modification.
Essential troubleshooting techniques for embedded development boards, covering power supply verification and connector testing using multimeter measurements.
Verify that your voltage regulator is functioning correctly by measuring output voltages with a multimeter when power is connected to the board.
Voltage regulator testing points showing input, output, and ground connections for multimeter verification
💡 Pro Tip:
Use a portable multimeter for easy on-the-go testing and debugging of embedded circuits.
Verify USB-C connector functionality by testing power delivery, data lines, and ground connections using multimeter measurements.
USB-C pinout diagram showing power, data, and ground connections for multimeter testing and verification
🔧 Troubleshooting Tips:
Official course resources for embedded programming and electronics fundamentals.
Embedded Programming - MIT Academy
Comprehensive resource covering microcontroller programming, I/O techniques, sensor integration, and real-time system design. Includes tutorials, code examples, and hardware interfacing methods.
HTMA Electronics Slides - Google Slides
Official course slides covering microcontroller architecture, programming fundamentals, sensor integration, and real-time system design principles.
Comprehensive analysis of design tradeoffs in embedded programming, covering language selection, development workflows, and microcontroller choices. This collaborative project examines the critical decisions that shape embedded system development and their impact on project success.
Comprehensive comparison of microcontroller architectures and their tradeoffs in embedded programming
Programming language tradeoff analysis showing performance vs development speed considerations
As illustrated in the language tradeoff diagram above, the choice of programming language involves critical tradeoffs between performance and development speed. The analysis shows how different languages balance factors like memory efficiency, execution speed, development time, and ecosystem support for embedded system development.
Development workflow tradeoff analysis comparing different methodologies and their impact on project outcomes
The development workflow tradeoff diagram above demonstrates how different methodologies balance project timeline, code quality, and team collaboration. This visual analysis compares traditional waterfall approaches with agile methodologies, showing the impact of various toolchains, testing strategies, and deployment processes on embedded system development success.
Microcontroller tradeoff analysis showing performance vs power consumption and cost considerations
The microcontroller tradeoff diagram above illustrates the critical balance between performance, power consumption, and cost in embedded system design. This visual analysis evaluates different architectures, processing capabilities, memory configurations, and peripheral options, providing a clear framework for selecting the optimal platform based on specific application requirements.
Workflow examples showing different development approaches for ARM and RP2040 architectures
The example workflow table above provides specific recommendations for different embedded system applications, showing how the tradeoff analyses from the previous sections translate into practical implementation strategies. This comprehensive table demonstrates the relationship between project requirements and optimal technology choices.
As demonstrated in the workflow table above, ARM workflows emphasize professional-grade debugging with industry-standard tools, while RP2040 workflows emphasize accessibility and flexibility through drag-and-drop deployment and unique PIO capabilities. The comprehensive architecture comparison shows that optimal design choices depend heavily on project constraints, performance requirements, and development timeline. High-performance applications favor ARM Cortex-M with professional toolchains, while rapid prototyping benefits from RP2040's flexible development environment and built-in features.
Build a reaction time measurement system based on research methodologies from cognitive science literature. This project combines embedded programming with scientific measurement principles to create a tool for studying human cognitive performance.
Based on PLOS ONE study (2017) examining reaction time variability and cognitive decline.
Repository: github.com/saleemaldajani/reaction-timer-saleemaldajani
Reaction time model showing the relationship between age and cognitive performance, based on research from PLOS ONE (2017)
| # | Image | Part | Description | Qty | Unit Price | Datasheet | Price |
|---|---|---|---|---|---|---|---|
| 1 | | ATmega32 Microcontroller | 8-bit AVR MCU, 32 KB Flash, 2 KB SRAM, ADC, timers, USART | 1 | $7.50 | | $7.50 |
| 2 | | 0.96″ OLED Display Module (SSD1306) | 128×64 mono OLED, I²C/SPI interface | 1 | $2.50 | | $2.50 |
| 3 | | Male Header Pins (2.54 mm) | Breakaway straight male headers | 2 | $0.20 | | $0.40 |
| 4 | | Dual-Row Header (2×5, 2.54 mm) | 10-pin dual-row header, unshrouded | 1 | $0.50 | | $0.50 |
| 5 | | SMD MOSFET (AO3400A example) | N-channel MOSFET, SOT-23 | 2 | $0.15 | | $0.30 |
| 6 | | SMD Capacitor (MLCC 0603 example) | Ceramic capacitor, value TBD | 5 | $0.03 | | $0.15 |
| 7 | | SMD Resistors – 1 kΩ (code '1001') | Precision SMD resistors | 5 | $0.02 | | $0.10 |
| 8 | | SMD LED (0603, color TBD) | Indicator LED | 2 | $0.05 | | $0.10 |
| 9 | | USB-C Receptacle | Surface-mount USB Type‑C connector | 1 | $0.60 | | $0.60 |
| 10 | | Custom PCB (Capacitive-Touch Board) | Project-specific PCB | 1 | $10.00 | N/A | $10.00 |
| | | | | | | **TOTAL** | **$22.15** |
Complete bill of materials for the ReactionAge embedded programming project. All prices are in USD and subject to supplier variations.
An interactive training module designed for young children, utilizing Nintendo Switch controller input to create an engaging letter-matching game that develops motor skills and cognitive recognition.
An AI-powered Mario-style platformer game built with ChatGPT assistance, featuring touch controls, adaptive gameplay mechanics, and classic 2D platforming elements optimized for embedded systems.
Mario-style platformer gameplay with touch controls and adaptive difficulty
ChatGPT-assisted development process and code generation demonstration
Button mapping obtained using QPAD21 Touch Monitor utility in Design Files
A collaborative multiplayer game featuring geometric shape interactions, synchronized gameplay between two ESP32 devices, inspired by educational content in Shape Song Swing Along.
Shape Song Swing Along educational content integration and gameplay
Circle and square shape interaction gameplay mechanics
MAC addresses obtained using Get MAC Address utility in Design Files
Source code, circuit diagrams, and documentation files for embedded programming projects.
This code was developed collaboratively with ChatGPT. View development transcript
// Reaction time prediction models: age ≈ a + b*rt + c*rt^2 (rt in ms).
// Linear coefficients shown below; the C_* quadratic coefficients and the
// currentModel selector (SX_MALE, SX_FEMALE, pooled default) are defined
// earlier in the full sketch in the repository.
const float A_POOLED = -35.392393f, B_POOLED = 0.1930446f;
const float A_MALE = -58.123019f, B_MALE = 0.2640664f;
const float A_FEMALE = -31.603679f, B_FEMALE = 0.176111015f;
float predictAgeFromRT(unsigned long rt_ms) {
float x = (float)rt_ms;
float a, b, c;
switch (currentModel) {
case SX_MALE: a=A_MALE; b=B_MALE; c=C_MALE; break;
case SX_FEMALE: a=A_FEMALE; b=B_FEMALE; c=C_FEMALE; break;
default: a=A_POOLED; b=B_POOLED; c=C_POOLED; break;
}
return a + b*x + c*x*x;
}
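A short usage sketch for the model above (a hypothetical caller; in the full sketch, `rt_ms` comes from the button-press latency loop):

```cpp
// Hypothetical caller: time the stimulus-to-press interval with millis()
// and report the model's age estimate over serial.
unsigned long stimulusTime = millis();          // cue (LED/screen) shown here
// ... block until the button press is detected ...
unsigned long rt_ms = millis() - stimulusTime;  // measured reaction time
float estimatedAge = predictAgeFromRT(rt_ms);
Serial.print("Estimated age: ");
Serial.println(estimatedAge, 1);                // one decimal place
```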
This code was developed collaboratively with ChatGPT. View development transcript
// Touch control mapping and adaptive thresholds
#define N_TOUCH 6
#define THRESHOLD 500
uint8_t touch_pins[N_TOUCH] = {2,3,4,5,6,7}; // P0..P5 -> pins (2,3,4,5,6,7)
// Game state management
enum GameState { MENU, PLAYING, PAUSED, GAME_OVER };
GameState currentState = MENU;
// Player physics and collision detection
struct Player {
float x, y, vx, vy;
bool onGround, canJump;
int lives, coins;
};
// Adaptive difficulty system
#define USE_ADAPTIVE_THRESH 1
int adaptive_thresholds[N_TOUCH] = {500, 500, 500, 500, 500, 500};
void updateGame() {
// Physics update
player.vy += GRAVITY;
player.y += player.vy;
// Collision detection with platforms
for (int i = 0; i < N_PLATFORMS; i++) {
if (checkCollision(player, platforms[i])) {
player.onGround = true;
player.vy = 0;
}
}
}
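The snippet calls `checkCollision` without showing it; a plausible axis-aligned bounding-box test (assumed here, with a hypothetical `Platform` struct and sprite size) would be:

```cpp
// Assumed AABB overlap test; the project's actual implementation may differ.
struct Platform { float x, y, w, h; };

bool checkCollision(const Player& p, const Platform& plat) {
  const float PW = 8, PH = 12;  // assumed player sprite size in pixels
  return p.x < plat.x + plat.w && p.x + PW > plat.x &&
         p.y < plat.y + plat.h && p.y + PH > plat.y;
}
```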
This code was developed collaboratively with ChatGPT. View development transcript
// ESP-NOW communication setup
#include <esp_now.h>  // ESP-NOW peer-to-peer API
#include <WiFi.h>     // station mode is required before esp_now_init()
// Player state structure for network synchronization
struct PlayerState {
int x, y, vx, vy;
bool onGround;
};
struct NetPacket {
PlayerState st;
bool ready;
};
// MAC address-based player identification
uint8_t macP1[] = {0xD8, 0x3B, 0xDA, 0x75, 0x05, 0xAC};
uint8_t macP2[] = {0xD8, 0x3B, 0xDA, 0x75, 0xE1, 0x9C};
// Touch controls for shape interaction
#define N_TOUCH 6
#define THRESHOLD 100000UL
int touch_pins[N_TOUCH] = {9, 8, 7, 2, 3, 1};
#define IDX_LEFT 1
#define IDX_RIGHT 0
#define IDX_JUMP 2
// ESP-NOW callback for receiving data
void OnDataRecv(const uint8_t * mac, const uint8_t *incomingData, int len) {
memcpy(&otherState, incomingData, sizeof(otherState));
otherReady = true;
}
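The receive callback implies a matching transmit path; a minimal send-side sketch using the standard Arduino-ESP32 ESP-NOW calls (`myState`, this device's local `PlayerState`, is an assumption):

```cpp
// Minimal send side for the NetPacket struct above. Assumes
// WiFi.mode(WIFI_STA) and esp_now_init() already succeeded in setup().
PlayerState myState;  // updated by the local game loop (assumed)
NetPacket pkt;

void setupPeer(const uint8_t* peerMac) {
  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, peerMac, 6);
  peer.channel = 0;      // follow the current WiFi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
  esp_now_register_recv_cb(OnDataRecv);
}

void broadcastState(const uint8_t* peerMac) {
  pkt.st = myState;
  pkt.ready = true;
  esp_now_send(peerMac, (const uint8_t*)&pkt, sizeof(pkt));
}
```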
A touch monitoring utility developed with ChatGPT assistance for debugging and calibrating touch sensor inputs on the QPAD21 development board. This tool provides real-time visualization of touch sensor values and helps optimize touch thresholds for embedded applications.
This code was developed collaboratively with ChatGPT. View development transcript
// Touch monitoring and calibration utility
#include <Arduino.h>
#include <Adafruit_FreeTouch.h>  // QTouch capacitive sensing on SAMD21
#define N_TOUCH 6
#define THRESHOLD 500
uint8_t touch_pins[N_TOUCH] = {2,3,4,5,6,7};
Adafruit_FreeTouch* touch_devices[N_TOUCH];
int touch_values[N_TOUCH] = {0,0,0,0,0,0};
bool pin_touched[N_TOUCH] = {false,false,false,false,false,false};
void updateTouchMonitor() {
for (int i = 0; i < N_TOUCH; i++) {
int value = touch_devices[i]->measure();
touch_values[i] = value;
pin_touched[i] = (value > THRESHOLD);
// Serial output for debugging
Serial.print("Touch ");
Serial.print(i);
Serial.print(": ");
Serial.print(value);
Serial.print(" ");
Serial.println(pin_touched[i] ? "PRESSED" : "RELEASED");
}
}
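The loop above assumes `touch_devices[]` was populated in `setup()`; a plausible initialization, following the Adafruit_FreeTouch constructor signature:

```cpp
// Assumed setup(): one FreeTouch instance per pin, library-default tuning.
void setup() {
  Serial.begin(115200);
  for (int i = 0; i < N_TOUCH; i++) {
    touch_devices[i] = new Adafruit_FreeTouch(touch_pins[i], OVERSAMPLE_4,
                                              RESISTOR_50K, FREQ_MODE_NONE);
    touch_devices[i]->begin();
  }
}
```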
A utility developed with ChatGPT assistance for retrieving and displaying MAC addresses on ESP32 devices. This tool is essential for ESP-NOW communication setup, allowing developers to identify device MAC addresses for peer-to-peer networking configuration in multiplayer embedded applications.
This code was developed collaboratively with ChatGPT. View development transcript
// MAC address retrieval utility for ESP32
#include <WiFi.h>
#include <esp_mac.h>  // esp_read_mac() and ESP_MAC_* (esp_system.h on older cores)
void setup() {
Serial.begin(115200);
// Get WiFi STA MAC address
uint8_t mac[6];
esp_read_mac(mac, ESP_MAC_WIFI_STA);
Serial.print("WiFi STA MAC: ");
for (int i = 0; i < 6; i++) {
if (i > 0) Serial.print(":");
if (mac[i] < 16) Serial.print("0");
Serial.print(mac[i], HEX);
}
Serial.println();
// Get Bluetooth MAC address
esp_read_mac(mac, ESP_MAC_BT);
Serial.print("Bluetooth MAC: ");
for (int i = 0; i < 6; i++) {
if (i > 0) Serial.print(":");
if (mac[i] < 16) Serial.print("0");
Serial.print(mac[i], HEX);
}
Serial.println();
}
Complete KiCad project files for custom PCB design, including schematics, PCB layouts, and project configurations for embedded development boards.
Design Documentation:
For detailed board design principles, USB-C connector placement, and manufacturing considerations, refer to the Board Design Documentation section above.
Key insights and lessons learned from embedded programming work this week.
Acknowledgments for collaborative support and technical assistance during the embedded programming project development.
Helped me so much with the ESP32S3 code development with his awesome examples. Also assisted with resoldering the microcontroller when we were debugging the USBC QPAD issues. Couldn't have done it without you!
Spent so many hours helping me debug my USBC QPAD while resoldering the USBC port. Eventually it turned out the problem was that we crashed the board with XIAO code on a D21! Too much to say and so little time until class. Couldn't have done it without you.
Helped me test the boards after programming them, providing valuable assistance with debugging and validation of the embedded programming projects. Your support was crucial for ensuring everything worked correctly.
Contributed valuable insights and technical discussions about microcontroller tradeoffs during the group assignment development. Provided expertise in evaluating different microcontroller architectures and their suitability for various embedded programming applications.
Engaged in extensive discussions throughout the embedded programming design trades section, providing valuable technical insights and collaborative support during the group assignment development process.
Participated in numerous office hours discussions about embedded programming design trades, contributing significantly to the technical analysis and collaborative development of the group assignment content.
Thank you for showing me Flappy Bird and inspiring me to push through the game development part of my plan this week. Your demonstration and encouragement were instrumental in motivating me to complete the ChatGPTMario2D and CircleSquareShapeSongSwingalong projects, helping me overcome technical challenges and stay focused on the embedded programming goals.
Thank you for taking videos of the soldering training! Your documentation of the training process was incredibly helpful for understanding the techniques and procedures. The visual reference you provided made it much easier to follow along and learn the proper soldering methods for the embedded programming projects.
Special thanks to the collaborative learning environment that enabled knowledge sharing and technical problem-solving throughout the embedded programming week.
Transparent documentation of AI assistance used in this week's work, following course guidelines for ethical AI usage.
Used ChatGPT to generate the weekly template HTML structure and styling framework, and created structured task prompts for week 2 embedded programming content development.
Used Cursor AI for iterative content development, page structure optimization, and aesthetic improvements. The AI helped with HTML/CSS implementation, content organization, and maintaining consistent styling across the embedded programming documentation.
Used Cursor AI to implement the comprehensive BOM table with clickable images, datasheet links, and full-width responsive design. The AI helped with HTML table structure, CSS styling, and integration of CSV data into an aesthetically pleasing table format.
Used ChatGPT for image recognition of electronic components to identify parts and construct the bill of materials CSV file. The AI analyzed component images and provided detailed part specifications, descriptions, and pricing information for the embedded programming project.
Used Cursor AI for comprehensive project content updates, documentation improvements, and HTML structure optimization. The AI assisted with content organization, styling consistency, and integration of multimedia elements for the embedded programming documentation.
Used ChatGPT for Arduino programming assistance based on the ReactionAge project code from GitHub repository and Quentin Bolsée's QPAD documentation available on the course website embedded system assignment page. The AI provided guidance on embedded programming concepts, code optimization, and integration with the QPAD-xiao board design.
Used Cursor AI for comprehensive development of the embedded programming design trades group assignment section. The AI assisted with creating structured subsections for language, development workflow, and microcontroller tradeoffs, implementing professional HTML tables with data extracted from PowerPoint presentations, and integrating download links for PDF and PPTX files.
Used Cursor AI for comprehensive development of the training section, including soldering techniques, QPAD programming guides, board design documentation, and troubleshooting procedures. The AI assisted with creating structured subsections, implementing video demonstrations, adding download links for design files, and developing comprehensive troubleshooting guides with multimeter testing procedures.
Used Cursor AI to resolve portrait video cropping within the soldering training section, tune responsive spacing around the media, and validate caption alignment. The transcript documents the CSS refinements, verification steps, and follow-up adjustments that keep the tutorial content accessible.
Used Cursor AI to analyze file sizes and break down large changes into smaller, manageable commits to avoid exceeding Git's 25MB commit size limit. The AI helped organize files by size, created a logical commit strategy, and automated the commit and push process for better version control practices.
Used ChatGPT for collaborative development of the ChatGPTMario2D platformer game, including touch control implementation, adaptive difficulty systems, collision detection, and game physics. The AI provided guidance on embedded programming concepts, state machine architecture, and optimization for the QPAD21 development board.
Used ChatGPT for collaborative development of the CircleSquareShapeSongSwingalong multiplayer game, including ESP-NOW wireless communication, MAC address-based player identification, synchronized game state management, and educational content integration. The AI provided guidance on embedded networking, real-time synchronization, and touch-based geometric learning systems.
Used ChatGPT for collaborative development of the QPAD21 touch monitoring utility, including real-time sensor value display, threshold calibration assistance, debugging tools, and serial output for data logging. The AI provided guidance on touch sensor integration, calibration procedures, and embedded debugging techniques for the QPAD21 development board.
Used ChatGPT for collaborative development of the get_mac_address utility for ESP32 devices, including WiFi STA and Bluetooth MAC address retrieval, serial output formatting, and ESP-NOW integration support. The AI provided guidance on embedded networking concepts, MAC address extraction methods, and device identification techniques for multiplayer embedded applications.
Used Cursor AI for comprehensive development of toddler training subsections, including ChatGPTMario2D and CircleSquareShapeSongSwingalong game implementations. The AI assisted with creating structured subsections, implementing video demonstrations, adding download links for Arduino code files, and developing comprehensive documentation for embedded programming educational content.
Cursor AI assisted with fixing layout and margin issues in week2.html, including resolving width inconsistencies and ensuring proper section structure. The AI helped identify and fix extra closing div tags (lines 1251-1252) that were causing content to appear outside their intended sections. The issue was resolved by manually removing the duplicate closing tags.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
3D Scanning and Printing
Exploring the capabilities and limitations of additive manufacturing through hands-on design and printing experiments.
This week focuses on additive manufacturing processes and 3D scanning technologies. We'll explore the differences between subtractive and additive processes, learn about various 3D printing materials and techniques, and understand the design constraints and capabilities of 3D printing systems.
Understanding different additive manufacturing technologies and their applications.
Most common desktop 3D printing method using thermoplastic filament
UV-cured resin printing for high detail and smooth surfaces
Powder-based printing for complex geometries without supports
Similar to SLA but uses digital light projection for faster printing
Powder and binder system for full-color printing
Inkjet-style printing with multiple materials and colors
Common 3D printing materials and their properties.
Plant-based, biodegradable, easy to print, glass transition ~60°C, more brittle
Oil-based, recyclable, glass transition ~80°C, tougher, better UV resistance
Strong, impact-resistant, requires heated bed, more difficult to print
Flexible material for rubber-like properties
Water-soluble support material
Wood-filled, metal-filled, carbon fiber, conductive materials
Key constraints and guidelines for designing printable objects.
Various technologies for capturing 3D geometry of physical objects.
Multiple photos from different angles, processed with software such as Meshroom or Polycam
Projects patterns onto object, captures deformation for 3D reconstruction
Time-of-flight or triangulation-based distance measurement
Two cameras for depth perception, similar to human vision
X-ray tomography for internal structures and complex geometries
Light detection and ranging for large-scale scanning
Comprehensive guide to 3D scanning techniques and best practices using structured light scanning systems.
Structured light 3D scanner with advanced scanning capabilities and user-friendly interface
Professional-grade 3D scanner with high-resolution capture and precision scanning features
Ensure consistent, diffused lighting. Avoid direct sunlight and harsh shadows that can interfere with structured light scanning.
Clean the object thoroughly. For dark or reflective surfaces, consider applying scanning spray or powder for better light reflection.
Place small, recognizable objects around the main subject to help the scanner track movement and improve alignment accuracy.
Capture the object from multiple angles and positions to ensure complete coverage and minimize blind spots in the final mesh.
Comprehensive guide to 3D printing techniques, software configuration, and best practices for optimal print quality and efficiency.
Advanced slicing software with comprehensive print settings and optimization tools
High-quality PLA filament with consistent properties and reliable printing characteristics
0.4mm nozzle diameter for balanced detail and print speed
Maximum height: 10 inches - consider this constraint for tall prints
Start with 0.2mm for most prints, reduce to 0.15mm for detailed parts, increase to 0.3mm for prototypes.
Test higher speeds gradually - many printers can handle faster speeds than default settings suggest.
Use 2-3 walls for most applications, increase for structural parts, decrease for decorative items.
Remember the 10-inch height limit when designing tall objects - consider splitting large prints.
Common file formats used in 3D printing and scanning workflows.
Test the design rules for your 3D printer(s) by creating a comprehensive test print that evaluates various design constraints and capabilities.
Download Detailed Slides: PDF Version | Keynote Version
Comprehensive presentation covering all design rule tests, results, and analysis methodology.
These tests evaluate features that should print reliably without additional support structures:
These tests push the boundaries of what the printer can achieve without additional support structures:
Additional design rule tests planned for future evaluation:
Comprehensive STL files for testing various 3D printing design rules and constraints:
To optimize print time and material usage, we strategically grouped all supported and unsupported tests into two separate prints, allowing for comprehensive evaluation of printer capabilities.
Supported tests print - grouped for efficient testing
Unsupported tests print - challenging geometry evaluation
Print Strategy: By grouping all supported tests in one print and all unsupported tests in another, we were able to efficiently evaluate printer capabilities while minimizing material waste and print time. This approach allowed for direct comparison between different test geometries and provided comprehensive data on the printer's performance across various design constraints.
The supported tests focused on features that should print reliably, while the unsupported tests pushed the boundaries of what the printer could achieve without additional support structures.
Design, document, and 3D print an object that could not be made subtractively. The object should be small (few cm³) and limited by printer time.
I hope to design and fabricate a hand gripper to measure age based on hand grip strength. This builds on our aging biomarker research by focusing on locomotive age assessment.
The idea is to measure functional decline due to aging across multiple modalities:
Last week we measured cognition with reaction time - this week we focus on locomotive age through grip strength.
Complex interlocking geometry that cannot be manufactured subtractively. Tests printer's ability to create intricate, self-supporting structures with minimal overhangs.
Helical spring mechanism for grip force measurement. Demonstrates additive manufacturing's capability to create functional mechanical components in one piece.
Nested spring system with different spring constants. Tests printer's ability to create complex, multi-component mechanisms with varying material properties.
Spiral coil for force transmission and measurement. Explores the limits of printable coil geometries and their mechanical properties for sensor integration.
A complex 3D printable design that demonstrates additive manufacturing capabilities through intricate geometry that cannot be manufactured subtractively.
This design was co-developed through iterative discussion with ChatGPT, exploring various approaches to create an untanglable slinky geometry.
The design process involved multiple iterations, with the first two attempts resulting in tangled slinkies that failed to maintain proper coil separation:
First attempt: Tangled slinky with improper coil geometry
Second attempt: Still resulted in tangled geometry
The third iteration successfully achieved the desired untanglable geometry:
Final design: Untanglable slinky with proper coil geometry
Design Description: This design shows an untanglable slinky with flat, oval coils, tiny spacers between turns, and solid end caps, so it collapses neatly without knots. The geometry enforces orderly stacking while preventing coils from slipping past each other.
This design was co-created through iterative discussion with ChatGPT, demonstrating effective human-AI collaboration in 3D design. View the full conversation transcript in the Ethical AI Use section →
My initial exploration into creating a 3D printable slinky design, inspired by the classic magic spring concept.
First slinky design - exploring the magic spring concept
Design Description: This first slinky design represents my initial exploration into creating a 3D printable magic spring. The design focuses on creating a helical structure that can collapse and expand while maintaining its structural integrity.
This design was inspired by the classic slinky toy and the concept of creating a "magic spring" that can be 3D printed in one piece, demonstrating the unique capabilities of additive manufacturing.
This design draws inspiration from several sources and builds upon existing work in 3D printable magic springs:
Watch the slinky in action: YouTube Video - Magic Spring Demonstration
Download the original design: Printables - Slinky Magic Spring
Explore the technical details: Hackaday - 3D Printed Magic Spring
The Hackaday project provides comprehensive documentation including Java code for generating G-code, print parameters, and detailed instructions for creating 3D printable magic springs. The project includes multiple iterations of the code with different features like temperature control, spiral priming, and delta printer compatibility.
Design files are available for download in the Design Files section.
Bringing the first slinky design to life through 3D printing, testing the design's printability and mechanical properties in real-world conditions.
The printing process involved careful preparation, parameter optimization, and real-time monitoring to ensure successful fabrication of the complex slinky geometry.
First slinky print in progress - demonstrating 3D printing of complex helical geometry
Print Process Description: The printing process successfully demonstrated the capability of FDM 3D printing to create complex helical geometries. The slinky was printed using PETG material with optimized settings for layer adhesion and dimensional accuracy. The print required careful support structure management and precise temperature control to achieve the desired mechanical properties.
Key printing parameters included: 0.4mm nozzle diameter, 0.2mm layer height, PETG material, and optimized print speed for complex geometry. The print time was approximately 55 minutes for the complete slinky structure.
Watch the complete printing process in action, showing the layer-by-layer construction of the slinky's complex helical structure.
Complete printing process video showing layer-by-layer construction of the slinky
Video Analysis: This video demonstrates the successful 3D printing of the complex slinky geometry, showing how the printer handles overhangs, bridging, and intricate details. The printing process reveals the importance of proper support structures and temperature management for achieving functional mechanical properties.
Notable aspects: smooth layer transitions, successful overhang printing, proper bridging between coil segments, and consistent material flow throughout the complex geometry.
The primary goal of this project is to optimize the number of revolutions that are printable in a single slinky design, pushing the limits of 3D printing capabilities for complex helical geometries.
Key Hypothesis: Since this is a slinky design, the flexible nature of the printed object should allow for pulling and stretching to remove support material from between the coils. This hypothesis drives the design optimization process, testing how many revolutions can be successfully printed while maintaining the ability to remove internal supports through mechanical manipulation.
This approach challenges traditional 3D printing constraints by leveraging the inherent flexibility of the slinky geometry to overcome support removal limitations, potentially enabling more complex and longer helical structures than would otherwise be printable.
Documenting the initial printing results and the challenges encountered with support removal, leading to refined techniques for successful slinky fabrication.
The first attempts at support removal revealed that simple breaking methods were insufficient for the complex internal geometry of the slinky design.
Breaking support fails - insufficient for complex internal geometry
Razor method works but requires caution and carefulness
Support Removal Analysis: Initial attempts at simply breaking support material proved ineffective due to the complex internal geometry of the slinky coils. The support material was too tightly integrated with the helical structure to be removed through mechanical breaking alone.
The razor method, while effective, requires extreme caution and carefulness to avoid damaging the delicate slinky structure. This process demands patience and precision to successfully remove internal supports without compromising the print quality.
After refining the support removal technique, the slinky prints achieved successful results with proper mechanical functionality.
First successful slinky print - demonstrating proper mechanical functionality
Multiple successful slinky prints - validation of design and process
Success Analysis: The successful slinky prints demonstrate the viability of 3D printing complex helical geometries with proper support removal techniques. The prints maintain their structural integrity and exhibit the expected slinky behavior, validating both the design approach and the fabrication process.
Key achievements: successful support removal, maintained geometric accuracy, functional mechanical properties, and reproducible results across multiple prints.
The final iteration of the slinky design achieved full functionality with proper mechanical behavior and successful support removal.
The final step in support removal required precise snipping to achieve clean separation and proper slinky functionality.
Fine snipping process - precise support removal for clean slinky functionality
Snipping Process: The final support removal required careful snipping with precision tools to achieve clean separation between the slinky coils and support material. This delicate process was essential for maintaining the structural integrity while enabling proper slinky movement.
Key considerations: maintaining coil geometry, avoiding damage to the helical structure, and ensuring smooth movement between coils.
The completed slinky successfully passed the classic staircase test, demonstrating proper mechanical functionality and slinky behavior.
Complete full slinky - ready for staircase test
Slinky staircase test - demonstrating proper mechanical functionality
Test Success: The slinky successfully passed the staircase test, demonstrating proper mechanical functionality with smooth coil movement and the characteristic slinky behavior. This validates both the design approach and the 3D printing process for creating functional mechanical objects.
Key achievements: proper coil separation, smooth movement, maintained structural integrity, and classic slinky behavior.
Design files and G-code are available for download in the Design Files section.
Print Success Analysis: The first slinky print successfully demonstrated the viability of 3D printing complex helical geometries. The printed slinky maintained its structural integrity and demonstrated the expected mechanical properties, proving that additive manufacturing can create functional objects with intricate geometries that would be impossible to manufacture using traditional subtractive methods.
Key achievements: successful overhang printing, proper layer adhesion, maintained geometric accuracy, and functional mechanical properties suitable for the intended slinky behavior.
Using 3D scanning technology to capture and digitize physical objects, demonstrating the capabilities of structured light scanning systems.
For the 3D scanning component, I selected a detailed Darth Vader sculpture as the target object. This choice was strategic - the sculpture's complex geometry, dark surface, and intricate details would test the scanning system's capabilities and limitations.
Darth Vader sculpture - complex geometry with dark surfaces and intricate details
Object Description: This detailed Darth Vader sculpture features complex geometry including flowing cape details, helmet ridges, and facial features. The dark surface material and intricate details present an excellent challenge for 3D scanning technology, testing both the system's ability to capture fine details and handle challenging surface properties.
The sculpture's combination of organic curves, sharp edges, and dark surface finish makes it an ideal test subject for evaluating scanning system performance across different surface types and geometric complexities.
The first scanning attempt used default scanner settings, which proved insufficient for capturing the complex geometry and dark surfaces of the Vader sculpture.
Failed scan result using default scanner settings - insufficient detail capture
Scan Failure Analysis: The default settings failed to capture sufficient detail from the dark Vader sculpture. The resulting mesh shows significant gaps, missing geometry, and poor surface reconstruction. This demonstrates the importance of optimizing scanner settings for specific object characteristics.
Key issues identified: insufficient lighting for dark surfaces, inadequate resolution settings, and suboptimal scanning angle coverage for complex geometry.
The optimization process involved adjusting scanner settings and scanning parameters to achieve better results with the challenging Vader sculpture.
Video demonstration of scanner optimization process and parameter adjustment
Optimization Process: This video demonstrates the iterative process of adjusting scanner settings, lighting conditions, and scanning parameters to achieve optimal results. The process involved multiple attempts with different configurations to find the best balance between detail capture and scanning efficiency.
Key optimization steps included: adjusting lighting intensity, modifying scanning resolution, optimizing turntable speed, and fine-tuning surface detection parameters for dark materials.
After optimization, the scanning process successfully captured the complex geometry of the Vader sculpture with significantly improved detail and surface reconstruction.
Successful 3D scan result with optimized settings - improved detail capture and surface reconstruction
Successful Scan Analysis: The optimized scan successfully captured the complex geometry of the Vader sculpture, including fine details like helmet ridges, cape folds, and facial features. The resulting mesh shows good surface reconstruction with minimal gaps and accurate geometric representation.
Key improvements achieved: enhanced detail capture, better surface reconstruction, reduced scanning artifacts, and improved geometric accuracy for complex organic shapes.
Converting the scanned Vader model into a printable 3D object required extensive post-processing and optimization for 3D printing constraints.
Successfully printed Vader sculpture - demonstrating 3D scan to print workflow
Print Success: The printed Vader sculpture successfully demonstrates the complete workflow from 3D scanning to 3D printing. Despite the complex post-processing requirements, the final print maintains good detail and structural integrity, proving the viability of scanning-to-printing workflows for complex organic shapes.
Key achievements: successful mesh cleanup, proper alignment, effective support generation, and successful printing of complex scanned geometry.
Essential resources for 3D scanning, printing, and design optimization.
3D Scanning & Printing - MIT Academy
Comprehensive resource covering 3D scanning techniques, 3D printing technologies, design for additive manufacturing, and post-processing methods. Includes tutorials on photogrammetry, structured light scanning, and various 3D printing processes.
3D Scanning & Printing Recitation - Google Slides
Hands-on tutorial covering 3D scanning workflows, 3D printing setup and optimization, design rules for additive manufacturing, and troubleshooting common issues.
Links to CAD files, STL files, and other design assets for this week's assignments.
Comprehensive STL files for testing various 3D printing design rules and constraints:
Complete set of slinky design files and G-code for testing printable revolution limits and support removal hypothesis:
Initial slinky design with 28 coil iterations - testing maximum printable length
📥 Download
Refined version of the 28-coil slinky design with optimized geometry
📥 Download
Third iteration with 21 coils - balanced design for printability and functionality
📥 Download
Fourth iteration with 7 coils - compact version for testing support removal hypothesis
📥 Download
G-code for 28-coil slinky (51 minutes print time) - maximum length test
📥 Download
G-code for 14-coil slinky (58 minutes print time) - mid-range optimization
📥 Download
G-code for 21-coil slinky (55 minutes print time) - optimal balance
📥 Download
G-code for 7-coil slinky (57 minutes print time) - support removal test
📥 Download
Design Iteration Strategy: These files represent a systematic approach to optimizing printable slinky designs. Each iteration tests different coil counts to find the maximum number of revolutions that can be successfully printed while maintaining the ability to remove internal supports through mechanical manipulation of the flexible slinky structure.
The progression from 7 to 28 coils allows for testing the limits of 3D printing capabilities while validating the hypothesis that slinky flexibility enables support removal in complex internal geometries.
Design files for the first slinky exploration project:
Key insights and lessons learned from working with 3D printing and scanning technologies.
Acknowledgements and team roles for this week's work.
Contributions will be documented as work progresses
Documentation of AI tool usage for this week's assignments and design work.
This session covers the development of the Week 3 page for 3D scanning and printing, including content population from MIT Academy resources, navigation updates, and design file integration.
This session documents the iterative co-design process for creating an untanglable slinky using ChatGPT. The conversation shows multiple design iterations, from initial failed attempts to the final successful geometry.
This session documents the comprehensive refactoring of the entire web development repository to make it fully responsive and mobile-friendly. The conversation covers the implementation of modern CSS techniques, responsive design patterns, and mobile-first development approaches.
This session documents the integration of the first slinky design into the untanglable slinky section, including adding the image, description, inspiration resources, and design file links.
This session documents the consultation with ChatGPT for optimizing the Vader scan post-processing workflow. The conversation focused on edge smoothing techniques in Meshmixer and best practices for preparing scanned meshes for 3D printing.
This session documents the development and updates to the Week 3 individual assignment focusing on 3D scanning and printing. The conversation covers 3D scanning techniques, Vader model processing, slinky design iterations, and individual project implementation.
This session documents the comprehensive development and updates to the Week 3 group assignment focusing on 3D printing design rule testing. The conversation covers group collaboration, design rule testing methodology, PDF content extraction, and group project implementation.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
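The OLED preview described above relies on Floyd-Steinberg error-diffusion dithering; a minimal sketch of the technique (buffer layout and function name assumed, not the project's exact firmware):

```cpp
// Assumed in-place Floyd-Steinberg dither: gray[] holds W*H 8-bit pixels,
// row-major; each pixel is quantized to 0/255 and the quantization error
// is diffused to unvisited neighbors with the classic 7/16, 3/16, 5/16,
// 1/16 weights before the frame is packed into the 1-bit OLED buffer.
void ditherFloydSteinberg(uint8_t* gray, int W, int H) {
  auto spread = [&](int x, int y, int err, int num) {
    if (x >= 0 && x < W && y >= 0 && y < H) {
      int v = gray[y * W + x] + err * num / 16;
      gray[y * W + x] = (uint8_t)constrain(v, 0, 255);
    }
  };
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int old = gray[y * W + x];
      int neu = old < 128 ? 0 : 255;    // threshold to black/white
      gray[y * W + x] = (uint8_t)neu;
      int err = old - neu;
      spread(x + 1, y,     err, 7);
      spread(x - 1, y + 1, err, 3);
      spread(x,     y + 1, err, 5);
      spread(x + 1, y + 1, err, 1);
    }
  }
}
```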
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
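As a concrete sketch of the on-device inference step, the Edge Impulse C++ SDK's standard entry point looks roughly like the following (the exported header name `MirrorAge_inferencing.h` and the feature buffer are assumptions, not the project's exact firmware):

```cpp
#include <MirrorAge_inferencing.h>  // hypothetical name of the Edge Impulse Arduino export

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];  // filled from camera/mic

// Callback the SDK uses to pull feature data in chunks.
static int get_feature_data(size_t offset, size_t length, float* out) {
  memcpy(out, features + offset, length * sizeof(float));
  return 0;
}

void classifyFrame() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // result.classification[i].label / .value drive the OLED risk score
    Serial.println(result.classification[0].value, 3);
  }
}
```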
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
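As a sketch of the comparison metric used in these bench tests, the snippet below computes RMSE between measured latencies and a linear age-regression prediction. The sample pairs and regression coefficients are placeholders, not the actual Blomkvist et al. values or measured data.

```cpp
// RMSE between embedded ReactionAge readings and a published
// age-regression curve. All numbers here are placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  // (age, measured latency ms) pairs from a hypothetical bench run
  std::vector<std::pair<double, double>> samples = {
      {25, 251.0}, {40, 268.5}, {55, 284.0}, {70, 302.5}};
  // Illustrative linear model: latency = a + b * age
  const double a = 220.0, b = 1.2;
  double sumSq = 0.0;
  for (const auto &[age, measured] : samples) {
    double predicted = a + b * age;
    sumSq += (measured - predicted) * (measured - predicted);
  }
  double rmse = std::sqrt(sumSq / samples.size());
  std::printf("RMSE vs. published curve: %.2f ms\n", rmse);
  return 0;
}
```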
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
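A minimal sketch of the attention-style fusion stage described above: each per-modality embedding is scored, the scores are softmax-normalized into attention weights, and the fused feature vector is their weighted sum. Dimensions, inputs, and the score vector are placeholders, not trained parameters.

```cpp
// Attention-weighted fusion over per-modality feature vectors.
// Inputs and the scoring vector are illustrative placeholders.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<double> fuse(const std::vector<std::vector<double>> &embeddings,
                         const std::vector<double> &scoreWeights) {
  const size_t dim = embeddings.front().size();
  // 1. Score each modality (dot product with a learned score vector).
  std::vector<double> scores;
  for (const auto &e : embeddings) {
    double s = 0.0;
    for (size_t i = 0; i < dim; ++i) s += e[i] * scoreWeights[i];
    scores.push_back(s);
  }
  // 2. Softmax the scores into attention weights.
  double maxS = *std::max_element(scores.begin(), scores.end());
  double z = 0.0;
  for (double &s : scores) { s = std::exp(s - maxS); z += s; }
  for (double &s : scores) s /= z;
  // 3. Weighted sum of embeddings = fused feature vector.
  std::vector<double> fused(dim, 0.0);
  for (size_t m = 0; m < embeddings.size(); ++m)
    for (size_t i = 0; i < dim; ++i)
      fused[i] += scores[m] * embeddings[m][i];
  return fused;
}
```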
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
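The pseudocode above maps to a condensed C++ sketch as follows, assuming a grayscale frame from the standard esp_camera driver and an SSD1306 OLED driven by the Adafruit library. Camera pin configuration and display initialization are board-specific and omitted; this is a sketch of the processing core, not the project's exact firmware.

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#include "esp_camera.h"

constexpr int W = 128, H = 64;              // OLED resolution
Adafruit_SSD1306 display(W, H, &Wire, -1);
uint8_t gray[W * H];                        // downsampled grayscale frame
uint8_t bitmap[W * H / 8];                  // packed 1-bpp dither result

void processFrame(const camera_fb_t *fb) {  // expects a grayscale frame
  // 1. Box-average downsample from fb->width x fb->height to 128x64.
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int x0 = x * fb->width / W, x1 = (x + 1) * fb->width / W;
      int y0 = y * fb->height / H, y1 = (y + 1) * fb->height / H;
      uint32_t sum = 0;
      for (int v = y0; v < y1; v++)
        for (int u = x0; u < x1; u++) sum += fb->buf[v * fb->width + u];
      gray[y * W + x] = sum / ((x1 - x0) * (y1 - y0));
    }
  }
  // 2. Contrast stretch: rescale the observed min..max range to 0..255.
  uint8_t lo = 255, hi = 0;
  for (uint8_t g : gray) { if (g < lo) lo = g; if (g > hi) hi = g; }
  if (hi > lo)
    for (uint8_t &g : gray) g = (uint16_t)(g - lo) * 255 / (hi - lo);
  // 3. Floyd-Steinberg dithering into a packed 1-bpp bitmap
  //    (MSB-first rows, the format Adafruit_GFX drawBitmap expects).
  static int16_t err[W * H];
  for (int i = 0; i < W * H; i++) err[i] = gray[i];
  memset(bitmap, 0, sizeof(bitmap));
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int i = y * W + x;
      int16_t oldV = err[i], newV = oldV < 128 ? 0 : 255;
      if (newV) bitmap[i / 8] |= 0x80 >> (x & 7);
      int16_t e = oldV - newV;
      if (x + 1 < W) err[i + 1] += e * 7 / 16;
      if (y + 1 < H) {
        if (x > 0) err[i + W - 1] += e * 3 / 16;
        err[i + W] += e * 5 / 16;
        if (x + 1 < W) err[i + W + 1] += e * 1 / 16;
      }
    }
  }
  display.clearDisplay();
  display.drawBitmap(0, 0, bitmap, W, H, SSD1306_WHITE);
  display.display();
}

void setup() {
  // Camera init (esp_camera_init with PSRAM frame buffers, grayscale
  // format) and display.begin(...) omitted; both are board-specific.
}

void loop() {
  static uint32_t last = 0;
  if (millis() - last >= 60000) {  // 60-second capture interval
    last = millis();
    camera_fb_t *fb = esp_camera_fb_get();
    if (fb) { processFrame(fb); esp_camera_fb_return(fb); }
  }
}
```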
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
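The trigger logic in steps 2–4 reduces to a small edge-detection routine, sketched below assuming the ESP32 Arduino touchRead() API. The threshold is a placeholder to be calibrated per board (on the ESP32-S3 a touch raises the reading), and the capture call is a stand-in for the pipeline above.

```cpp
#include <Arduino.h>

constexpr uint8_t TOUCH_PIN = 1;             // GPIO1, per the plan above
constexpr uint32_t TOUCH_THRESHOLD = 40000;  // placeholder calibration value
constexpr uint32_t INTERVAL_MS = 60000;      // 60-second auto-capture timer

bool justPressed() {
  static bool wasTouched = false;
  bool touched = touchRead(TOUCH_PIN) > TOUCH_THRESHOLD;
  bool pressed = touched && !wasTouched;     // rising edge only
  wasTouched = touched;
  return pressed;
}

void setup() { Serial.begin(115200); }

void loop() {
  static uint32_t last = 0;
  if (justPressed() || millis() - last >= INTERVAL_MS) {
    last = millis();                         // reset timer on any trigger
    // captureAndDisplay();                  // same pipeline as auto version
  }
}
```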
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was in turn based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion step, and I adapted the code to Quentin's QPAD PCB design, which carries a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps using the image processing algorithms above, and displays them on the OLED screen. I'm also exploring ML model integration, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
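For contrast with manual Python-to-C++ porting, an Edge Impulse deployment reduces to calling the exported library's classifier. The sketch below shows the minimal shape of that invocation; the header name is hypothetical (it depends on the Studio project), while the signal_t/run_classifier interface and the EI_CLASSIFIER_* macros come from the standard Edge Impulse C++ SDK.

```cpp
#include <MirrorAge_inferencing.h>  // hypothetical exported library name

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull feature data in chunks.
int getFeatures(size_t offset, size_t length, float *out) {
  memcpy(out, features + offset, length * sizeof(float));
  return 0;
}

void classify() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &getFeatures;
  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++)
      Serial.printf("%s: %.2f\n", result.classification[i].label,
                    result.classification[i].value);
  }
}
```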
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
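A bare-bones ESP-NOW sender, for reference, using the Arduino-ESP32 esp_now API; the peer MAC address and the payload struct are placeholders rather than the project's actual packet format.

```cpp
#include <WiFi.h>
#include <esp_now.h>

uint8_t peerMac[] = {0x24, 0x6F, 0x28, 0x00, 0x00, 0x01};  // placeholder

struct Reading { float bpm; float spo2; };  // illustrative payload

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);                 // ESP-NOW runs on the STA interface
  if (esp_now_init() != ESP_OK) return;
  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, peerMac, 6);
  peer.channel = 0;                    // use the current Wi-Fi channel
  esp_now_add_peer(&peer);
}

void loop() {
  Reading r = {72.0f, 98.5f};          // stand-in sensor values
  esp_now_send(peerMac, reinterpret_cast<uint8_t *>(&r), sizeof(r));
  delay(1000);
}
```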
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
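The live-readings web interface reduces to a small HTTP endpoint that the page's JavaScript can poll. A minimal sketch using the ESP32 Arduino WebServer library follows; the endpoint path, JSON payload, and Wi-Fi credentials are placeholders.

```cpp
#include <WiFi.h>
#include <WebServer.h>

WebServer server(80);
float bpm = 72.0f, spo2 = 98.5f;  // would be updated by the sensor loop

void handleData() {
  // Serve the latest readings as JSON; the page's JavaScript polls
  // this endpoint and appends points to a live plot.
  char buf[64];
  snprintf(buf, sizeof(buf), "{\"bpm\":%.1f,\"spo2\":%.1f}", bpm, spo2);
  server.send(200, "application/json", buf);
}

void setup() {
  WiFi.begin("ssid", "password");      // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(250);
  server.on("/data", handleData);
  server.begin();
}

void loop() {
  server.handleClient();
  // ...read the pulse oximeter and update bpm / spo2 here...
}
```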
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated an ATtiny1626 development board using fiber-laser PCB engraving, built a UV-laser glass engraving workflow with a blinking-heart display for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
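To make the fusion stage concrete, here is a minimal C++ sketch of attention-style weighting, assuming each modality has already been reduced to a fixed-length embedding and scored for relevance; all names are illustrative, not the project's actual pipeline code.

```cpp
// Illustrative attention-weighted fusion (hypothetical names). Each modality
// contributes one fixed-length embedding plus a scalar relevance score; the
// scores are softmax-normalized into attention weights, and the fused vector
// is the weighted sum of the embeddings.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> fuseFeatures(const std::vector<std::vector<float>>& embeddings,
                                const std::vector<float>& scores) {
    // Softmax over per-modality relevance scores (numerically stabilized).
    float maxScore = *std::max_element(scores.begin(), scores.end());
    std::vector<float> weights(scores.size());
    float sum = 0.0f;
    for (size_t i = 0; i < scores.size(); ++i) {
        weights[i] = std::exp(scores[i] - maxScore);
        sum += weights[i];
    }
    for (float& w : weights) w /= sum;

    // Weighted sum of modality embeddings -> fused feature vector.
    std::vector<float> fused(embeddings[0].size(), 0.0f);
    for (size_t m = 0; m < embeddings.size(); ++m)
        for (size_t d = 0; d < fused.size(); ++d)
            fused[d] += weights[m] * embeddings[m][d];
    return fused;
}
```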
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
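To make steps 4c and 4d concrete, here is a condensed C++ sketch of the contrast stretch and Floyd-Steinberg pass, assuming the frame has already been box-averaged to a 128x64 grayscale buffer; the buffer names are illustrative and error handling is omitted, so this is a sketch rather than the production firmware.

```cpp
// Condensed sketch of steps 4c-4d (illustrative names). Input: an 8-bit
// grayscale frame already downsampled to 128x64. Output: a 1-bit bitmap
// suitable for the SSD1306 OLED.
#include <stdint.h>

const int W = 128, H = 64;

void processFrame(uint8_t gray[H][W], uint8_t bits[H][W]) {
    // Contrast stretch: linearly rescale [min, max] to [0, 255].
    uint8_t lo = 255, hi = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            if (gray[y][x] < lo) lo = gray[y][x];
            if (gray[y][x] > hi) hi = gray[y][x];
        }
    int range = (hi > lo) ? (hi - lo) : 1;

    // Working buffer holds stretched values plus diffused error.
    static int16_t err[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            err[y][x] = (int16_t)(((gray[y][x] - lo) * 255) / range);

    // Floyd-Steinberg dithering: threshold each pixel, then diffuse the
    // quantization error to neighbors with weights 7/16, 3/16, 5/16, 1/16.
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int oldVal = err[y][x];
            int newVal = (oldVal < 128) ? 0 : 255;
            bits[y][x] = (newVal == 255);
            int e = oldVal - newVal;
            if (x + 1 < W)     err[y][x + 1]     += (e * 7) / 16;
            if (y + 1 < H) {
                if (x > 0)     err[y + 1][x - 1] += (e * 3) / 16;
                               err[y + 1][x]     += (e * 5) / 16;
                if (x + 1 < W) err[y + 1][x + 1] += (e * 1) / 16;
            }
        }
    }
}
```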
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
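A minimal Arduino-style sketch of the trigger logic in step 4 follows, with the capture-and-display pipeline stubbed out as a hypothetical helper; the touch threshold is an assumed value that must be calibrated per board.

```cpp
// Minimal sketch of the touch-OR-timer trigger (hypothetical helper
// captureProcessAndDisplay(); the real firmware wraps the camera and OLED
// drivers). A capture fires on a touch press or when the 60 s timer elapses.
const unsigned long CAPTURE_INTERVAL_MS = 60000UL;
const uint32_t TOUCH_THRESHOLD = 30000;  // assumed value; calibrate per board
unsigned long lastCapture = 0;
bool wasTouched = false;

void captureProcessAndDisplay();  // hypothetical: step 4d of the outline

bool touchJustPressed() {
  // On the ESP32-S3, touchRead() rises when the pad (T1 = GPIO1) is touched.
  bool touched = touchRead(T1) > TOUCH_THRESHOLD;
  bool pressed = touched && !wasTouched;  // edge detect: "justPressed"
  wasTouched = touched;
  return pressed;
}

void loop() {
  bool timerElapsed = (millis() - lastCapture) >= CAPTURE_INTERVAL_MS;
  if (touchJustPressed() || timerElapsed) {
    captureProcessAndDisplay();
    lastCapture = millis();  // reset the auto-capture interval
  }
}
```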
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I adapted the code to Quentin's QPAD PCB design, which uses a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels, locking in enclosure dimensions and user-interface cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated an ATtiny1626 development board via fiber-laser PCB engraving, created a UV-laser glass engraving system with a blinking-heart indicator for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the Gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
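A minimal sketch of this processing path, for illustration only: it assumes a grayscale capture format (PIXFORMAT_GRAYSCALE), the esp_camera and Adafruit_SSD1306 APIs used elsewhere on this page, and omits the contrast stretch for brevity. The helper name captureAndShow() is hypothetical, not the project's actual code.

```cpp
// Sketch of capture → box-average downsample → Floyd–Steinberg dither → OLED.
// Assumes the camera is configured for PIXFORMAT_GRAYSCALE frames.
#include "esp_camera.h"
#include <Wire.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);
static uint8_t gray[128 * 64];        // downsampled 8-bit grayscale image
static uint8_t bitmap[128 * 64 / 8];  // 1-bit packed buffer for drawBitmap()

void captureAndShow() {
  camera_fb_t *fb = esp_camera_fb_get();  // grab a frame (PSRAM-backed buffer)
  if (!fb) return;
  int bx = fb->width / 128, by = fb->height / 64;
  // Box averaging: each output pixel is the mean of a bx-by-by input block
  for (int y = 0; y < 64; y++)
    for (int x = 0; x < 128; x++) {
      uint32_t sum = 0;
      for (int j = 0; j < by; j++)
        for (int i = 0; i < bx; i++)
          sum += fb->buf[(y * by + j) * fb->width + (x * bx + i)];
      gray[y * 128 + x] = sum / (bx * by);
    }
  esp_camera_fb_return(fb);  // release the frame buffer as soon as possible
  // Floyd–Steinberg: threshold each pixel, diffuse the error to 4 neighbors
  memset(bitmap, 0, sizeof(bitmap));
  for (int y = 0; y < 64; y++)
    for (int x = 0; x < 128; x++) {
      int oldPx = gray[y * 128 + x];
      int newPx = (oldPx < 128) ? 0 : 255;
      int err = oldPx - newPx;
      if (newPx) bitmap[(y * 128 + x) / 8] |= 0x80 >> (x & 7);  // MSB-first packing
      if (x + 1 < 128)
        gray[y * 128 + x + 1] = constrain(gray[y * 128 + x + 1] + err * 7 / 16, 0, 255);
      if (y + 1 < 64) {
        if (x > 0)
          gray[(y + 1) * 128 + x - 1] = constrain(gray[(y + 1) * 128 + x - 1] + err * 3 / 16, 0, 255);
        gray[(y + 1) * 128 + x] = constrain(gray[(y + 1) * 128 + x] + err * 5 / 16, 0, 255);
        if (x + 1 < 128)
          gray[(y + 1) * 128 + x + 1] = constrain(gray[(y + 1) * 128 + x + 1] + err * 1 / 16, 0, 255);
      }
    }
  display.clearDisplay();
  display.drawBitmap(0, 0, bitmap, 128, 64, SSD1306_WHITE);
  display.display();
}
```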
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
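The trigger logic in step 4 reduces to edge detection plus a timer comparison. A minimal sketch, assuming the ESP32-S3 touchRead() behavior (readings rise when touched), the GPIO1/GPIO2 pins from the pseudocode, an assumed threshold value, and the hypothetical captureAndShow() helper from the previous sketch:

```cpp
// Touch-or-timer capture trigger: fire on a rising touch edge or every 60 s.
const int TOUCH_PINS[2] = {1, 2};         // GPIO1 & GPIO2 on the XIAO ESP32S3
const uint32_t TOUCH_THRESHOLD = 100000;  // assumed; S3 readings rise when pressed
const uint32_t INTERVAL_MS = 60000;       // 60-second auto-capture interval

bool wasTouched[2] = {false, false};
uint32_t lastCapture = 0;

void loop() {
  bool justPressed = false;
  for (int i = 0; i < 2; i++) {
    bool touched = touchRead(TOUCH_PINS[i]) > TOUCH_THRESHOLD;
    if (touched && !wasTouched[i]) justPressed = true;  // rising edge only
    wasTouched[i] = touched;
  }
  if (justPressed || (millis() - lastCapture >= INTERVAL_MS)) {
    captureAndShow();        // same processing path as the automatic version
    lastCapture = millis();  // reset the auto-capture timer
  }
}
```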
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
[Reflection notes placeholder.]
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Course Introduction & Setup · HTMAA 2025
Course orientation and initial project exploration, including safety training completion and initial concept development for the final project.
Initial concept sketch for final project - exploring bioprinting rejuvenated tissue and aging biomarker devices
Welcome to How to Make (Almost) Anything! This week focuses on course orientation, safety training, and setting up the foundation for the semester ahead.
Essential resources for course introduction, design principles, and parametric design.
Principles and Practices - MIT Academy
Course overview covering digital fabrication principles, project management, version control, and web development. Includes links to various CAD programs, simulation tools, and AI-assisted design workflows.
Computer-Aided Design - MIT Academy
Comprehensive guide to 2D and 3D design tools including raster/vector graphics, CAD programs, simulation, and AI-assisted design. Covers everything from basic sketching to advanced parametric modeling.
Parametric Design Recitation - Alfonso Parra Rubio
Hands-on tutorial for Fusion 360 covering parametric design principles, CAD basics, importing/exporting, and design for manufacturing. Includes step-by-step guidance for getting Autodesk Educational access.
Initial concept sketches for the final project - exploring ideas for bioprinting rejuvenated tissue and aging biomarker devices.
Early ideation and planning takeaways for the final project and weekly milestones.
Acknowledgements for the help I received this week from my classmates and instructors.
Thank you to Alfonso for providing helpful Fusion office hours and guidance on CAD modeling techniques.
Thank you to Dan Gilbert for assistance with the CBA shop facilities and equipment access.
Had extensive discussions with Tolga Durak about accessing the SHED bioprinter for potential final project work. Tolga provided detailed guidance on the BRR (Biological Research Registration) process through MIT's CAB/ESCRO committee, including requirements for MIT PI involvement, research proposal structure, and timeline considerations. He explained that the bioprinter is currently limited to approved research projects due to existing commitments, and that any access would need to be scheduled around three ongoing research projects. Prof. Gershenfeld agreed to serve as the MIT PI for the proposal, and I'm working through the BRR forms to submit the necessary documentation for committee review.
Transparent documentation of AI assistance used in this week's work, following course guidelines for ethical AI usage.
This week's webpage development was assisted by Cursor AI to create the initial page structure, styling, and content organization. The AI helped with HTML/CSS implementation and responsive design patterns.
Cursor AI assisted with organizing webpage sections, cleaning up HTML structure, and updating content for both week 0 and week 1 pages. The AI helped with code refactoring, section reorganization, and maintaining consistent styling across pages.
Principles & Cutting · HTMAA 2025
Successfully fabricated and assembled a parametric, modular construction kit based on laser-cut origami crease patterns in cardboard.
Applied vinyl stickers and completed the take-home assignment.
📁 Design Files Available
Download CAD files, view SVG patterns, and access all design assets from this week's assignments
📄 View Design Files
This week focuses on the fundamentals of digital fabrication: computer-aided design (CAD), computer-aided manufacturing (CAM), laser cutting, and vinyl cutting. We'll learn the complete workflow from design to finished parts.
Comprehensive training notes and media from laser cutter and vinyl cutter sessions.
Additional resources and guides from Anthony Pennes for laser cutting, vinyl cutting, and image compression workflows.
Comprehensive guides covering essential digital fabrication techniques, parameter optimization, and workflow best practices.
Essential techniques for optimizing image file sizes, setting up batch processing tools, and maintaining repository efficiency.
📖 View Guide
Complete laser cutting workflow including safety, parameter optimization, joint design, and troubleshooting techniques.
📖 View Guide
Detailed vinyl cutting techniques, software setup, machine operation, and weeding processes for professional results.
📖 View Guide
Note: These guides were created by Anthony Pennes and are maintained as supplementary resources for the HTMAA course. View original Slack message for context.
Official course resources for project management, computer cutting, and Git recitation materials.
Project Management - MIT Academy
Comprehensive guide to project management including file synchronization, version control (Git, GitHub, GitLab), web development, and documentation workflows. Covers essential tools for managing digital fabrication projects.
Computer Cutting - MIT Academy
Complete resource for computer-controlled cutting including laser cutting, vinyl cutting, and other cutting technologies. Covers safety, machine operation, material selection, and design considerations.
Hands-on Git tutorial covering version control fundamentals, repository management, collaboration workflows, and best practices for digital fabrication projects.
Systematic testing and documentation of laser cutter parameters for optimal cutting performance. Assignment started: September 11, 2025. Assignment completed: September 16, 2025.
Workflow Note: The Universal Control Software provides precise control over laser parameters, while Inkscape serves as the design interface for creating and sending cutting jobs. The color-coded line system allows for efficient batch processing of both cutting and engraving operations in a single job.
Finger joint - interlocking design for structural connections
Slot size that provides a tight enough fit for assembly
Material choice significantly impacts joint quality and assembly precision
Comprehensive analysis of kerf and clearance measurements from systematic laser cutter testing.
View and interact with the complete dataset in Google Sheets:
Systematic measurement approach for characterizing laser cutter kerf and joint clearance using precision calipers.
Peg and hole measurement setup
Top measurement view
Bottom measurement view
Kerf Definition: The width of material removed by the laser beam during cutting. The table below reports kerf per side, half the difference between the actual and prescribed cut dimensions; for example, a 0.136 in prescribed cut that measures 0.1423 in gives (0.1423 − 0.136)/2 ≈ 0.0032 in.
| Prescribed Cut (in) | Actual Cut #1 (in) | Actual Cut #2 (in) | Actual Cut #3 (in) | Average (in) | Std Dev (in) | Avg Kerf (in) |
|---|---|---|---|---|---|---|
| 0.136 | 0.142 | 0.1405 | 0.1445 | 0.1423 | 0.0020 | 0.0032 |
| 0.137 | 0.142 | 0.1475 | 0.149 | 0.1462 | 0.0037 | 0.0046 |
| 0.138 | 0.148 | 0.1495 | 0.147 | 0.1482 | 0.0013 | 0.0051 |
| 0.139 | 0.146 | 0.146 | 0.1475 | 0.1465 | 0.0009 | 0.0038 |
| 0.143 | 0.155 | 0.1555 | 0.1545 | 0.1550 | 0.0005 | 0.0060 |
| 0.144 | 0.1535 | 0.153 | 0.15 | 0.1522 | 0.0019 | 0.0041 |
| 0.152 | 0.163 | 0.1655 | 0.165 | 0.1645 | 0.0013 | 0.0063 |
| 0.153 | 0.166 | 0.169 | 0.168 | 0.1677 | 0.0015 | 0.0073 |
Clearance Definition: The dimensional difference between hole and peg sizes. Negative values indicate interference fit (hole smaller than peg), while positive values indicate clearance fit.
| Measurement | Sample #1 (in) | Sample #2 (in) | Sample #3 (in) | Average (in) | Std Dev (in) |
|---|---|---|---|---|---|
| Peg Size | 0.1505 | 0.1535 | 0.1505 | 0.1515 | 0.0017 |
| Hole Size | 0.156 | 0.1575 | 0.1545 | 0.1560 | 0.0015 |
| Clearance | -0.0055 | -0.004 | -0.004 | -0.0045 | 0.0009 |
Average kerf: 0.0049 inches (4.9 thousandths)
Average clearance: -0.0045 inches (interference fit)
Peg size repeatability: ±0.0017 inches (high precision)
Clearance repeatability: ±0.0009 inches (very consistent)
Design and fabricate a modular construction kit using laser cutting and vinyl cutting techniques. Assignment started: September 10, 2025. Assignment completed: September 15, 2025
Initial research and clarification discussions with instructors and classmates to understand assignment requirements and technical constraints.
Special thanks to Erik Demaine for guidance on vinyl cutting crease patterns and fold differentiation techniques, and Anthony Pennes for clarifying assignment requirements, file formats, and material options. Their expertise was invaluable in understanding the technical constraints and design possibilities for this project.
I hand‑folded the crease pattern to validate the assembly. The parts do come together when the folds are reasonably close to the intended lines. A second, refolded gadget produced a noticeably cleaner assembly.
We tried cutting the full crease pattern on one side. It failed when layers with opposite cuts overlapped and had to be push‑folded through during the final folding step — the piece broke.
First assembly of two gadgets. Folds were smooth after separating mountain and valley cuts onto opposite sides to avoid breaking during push‑folds over multiple layers. The shop cardboard was tight in the assembly pocket; thinner Amazon box cardboard folded by hand fit better. Next step: make the laser‑cut pattern more exact.
Left to right: clear mountain cut, flip and cut the opposite side, clear valley cut.
Measurements confirming mountain vs valley sides.
Side profile and insertion step demonstrating clean folds without tearing.
Exploring double‑part strategies: sequential cuts, flips between sides, and opposite‑side cuts.
Temporary rubber band used to hold alignment while completing final folds.
Switched to thinner Amazon cardboard because the shop cardboard was too thick for the assembly pocket. The Amazon box folded by hand fit perfectly and assembled smoothly. Three gadgets worked with the thinner material. Rubber bands are just for compression — they assemble without them, though the bands make it easier to keep them as one unit during insertion into the next gadget.
Scaling up to quadruple cuts: sequential cuts, flips between sides, and opposite-side cutting strategies.
Three views of the double gadget assembled with Amazon cardboard, showing clean folds and proper fit.
Assembly sequence showing compression, support positioning, and insertion process for stable construction.
The culmination: triple gadget assembly demonstrating successful scaling with Amazon cardboard and support techniques.
Applying thick‑origami concepts to avoid layer collisions and enable folding with real material thickness.
From a discussion with Erik Demaine: double each crease into parallel offsets and open vertex regions as polygons to prevent collisions. A uniform spacing works for a first pass, but for correctness the spacing between the doubled creases should depend on how many layers lie between faces in the final folded state. I plan to estimate per‑crease layer counts from an unfolded model and adjust offsets accordingly.
Citation: Jason S. Ku and Erik D. Demaine, “Folding Flat Crease Patterns With Thick Materials”, Journal of Mechanisms and Robotics, 8(3), June 2016, pp. 031003‑1–6. View paper
Input: crease segments with mountain/valley labels; sheet size S; target thickness t
offset = t / S (or a chosen fraction)
for each crease c:
draw two lines parallel to c at ±offset
for each vertex (intersection of original creases):
place a small regular polygon (e.g., octagon) centered at the vertex
Output: doubled creases + vertex holes
Input: crease lines; sheet size S; target thickness t
offset = t / S
for each crease c:
compute unit direction v and normals n1,n2
define two infinite offset lines at ±offset along n1/n2
for each vertex:
collect incident creases; sort by angle into cyclic order
for each adjacent pair (i, j):
pick the offset side that lies inside the angular sector
intersect those two offset lines → one polygon vertex
connect all vertices in order → convex vertex polygon
trim faces/creases by polygons as needed
Output: doubled creases + exact vertex polygons
Following Erik's advice, offsets should scale with the number of layers between faces in the folded state. Plan: estimate per‑crease layer counts and set per‑crease offset = k × thickness × layers(c), while keeping vertex polygons valid to avoid collisions.
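A minimal C++ sketch of that doubling step under these assumptions: layersOf() is a hypothetical placeholder for the per-crease layer estimate from the unfolded model, k and thickness are design parameters, and the vertex-polygon construction is omitted.

```cpp
// Layer-aware crease doubling: replace each crease with two parallel offsets
// whose spacing scales with the estimated layer count, per-crease offset
// = k × thickness × layers(c). layersOf() is a hypothetical placeholder.
#include <cmath>

struct Pt { double x, y; };
struct Crease { Pt a, b; bool mountain; };
struct DoubledCrease { Pt a1, b1, a2, b2; };  // the two parallel offset lines

int layersOf(const Crease &c);  // assumed: layers between faces at this crease

DoubledCrease doubleCrease(const Crease &c, double k, double thickness) {
  double dx = c.b.x - c.a.x, dy = c.b.y - c.a.y;
  double len = std::sqrt(dx * dx + dy * dy);
  double nx = -dy / len, ny = dx / len;      // unit normal to the crease
  double off = k * thickness * layersOf(c);  // layer-aware offset distance
  return { {c.a.x + nx * off, c.a.y + ny * off},
           {c.b.x + nx * off, c.b.y + ny * off},
           {c.a.x - nx * off, c.a.y - ny * off},
           {c.b.x - nx * off, c.b.y - ny * off} };
}
```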
Design and cut vinyl stickers and stencils using the vinyl cutter. Assignment started: September 15, 2025. Assignment completed: September 15, 2025.
I initially planned to vinyl-cut lab logo stickers and collected a .png or .svg of each logo for the vinyl cutter software, intending to execute the complete workflow from cutting to final application. I hope to return to the lab logos when there is less traffic on the vinyl cutter; I was more excited to first test the cutter's fine detail with the 2D parametric design!
Used a parametric sticker design that can be infinitely customized through an interactive web application. The design demonstrates the power of parametric modeling in creating personalized vinyl stickers with adjustable parameters for size, complexity, and visual elements. The assignment demonstrated both successful execution and troubleshooting when equipment issues arose.
This web-based generator was created as a final project in 6.5310 last semester and was used to download a PNG of the default design shown above. Generate unlimited variations of this parametric design by adjusting parameters in real-time to create custom stickers for any application.
🎨 Generate Custom Designs
Complete workflow from vinyl cutting through final sticker application, demonstrating the full process from design to finished product.
Vinyl cutter in action, cutting the Gladyshev Lab logo design with precise blade control and proper tension settings.
Clean vinyl cut showing precise cuts with proper registration and no tearing or incomplete cuts.
Systematic weeding process: starting from center, progressing through unwanted sections, and finishing with clean weeded vinyl ready for transfer.
Transfer paper application process: carefully applying transfer paper to hold vinyl design, then removing it to prepare for final application.
Final application steps: positioning vinyl on target surface, using tweezers for precise removal of the broken thin vinyl strip, and achieving clean final application.
Final result: applied laptop vinyl sticker and takehome vinyl sticker ready for use, demonstrating successful completion of the vinyl cutting workflow.
Encountered and resolved a motor failure during the vinyl cutting process, demonstrating troubleshooting skills and equipment maintenance knowledge.
Vinyl cutter displaying motor error, indicating a mechanical issue that prevented normal operation.
Side-by-side comparison showing the vinyl roll before and after the failure. The roll ran out of material and then fell, causing the vinyl to kink under the cutting wheel.
Simple fix process:
This common issue occurs when the vinyl roll runs out or becomes misaligned, causing the cutting mechanism to fail. The solution involves proper material handling and machine setup.
Download or view design files from this week's assignments.
Key insights and challenges encountered during this week.
Acknowledgements for the help I received this week from my classmates and instructors.
This week's training was well-documented by Katrina Li, thank you!
Camron reviewed the website and provided valuable feedback on navigation and user experience. She suggested implementing a table of contents to make it easier for visitors to navigate through the different sections of the weekly documentation. This suggestion led to the creation of the sleek, organized table of contents that now appears on both Week 0 and Week 1 pages, significantly improving the overall user experience and accessibility of the documentation.
Photos and videos of this week's training were taken by Mariam Fitaihi, thank you!
Anthony Pennes provided the kerf and clearance design files (HTMAJointTest v0.f3d and HTMAJointTest.dxf) as part of the laser cutter training. These files were essential for understanding joint tolerances and clearance requirements when working with laser-cut materials.
Erik reviewed my approach to folding thick materials and suggested using the Ku & Demaine thick‑origami method. He pointed me to the paper and emphasized that doubled creases should be spaced according to the number of layers between faces in the final fold. This feedback led me to implement two variants (estimate and exact) of thickened crease patterns and to plan a next step for layer‑aware offsets.
Jesse helped me with laser cutting while figuring out how to do it, providing valuable guidance and support during the learning process. His assistance was crucial for understanding the laser cutting workflow and techniques.
I took notes on the group assignment for characterizing the laser cutter. :-)
Transparent documentation of AI assistance used in this week's work, following course guidelines for ethical AI usage.
This week's webpage development, training documentation, and assignment preparation were assisted by Cursor AI. The AI helped with HTML/CSS implementation, content organization, and structuring technical documentation.
Cursor AI assisted with organizing webpage sections, cleaning up HTML structure, and updating content for both week 0 and week 1 pages. The AI helped with code refactoring, section reorganization, and maintaining consistent styling across pages.
Cursor AI assisted with adding Anthony's HTMA guides to the Week 1 page, creating a new "Useful Documentation" section, and integrating external resource links with proper styling and organization.
Cursor AI assisted with creating a comprehensive Results section for the Laser Cutter Characterization Group Assignment, including kerf and clearance analysis tables, measurement methodology documentation, concept diagrams, and downloadable resources. The AI helped integrate Google Sheets data, create professional data tables, and include the complete measurement analysis workflow.
Cursor AI assisted with creating an interactive highlights section for Week 1, including visual improvements to the results display, clickable navigation links to detailed sections, table of contents integration, and enhanced user experience features. The AI helped implement hover effects, proper section linking, and visual hierarchy improvements for better navigation.
A separate chat was used to convert a flat crease pattern into a thick‑origami version following Ku & Demaine.
The AI doubled each crease into parallel offsets and generated vertex polygons (estimate = regular polygons; exact = sector‑aware polygons) and explained how to map desired thickness by setting OFFSET_FRAC = thickness / sheet_size.
Cursor AI assisted with repository size management by breaking down large commits into smaller, manageable chunks under 10MB each. The AI helped organize media files into logical commits, added the Vinyl Cutter Individual Assignment section with proper styling and image sizing, and provided guidance on git workflow optimization to work within platform size limits.
Cursor AI assisted with updating the vinyl cutter individual assignment section with comprehensive documentation including successful cut and transfer workflow, motor failure troubleshooting, and 2D parametric design integration. The AI helped structure the content with proper media organization, descriptive captions, and consistent styling throughout the section.
Cursor AI assisted with adding a comprehensive Design Files section to the Week 1 page, including table of contents integration, organized file listings with download and view links, and contextual hyperlinks throughout the page for easy access to design assets. The AI helped structure the files by category (laser cutter CAD files, SVG patterns, vinyl cutter images) with appropriate styling.
Cursor AI assisted with adding kerf and clearance design files from Anthony Pennes to the Week 1 Design Files section, creating a new subsubsection for joint test files, and properly attributing the contribution in the contributions section with links to both the Slack message and design files section.
Cursor AI assisted with fixing layout and margin issues in week1.html, including resolving width inconsistencies and ensuring proper section structure. The AI helped identify and fix extra closing div tags that were causing content to appear outside their intended sections. The issue was resolved by manually removing the duplicate closing tags.
Embedded Programming · HTMAA 2025
Key achievements and insights from the embedded programming week, including successful microcontroller programming, sensor integration, and the development of the ReactionAge reaction time measurement system.
Comprehensive analysis of critical design decisions in embedded system development
💻 Design Files Available
Download Arduino code, view circuit diagrams, and access all embedded programming assets from this week's assignments
📄 View Design FilesThis week focuses on embedded programming fundamentals, exploring how to program microcontrollers and create interactive electronic systems. We'll dive into the world of digital electronics, sensors, and real-time programming to build responsive devices that can sense, process, and react to their environment.
Essential training materials and resources for embedded programming fundamentals, microcontroller setup, and sensor integration techniques.
Comprehensive guide to soldering techniques for embedded electronics, covering both wire soldering and solder paste methods.
Safety First:
Always wear safety glasses and use proper ventilation. Lead-free solder requires extra caution due to flux crystallization at high temperatures.
Step-by-step video demonstrations of essential soldering techniques for embedded electronics, covering both wire soldering and surface-mount component assembly.
Learn the proper technique for soldering wires to components: tack one edge first to hold the part in place, then solder the rest by keeping your wrist stable, resting the iron tip on the pad to heat both pad and part, and slowly feeding in solder.
Wire soldering demonstration showing proper technique and wrist stability
Complete process for surface-mount component assembly: apply solder paste first, place the component, then heat up with a heat gun to create strong, reliable connections.
Applying solder paste to PCB pads
Placing microcontroller on prepared solder paste
Using heat gun to reflow solder paste and complete the connection
Pro Tips:
For detailed soldering guides and advanced techniques, refer to the Electronics & Soldering Training section above for comprehensive documentation and resources.
Complete programming setup and workflow for QPAD development boards, covering bootloader configuration, IDE setup, and code deployment for different microcontroller architectures.
The QPAD-XIAO features a Raspberry Pi RP2040 microcontroller with built-in bootloader support. No bootloader burning required - simply configure your IDE and start programming!
The QPAD21 uses an ATSAMD21 microcontroller that requires bootloader programming using a XIAO programmer chip. Follow these steps to get it ready for development.
The QPAD-XIAO-ESP32S3 features an ESP32-S3 microcontroller with built-in WiFi and Bluetooth capabilities. No bootloader burning required - just configure your IDE and start programming!
Comprehensive documentation covering embedded programming fundamentals, electronics theory, and practical implementation guides.
Official course slides covering microcontroller architecture, I/O techniques, sensor integration, and real-time system design.
Essential resources for designing custom PCBs and development boards for embedded programming projects, covering schematic design, layout principles, and manufacturing considerations.
USB-C board design showing proper connector placement, power routing, and signal integrity considerations for embedded development boards
Design Files Available:
Complete KiCad project files, schematics, and PCB layouts are available in the Design Files section below for download and modification.
Essential troubleshooting techniques for embedded development boards, covering power supply verification and connector testing using multimeter measurements.
Verify that your voltage regulator is functioning correctly by measuring output voltages with a multimeter when power is connected to the board.
Voltage regulator testing points showing input, output, and ground connections for multimeter verification
💡 Pro Tip:
Use a portable multimeter for easy on-the-go testing and debugging of embedded circuits.
Verify USB-C connector functionality by testing power delivery, data lines, and ground connections using multimeter measurements.
USB-C pinout diagram showing power, data, and ground connections for multimeter testing and verification
🔧 Troubleshooting Tips:
Official course resources for embedded programming and electronics fundamentals.
Embedded Programming - MIT Academy
Comprehensive resource covering microcontroller programming, I/O techniques, sensor integration, and real-time system design. Includes tutorials, code examples, and hardware interfacing methods.
HTMA Electronics Slides - Google Slides
Official course slides covering microcontroller architecture, programming fundamentals, sensor integration, and real-time system design principles.
Comprehensive analysis of design tradeoffs in embedded programming, covering language selection, development workflows, and microcontroller choices. This collaborative project examines the critical decisions that shape embedded system development and their impact on project success.
Comprehensive comparison of microcontroller architectures and their tradeoffs in embedded programming
Programming language tradeoff analysis showing performance vs development speed considerations
As illustrated in the language tradeoff diagram above, the choice of programming language involves critical tradeoffs between performance and development speed. The analysis shows how different languages balance factors like memory efficiency, execution speed, development time, and ecosystem support for embedded system development.
Development workflow tradeoff analysis comparing different methodologies and their impact on project outcomes
The development workflow tradeoff diagram above demonstrates how different methodologies balance project timeline, code quality, and team collaboration. This visual analysis compares traditional waterfall approaches with agile methodologies, showing the impact of various toolchains, testing strategies, and deployment processes on embedded system development success.
Microcontroller tradeoff analysis showing performance vs power consumption and cost considerations
The microcontroller tradeoff diagram above illustrates the critical balance between performance, power consumption, and cost in embedded system design. This visual analysis evaluates different architectures, processing capabilities, memory configurations, and peripheral options, providing a clear framework for selecting the optimal platform based on specific application requirements.
Workflow examples showing different development approaches for ARM and RP2040 architectures
The example workflow table above provides specific recommendations for different embedded system applications, showing how the tradeoff analyses from the previous sections translate into practical implementation strategies. This comprehensive table demonstrates the relationship between project requirements and optimal technology choices.
As demonstrated in the workflow table above, ARM workflows emphasize professional-grade debugging with industry-standard tools, while RP2040 workflows emphasize accessibility and flexibility through drag-and-drop deployment and unique PIO capabilities. The comprehensive architecture comparison shows that optimal design choices depend heavily on project constraints, performance requirements, and development timeline. High-performance applications favor ARM Cortex-M with professional toolchains, while rapid prototyping benefits from RP2040's flexible development environment and built-in features.
Build a reaction time measurement system based on research methodologies from cognitive science literature. This project combines embedded programming with scientific measurement principles to create a tool for studying human cognitive performance.
Based on a 2017 PLOS ONE study examining reaction time variability and cognitive decline.
Repository: github.com/saleemaldajani/reaction-timer-saleemaldajani
Reaction time model showing the relationship between age and cognitive performance, based on research from PLOS ONE (2017)
| # | Image | Part | Description | Qty | Unit Price | Datasheet | Price |
|---|---|---|---|---|---|---|---|
| 1 | (image) | ATmega32 Microcontroller | 8-bit AVR MCU, 32 KB Flash, 2 KB SRAM, ADC, timers, USART | 1 | $7.50 | | $7.50 |
| 2 | (image) | 0.96″ OLED Display Module (SSD1306) | 128×64 mono OLED, I²C/SPI interface | 1 | $2.50 | | $2.50 |
| 3 | (image) | Male Header Pins (2.54 mm) | Breakaway straight male headers | 2 | $0.20 | | $0.40 |
| 4 | (image) | Dual-Row Header (2×5, 2.54 mm) | 10-pin dual-row header, unshrouded | 1 | $0.50 | | $0.50 |
| 5 | (image) | SMD MOSFET (AO3400A example) | N-channel MOSFET, SOT-23 | 2 | $0.15 | | $0.30 |
| 6 | (image) | SMD Capacitor (MLCC 0603 example) | Ceramic capacitor, value TBD | 5 | $0.03 | | $0.15 |
| 7 | (image) | SMD Resistors – 1 kΩ (code '1001') | Precision SMD resistors | 5 | $0.02 | | $0.10 |
| 8 | (image) | SMD LED (0603, color TBD) | Indicator LED | 2 | $0.05 | | $0.10 |
| 9 | (image) | USB-C Receptacle | Surface-mount USB Type‑C connector | 1 | $0.60 | | $0.60 |
| 10 | (image) | Custom PCB (Capacitive-Touch Board) | Project-specific PCB | 1 | $10.00 | N/A | $10.00 |
| | | TOTAL | | | | | $22.15 |
Complete bill of materials for the ReactionAge embedded programming project. All prices are in USD and subject to supplier variations.
An interactive training module designed for young children, utilizing Nintendo Switch controller input to create an engaging letter-matching game that develops motor skills and cognitive recognition.
An AI-powered Mario-style platformer game built with ChatGPT assistance, featuring touch controls, adaptive gameplay mechanics, and classic 2D platforming elements optimized for embedded systems.
Mario-style platformer gameplay with touch controls and adaptive difficulty
ChatGPT-assisted development process and code generation demonstration
Button mapping obtained using QPAD21 Touch Monitor utility in Design Files
A collaborative multiplayer game featuring geometric shape interactions, synchronized gameplay between two ESP32 devices, inspired by educational content in Shape Song Swing Along.
Shape Song Swing Along educational content integration and gameplay
Circle and square shape interaction gameplay mechanics
MAC addresses obtained using Get MAC Address utility in Design Files
Source code, circuit diagrams, and documentation files for embedded programming projects.
This code was developed collaboratively with ChatGPT. View development transcript
// Reaction time prediction models: age = a + b·RT + c·RT² (RT in ms)
const float A_POOLED = -35.392393f, B_POOLED = 0.1930446f;
const float A_MALE = -58.123019f, B_MALE = 0.2640664f;
const float A_FEMALE = -31.603679f, B_FEMALE = 0.176111015f;
// The quadratic coefficients C_POOLED, C_MALE, C_FEMALE are defined in the
// full sketch in the repository; the model-selection enum is reconstructed
// from usage below (SX_POOLED is an assumed name for the default case).
enum Sex { SX_POOLED, SX_MALE, SX_FEMALE };
Sex currentModel = SX_POOLED;

float predictAgeFromRT(unsigned long rt_ms) {
  float x = (float)rt_ms;
  float a, b, c;
  switch (currentModel) {
    case SX_MALE:   a = A_MALE;   b = B_MALE;   c = C_MALE;   break;
    case SX_FEMALE: a = A_FEMALE; b = B_FEMALE; c = C_FEMALE; break;
    default:        a = A_POOLED; b = B_POOLED; c = C_POOLED; break;
  }
  return a + b*x + c*x*x;  // quadratic fit mapping reaction time to predicted age
}
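For illustration, a hypothetical call using the reconstructed pooled-model name:

```cpp
currentModel = SX_POOLED;                    // select the pooled model
float predictedAge = predictAgeFromRT(350);  // a + b·350 + c·350² for RT = 350 ms
```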
This code was developed collaboratively with ChatGPT. View development transcript
// Touch control mapping and adaptive thresholds
#define N_TOUCH 6
#define THRESHOLD 500
uint8_t touch_pins[N_TOUCH] = {2,3,4,5,6,7}; // P0..P5 -> pins (2,3,4,5,6,7)
// Game state management
enum GameState { MENU, PLAYING, PAUSED, GAME_OVER };
GameState currentState = MENU;
// Player physics and collision detection
struct Player {
float x, y, vx, vy;
bool onGround, canJump;
int lives, coins;
};
// Adaptive difficulty system
#define USE_ADAPTIVE_THRESH 1
int adaptive_thresholds[N_TOUCH] = {500, 500, 500, 500, 500, 500};
void updateGame() {
// Physics update
player.vy += GRAVITY;
player.y += player.vy;
// Collision detection with platforms
for (int i = 0; i < N_PLATFORMS; i++) {
if (checkCollision(player, platforms[i])) {
player.onGround = true;
player.vy = 0;
}
}
}
This code was developed collaboratively with ChatGPT. View development transcript
// ESP-NOW communication setup
#include <esp_now.h>
#include <WiFi.h>
// Player state structure for network synchronization
struct PlayerState {
int x, y, vx, vy;
bool onGround;
};
struct NetPacket {
PlayerState st;
bool ready;
};
// MAC address-based player identification
uint8_t macP1[] = {0xD8, 0x3B, 0xDA, 0x75, 0x05, 0xAC};
uint8_t macP2[] = {0xD8, 0x3B, 0xDA, 0x75, 0xE1, 0x9C};
// Touch controls for shape interaction
#define N_TOUCH 6
#define THRESHOLD 100000UL
int touch_pins[N_TOUCH] = {9, 8, 7, 2, 3, 1};
#define IDX_LEFT 1
#define IDX_RIGHT 0
#define IDX_JUMP 2
// ESP-NOW callback for receiving data
void OnDataRecv(const uint8_t * mac, const uint8_t *incomingData, int len) {
memcpy(&otherState, incomingData, sizeof(otherState));
otherReady = true;
}
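The excerpt shows only the receive callback; below is a minimal sketch of the matching setup and send path, using standard arduino-esp32 ESP-NOW calls and assuming the NetPacket struct and MAC arrays above (the channel and encryption settings are assumptions, not the project's exact values).

```cpp
// Matching ESP-NOW setup and send path for the receive callback above.
#include <esp_now.h>
#include <WiFi.h>

NetPacket pkt;

void setupEspNow() {
  WiFi.mode(WIFI_STA);                   // ESP-NOW runs over the STA interface
  esp_now_init();
  esp_now_register_recv_cb(OnDataRecv);  // hook up the callback from the excerpt
  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, macP2, 6);      // register the other player as a peer
  peer.channel = 0;                      // use the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void sendState(const PlayerState &st) {
  pkt.st = st;
  pkt.ready = true;
  esp_now_send(macP2, (uint8_t *)&pkt, sizeof(pkt));  // push local state each frame
}
```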
A touch monitoring utility developed with ChatGPT assistance for debugging and calibrating touch sensor inputs on the QPAD21 development board. This tool provides real-time visualization of touch sensor values and helps optimize touch thresholds for embedded applications.
This code was developed collaboratively with ChatGPT. View development transcript
// Touch monitoring and calibration utility
#include <Adafruit_FreeTouch.h>  // required by the FreeTouch driver below
// (two further includes in the original lost their header names in the export)
#define N_TOUCH 6
#define THRESHOLD 500
uint8_t touch_pins[N_TOUCH] = {2,3,4,5,6,7};
Adafruit_FreeTouch* touch_devices[N_TOUCH];
int touch_values[N_TOUCH] = {0,0,0,0,0,0};
bool pin_touched[N_TOUCH] = {false,false,false,false,false,false};
void updateTouchMonitor() {
for (int i = 0; i < N_TOUCH; i++) {
int value = touch_devices[i]->measure();
touch_values[i] = value;
pin_touched[i] = (value > THRESHOLD);
// Serial output for debugging
Serial.print("Touch ");
Serial.print(i);
Serial.print(": ");
Serial.print(value);
Serial.print(" ");
Serial.println(pin_touched[i] ? "PRESSED" : "RELEASED");
}
}
A utility developed with ChatGPT assistance for retrieving and displaying MAC addresses on ESP32 devices. This tool is essential for ESP-NOW communication setup, allowing developers to identify device MAC addresses for peer-to-peer networking configuration in multiplayer embedded applications.
This code was developed collaboratively with ChatGPT. View development transcript
// MAC address retrieval utility for ESP32
#include <Arduino.h>
#include <esp_mac.h>  // assumed from usage; declares esp_read_mac() and ESP_MAC_*
void setup() {
Serial.begin(115200);
// Get WiFi STA MAC address
uint8_t mac[6];
esp_read_mac(mac, ESP_MAC_WIFI_STA);
Serial.print("WiFi STA MAC: ");
for (int i = 0; i < 6; i++) {
if (i > 0) Serial.print(":");
if (mac[i] < 16) Serial.print("0");
Serial.print(mac[i], HEX);
}
Serial.println();
// Get Bluetooth MAC address
esp_read_mac(mac, ESP_MAC_BT);
Serial.print("Bluetooth MAC: ");
for (int i = 0; i < 6; i++) {
if (i > 0) Serial.print(":");
if (mac[i] < 16) Serial.print("0");
Serial.print(mac[i], HEX);
}
Serial.println();
}
Complete KiCad project files for custom PCB design, including schematics, PCB layouts, and project configurations for embedded development boards.
Design Documentation:
For detailed board design principles, USB-C connector placement, and manufacturing considerations, refer to the Board Design Documentation section above.
Key insights and lessons learned from embedded programming work this week.
Acknowledgments for collaborative support and technical assistance during the embedded programming project development.
Helped me so much with the ESP32S3 code development with his awesome examples. Also assisted with resoldering the microcontroller when we were debugging the USBC QPAD issues. Couldn't have done it without you!
Spent so many hours helping me debug my USBC QPAD while resoldering the USBC port. Eventually it turned out the problem was that we crashed the board with XIAO code on a D21! Too much to say and so little time until class. Couldn't have done it without you.
Helped me test the boards after programming them, providing valuable assistance with debugging and validation of the embedded programming projects. Your support was crucial for ensuring everything worked correctly.
Contributed valuable insights and technical discussions about microcontroller tradeoffs during the group assignment development. Provided expertise in evaluating different microcontroller architectures and their suitability for various embedded programming applications.
Engaged in extensive discussions throughout the embedded programming design trades section, providing valuable technical insights and collaborative support during the group assignment development process.
Participated in numerous office hours discussions about embedded programming design trades, contributing significantly to the technical analysis and collaborative development of the group assignment content.
Thank you for showing me Flappy Bird and inspiring me to push through the game development part of my plan this week. Your demonstration and encouragement were instrumental in motivating me to complete the ChatGPTMario2D and CircleSquareShapeSongSwingalong projects, helping me overcome technical challenges and stay focused on the embedded programming goals.
Thank you for taking videos of the soldering training! Your documentation of the training process was incredibly helpful for understanding the techniques and procedures. The visual reference you provided made it much easier to follow along and learn the proper soldering methods for the embedded programming projects.
Special thanks to the collaborative learning environment that enabled knowledge sharing and technical problem-solving throughout the embedded programming week.
Transparent documentation of AI assistance used in this week's work, following course guidelines for ethical AI usage.
Used ChatGPT to generate the weekly template HTML structure and styling framework, and created structured task prompts for week 2 embedded programming content development.
Used Cursor AI for iterative content development, page structure optimization, and aesthetic improvements. The AI helped with HTML/CSS implementation, content organization, and maintaining consistent styling across the embedded programming documentation.
Used Cursor AI to implement the comprehensive BOM table with clickable images, datasheet links, and full-width responsive design. The AI helped with HTML table structure, CSS styling, and integration of CSV data into an aesthetically pleasing table format.
Used ChatGPT for image recognition of electronic components to identify parts and construct the bill of materials CSV file. The AI analyzed component images and provided detailed part specifications, descriptions, and pricing information for the embedded programming project.
Used Cursor AI for comprehensive project content updates, documentation improvements, and HTML structure optimization. The AI assisted with content organization, styling consistency, and integration of multimedia elements for the embedded programming documentation.
Used ChatGPT for Arduino programming assistance based on the ReactionAge project code from GitHub repository and Quentin Bolsée's QPAD documentation available on the course website embedded system assignment page. The AI provided guidance on embedded programming concepts, code optimization, and integration with the QPAD-xiao board design.
Used Cursor AI for comprehensive development of the embedded programming design trades group assignment section. The AI assisted with creating structured subsections for language, development workflow, and microcontroller tradeoffs, implementing professional HTML tables with data extracted from PowerPoint presentations, and integrating download links for PDF and PPTX files.
Used Cursor AI for comprehensive development of the training section, including soldering techniques, QPAD programming guides, board design documentation, and troubleshooting procedures. The AI assisted with creating structured subsections, implementing video demonstrations, adding download links for design files, and developing comprehensive troubleshooting guides with multimeter testing procedures.
Used Cursor AI to resolve portrait video cropping within the soldering training section, tune responsive spacing around the media, and validate caption alignment. The transcript documents the CSS refinements, verification steps, and follow-up adjustments that keep the tutorial content accessible.
Used Cursor AI to analyze file sizes and break down large changes into smaller, manageable commits to avoid exceeding Git's 25MB commit size limit. The AI helped organize files by size, created a logical commit strategy, and automated the commit and push process for better version control practices.
Used ChatGPT for collaborative development of the ChatGPTMario2D platformer game, including touch control implementation, adaptive difficulty systems, collision detection, and game physics. The AI provided guidance on embedded programming concepts, state machine architecture, and optimization for the QPAD21 development board.
Used ChatGPT for collaborative development of the CircleSquareShapeSongSwingalong multiplayer game, including ESP-NOW wireless communication, MAC address-based player identification, synchronized game state management, and educational content integration. The AI provided guidance on embedded networking, real-time synchronization, and touch-based geometric learning systems.
Used ChatGPT for collaborative development of the QPAD21 touch monitoring utility, including real-time sensor value display, threshold calibration assistance, debugging tools, and serial output for data logging. The AI provided guidance on touch sensor integration, calibration procedures, and embedded debugging techniques for the QPAD21 development board.
Used ChatGPT for collaborative development of the get_mac_address utility for ESP32 devices, including WiFi STA and Bluetooth MAC address retrieval, serial output formatting, and ESP-NOW integration support. The AI provided guidance on embedded networking concepts, MAC address extraction methods, and device identification techniques for multiplayer embedded applications.
Used Cursor AI for comprehensive development of toddler training subsections, including ChatGPTMario2D and CircleSquareShapeSongSwingalong game implementations. The AI assisted with creating structured subsections, implementing video demonstrations, adding download links for Arduino code files, and developing comprehensive documentation for embedded programming educational content.
Cursor AI assisted with fixing layout and margin issues in week2.html, including resolving width inconsistencies and ensuring proper section structure. The AI helped identify and fix extra closing div tags (lines 1251-1252) that were causing content to appear outside their intended sections. The issue was resolved by manually removing the duplicate closing tags.
3D Scanning and Printing
Exploring the capabilities and limitations of additive manufacturing through hands-on design and printing experiments.
This week focuses on additive manufacturing processes and 3D scanning technologies. We'll explore the differences between subtractive and additive processes, learn about various 3D printing materials and techniques, and understand the design constraints and capabilities of 3D printing systems.
Understanding different additive manufacturing technologies and their applications.
Most common desktop 3D printing method using thermoplastic filament
UV-cured resin printing for high detail and smooth surfaces
Powder-based printing for complex geometries without supports
Similar to SLA but uses digital light projection for faster printing
Powder and binder system for full-color printing
Inkjet-style printing with multiple materials and colors
Common 3D printing materials and their properties.
Plant-based, biodegradable, easy to print, glass transition ~60°C, more brittle
Oil-based and recyclable, glass transition ~80°C, tougher, better UV resistance
Strong, impact-resistant, requires heated bed, more difficult to print
Flexible material for rubber-like properties
Water-soluble support material
Wood-filled, metal-filled, carbon fiber, conductive materials
Key constraints and guidelines for designing printable objects.
Various technologies for capturing 3D geometry of physical objects.
Multiple photos from different angles, processed with software like Meshroom, Polycam
Projects patterns onto object, captures deformation for 3D reconstruction
Time-of-flight or triangulation-based distance measurement
Two cameras for depth perception, similar to human vision
X-ray tomography for internal structures and complex geometries
Light detection and ranging for large-scale scanning
Comprehensive guide to 3D scanning techniques and best practices using structured light scanning systems.
Structured light 3D scanner with advanced scanning capabilities and user-friendly interface
Professional-grade 3D scanner with high-resolution capture and precision scanning features
Ensure consistent, diffused lighting. Avoid direct sunlight and harsh shadows that can interfere with structured light scanning.
Clean the object thoroughly. For dark or reflective surfaces, consider applying scanning spray or powder for better light reflection.
Place small, recognizable objects around the main subject to help the scanner track movement and improve alignment accuracy.
Capture the object from multiple angles and positions to ensure complete coverage and minimize blind spots in the final mesh.
Comprehensive guide to 3D printing techniques, software configuration, and best practices for optimal print quality and efficiency.
Advanced slicing software with comprehensive print settings and optimization tools
High-quality PLA filament with consistent properties and reliable printing characteristics
0.4mm nozzle diameter for balanced detail and print speed
Maximum height: 10 inches - consider this constraint for tall prints
Start with 0.2mm for most prints, reduce to 0.15mm for detailed parts, increase to 0.3mm for prototypes.
Test higher speeds gradually - many printers can handle faster speeds than default settings suggest.
Use 2-3 walls for most applications, increase for structural parts, decrease for decorative items.
Remember the 10-inch height limit when designing tall objects - consider splitting large prints.
Common file formats used in 3D printing and scanning workflows.
Test the design rules for your 3D printer(s) by creating a comprehensive test print that evaluates various design constraints and capabilities.
Download Detailed Slides: PDF Version | Keynote Version
Comprehensive presentation covering all design rule tests, results, and analysis methodology.
These tests evaluate features that should print reliably without additional support structures:
These tests push the boundaries of what the printer can achieve without additional support structures:
Additional design rule tests planned for future evaluation:
Comprehensive STL files for testing various 3D printing design rules and constraints:
To optimize print time and material usage, we strategically grouped all supported and unsupported tests into two separate prints, allowing for comprehensive evaluation of printer capabilities.
Supported tests print - grouped for efficient testing
Unsupported tests print - challenging geometry evaluation
Print Strategy: By grouping all supported tests in one print and all unsupported tests in another, we were able to efficiently evaluate printer capabilities while minimizing material waste and print time. This approach allowed for direct comparison between different test geometries and provided comprehensive data on the printer's performance across various design constraints.
The supported tests focused on features that should print reliably, while the unsupported tests pushed the boundaries of what the printer could achieve without additional support structures.
Design, document, and 3D print an object that could not be made subtractively. The object should be small (few cm³) and limited by printer time.
I hope to design and fabricate a handgripper to measure age based on hand grip strength. This builds on our aging biomarker research by focusing on locomotive age assessment.
The idea is to measure functional decline due to aging across multiple modalities:
Last week we measured cognition with reaction time - this week we focus on locomotive age through grip strength.
Complex interlocking geometry that cannot be manufactured subtractively. Tests printer's ability to create intricate, self-supporting structures with minimal overhangs.
Helical spring mechanism for grip force measurement. Demonstrates additive manufacturing's capability to create functional mechanical components in one piece.
Nested spring system with different spring constants. Tests printer's ability to create complex, multi-component mechanisms with varying material properties.
Spiral coil for force transmission and measurement. Explores the limits of printable coil geometries and their mechanical properties for sensor integration.
A complex 3D printable design that demonstrates additive manufacturing capabilities through intricate geometry that cannot be manufactured subtractively.
This design was co-developed through iterative discussion with ChatGPT, exploring various approaches to create an untanglable slinky geometry.
The design process involved multiple iterations, with the first two attempts resulting in tangled slinkies that failed to maintain proper coil separation:
First attempt: Tangled slinky with improper coil geometry
Second attempt: Still resulted in tangled geometry
The third iteration successfully achieved the desired untanglable geometry:
Final design: Untanglable slinky with proper coil geometry
Design Description: This design shows an untanglable slinky with flat, oval coils, tiny spacers between turns, and solid end caps, so it collapses neatly without knots. The geometry enforces orderly stacking while preventing coils from slipping past each other.
This design was co-created through iterative discussion with ChatGPT, demonstrating effective human-AI collaboration in 3D design. View the full conversation transcript in the Ethical AI Use section →
My initial exploration into creating a 3D printable slinky design, inspired by the classic magic spring concept.
First slinky design - exploring the magic spring concept
Design Description: This first slinky design represents my initial exploration into creating a 3D printable magic spring. The design focuses on creating a helical structure that can collapse and expand while maintaining its structural integrity.
This design was inspired by the classic slinky toy and the concept of creating a "magic spring" that can be 3D printed in one piece, demonstrating the unique capabilities of additive manufacturing.
This design draws inspiration from several sources and builds upon existing work in 3D printable magic springs:
Watch the slinky in action: YouTube Video - Magic Spring Demonstration
Download the original design: Printables - Slinky Magic Spring
Explore the technical details: Hackaday - 3D Printed Magic Spring
The Hackaday project provides comprehensive documentation including Java code for generating G-code, print parameters, and detailed instructions for creating 3D printable magic springs. The project includes multiple iterations of the code with different features like temperature control, spiral priming, and delta printer compatibility.
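The underlying generation step is straightforward to sketch: walk around a helix and emit one linear move per segment. The following is a minimal C++ illustration of that idea, not the Hackaday Java code; all dimensions and feed values are placeholder assumptions.

```cpp
// Illustrative helix G-code generator, in the spirit of (but not copied from)
// the Hackaday project's Java code. Radius, pitch, coil count, segment
// resolution, and feed rate are assumed values; a real printer toolpath would
// also need extrusion (E) values and temperature/priming setup.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double radius = 15.0;   // coil radius, mm (assumed)
    const double pitch = 2.0;     // Z gain per revolution, mm (assumed)
    const int coils = 21;         // number of revolutions
    const int segsPerRev = 72;    // 5-degree segments
    const double feed = 1200.0;   // feed rate, mm/min (assumed)

    std::printf("G21\nG90\nG1 F%.0f\n", feed);  // mm units, absolute coords, feed rate
    for (int i = 0; i <= coils * segsPerRev; ++i) {
        const double theta = 2.0 * kPi * i / segsPerRev;
        std::printf("G1 X%.3f Y%.3f Z%.3f\n",
                    radius * std::cos(theta),
                    radius * std::sin(theta),
                    pitch * static_cast<double>(i) / segsPerRev);
    }
    return 0;
}
```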
Design files are available for download in the Design Files section.
Bringing the first slinky design to life through 3D printing, testing the design's printability and mechanical properties in real-world conditions.
The printing process involved careful preparation, parameter optimization, and real-time monitoring to ensure successful fabrication of the complex slinky geometry.
First slinky print in progress - demonstrating 3D printing of complex helical geometry
Print Process Description: The printing process successfully demonstrated the capability of FDM 3D printing to create complex helical geometries. The slinky was printed using PETG material with optimized settings for layer adhesion and dimensional accuracy. The print required careful support structure management and precise temperature control to achieve the desired mechanical properties.
Key printing parameters included: 0.4mm nozzle diameter, 0.2mm layer height, PETG material, and optimized print speed for complex geometry. The print time was approximately 55 minutes for the complete slinky structure.
Watch the complete printing process in action, showing the layer-by-layer construction of the slinky's complex helical structure.
Complete printing process video showing layer-by-layer construction of the slinky
Video Analysis: This video demonstrates the successful 3D printing of the complex slinky geometry, showing how the printer handles overhangs, bridging, and intricate details. The printing process reveals the importance of proper support structures and temperature management for achieving functional mechanical properties.
Notable aspects: smooth layer transitions, successful overhang printing, proper bridging between coil segments, and consistent material flow throughout the complex geometry.
The primary goal of this project is to optimize the number of revolutions that are printable in a single slinky design, pushing the limits of 3D printing capabilities for complex helical geometries.
Key Hypothesis: Since this is a slinky design, the flexible nature of the printed object should allow for pulling and stretching to remove support material from between the coils. This hypothesis drives the design optimization process, testing how many revolutions can be successfully printed while maintaining the ability to remove internal supports through mechanical manipulation.
This approach challenges traditional 3D printing constraints by leveraging the inherent flexibility of the slinky geometry to overcome support removal limitations, potentially enabling more complex and longer helical structures than would otherwise be printable.
Documenting the initial printing results and the challenges encountered with support removal, leading to refined techniques for successful slinky fabrication.
The first attempts at support removal revealed that simple breaking methods were insufficient for the complex internal geometry of the slinky design.
Breaking support fails - insufficient for complex internal geometry
Razor method works but requires caution and care
Support Removal Analysis: Initial attempts at simply breaking support material proved ineffective due to the complex internal geometry of the slinky coils. The support material was too tightly integrated with the helical structure to be removed through mechanical breaking alone.
The razor method, while effective, requires extreme care to avoid damaging the delicate slinky structure. This process demands patience and precision to successfully remove internal supports without compromising print quality.
After refining the support removal technique, the slinky prints achieved successful results with proper mechanical functionality.
First successful slinky print - demonstrating proper mechanical functionality
Multiple successful slinky prints - validation of design and process
Success Analysis: The successful slinky prints demonstrate the viability of 3D printing complex helical geometries with proper support removal techniques. The prints maintain their structural integrity and exhibit the expected slinky behavior, validating both the design approach and the fabrication process.
Key achievements: successful support removal, maintained geometric accuracy, functional mechanical properties, and reproducible results across multiple prints.
The final iteration of the slinky design achieved full functionality with proper mechanical behavior and successful support removal.
The final step in support removal required precise snipping to achieve clean separation and proper slinky functionality.
Fine snipping process - precise support removal for clean slinky functionality
Snipping Process: The final support removal required careful snipping with precision tools to achieve clean separation between the slinky coils and support material. This delicate process was essential for maintaining the structural integrity while enabling proper slinky movement.
Key considerations: maintaining coil geometry, avoiding damage to the helical structure, and ensuring smooth movement between coils.
The completed slinky successfully passed the classic staircase test, demonstrating proper mechanical functionality and slinky behavior.
Complete full slinky - ready for staircase test
Slinky staircase test - demonstrating proper mechanical functionality
Test Success: The slinky successfully passed the staircase test, demonstrating proper mechanical functionality with smooth coil movement and the characteristic slinky behavior. This validates both the design approach and the 3D printing process for creating functional mechanical objects.
Key achievements: proper coil separation, smooth movement, maintained structural integrity, and classic slinky behavior.
Design files and G-code are available for download in the Design Files section.
Print Success Analysis: The first slinky print successfully demonstrated the viability of 3D printing complex helical geometries. The printed slinky maintained its structural integrity and demonstrated the expected mechanical properties, proving that additive manufacturing can create functional objects with intricate geometries that would be impossible to manufacture using traditional subtractive methods.
Key achievements: successful overhang printing, proper layer adhesion, maintained geometric accuracy, and functional mechanical properties suitable for the intended slinky behavior.
Using 3D scanning technology to capture and digitize physical objects, demonstrating the capabilities of structured light scanning systems.
For the 3D scanning component, I selected a detailed Darth Vader sculpture as the target object. This choice was strategic - the sculpture's complex geometry, dark surface, and intricate details would test the scanning system's capabilities and limitations.
Darth Vader sculpture - complex geometry with dark surfaces and intricate details
Object Description: This detailed Darth Vader sculpture features complex geometry including flowing cape details, helmet ridges, and facial features. The dark surface material and intricate details present an excellent challenge for 3D scanning technology, testing both the system's ability to capture fine details and handle challenging surface properties.
The sculpture's combination of organic curves, sharp edges, and dark surface finish makes it an ideal test subject for evaluating scanning system performance across different surface types and geometric complexities.
The first scanning attempt used default scanner settings, which proved insufficient for capturing the complex geometry and dark surfaces of the Vader sculpture.
Failed scan result using default scanner settings - insufficient detail capture
Scan Failure Analysis: The default settings failed to capture sufficient detail from the dark Vader sculpture. The resulting mesh shows significant gaps, missing geometry, and poor surface reconstruction. This demonstrates the importance of optimizing scanner settings for specific object characteristics.
Key issues identified: insufficient lighting for dark surfaces, inadequate resolution settings, and suboptimal scanning angle coverage for complex geometry.
The optimization process involved adjusting scanner settings and scanning parameters to achieve better results with the challenging Vader sculpture.
Video demonstration of scanner optimization process and parameter adjustment
Optimization Process: This video demonstrates the iterative process of adjusting scanner settings, lighting conditions, and scanning parameters to achieve optimal results. The process involved multiple attempts with different configurations to find the best balance between detail capture and scanning efficiency.
Key optimization steps included: adjusting lighting intensity, modifying scanning resolution, optimizing turntable speed, and fine-tuning surface detection parameters for dark materials.
After optimization, the scanning process successfully captured the complex geometry of the Vader sculpture with significantly improved detail and surface reconstruction.
Successful 3D scan result with optimized settings - improved detail capture and surface reconstruction
Successful Scan Analysis: The optimized scan successfully captured the complex geometry of the Vader sculpture, including fine details like helmet ridges, cape folds, and facial features. The resulting mesh shows good surface reconstruction with minimal gaps and accurate geometric representation.
Key improvements achieved: enhanced detail capture, better surface reconstruction, reduced scanning artifacts, and improved geometric accuracy for complex organic shapes.
Converting the scanned Vader model into a printable 3D object required extensive post-processing and optimization for 3D printing constraints.
Successfully printed Vader sculpture - demonstrating 3D scan to print workflow
Print Success: The printed Vader sculpture successfully demonstrates the complete workflow from 3D scanning to 3D printing. Despite the complex post-processing requirements, the final print maintains good detail and structural integrity, proving the viability of scanning-to-printing workflows for complex organic shapes.
Key achievements: successful mesh cleanup, proper alignment, effective support generation, and successful printing of complex scanned geometry.
Essential resources for 3D scanning, printing, and design optimization.
3D Scanning & Printing - MIT Academy
Comprehensive resource covering 3D scanning techniques, 3D printing technologies, design for additive manufacturing, and post-processing methods. Includes tutorials on photogrammetry, structured light scanning, and various 3D printing processes.
3D Scanning & Printing Recitation - Google Slides
Hands-on tutorial covering 3D scanning workflows, 3D printing setup and optimization, design rules for additive manufacturing, and troubleshooting common issues.
Links to CAD files, STL files, and other design assets for this week's assignments.
Comprehensive STL files for testing various 3D printing design rules and constraints:
Complete set of slinky design files and G-code for testing printable revolution limits and support removal hypothesis:
Initial slinky design with 28 coils - testing maximum printable length
📥 Download
Refined version of the 28-coil slinky design with optimized geometry
📥 Download
Third iteration with 21 coils - balanced design for printability and functionality
📥 Download
Fourth iteration with 7 coils - compact version for testing support removal hypothesis
📥 Download
G-code for 28-coil slinky (51 minutes print time) - maximum length test
📥 Download
G-code for 14-coil slinky (58 minutes print time) - mid-range optimization
📥 Download
G-code for 21-coil slinky (55 minutes print time) - optimal balance
📥 Download
G-code for 7-coil slinky (57 minutes print time) - support removal test
📥 Download
Design Iteration Strategy: These files represent a systematic approach to optimizing printable slinky designs. Each iteration tests different coil counts to find the maximum number of revolutions that can be successfully printed while maintaining the ability to remove internal supports through mechanical manipulation of the flexible slinky structure.
The progression from 7 to 28 coils allows for testing the limits of 3D printing capabilities while validating the hypothesis that slinky flexibility enables support removal in complex internal geometries.
Design files for the first slinky exploration project:
Key insights and lessons learned from working with 3D printing and scanning technologies.
Acknowledgements and team roles for this week's work.
Contributions will be documented as work progresses
Documentation of AI tool usage for this week's assignments and design work.
This session covers the development of the Week 3 page for 3D scanning and printing, including content population from MIT Academy resources, navigation updates, and design file integration.
This session documents the iterative co-design process for creating an untanglable slinky using ChatGPT. The conversation shows multiple design iterations, from initial failed attempts to the final successful geometry.
This session documents the comprehensive refactoring of the entire web development repository to make it fully responsive and mobile-friendly. The conversation covers the implementation of modern CSS techniques, responsive design patterns, and mobile-first development approaches.
This session documents the integration of the first slinky design into the untanglable slinky section, including adding the image, description, inspiration resources, and design file links.
This session documents the consultation with ChatGPT for optimizing the Vader scan post-processing workflow. The conversation focused on edge smoothing techniques in Meshmixer and best practices for preparing scanned meshes for 3D printing.
This session documents the development and updates to the Week 3 individual assignment focusing on 3D scanning and printing. The conversation covers 3D scanning techniques, Vader model processing, slinky design iterations, and individual project implementation.
This session documents the comprehensive development and updates to the Week 3 group assignment focusing on 3D printing design rule testing. The conversation covers group collaboration, design rule testing methodology, PDF content extraction, and group project implementation.
Electronics Design
This week focuses on electronics design using EDA tools to create embedded microcontroller systems. We'll learn schematic design, PCB layout, design rule checking, and simulation techniques.
This week introduces Electronic Design Automation (EDA) tools for creating embedded microcontroller systems. We'll learn to design schematics, create PCB layouts, perform design rule checks, and simulate circuit operation before fabrication.
Essential training materials for electronics design, EDA tools, and test equipment operation.
Essential resources for electronics design, EDA tools, and embedded systems development.
Key design rules and EDA tool recommendations from course Slack discussions.
Easiest starting point for this class
Open source, widely used, improved UI
Cloud-synced part libraries, team access
Feedback Available: Send schematic/PCB files and screenshots for design review. View Design Guidelines Discussion • View EDA Tools Discussion
Official course resources for electronics design and EDA tools.
Electronics Design - MIT Academy
Comprehensive resource covering electronics design principles, EDA tools, circuit simulation, PCB design, and embedded systems development. Includes tutorials on KiCad, Eagle, and other EDA platforms.
Electronics Design Recitation - Vimeo
Hands-on tutorial covering electronics design workflows, EDA tool usage, circuit simulation techniques, and PCB design best practices for embedded systems development.
Use the test equipment in your lab to observe the operation of an embedded microcontroller. This assignment focuses on understanding how to use oscilloscopes, multimeters, and other test equipment to analyze microcontroller behavior and circuit operation.
Group Assignment Link: View Complete Group Assignment Summary
The following test equipment was utilized to observe and analyze microcontroller operation. Multimeter and oscilloscope provide 99% of the information needed for comprehensive analysis.
Essential for basic measurements - voltages (slow, twice a second), resistances, and current (in series)
High-speed analysis - voltages (fast, 5 million times a second) and digital signal decoding
Digital signal analysis - cheap and professional models are available, and decoding can also be done on an oscilloscope
Signal generation for testing receiving ends of boards - most scopes have built-in generators
Controlled power delivery for stable microcontroller operation
Circuit prototyping and testing platform
Detailed procedures and observations using various test equipment to analyze microcontroller operation:
Digital signal decoding and serial communication protocol analysis using oscilloscope capabilities.
Serial Signal Analysis: Oscilloscope capture showing digital serial communication signals with clear 1s and 0s pattern, enabling ASCII character decoding and protocol verification.
SCL Clock Signal: I2C SCL line showing clean 400kHz square wave
SDA Data Signal: I2C SDA line showing data bits changing on SCL falling edges
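Captures like these are easiest to reproduce when the microcontroller emits known, repetitive bus traffic. A minimal Arduino sketch along these lines can serve as a scope target; the 0x3C address is an assumption (a typical SSD1306 OLED address):

```cpp
#include <Wire.h>

const uint8_t TARGET_ADDR = 0x3C;  // assumed device address (typical SSD1306 OLED)

void setup() {
  Wire.begin();            // default SDA/SCL pins for the board
  Wire.setClock(400000);   // 400 kHz fast mode, matching the captured SCL frequency
}

void loop() {
  Wire.beginTransmission(TARGET_ADDR);
  Wire.write(0x00);        // dummy byte gives the scope a repeatable data pattern
  Wire.endTransmission();  // generates START, address, data, and STOP on SDA/SCL
  delay(1);                // spacing between transactions makes triggering easier
}
```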
Comprehensive analysis of test equipment effectiveness and real-world applications:
Oscilloscope Probing Setup:
Multimeter Continuity Testing:
Understanding how to properly probe circuits, interpret oscilloscope traces, and use test equipment to diagnose embedded system operation. This foundation is essential for debugging and verifying circuit designs before and after fabrication. The systematic approach starting with basic power supply verification and progressing to complex signal analysis ensures comprehensive coverage of all critical operational aspects.
Use an EDA tool to design an embedded microcontroller system using parts from the inventory, check its design rules for fabrication, and simulate its operation. This assignment involves complete schematic capture, PCB layout, design rule checking, and circuit simulation.
Complete step-by-step workflow for designing embedded systems in Fusion360 EDA:
Simulation: After completing the PCB design, remake the circuit in simulation websites like Wokwi to verify operation before fabrication.
Two ESP32S3-based designs created using Fusion360 EDA: a standalone LED board and a dev board connector with LED on the back.
Standalone LED board design for ESP32S3 microcontroller with integrated LED and supporting circuitry.
Development board connector design with LED mounted on the back for easy integration with ESP32S3 development boards.
Interactive circuit simulations created using Wokwi to verify the ESP32S3 LED circuit designs before fabrication. These simulations demonstrate the button-controlled LED functionality and help validate the circuit operation.
Primary simulation showing button-controlled LED circuit with ESP32S3 microcontroller.
Interactive Simulation: Button-controlled LED circuit with ESP32S3
Alternative simulation configuration with the same circuit functionality.
Alternative Configuration: Same circuit with different simulation setup
The same Arduino code is used for both simulations, implementing a simple button-controlled LED circuit.
#define BUTTON_PIN 5  // button input pin
#define LED_PIN 4     // LED pin (currently miswired in series with the button on the physical board)

void setup() {
  Serial.begin(115200);
  Serial.println("Hello, ESP32-S3!");
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // internal pull-up: the pin reads LOW while the button is pressed
  pinMode(LED_PIN, OUTPUT);           // drive the LED pin actively
}

void loop() {
  int buttonState = digitalRead(BUTTON_PIN);
  if (buttonState == LOW) {        // button pressed
    digitalWrite(LED_PIN, HIGH);   // source current out GPIO 4 to light the LED
    Serial.println("Button pressed -> LED ON");
  } else {
    digitalWrite(LED_PIN, LOW);    // no current, LED off
    Serial.println("Button released -> LED OFF");
  }
  delay(10);  // light debounce and serial rate limiting
}
Overview of Electronic Design Automation tools and the complete design workflow from schematic to fabrication.
Open-source, full-featured EDA suite with excellent community support
Professional PCB design tool with extensive component libraries
Advanced PCB design with integrated simulation capabilities
All design files, schematics, PCB layouts, and simulation files for the electronics design assignments.
Key insights and learnings from the electronics design week, including challenges faced and skills developed.
Acknowledgements and team contributions for the electronics design week assignments.
AI tools used to assist with electronics design, EDA tool learning, and circuit simulation guidance.
Cursor AI assistance for creating the week4.html template with comprehensive electronics design content, including EDA tools, assignments, and documentation resources.
Focus: HTML template creation, electronics design content population, assignment structure, and documentation integration
Cursor AI assistance for updating the individual assignment section with detailed Fusion360 design process, board design images, and improved visual presentation.
Focus: Fusion360 design process documentation, board design image integration, visual alignment improvements, and highlights section updates
Cursor AI assistance for updating the Week 4 group assignment section with comprehensive test equipment details, image link updates, and group page integration.
Focus: Group assignment content integration, test equipment documentation, image path updates, and group page link addition
Electronics Production
This week focuses on electronics production, covering PCB fabrication techniques, assembly processes, and creating functional embedded microcontroller systems. We explore both in-house production methods and board house services.
Characterize design rules for in-house production
Functional double-sided PCB with toddler-friendly interface
Submit PCB to board house for production
This week covers the practical aspects of electronics production, including PCB fabrication methods, assembly techniques, and creating functional embedded microcontroller systems. We explore both in-house production capabilities and board house services.
Understanding different approaches to PCB production, from traditional etching to modern machining and boardhouse services.
Essential techniques for assembling and soldering electronic components, from through-hole to surface-mount devices.
Boardhouse PCB production services offer higher precision, better quality, and advanced features compared to in-house methods.
Systematic approach to troubleshooting and verifying electronic systems functionality.
Essential training materials and procedures for PCB milling using the Othermill machine.
PCB milling process using the Othermill machine
Process for creating double-sided PCBs using the Othermill machine with fixture installation and proper alignment techniques.
Double-sided fixture bracket installation for precise board alignment and milling
Critical process for installing copper rivets to create electrical connections between board layers in double-sided designs.
Copper rivets used for creating electrical connections between board layers
Handle rivets carefully - there are thousands available but they are relatively expensive. Avoid spilling them during the installation process.
Source: Original hand-typed training notes
Detailed training notes from MIT HTMAA Slack channel with comprehensive SOP and pro tips
Characterize the design rules for in-house PCB production process and submit a PCB design to a board house.
Comprehensive PCB design rule test pattern created to characterize in-house production capabilities and validate design constraints for successful fabrication.
Design rule test pattern showing trace widths, spacing tolerances, and hole sizes for characterization
KiCad PCB: Complete PCB design file with test patterns and design rules.
Gerber Files: Front copper layer, edge cuts, and drill holes for PCB fabrication.
Comprehensive testing of in-house PCB production capabilities through systematic evaluation of trace widths, spacing tolerances, and mechanical durability.
Note: Design rules are guidelines; actual tolerances may vary based on material and process conditions
Pre-test: Initial trace pattern
Post-test: Surviving traces after mechanical stress
Evaluation of boardhouse PCB manufacturing services through JLCPCB submission to compare design rules, pricing, and production capabilities with in-house methods.
JLCPCB order confirmation showing PCB specifications, pricing, and production parameters
Detected 2-layer board of 100x100mm (3.94x3.94 inches).
PCB Specifications: Detailed technical specifications, design rules, and manufacturing parameters for the PCB order.
Order Checkout: Complete order details including pricing breakdown, shipping options, and payment confirmation.
Make and test an embedded microcontroller system that you designed, with extra credit for using an alternative production process.
Design and fabricate custom embedded microcontroller systems using single-sided PCB milling techniques, focusing on ESP32-S3 based development boards with comprehensive testing protocols.
Develop origami-style PCB design that mechanically activates LED through folding mechanism
Create flexible PCB using copper ring material for accelerometer integration
Key challenges encountered during FR1 soldering and solutions developed through experimentation and peer collaboration.
| Problem | Solution | Source |
|---|---|---|
| Can't heat for too long otherwise you burn off the copper | Preheating helped with flame retardant boards, but doesn't work with FR1 | Personal experience |
| Can't use too much solder, otherwise it flies off onto other parts | Extra solder bunches up on flame retardant boards, but FR1 requires precise control | Personal experience |
| Poor solder sticking to copper grooves | Careful sandpapering of the grooves to help the solder stick | Omar Aldajani (previous HTMAA student) |
| Poor thermal transfer and solder adhesion | Using flux on the copper for better thermal transfer and solder sticking | Omar Aldajani (previous HTMAA student) |
| Extra solder on copper is annoying and hard to remove | Add more solder and remove it again, or carve away some copper so the short doesn't matter | Anthony (lab instructor) |
Source: MIT HTMAA Slack Discussion
Additional insights and peer collaboration on FR1 soldering challenges and solutions
Initial PCB milling design showing early layout and trace patterns
Final successful PCB milling result with clean traces and proper spacing
Successfully assembled Xiao ESP32-S3 microcontroller board with LED functionality
Diode testing failure highlighting soldering challenges and component orientation issues
FR1 soldering presents unique challenges compared to flame retardant boards, requiring precise heat control and solder management. Through peer collaboration and systematic problem-solving, effective techniques were developed including careful sanding, flux application, and strategic solder removal methods.
After gathering copper solder insights from peer collaboration and lab experience, I prepared all necessary components and tools for assembly. With boards ready, solder wick prepared, and pen flux available, I proceeded with the soldering process.
Complete soldering setup showing boards, components, solder wick, and pen flux ready for assembly
Following soldering, I conducted comprehensive testing, including resistance measurements, diode tests, and continuity tests, to diagnose and resolve minor issues. This systematic approach identified and fixed problems such as rivet-to-board connections that needed additional solder and shorts that needed removal (e.g., solder bridges across a 10k resistor).
The front LED functionality was successfully implemented on the base development board design. However, the button remains shorted despite multiple troubleshooting attempts including solder wicking and microscopic inspection for bridges.
Front LED diode test demonstrating successful LED functionality on base development board
Base LED development board successfully programmed and operational with LED control functionality
The board functions correctly without the button, and the LED has been successfully programmed. The button shorting issue continues to be investigated, as standard troubleshooting methods have not yet resolved the problem.
After practicing with two base LED development board soldering attempts, this fabrication process proceeded smoothly with all components functioning correctly, including the button. The systematic approach of verifying button lead connections before and after pressing, both before and after soldering, proved essential for success. This design uses the ESP32-S3 LED Connector v6 design.
Back LED diode test showing successful functionality on the connector development board design
Double-sided development board demonstration showing full functionality with LED control and button operation
Animated demonstration of the double-sided development board in operation with LED and button functionality
Toddler-friendly button test demonstrating the interface's usability and reliability for young users
Success! The double-sided development board for ESP32-S3 with LED on the back is fully functional. The board passed comprehensive testing including the toddler-friendly interface test. With the LED successfully implemented on the back, substantial real estate is available for additional components such as four buttons and a screen for reaction time and other prototype applications.
Design Note: For one USB-C cable, I needed to trim the connector edge because the microcontroller is positioned further into the board. I found a thinner cable that connects without modification. In future design iterations, it would be beneficial to redesign the board to position the microcontroller closer to the edge, maintaining the same distance as the base LED design for improved accessibility.
I set up a Seeed XIAO ESP32-S3 with an MPU6050 accelerometer, HX711 load cell amplifier, and SSD1306 OLED display, all sharing 3.3V power. The OLED never displayed anything, and both the sensors and display returned repeated I²C timeout errors. I corrected power from 5V to 3.3V, verified wiring, and confirmed that SDA = GPIO 5 (A4) and SCL = GPIO 6 (A5), but the I²C scanner still detected no devices. The MPU6050 powers on, yet no readings appear in the Serial Monitor. The load cell connects and gives intermittent readings, requiring improved connections in future iterations towards the final project.
Development board setup with MPU6050 accelerometer, HX711 load cell, and SSD1306 OLED display
Complete development board configuration showing all sensor connections and power distribution
| Component | VCC | GND | SDA | SCL | Other Pins |
|---|---|---|---|---|---|
| MPU6050 | 5V or 3.3V | GND | A4 (GPIO 5) | A5 (GPIO 6) | — |
| OLED (SSD1306) | 5V or 3.3V | GND | A4 (GPIO 5) | A5 (GPIO 6) | — |
| HX711 + Load Cell | 5V | GND | A0 (DT) | A1 (SCK) | Logic 3.3V-safe |
HX711 load cell amplifier board pinout configuration (Amazon product page)
MPU6050 accelerometer pinout diagram (Wokwi simulation reference)
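The I²C scanner mentioned above is the standard Wire-library address sweep. A minimal version, assuming the ESP32 Arduino core's Wire.begin(SDA, SCL) overload and the pin assignments from the wiring table:

```cpp
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin(5, 6);  // SDA = GPIO 5 (A4), SCL = GPIO 6 (A5), per the wiring table above
}

void loop() {
  uint8_t found = 0;
  for (uint8_t addr = 1; addr < 127; ++addr) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {  // 0 = a device ACKed at this address
      Serial.printf("Device found at 0x%02X\n", addr);
      ++found;
    }
  }
  if (found == 0) Serial.println("No I2C devices found");
  delay(2000);  // rescan every two seconds
}
```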
SDA and SCL were shorted to ground; I suspected the connector or the XIAO board. After removing the connector, the short persisted, indicating the XIAO board itself was the issue. The accelerometer still powered on despite the communication failure.
MPU6050 accelerometer showing power indication despite I²C communication issues
The load cell had intermittent connection issues but still provided some readings. Because the HX711 reports data over its own two-wire data/clock interface rather than the shared I²C bus, I was able to capture several data points for analysis.
Serial plotter showing load cell data visualization
Serial monitor displaying load cell readings and status
Additional serial monitor output showing load cell data patterns
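The readings shown above were captured with a simple polling loop. A minimal sketch, assuming the widely used bogde/HX711 Arduino library and the DT/SCK pins from the wiring table:

```cpp
#include "HX711.h"  // bogde/HX711 library (assumed)

const int DT_PIN = A0;   // HX711 data line, per the wiring table
const int SCK_PIN = A1;  // HX711 clock line, per the wiring table

HX711 scale;

void setup() {
  Serial.begin(115200);
  scale.begin(DT_PIN, SCK_PIN);
}

void loop() {
  if (scale.is_ready()) {
    Serial.println(scale.read());  // raw 24-bit reading; calibrate later with set_scale()/tare()
  } else {
    Serial.println("HX711 not ready (check intermittent connections)");
  }
  delay(100);
}
```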
Step-by-step process for creating custom jumper cable assemblies, demonstrating proper wire preparation, soldering techniques, and heat shrink application.
Initial wire preparation showing individual conductors ready for assembly
Twisted wire joint preparation before soldering
Completed soldered joint showing proper connection and heat distribution
Heat shrink tubing applied for insulation and strain relief
Apply heat shrink tubing early in the process to use smaller diameter tubing that's easier to position and provides better insulation coverage.
Demonstration of early heat shrink application for optimal cable assembly
Essential resources and detailed guidance for electronics production processes and design rules.
Source: Anthony Pennes - Slack Message
Detailed guidance on the three available PCB fabrication processes and design rules for successful board production.
Critical: Always perform optical inspection before soldering components. Look for stray copper strands and address them with light sanding, steel scraper, or utility knife.
Official course resources for electronics production and PCB fabrication.
Electronics Production - MIT Academy
Comprehensive resource covering PCB fabrication methods, milling processes, soldering techniques, and electronics assembly. Includes tutorials on design rules, file preparation, and production workflows.
Electronics Production Recitation - Google Slides
Hands-on tutorial covering PCB fabrication workflows, milling machine operation, soldering techniques, and electronics assembly best practices.
Electronics Production Recitation - Vimeo
Video tutorial demonstrating PCB fabrication processes, machine setup, and assembly techniques for electronics production.
Complete design files, schematics, PCB layouts, and firmware for the ESP32-S3 development board projects.
Two complete ESP32-S3 development board designs created using Fusion360 EDA, featuring LED control and button input functionality. View detailed design process and simulation in Week 4.
Standalone LED development board with integrated LED control and button input functionality.
Enhanced development board with LED mounted on the back and full pinout connectivity for easy integration with ESP32-S3 development boards.
Button-controlled LED firmware for ESP32-S3 development boards with serial debugging capabilities.
Key insights and lessons learned from this week's electronics production work.
Acknowledgements for help received during this week's electronics production work.
Transparent documentation of AI assistance used in this week's electronics production work.
This individual assignment section was significantly enhanced by Cursor AI to transform informal challenges and solutions into professional documentation. The AI assisted with creating structured tables for FR1 soldering problems and solutions, organizing project documentation images with proper captions, and updating highlight images with improved visual presentation and linking functionality.
This week's individual assignment and production process documentation was significantly enhanced by Cursor AI to transform informal project notes into comprehensive professional documentation. The AI assisted with creating detailed subsections for remilling/soldering processes, front and back LED testing, double-sided milling procedures, rivet installation, and complete design files documentation with proper linking and technical descriptions.
Cursor AI helped transform informal training notes into professional SOP documentation, restructure design rule characterization results with improved visual presentation, develop comprehensive individual assignment plans with clear testing protocols, and create structured tables for documenting FR1 soldering challenges and solutions with proper attribution to peer contributors.
Computer-Controlled Machining
This week focuses on computer-controlled machining, covering CAM workflows, CNC milling techniques, and creating large-scale projects. We explore subtractive manufacturing processes and learn to make something big (~meter-scale).
CNC router characterization: runout and clearance measurements
Floating mirror tensegrity design process
3D printed and wood floating mirrors
3D printed assemblies demonstrating horizontal stability
Wood assembly showing horizontal stability progression
Final floating mirror configuration
This week covers computer-controlled machining processes, focusing on subtractive manufacturing techniques for creating large-scale projects. We explore CAM workflows, CNC milling operations, and learn to make something big (~meter-scale) using precision machining tools.
Comprehensive technical overview of subtractive manufacturing processes, CAM workflows, and design strategies for large-scale fabrication.
Expert instruction from MIT CBA Academy covering subtractive processes, safety protocols, manufacturing workflows, and advanced machining techniques.
Live Discussion:
Complete Documentation:
Comprehensive overview of subtractive manufacturing processes, from basic 2-axis operations to advanced 5-axis machining capabilities.
Scale & Materials
Precision & Quality
Production & Capability
Critical parameters for successful machining operations, tool life optimization, and surface quality control.
Fusion 360 manufacturing mode provides comprehensive CAM workflow for toolpath generation, simulation, and machine control.
Reference Resources:
Essential Tools:
NIFTY DOGBONE (60-day free trial)
Critical Fusion 360 add-in for joint design optimization
Advanced manufacturing strategies covering design optimization, process control, and quality assurance for large-scale fabrication.
Time Analysis:
Quality Verification:
Essential training materials and procedures for computer-controlled machining and CAM workflows.
Comprehensive CAM tutorial in Fusion 360 covering toolpath generation, machining strategies, and workflow optimization for CNC operations.
Anthony's comprehensive CAM tutorial in Fusion 360 covering toolpath generation and machining strategies
Essential resources for computer-controlled machining and CAM workflows.
Computer-Controlled Machining - MIT Academy
Comprehensive resource covering CNC machining principles, CAM workflows, toolpath generation, and machine operation. Includes tutorials on design for machining, fixturing, and safety procedures.
Detailed documentation covering CAM workflows, toolpath strategies, machining parameters, and best practices for computer-controlled machining operations.
Characterize the design rules for the EECS shop CNC router through systematic testing of runout, alignment, fixturing, speeds, feeds, materials, and toolpaths.
Comprehensive testing of CNC router capabilities to establish design rules and manufacturing parameters for successful machining operations.
Source: MIT HTMAA Slack Discussion
Detailed notes from Anthony's training session on CNC router characterization and design rules
Tool Security: Fires can occur from excessive depth or steep angles. Previous incidents involved frame contact. Always verify tool tightness before operation.
Post-Processing Check: Verify TAB_FIT_CHECK, tool diameter, corner radius (cr), and minimum height (zmin) parameters.
Succinct guidelines derived from our characterization to ensure predictable outcomes on the EECS shop CNC router.
Measured at the tool: joints loosen with usage; fresh cuts are tighter.
| Metric | Value (in) |
|---|---|
| Average runout | 0.0036 |
| Standard deviation | 0.0020 |
| Median (freshly machined) | 0.0020 |
Use average + 1σ (~0.0056 in) as a conservative clearance allowance for press fits that will be tested repeatedly.
Dial/runout measurement indicating consistent concentricity with slight loosening after fit tests.
Press-fit clearance tests across increments; loosened joints increase effective clearance over time.
| Peg length [in] | Hole size [in] | Clearance [in] | Joint type | Fit type |
|---|---|---|---|---|
| 1.995 | 2 | -0.005 | Interference | Press |
| 2 | 2 | 0 | Line-to-line | Transition |
| 2.005 | 2 | 0.005 | Clearance | Slip |
Use 0.005 in clearance for slip fits; interference fits require press assembly.
Clearance measurement setup showing press-fit testing across different peg sizes and hole clearances.
| Measurement | Value [in] | Notes |
|---|---|---|
| Fresh cut 1 | 0.002 | Initial measurement |
| Fresh cut 2 | 0.002 | Consistent |
| After fit test 1 | 0.004 | Joint loosening |
| After fit test 2 | 0.005 | Increased wear |
| After fit test 3 | 0.006 | Maximum observed |
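As a sanity check, the summary statistics can be recomputed from the logged values. The sketch below uses the five measurements in the table above; the exact mean and sigma differ slightly from the reported figures depending on rounding and which measurements were included, but the conservative allowance lands near the quoted ~0.0056 in.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Runout measurements in inches, taken from the table above
    std::vector<double> r = {0.002, 0.002, 0.004, 0.005, 0.006};

    double sum = 0.0;
    for (double v : r) sum += v;
    const double mean = sum / r.size();

    double sq = 0.0;
    for (double v : r) sq += (v - mean) * (v - mean);
    const double sigma = std::sqrt(sq / (r.size() - 1));  // sample standard deviation

    // Conservative clearance allowance: mean + 1 sigma
    std::printf("mean = %.4f in, sigma = %.4f in, allowance = %.4f in\n",
                mean, sigma, mean + sigma);
    return 0;
}
```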
| Test | Peg [in] | Hole [in] | Result |
|---|---|---|---|
| Tight fit | 1.995 | 2.000 | Press required |
| Nominal | 2.000 | 2.000 | Snug fit |
| Loose fit | 2.005 | 2.000 | Slip fit |
Designed test part featuring 2-inch slots with 0.005" incremental clearances to systematically evaluate joint tolerances and press-fit characteristics.
Systematic documentation of the CNC router characterization process from design to measurement, capturing key parameters for design rule establishment.
Demo tabs design featuring 2-inch slots with 0.005" clearance increments for systematic joint tolerance testing
CNC router executing characterization cuts with 3HP spindle at optimized speeds and feeds for OSB material
Completed test part showing three joint clearance variations (1.995", 2.000", 2.005") for press-fit tolerance analysis
Post-machining cleanup using vacuum system to remove OSB dust and debris from work area and test parts
Bandsaw cutting for part separation and precision measurement using calipers to determine actual joint clearances and runout characteristics
Demo Tabs v8.f3d: Complete Fusion 360 design file for CNC router characterization test parts with various joint clearances and tab configurations.
Priority Action: Measure runout using calipers based on test part design, then systematically complete all characterization parameters.
Design and fabricate a floating mirror using tensegrity principles, exploring both 3D printing and CNC machining approaches for large-scale fabrication.
Research and inspiration from various tensegrity table designs and floating mirror concepts to inform the project approach.
Note: We decided to go without hooks (adding screw hooks only if needed), focusing on clean design integration.
Design board discussions with Anthony during the floating mirror design process, exploring tensegrity principles and manufacturing considerations
Design process for the floating mirror using Fusion 360, incorporating tensegrity principles and ensuring manufacturability for both 3D printing and CNC machining.
3D design of the floating mirror showing tensegrity structure and mirror integration
Design process from side view showing structural development
Design process from front view showing mirror integration
Important: When using Nifty Dogbone in Fusion, click the face, not the tab! Input our tool diameter as 0.25 in and clearance as 0.001 in.
Download: Nifty Dogbone for Fusion 360
Note: If using Mac, you may need to go to Privacy and Security settings to unblock the installation. See Autodesk support article for troubleshooting.
CAM workflow following the tutorial video with specific modifications for our project requirements and machine capabilities.
Everything same as in the tutorial video: Anthony's CAM Tutorial
Except for the modifications listed below for our specific project requirements.
Note: The .tap extension is a historical carryover from the era of tape-driven machinery. Early numerical control (NC) machines used perforated paper or card stock, known as "tape," to store and transfer program data.
Testing assembly process and configuration using 3D printed models to validate design before CNC machining.
I 3D printed the assembly in one piece, which failed, and also printed the parts flat, matching the output of the machining process, to test the assembly. I used glow-in-the-dark PLA!
Initial 3D print attempt showing failure mode
Failed print result highlighting design issues
Successfully printed parts for assembly testing
Slice and print on Stratasys software, dissolve support, then remove and wash parts after support is dissolved.
Stratasys print in progress with support material
Dissolving support material in solution
Cleaned parts after support material dissolution
CNC machining process with careful attention to safety and proper tab removal techniques.
Run the job with care (see the training section for details). When removing tabs, use the battery-powered flat drill. For the circles, drill off both sides of the tabs, then work a crowbar gently around the perimeter until the part pops off.
CNC machining process showing wood cutting operation
Completed cut parts ready for assembly
Testing tension cable ratios and angles using 3D printed components. We used fishing line because it is mostly transparent, which makes the levitation effect more convincing, as shown in this Instructables guide, especially when the glow-in-the-dark assembly is imaged in night mode!
Testing tension cable ratios and angles using 3D printed components with glow-in-the-dark PLA for enhanced levitation effect.
Glow-in-the-dark assembly in night mode showing levitation effect
Color testing of the glow-in-the-dark components
Phone testing of the floating mirror assembly
Wall-mounted floating mirror demonstration
Succulent plant test showing the assembly can hold plants
Initial threading used a sewing needle; the Kevlar line fractured the needle, so we re‑threaded using a smaller carrier thread. The assembly was completed and wall‑mounted. For rapid iteration we temporarily set tension with tape; for final installations, secure with knots to eliminate slip and creep.
Fully threaded Kevlar assembly with smaller thread
Completed Kevlar tensegrity assembly
Kevlar threading process demonstration
Wall-mounted Kevlar tensegrity assembly
Assembled with knots only, because tape does not stick to Stratasys-printed material; the knots actually hold even better than tape.
Stratasys assembly using only knots for tension cable attachment
What is OSB? Oriented Strand Board is an engineered wood product made from layers of wood strands (or wafers) bonded together with resin and high heat. The strands are "oriented" in specific directions, and each layer is aligned in various directions to enhance the board's strength and stability. It is a popular, more affordable alternative to plywood for applications like sheathing and subflooring.
Here we're using 1000ft-rated 4mm paracord from Amazon for the tension cables.
Cutting tabs with saw tool for part removal
Tab removal process demonstration
Cleaning the CNC bed with vacuum after part removal
Filing joints to achieve proper press fit
Inserting leg into circular bottom panel
Threading paracord through edge holes
Measuring paracord for proper tension
Tightening paracord to marked position
Completed assembly without glue
Horizontal assembly configuration
Holding floating configuration from top
Horizontal assembly without glue showing stability
Wood gluing mirror with clamps for secure bonding
Cured glue after 24-hour wait period
Horizontal mirror configuration after gluing
Horizontal mirror side view showing stability
Ziptie tightening for final tension adjustment
Final tight mirror configuration
Final tight assembly with mirror properly mounted
Final working stable assembly
Monitor weight test showing stability
Laptop stability test demonstrating load capacity
iPad stability test showing versatility
Baby for scale showing actual size
Baby happy with the floating mirror assembly
Final trimmed neat assembly ready for use
Scaled to about 160 cm in length, exactly double the current arrangement! The plan is to machine this scaled-up version after class because of the limited wood stock available in class.
Design scale-up timeline showing the scaling process in Fusion 360
In the future, I will make the design parametric by defining lengths in terms of base parameters, so the whole model can be rescaled simply by changing those parameter values.
Complete design files, CAD models, and manufacturing files for the floating mirror tensegrity project.
Complete Fusion 360 design files and manufacturing models for the floating mirror tensegrity structure.
HTMA Team Link: Floating Mirror v12.3mf Project
3MF Model: Complete 3D model for 3D printing and visualization.
DXF File: 2D drawing file for CNC machining and laser cutting.
Flat 3MF: Flat configuration for 3D printing assembly testing.
G-Code: For our shop's printer for flat configuration (0.4mm nozzle, 0.2mm layer height, PLA, 1h52m print time).
TAP File: For our shop's CNC router (postprocessed numerical control file).
Design files for the CNC router characterization test parts used in the group assignment to determine design rules and manufacturing parameters.
Demo Tabs v8.f3d: Complete Fusion 360 design file for CNC router characterization test parts with various joint clearances (1.995", 2.000", 2.005") and tab configurations for testing runout, alignment, and fixturing parameters.
Key insights and lessons learned from this week's computer-controlled machining work.
Acknowledgements for help received during this week's computer-controlled machining work.
Transparent documentation of AI assistance used in this week's computer-controlled machining work.
This week's documentation was significantly enhanced by Cursor AI to transform project notes into comprehensive professional documentation. The AI assisted with creating structured sections for recitation notes, design inspiration resources, CAD process documentation, CAM workflow details, and complete design files organization with proper linking and technical descriptions.
The individual assignment section was significantly enhanced by Cursor AI to transform project notes into comprehensive professional documentation. The AI assisted with creating structured subsections for OSB assembly processes, organizing step-by-step procedures, integrating media files, and developing professional technical descriptions for all assembly stages.
The EECS Shop CNC Router Design Rules section was enhanced by Cursor AI to transform characterization notes into comprehensive professional documentation. The AI assisted with creating structured sections for runout and clearance measurements, formatting measurement tables, organizing design rules, and integrating download links for data files.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Input Devices
Key captures from the group and individual assignments with links to their sections.
This week focuses on exploring various input devices and sensors that can be integrated with microcontroller boards. We'll learn about different sensor types, measurement techniques, and how to read sensor data.
Training materials and documentation for input devices and sensor integration.
Demonstration of capturing a stable clock and synchronous data line, including probe reference selection and verifying edge alignment.
Combined capture shows stable clock and synchronous data suitable for decoding
Essential resources for input devices and sensor integration.
Comprehensive resource covering all input device categories including switches, magnetic field sensors, potentiometers, step response sensors, temperature sensors, light sensors, motion sensors, distance sensors, and more. Includes detailed tutorials, code examples, and implementation guides for various sensor types.
Exploring different types of input devices and sensors for microcontroller integration.
Three focused investigations: load cell (analog vs amplified digital), digital microphone (16 kHz clocked), and IR phototransistor (analog). Each subsection includes probe setup, scope captures, and brief interpretation.
We compare the raw Wheatstone bridge differential output to the HX711-amplified, clocked digital interface.
With a 3.3 V reference, the ADC step size (LSB) is 3.3 V ÷ 2^N for an N-bit converter. Load-cell bridge outputs are only millivolts, so low-resolution ADCs can’t resolve changes well.
At minimum, a 12-bit ADC is needed to read the raw bridge signal directly on a microcontroller; a dedicated amplifier + ADC (e.g., HX711) increases gain and effective resolution, improving SNR and measurement fidelity.
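As a quick sanity check on these numbers, the minimal C++ snippet below (not project code) tabulates the LSB size for a few common resolutions at a 3.3 V reference:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    const double vref = 3.3;                       // ADC reference voltage (V)
    const int resolutions[] = {10, 12, 16, 24};
    for (int bits : resolutions) {
        double lsb = vref / std::pow(2.0, bits);   // volts per code
        std::printf("%2d-bit ADC: LSB = %9.2f uV\n", bits, lsb * 1e6);
    }
    // A ~1 mV bridge swing is under one LSB at 10 bits (~3222 uV/LSB),
    // only ~1.2 LSB at 12 bits, and easily resolvable once amplified (HX711).
    return 0;
}
```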
Probing V1 (green) and V2 (white) on press
Millivolt-level differential changes at the bridge confirm expected polarity and sensitivity.
The raw Wheatstone bridge output is in the mV range; direct probing between V1 and V2 shows small but measurable changes on load. Mechanical force perturbs the bridge imbalance, producing a differential voltage proportional to applied weight.
We observe the SCK clock and DT data relative to the amplifier board ground.
Clock (SCK) captured at the HX711 board
SCK amplitude appears low due to probing and wiring; edges align with HX711 data framing and drive DT shifting. These clock pulses gate conversion reads and set the bit timing for the data line.
Data (DT) line transitions during conversion readout
Displayed amplitude skews high due to noise/scale; bit transitions align to SCK, representing the ADC output stream. The bit frame repeats for each sample in accordance with the HX711 protocol.
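For reference, the HX711's framing can be read with a simple bit-banged routine like the hedged Arduino sketch below; the pin numbers are placeholders, and real projects would typically use an existing HX711 library instead:

```cpp
// Bit-banged HX711 read: 24 data bits clocked out MSB-first on SCK,
// plus one extra pulse to select channel A at gain 128.
const int PIN_DT  = 4;   // placeholder pin for DT
const int PIN_SCK = 5;   // placeholder pin for SCK

long hx711Read() {
  while (digitalRead(PIN_DT) == HIGH) {}        // DT falls when a sample is ready
  long value = 0;
  for (int i = 0; i < 24; i++) {
    digitalWrite(PIN_SCK, HIGH);
    value = (value << 1) | digitalRead(PIN_DT); // sample one bit per clock pulse
    digitalWrite(PIN_SCK, LOW);
  }
  digitalWrite(PIN_SCK, HIGH);                  // 25th pulse: channel A, gain 128
  digitalWrite(PIN_SCK, LOW);
  if (value & 0x800000L) value |= 0xFF000000L;  // sign-extend 24-bit two's complement
  return value;
}

void setup() {
  Serial.begin(115200);
  pinMode(PIN_DT, INPUT);
  pinMode(PIN_SCK, OUTPUT);
}

void loop() {
  Serial.println(hx711Read());
  delay(100);
}
```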
We capture the probe points, the ~16 kHz clock, and the synchronous data output.
Probe points for clock and data relative to module ground
We identify stable ground and tap CLK and DOUT to assess timing integrity before decoding. Proper referencing minimizes measurement artifacts and preserves true edge timing.
~16 kHz clock observed as a clean square wave
The mic’s clock provides the sampling reference for synchronous data capture. Its frequency and duty cycle help verify correct configuration and signal quality.
Digital output edges align to the clock for bit capture
Data toggles on defined edges enabling reliable sampling and decoding. This confirms synchronous serial behavior suitable for I²S/PDM-style acquisition.
We power the IR emitter, probe the phototransistor node, and observe analog voltage changes with illumination.
Probe across sense node and reference
Illumination modulates collector current, changing the sensed voltage across the load. This configuration establishes a clear baseline for assessing the light-dependent response.
IR diode power supply setup
IR diode powered on
The emitter is powered and positioned to control incident IR on the phototransistor for response testing. This enables repeatable stimulus while varying distance and angle.
Analog output changes with IR intensity
Increasing light increases conduction, lowering the sensed voltage across the load resistor. The transfer is monotonic within the tested range, enabling thresholding or analog filtering.
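A minimal readout sketch for this analog node might look like the following; the ADC pin and the 12-bit range are assumptions for an ESP32-class board:

```cpp
// Read the phototransistor's load-resistor node and print volts.
const int PIN_SENSE = A0;   // assumed ADC pin; adjust to your wiring

void setup() {
  Serial.begin(115200);
}

void loop() {
  int counts = analogRead(PIN_SENSE);      // 0..4095 on a 12-bit ADC
  float volts = counts * 3.3f / 4095.0f;   // scale to the 3.3 V reference
  Serial.println(volts, 3);                // more IR -> more conduction -> lower voltage
  delay(100);
}
```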
Adding simple and complex input devices to a custom development board, successfully reading sensor data, and implementing edge AI inference.
Using the development board designed in Week 4, I remilled the board to address an SDA/SCL short that prevented proper I²C communication for sensor visualization. The redesigned board incorporates an ESP32S3 with a pre-soldered camera module header in addition to the LED-button interface.
Implementing the button-LED system using code from Week 5 to verify board functionality. The system provides immediate visual feedback confirming proper GPIO configuration and signal handling.
Button LED system successfully operational on custom development board
Button input device demonstration showing LED response to button press with real-time state changes
Connecting the camera module to the ESP32S3 normally requires careful soldering of a small pin header under the microcontroller using solder paste and hot air, so we purchased a pre-soldered ESP32S3 camera module to avoid issues with the header. The camera integration enables real-time image capture and edge AI inference on-device.
Developed an edge AI model using Edge Impulse with a dataset of ~60 training images and ~60 testing images. The model is publicly available at Edge Impulse Studio.
Workflow methodology based on embedded architectures recitation session by Erik Strand, covering camera integration, image preprocessing, and neural network inference on ESP32S3.
Successfully captured images and obtained inference results via serial monitor. I²C display communication issues remain under investigation for the development board configuration.
18:42:41.622 -> Captured 160x120 (19200 bytes)
18:42:41.982 -> TTD: 12.103 years
18:42:41.622 -> E (158988) i2c.master: I2C transaction unexpected nack detected
18:42:41.622 -> E (158991) i2c.master: I2C transaction failed
While troubleshooting the display issue on the development board, I tested the camera inference system on the ESP32S3 camera subsystem from my final project, confirming successful edge AI inference functionality.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation.
Camera edge AI inference demonstration showing successful image capture and neural network inference on ESP32S3
Complete design files for camera edge AI inference system including Edge Impulse library, Arduino firmware, and camera configuration.
The Edge Impulse library contains the complete inference engine for running neural network models on the ESP32S3. The library includes the Edge Impulse SDK with support for various sensor inputs, neural network inference execution, signal processing utilities, and classification output handling.
The model was trained on Edge Impulse with ~60 training images and ~60 testing images. You can view and interact with the model online at this Edge Impulse Studio link.
The camera inference script captures images from the ESP32S3 camera module, processes them through the Edge AI model, and displays results on both the OLED screen and Serial Monitor.
BEGIN Setup
Initialize Serial communication
Configure LED and OLED display
Initialize camera module with grayscale format (160x120)
Load Edge AI inference library
END Setup
BEGIN Loop
Check for touch trigger OR 60-second timeout
IF trigger detected THEN
Turn LED ON to indicate capture
Capture image frame from camera
IF capture successful THEN
// Display on OLED with dithering
Scale image to 128x64
Apply contrast enhancement
Apply Floyd-Steinberg dithering
Display dithered image on OLED
// Process for Edge AI inference
Convert grayscale to RGB888
Prepare input buffer for neural network
Run Edge Impulse classifier
Get inference result (TTD value)
// Display results
Display TTD overlay on OLED
Print TTD value to Serial Monitor
ELSE
Show error message on OLED
ENDIF
Turn LED OFF
Return frame buffer to camera driver
ENDIF
Small delay (30ms)
END Loop
Key insights and learnings from working with input devices and sensors.
Acknowledgements and team roles for input devices work and references to related weeks.
Done together by Saleem, Mariam, Jyo, Guillermo, Alex, Jessica, Aijia, and of course, Anthony!
Camera display and Edge AI inference demonstrated in the final project camera subsystem. View camera subsystem
Button-LED implementation documented earlier in Week 5. View Week 5 individual assignment
Transparent documentation of AI assistance used in this week's input devices work.
Cursor AI assisted with structuring this page, refining the group assignment documentation, creating the two-row highlights, and polishing the individual assignment text. The transcript below captures the session focused on Week 7 updates and guidelines.
ChatGPT assisted in developing the camera inference code and troubleshooting the development board camera implementation. The AI provided guidance on camera module initialization, image capture optimization, Edge AI model integration, and I²C display communication debugging.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Output Devices
Key captures from the group and individual assignments with links to their sections.
This week focuses on exploring various output devices and actuators that can be integrated with microcontroller boards. We'll learn about different actuator types, power management, and how to control output devices effectively.
Training materials and documentation for output devices and multi-axis machining.
How to multi-axis mill (almost) anything. Understanding the capabilities and limitations of 3-axis and 5-axis machining for complex part fabrication.
Human fabrication techniques are inherently more 5-axis than 3-axis—consider using a Dremel to carve a pumpkin. Additional axes are decreasing in cost, making 5-axis machining increasingly accessible. Modern 5-axis machines (e.g., UMC series) now cost around $100k USD, similar to robotic arms with 3–6 degrees of freedom.
3-axis machining removes material through shearing. The load is applied to a bending beam from spindle to tool, where the rake angle determines tool sharpness and cutting efficiency.
Feed (inches per tooth, IPT) and surface speed (SFM) determine the processing window, balancing:
Workholding options include clamping, vacuum, adhesives, electrostatic, and magnetic methods. Roughing removes material quickly (can take up to 3 hours), limited by moving the part relative to the machine. Parts need to cool, and thermal expansion must be accounted for. The most accurate approach uses the largest tool that fits, not the smallest. Endmill entry is not flat (helical entry into material), and extra stock is used to account for beam bending of the endmill.
Roughing strategies use adaptive/pocket clearing with simulated stepdowns to balance time vs. inconsistencies. Maximum roughing time should be limited to ~30 minutes before moving on. Surfacing creates surface finishes using toolpath options: steep/shallow, scallop, or contour (you can bound the tool to specific areas in Fusion 360). Larger endmills are faster for surfacing operations.
5-axis machining enables reaching and positioning tools, repositioning workpieces for time and accuracy (without manual intervention), and improved surface finishing. The system is always limited by contact at every point in space.
There are two main approaches:
Limitations include CAM software processing speed (single-threaded operations), complexity of simulating tool movement relative to workholding and part interference, and managing remaining stock (Fusion 360) or rest material (Mastercam).
Understanding machine kinematics is critical—see the MAS.865 mechanical design resources for detailed analysis of how machines move in space.
With 5-axis capability, the distinction between mills and lathes becomes blurred. Mills hold the workpiece while the tool spins; lathes hold the tool stationary while spinning the workpiece. This flexibility reduces design limitations. Think of systems in terms of three components: the tool, workholding, and axes.
Tools: extrusion nozzles, grinding spindles, regular spindles, ultrasonic spindles (for glass), friction stir welding tools, knives, pizza cutters, hole punchers
Axes: different spindle configurations, different head designs
3-axis and 5-axis machining each have their place. The key is understanding which to use when, and how 5-axis capability breaks down barriers between traditional tool classifications, enabling more flexible and creative fabrication approaches.
Essential resources for output devices and actuator integration.
Comprehensive resource covering all output device categories including LEDs, displays (LCD, OLED, TFT), motors (DC, stepper, servo, brushless), speakers, solenoids, and more. Includes detailed tutorials, code examples, and implementation guides for various output device types.
Measuring power consumption of motors, servos, and stepper motors using adjustable power supplies and precision energy analyzers to understand performance characteristics and when to use each type.
We used an adjustable power supply set to 15 V and measured a quiescent power consumption of 450 mW (0.03 A at 15 V, since P = IV) before connecting any motors. Power measurements were conducted using the Joulescope (JS220) Precision Energy Analyzer to measure current, voltage, power, charge, and energy in real time.
Power supply set to 15V for motor testing
Joulescope JS220 Precision Energy Analyzer
Joulescope software interface
Using a potentiometer to change motor speed, we observed that below a certain threshold the motor doesn't have enough power to spin (no mechanical work, just electrical losses). Above this threshold, speed increases with power. Because of the motor's inductance, the current cannot change instantly (dI/dt is finite). The PWM signal from the microcontroller (roughly 50% duty cycle) is smoothed after passing through the motor driver, with more linear behavior on the rising edge than the falling edge.
Motor circuit connections
Complete motor testing setup
50% duty cycle PWM from microcontroller
Smoothed PWM signal after motor driver
Zoomed-in power measurement
Zoomed-out power measurement
All motors tested at 15V supply, 2A current limit, with the same potentiometer value for comparison. The potentiometer determines speed and power for DC motors, and position for servos.
Characteristics: Low-speed, high torque, no positional control
Large DC brushed motor measurement
Characteristics: High speed, medium torque, no positional control
Small DC brushed motor measurement and operation
Characteristics: Medium speed, medium torque, zero absolute positional control (relative positioning only)
Stepper motor measurement and operation
All servos tested at 5V supply, 2A current limit. Potentiometer determines position; power consumption remains relatively constant.
Characteristics: Absolute positional control, slower response
Large servo motor measurement
Characteristics: Absolute positional control, slower response, lower power
Micro servo motor measurement
DC brushed and stepper motors tested at 15V. Potentiometer controls speed for brushed motors, and step rate for stepper motor.
| Motor Type | Size | Current | Voltage | Power | Characteristics |
|---|---|---|---|---|---|
| DC Brushed | Large | 57 mA | 14.49 V | 0.8 W | Low-speed, high torque, no positional control |
| DC Brushed | Small | 0.34 A | 14.47 V | 4.86 W | High speed, medium torque, no positional control |
| Stepper | N/A | 2.0 A | 10.93 V | 22.33 W | Medium speed, medium torque, relative positioning only |
Servo motors tested at 5V. Potentiometer controls position; power consumption remains relatively constant regardless of position.
| Motor Type | Size | Current | Voltage | Power | Characteristics |
|---|---|---|---|---|---|
| Servo | Large | 10.6 mA | 4.99 V | 53.4 mW | Absolute positional control, slower response |
| Servo | Micro | 5.04 mA | 4.99 V | 25.05 mW | Absolute positional control, slower response, lower power |
Adding simple and complex output devices to the custom development board designed in Week 4, including a bright LED and OLED display for sensor visualization.
Implementing a bright LED output using the MP-3014-1100-50-80 from the Fab Lab inventory. This high-power LED requires careful current limiting to achieve maximum brightness while maintaining safe operating conditions.
For maximum brightness, we calculated the current limiting resistor for a 3.3 V supply driving an LED with a forward voltage (Vf) of 2.85 V. Using a 5 Ω resistor (the smallest available in the shop):
Step 1: Voltage across resistor
V_R = V_supply − V_f = 3.3 V − 2.85 V = 0.45 V
Step 2: LED current
I = V_R / R = 0.45 V / 5 Ω = 0.09 A = 90 mA
Step 3: Resistor power dissipation
P_R = V_R × I = 0.45 V × 0.09 A = 0.0405 W
Step 4: LED power dissipation
P_LED = V_f × I = 2.85 V × 0.09 A = 0.2565 W
Result: At 3.3 V with a 5 Ω resistor, the LED draws approximately 90 mA, which is well below the 150 mA maximum rating. This provides slightly reduced brightness compared to maximum, but significantly extends lifespan and reduces heat generation.
| Resistor (Ω) | LED Current (mA) | Power in Resistor (W) | Notes |
|---|---|---|---|
| 3.0 Ω | 150 mA | 0.0675 W | Max brightness, close to rated max current (hot) |
| 3.9 Ω | ~115–120 mA | ~0.055 W | Good balance: bright but less stress |
| 4.5 Ω | ~100 mA | ~0.045 W | Cooler, longer life |
| 5.0 Ω | ~90 mA | ~0.0405 W | Selected: even cooler, ~0.26 W in LED, very safe thermally |
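The short snippet below (not project code) recomputes the table's operating points from the same V_supply = 3.3 V and V_f = 2.85 V:

```cpp
#include <cstdio>

int main() {
    const double vsup = 3.3, vf = 2.85;              // supply and LED forward voltage
    const double resistors[] = {3.0, 3.9, 4.5, 5.0}; // values from the table
    for (double r : resistors) {
        double vr = vsup - vf;                       // voltage across the resistor (V)
        double i  = vr / r;                          // LED current (A)
        std::printf("R=%.1f ohm: I=%5.1f mA, P_R=%.4f W, P_LED=%.3f W\n",
                    r, i * 1e3, vr * i, vf * i);
    }
    return 0;
}
```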
Bright LED test demonstration
Bright LED circuit on development board
Implementing an SSD1306 OLED display for visualizing sensor data. The I²C interface requires two 10 kΩ pull-up resistors—one between 3.3 V and SDA, and one between 3.3 V and SCL. These were soldered onto the connectors of the development board to enable proper I²C communication, which was previously failing in earlier weeks without these pull-up resistors.
Development board with OLED display connected, showing pull-up resistors soldered to I²C connectors
Standard OLED connections: VCC to 3.3 V on ESP32S3, GND to GND, SDA to SDA (pin 5), and SCL to SCL (pin 6).
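A minimal bring-up sketch matching this wiring, assuming the Adafruit SSD1306 library, might look like:

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);   // 128x64 panel, no reset pin

void setup() {
  Serial.begin(115200);
  Wire.begin(5, 6);                             // SDA = 5, SCL = 6 on this board
  if (!display.begin(SSD1306_SWITCHCAPVCC, 0x3C)) {
    Serial.println("SSD1306 not found - check pull-ups and wiring");
    while (true) delay(10);
  }
  display.clearDisplay();
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println("OLED Ready!");
  display.display();
}

void loop() {}
```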
Displaying real-time accelerometer data from the MPU6050 sensor on the OLED screen. This implementation required working SDA and SCL connections with proper pull-up resistors, which were finally fixed this week after troubleshooting I²C communication issues.
Accelerometer data displayed on OLED screen showing acceleration and gyroscope values
Displaying camera capture output on the OLED screen with Edge AI inference results. SDA and SCL were not working in previous weeks without the pull-up resistors. While troubleshooting, camera output was verified through the serial monitor, and the input device functionality was confirmed by testing on Quentin's board. With the pull-up resistors now in place, the OLED display successfully shows camera frames and inference results.
Camera output with Edge AI inference displayed on OLED screen showing TTD (time-to-death) prediction
Complete design files for sensors display and camera dev board implementations including Arduino firmware and configuration files.
The sensors display script reads data from multiple sensors (MPU6050 accelerometer/gyroscope and HX711 load cell) and displays the values on both the OLED screen and Serial Monitor. The system activates data collection when a button is pressed.
BEGIN Setup
Initialize Serial communication (115200 baud)
Configure button pin (INPUT_PULLUP) and LED pin (OUTPUT)
Initialize I2C bus (SDA=5, SCL=6) at 100kHz
Initialize OLED display (SSD1306, 128x64, I2C address 0x3C)
Initialize MPU6050 accelerometer/gyroscope (try 0x68, then 0x69)
Initialize HX711 load cell amplifier
Set calibration factor and tare (zero) the scale
Display "Setup complete!" message on OLED
END Setup
BEGIN Loop
Read button state
IF button pressed THEN
Turn LED ON
IF not already collecting THEN
Start data collection mode
Print "=== START DATA COLLECTION ===" to Serial
ENDIF
IF 500ms have passed since last read THEN
Read accelerometer/gyroscope data (acceleration x,y,z, gyro x,y,z, temperature)
Read load cell data (raw value and weight in grams)
Print all sensor values to Serial Monitor
Clear OLED display
Display formatted sensor data on OLED:
- Acceleration X, Y
- Acceleration Z, Temperature
- Gyroscope X, Y
- Weight (grams)
Update OLED display
Update last read timestamp
ENDIF
ELSE
IF was collecting THEN
Stop data collection mode
Print "=== STOP DATA COLLECTION ===" to Serial
Display "Data collection stopped." on OLED
ENDIF
Turn LED OFF
ENDIF
Small delay (10ms)
END Loop
The camera dev board script captures images from the ESP32S3 camera module, processes them through an Edge AI model, and displays both the camera feed and inference results on the OLED screen. The code requires the camera_pins.h header file which defines GPIO pin mappings for the XIAO ESP32S3 Sense camera module.
BEGIN Setup
Initialize Serial communication (115200 baud)
Configure LED and button pins
Initialize OLED display (SSD1306, 128x64, I2C on pins 5, 6)
Display "OLED Ready!" message
Configure camera module:
- Load camera pin definitions from camera_pins.h
- Set frame size to QQVGA (160x120)
- Set pixel format to grayscale
- Configure frame buffer location (PSRAM)
- Set JPEG quality and frame count
- Configure grab mode (LATEST)
Initialize camera with configuration
Apply vertical flip and horizontal mirror settings
IF camera initialization successful THEN
Set camera_ready flag
Display "Camera Ready - Press Button" on OLED
Set initial capture timestamp
ELSE
Display "Camera init failed" on OLED
ENDIF
END Setup
BEGIN Loop
IF camera not ready THEN
Delay and return
ENDIF
Read button state
IF button pressed (edge triggered) THEN
Set capture trigger flag
ENDIF
Control LED based on button state
IF capture triggered OR 60 seconds elapsed THEN
Reset capture trigger
Update last capture timestamp
Turn LED ON
Display "Capturing..." on OLED
Capture image frame from camera
Turn LED OFF
IF capture successful THEN
Print capture info to Serial (width, height, bytes)
Process and display image on OLED:
- Scale 160x120 image to 128x64 OLED size
- Apply contrast enhancement
- Apply Floyd-Steinberg dithering
- Draw dithered image to OLED
Convert grayscale frame to RGB888 format
Prepare input buffer for Edge AI model
Run Edge Impulse classifier
Get inference result (TTD value in years)
Draw TTD result box overlay on OLED
Print TTD value to Serial Monitor
Return frame buffer to camera driver
ELSE
Print "Capture failed" to Serial
Display "Capture failed" on OLED
ENDIF
ENDIF
Small delay (30ms)
END Loop
The camera_pins.h header file defines GPIO pin mappings for various ESP32 camera models. For the XIAO ESP32S3 Sense, it configures pins for the camera data lines (Y2-Y9), control signals (XCLK, PCLK, VSYNC, HREF), and I²C interface (SIOD, SIOC). Make sure this file is added as a separate tab in the Arduino IDE or otherwise included in your project.
Key insights and learnings from working with output devices and power measurement.
Acknowledgements and team roles for output devices work.
Power measurement of motors, servos, and stepper motors conducted together by the week 8 team with guidance from instructors.
ChatGPT assisted with the LED current limiting resistor calculation, with correction from Anthony when the initial calculation assumed 5V instead of 3.3V. View ChatGPT conversation
Transparent documentation of AI assistance used in this week's output devices work.
Cursor AI assisted with creating the complete Week 8 documentation page, including the multi-axis machining training section, group assignment power measurement documentation, individual assignment sections for bright LED and OLED display implementations, motor comparison tables, and all formatting and styling. The AI helped refine and organize raw notes into professional, succinct content while preserving key technical details.
ChatGPT assisted with calculating the current limiting resistor for the bright LED. Initially, the AI assumed a 5V supply, but after correcting it to 3.3V with Anthony's guidance, it provided accurate calculations for voltage drop, current, and power dissipation. This helped determine that a 5Ω resistor (the smallest available in the shop) would safely drive the LED at ~90mA, below the 150mA maximum rating.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Networking & Communications
Key captures from the group and individual assignments with links to their sections.
This week focuses on connecting devices through wired and wireless networks. We'll explore communication protocols, network addressing, and how to build networked systems with local input and output devices.
Training materials and documentation for networking and communications protocols.
View original notes on Slack • Recitation link
Understanding failure modes is critical for robust system design. See the class system integration page for detailed failure mode documentation and links.
The contrast between functional boxes (PCs) and designed products (Macs) illustrates the importance of user experience design. This connects to "How to Design (Almost) Anything" and "How to Make (Almost) Anything Usable" classes.
hci.rwth-aachen.de/fabusability — Computer science, design, and psychology intersect in usability research. Usability focuses on designing systems that work great for people, whether in projects, startups, or products like FabFame.
Literature: The Design of Everyday Things (intro to human computer interaction). See hci.rwth-aachen.de/fabusability for more resources.
Route wiring appropriately and mount circuit boards in a box—don't just throw them in! See examples from Gianfranco Paluto on Fab Academy. Consider mechanism alignment and surface finish.
Prevent defects (QA) and detect defects (QC). Methods include shake and drop tests, power cycling, environmental testing, and random inputs.
Details and links available on the class system integration page:
Fail Fast!!! Feed-forward vs. feedback development. Murphy's Law (named for a rocket test engineer): "Anything that can go wrong, will go wrong." This means you have to anticipate how things will fail and prevent it.
Design and document the system integration for your final project:
Good example: Fab Academy 2016 system integration assignment
Networking and communications enable devices to exchange data through wired and wireless connections. Understanding different protocols helps select the right communication method for each application.
Devices on networks require unique addresses for identification and communication:
Essential resources for networking and communications protocols.
Networking and Communications - MIT Academy
Comprehensive resource covering wired protocols (UART, I²C, SPI, USB, Ethernet), wireless protocols (Wi-Fi, Bluetooth, LoRa, RF), network addressing, OSI layers, modulation techniques, channel sharing methods, error detection/correction, and Internet protocols. Includes detailed tutorials, code examples, and implementation guides for various communication protocols.
Design, build, and connect wired or wireless node(s) with network or bus addresses and local input &/or output device(s). This assignment demonstrates two networked systems: an ESP-NOW multiplayer game and a Wi-Fi camera streaming system with edge AI processing.
A collaborative multiplayer game featuring geometric shape interactions, synchronized gameplay between two ESP32-S3 devices using ESP-NOW wireless communication. The system uses MAC address-based player identification and real-time state synchronization for coordinated gameplay. See Week 2 documentation and design files for complete details.
The system consists of two XIAO ESP32-S3 development boards, each with an SSD1306 OLED display (128x64) and capacitive touch sensors. The boards communicate wirelessly using ESP-NOW protocol, enabling low-latency peer-to-peer communication without requiring a Wi-Fi access point.
Network Type: Wireless peer-to-peer (ESP-NOW)
Protocol: ESP-NOW (Espressif's proprietary low-power wireless communication protocol)
Addressing Scheme: MAC address-based device identification
Shape Song Swing Along educational content integration and gameplay
Circle and square shape interaction gameplay mechanics
Each ESP32-S3 board connects to an OLED display via I²C (SDA pin 5, SCL pin 6) and reads touch sensor inputs from GPIO pins. The ESP-NOW communication is handled entirely through the ESP32-S3's built-in Wi-Fi radio, requiring no external hardware connections.
See Network Address Tables for complete device addressing information including MAC addresses for both player devices.
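A stripped-down version of this communication pattern, assuming the ESP32 Arduino core's esp_now API and borrowing Player 2's MAC address from the tables below, might look like the sketch here; the NetPacket payload is illustrative:

```cpp
#include <WiFi.h>
#include <esp_now.h>
#include <string.h>

uint8_t peerMac[] = {0xD8, 0x3B, 0xDA, 0x75, 0xE1, 0x9C};  // Player 2 (from tables)

struct NetPacket { int16_t x, y; bool onGround; };          // illustrative payload

void setup() {
  WiFi.mode(WIFI_STA);                 // ESP-NOW runs in station mode
  esp_now_init();
  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, peerMac, 6);
  peer.channel = 0;                    // use the current Wi-Fi channel
  esp_now_add_peer(&peer);
}

void loop() {
  NetPacket pkt = {64, 32, true};
  esp_now_send(peerMac, (uint8_t *)&pkt, sizeof(pkt));
  delay(30);                           // ~33 packets/s, matching the game loop
}
```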
A Wi-Fi-enabled camera system using ESP32-S3's built-in camera and HTTP server capabilities to stream JPEG frames over Wi-Fi using MJPEG (Motion JPEG) protocol. The system also includes edge AI face detection using a FOMO (Faster Objects, More Objects) model from Edge Impulse for real-time person tracking. See Week 10 documentation and design files for complete details.
The system uses a XIAO ESP32-S3 Sense development board with an integrated camera module. The camera captures frames at QQVGA (160x120) or QVGA resolution, processes them through an Edge AI inference pipeline, and streams the results over Wi-Fi via HTTP multipart response. The system can also display camera output and inference results on an OLED display connected via I²C.
Network Type: Wireless local area network (Wi-Fi)
Protocol: HTTP over Wi-Fi (802.11), MJPEG streaming
Addressing Scheme: IP address (DHCP-assigned) and MAC address (hardware identifier)
The camera module is integrated directly on the XIAO ESP32-S3 Sense board, connected via parallel data bus (Y2-Y9), control signals (XCLK, PCLK, VSYNC, HREF), and I²C interface (SIOD, SIOC). The OLED display connects via I²C (SDA pin 5, SCL pin 6) with pull-up resistors. Wi-Fi connection is established through the ESP32-S3's built-in Wi-Fi radio.
See Network Address Tables for complete device addressing information including MAC and IP addresses for the camera system.
To connect the ESP32-S3 camera system to the MIT Wi-Fi network, you need to generate a device-specific password. The MIT network uses WPA2-PSK authentication for devices that are not 802.1X compatible.
const char* ssid = "MIT";
const char* password = "insert generated password from steps above";
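The rest of the connection sequence is standard for the ESP32 Arduino core; a hedged minimal sketch:

```cpp
#include <WiFi.h>

const char* ssid = "MIT";
const char* password = "insert generated password from steps above";

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {   // wait for association and DHCP
    delay(500);
    Serial.print(".");
  }
  Serial.print("\nConnected, IP: ");
  Serial.println(WiFi.localIP());           // browse to this address for the stream
}

void loop() {}
```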
A custom PCB design integrating the XIAO ESP32-S3 with the Wio-SX1262 LoRa radio module for Meshtastic communication, connected to a MAX30102 pulse oximeter sensor. The system enables long-range wireless communication using LoRa (Long Range) radio technology and the Meshtastic mesh networking protocol for decentralized device-to-device messaging.
This project draws inspiration from the Invisible String book series by Patrice Karst, which explores the concept of invisible connections that link us to the people we love, even when we're far apart. The book series and workbook include activities for making an invisible string chain of hearts, symbolizing these connections. This long-range radio system embodies that concept by creating invisible wireless links between devices across distances, enabling communication and data sharing through the Meshtastic mesh network.
I designed a custom PCB that integrates the XIAO ESP32-S3 & Wio-SX1262 Kit for Meshtastic & LoRa (see usage documentation) with a MAX30102 Heart Rate Sensor Module for pulse oximetry. The design process followed the same PCB design workflow established in Week 4, using Fusion 360 Electronics for schematic capture and PCB layout.
The MAX30102 sensor integrates red and infrared LEDs, a photodetector, and low-noise electronics for heart rate and blood oxygen (SpO2) measurement. Reference schematics were sourced from the Amazon product page and the Analog Devices MAX30102 datasheet.
Component selection required careful library management in Fusion 360 Electronics. I started by identifying the correct through-hole header variants needed for the XIAO ESP32-S3 and Wio-SX1262 modules, ensuring proper pin spacing and mechanical compatibility.
Accurate spacing between components was critical for proper module alignment. I used physical measurements from breadboard prototypes and Fusion 360's measurement tools to ensure correct header placement and pin spacing.
The schematic integrates the MAX30102 sensor with pull-up resistors for I²C communication, connects power and ground planes, and routes all necessary signals between the ESP32-S3, Wio-SX1262, and sensor modules.
PCB routing required careful attention to signal integrity, especially for I²C lines and power distribution. I iterated from initial routing to clean, labeled connections, then performed design rule checks to ensure manufacturability.
The final design includes a complete schematic, PCB layout, and 3D CAD rendering showing the assembled board with all modules and components.
The board was milled using the Othermill (Bantam Tools) in room 38-501, following the electronics production protocol established in Week 5. The milling process required careful tool selection, proper depth settings, and multiple passes for traces and isolation.
Through-hole component soldering required advanced techniques using a board stand, positioning the board vertically or horizontally depending on component location. This approach improved access to pads and ensured consistent solder joints.
Milling errors resulted in copper "hairs" creating unintended shorts between traces. I used continuity mode on a multimeter to probe for shorts and an exacto knife to carefully cut away the excess copper, restoring proper isolation between traces.
In the end, I had to isolate the common ground on the interrupt side so that RD (red LED) and IRD (infrared LED) grounding terminals are isolated from the rest of the GND plane. This isolation may be necessary for more accurate sensor readings by preventing ground loops and interference.
Network Type: Long-range wireless mesh network (LoRa)
Protocol: Meshtastic (open-source mesh networking protocol over LoRa radio)
Radio Module: Wio-SX1262 (Semtech SX1262 LoRa transceiver)
Addressing Scheme: Meshtastic node ID and user ID for device identification
The Wio-SX1262 module provides long-range, low-power wireless communication using LoRa (Long Range) spread spectrum modulation. Meshtastic runs on top of LoRa, creating a decentralized mesh network where devices can relay messages to extend range. The system uses the ESP32-S3 as the main processor, running Meshtastic firmware that manages the LoRa radio and mesh networking protocols.
The MAX30102 sensor uses optical reflection (PPG - photoplethysmography) to detect heart rate and blood oxygen levels. It integrates red (660nm) and infrared (880nm) LEDs with a photodetector to measure changes in blood volume through the skin.
The XIAO ESP32-S3 connects to the Wio-SX1262 module via SPI interface for LoRa radio control. The MAX30102 sensor connects via I²C (SDA and SCL lines) with pull-up resistors. The OLED display also connects via I²C on the same bus. Power is distributed from the ESP32-S3's 3.3V and 5V rails to all modules. The interrupt pin (INT) from MAX30102 can be connected for event-driven data acquisition, and the RD/IRD pins are isolated from the main ground plane for improved sensor accuracy.
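For bench-testing the sensor outside the Meshtastic firmware, a minimal raw readout could look like this sketch, assuming the SparkFun MAX3010x Arduino library (which supports the MAX30102); the I²C pins follow the wiring used elsewhere on this page:

```cpp
#include <Wire.h>
#include <MAX30105.h>   // SparkFun MAX3010x library

MAX30105 pulseOx;

void setup() {
  Serial.begin(115200);
  Wire.begin(5, 6);                    // assumed SDA/SCL pins; adjust to your board
  if (!pulseOx.begin(Wire)) {
    Serial.println("MAX30102 not found - check wiring and pull-ups");
    while (true) delay(10);
  }
  pulseOx.setup();                     // default LED currents and sample rate
}

void loop() {
  Serial.print("IR=");
  Serial.print(pulseOx.getIR());       // raw infrared reflectance
  Serial.print(" RED=");
  Serial.println(pulseOx.getRed());    // raw red reflectance
  delay(100);
}
```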
See Network Address Tables for complete device addressing information including Node Number, User ID, and Device Name for the Meshtastic network.
Complete network addressing information for all connected devices, including MAC addresses (hardware identifiers) and IP addresses (network identifiers) for Wi-Fi-connected devices.
| Device | Component | MAC Address | Network Type | Input Device | Output Device |
|---|---|---|---|---|---|
| Player 1 | XIAO ESP32-S3 | D8:3B:DA:75:05:AC | ESP-NOW (P2P) | Touch sensors (6 pads) | OLED (SSD1306, 0x3C) |
| Player 2 | XIAO ESP32-S3 | D8:3B:DA:75:E1:9C | ESP-NOW (P2P) | Touch sensors (6 pads) | OLED (SSD1306, 0x3C) |
| Device | Component | MAC Address | IP Address | Network Type | Input Device | Output Device |
|---|---|---|---|---|---|---|
| Camera Node | XIAO ESP32-S3 Sense | [Device MAC] | 192.168.x.x (DHCP) | Wi-Fi (802.11) | Camera module, Button | OLED (SSD1306, 0x3C), HTTP Server |
| Client Device | Web Browser | [Client MAC] | 192.168.x.x (DHCP) | Wi-Fi (802.11) | N/A | Display (receives MJPEG stream) |
| Device | Component | Node Number | User ID | Device Name | Network Type | Input Device | Output Device |
|---|---|---|---|---|---|---|---|
| Invisible String Node | XIAO ESP32-S3 + Wio-SX1262 | 3665045004 | !da74260c | Meshtastic 260c | LoRa (Meshtastic) | MAX30102 Heart Rate Sensor | OLED (SSD1306, 0x3C) |
Note: MAC addresses are hardware identifiers unique to each ESP32-S3 device. IP addresses are assigned dynamically via DHCP when connecting to a Wi-Fi network. For ESP-NOW communication, only MAC addresses are used (no IP addresses required). The camera system's IP address is printed to the Serial Monitor upon Wi-Fi connection establishment. For Meshtastic networks, devices are identified by Node Number (unique identifier), User ID (hexadecimal user identifier), and Device Name (user-configurable name). The LoRa radio uses spread spectrum modulation for long-range, low-power communication without requiring IP addresses.
Send a message between two projects. This assignment demonstrates inter-project communication using wired or wireless networking protocols.
The CircleSquareCamera Augmented Reality system extends the ESP-NOW multiplayer game from Week 2 by integrating camera boards that receive game state updates and overlay player positions on live camera feeds.
For detailed documentation of the AI-assisted development process, see AI-Assisted Camera Integration Game Update in the Ethical AI Use section.
An augmented reality system that combines the ESP-NOW multiplayer game CircleSquareShapeSongSwingalong (documented in Week 2 and Week 11 individual assignment) with camera subsystems from the final project. The system enables game boards to send player state updates via ESP-NOW to camera boards, which overlay game characters (square and circle) on live camera feeds displayed on OLED screens. Camera code development spans Week 7, Week 8, Final Project, and Week 12 (placeholder link).
Development Note: Initial implementation attempted to send camera data from camera boards to game boards. After two hours of troubleshooting communication issues, the architecture was reversed: the final system sends game state packets from game boards to camera boards via ESP-NOW. This approach proved more efficient for real-time AR overlay visualization, as game state is lightweight compared to camera frame data.
The system consists of two networked subsystems: game boards (XIAO ESP32-S3 with touch sensors and OLED displays) and camera boards (XIAO ESP32-S3 Sense with integrated cameras and OLED displays). Game boards run the CircleSquareShapeSongSwingalong multiplayer game, processing touch input, updating player physics, and broadcasting player state packets via ESP-NOW to both the other game board and all camera boards. Camera boards receive these packets, capture camera frames, process them through Floyd–Steinberg dithering for OLED display, and overlay game character positions (square for Player 1, circle for Player 2) on the live feed.
Network Type: Wireless peer-to-peer (ESP-NOW)
Protocol: ESP-NOW (Espressif's proprietary low-power wireless communication protocol)
Addressing Scheme: MAC address-based device identification for game boards and camera boards
Communication Pattern: One-to-many broadcast from game boards to peer game board and all camera boards
Game Boards: Each ESP32-S3 board connects to an OLED display via I²C (SDA pin 5, SCL pin 6) and reads touch sensor inputs from GPIO pins. ESP-NOW communication is handled through the ESP32-S3's built-in Wi-Fi radio, requiring no external hardware connections.
Camera Boards: The camera module is integrated directly on the XIAO ESP32-S3 Sense board, connected via parallel data bus (Y2-Y9), control signals (XCLK, PCLK, VSYNC, HREF), and I²C interface (SIOD, SIOC). The OLED display connects via I²C (SDA pin 5, SCL pin 6) with pull-up resistors. ESP-NOW reception uses the ESP32-S3's built-in Wi-Fi radio in receive-only mode.
Complete network addressing information for all devices in the CircleSquareCamera AR system, including MAC addresses (hardware identifiers) for ESP-NOW communication.
| Device | Component | MAC Address | Network Type | Input Device | Output Device |
|---|---|---|---|---|---|
| P1 | XIAO ESP32-S3 | D8:3B:DA:75:05:AC | ESP-NOW (P2P) | Touch sensors (6 pads) | OLED (SSD1306, 0x3C) |
| P2 | XIAO ESP32-S3 | D8:3B:DA:75:E1:9C | ESP-NOW (P2P) | Touch sensors (6 pads) | OLED (SSD1306, 0x3C) |
| Camera A | XIAO ESP32-S3 Sense | B8:F8:62:F9:E2:C0 | ESP-NOW (Receive) | Camera module, ESP-NOW packets | OLED (SSD1306, 0x3C) |
| Camera B | XIAO ESP32-S3 Sense | B8:F8:62:F9:D6:38 | ESP-NOW (Receive) | Camera module, ESP-NOW packets | OLED (SSD1306, 0x3C) |
Note: MAC addresses are hardware identifiers unique to each ESP32-S3 device. Game boards broadcast player state packets to both the peer game board and all camera boards. Camera boards operate in receive-only mode, processing incoming ESP-NOW packets to identify player positions and overlay them on camera feeds.
Complete Arduino code for game boards and camera boards, including ESP-NOW communication, camera feed processing, and AR overlay implementation.
Complete design files for networked systems including ESP-NOW multiplayer game and Wi-Fi camera streaming implementations with Arduino firmware and configuration files.
Arduino code for ESP-NOW wireless multiplayer game with MAC address-based player identification and synchronized game state management.
Arduino code for ESP32-S3 camera livestreaming over Wi-Fi using MJPEG protocol and Edge AI face detection using FOMO models from Edge Impulse.
SETUP:
1. Initialize Serial communication (115200 baud)
2. Configure camera pins (from camera_pins.h)
3. Create camera_config_t structure
4. Initialize camera with esp_camera_init()
5. Connect to Wi-Fi network
6. Start HTTP server with stream handler
STREAM_HANDLER:
1. Set HTTP response type to "multipart/x-mixed-replace"
2. Enter loop: capture frame, send via HTTP, repeat
LOOP:
- Minimal delay to allow other tasks
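A hedged C++ sketch of that stream handler, using the ESP-IDF HTTP server bundled with the ESP32 Arduino core (the boundary string is an assumption, and the camera must be configured for JPEG output):

```cpp
#include <cstdio>
#include <esp_camera.h>
#include <esp_http_server.h>

static esp_err_t stream_handler(httpd_req_t *req) {
  httpd_resp_set_type(req, "multipart/x-mixed-replace; boundary=frame");
  while (true) {
    camera_fb_t *fb = esp_camera_fb_get();          // grab a JPEG frame
    if (!fb) return ESP_FAIL;
    char part[64];
    int n = snprintf(part, sizeof(part),
                     "--frame\r\nContent-Type: image/jpeg\r\n\r\n");
    bool ok = httpd_resp_send_chunk(req, part, n) == ESP_OK &&
              httpd_resp_send_chunk(req, (const char *)fb->buf, fb->len) == ESP_OK &&
              httpd_resp_send_chunk(req, "\r\n", 2) == ESP_OK;
    esp_camera_fb_return(fb);                       // return the buffer to the driver
    if (!ok) break;                                 // client disconnected
  }
  return ESP_OK;
}
```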
Arduino code for capturing camera frames and streaming them to an OLED display using Floyd–Steinberg dithering for high-quality monochrome rendering. This code demonstrates real-time image processing and display optimization for low-resolution OLED screens.
SETUP:
1. Initialize Serial communication (115200 baud)
2. Initialize I²C bus for OLED display
3. Initialize OLED display (128x64, I²C address 0x3C)
4. Configure camera pins using camera_pins.h definitions
5. Initialize camera with QQVGA resolution (160x120)
6. Set pixel format to GRAYSCALE
7. Display initialization status on OLED
LOOP:
1. Capture camera frame using esp_camera_fb_get()
2. Process frame through showDitheredPreview():
a. Downsample camera frame (160x120) to OLED resolution (128x64)
b. Average pixel values in each downsampled region
c. Store results in gray_buffer array
d. Normalize brightness values to full range
e. Apply Floyd–Steinberg dithering algorithm
f. Render dithered result to OLED display
3. Return camera frame buffer using esp_camera_fb_return()
4. Repeat continuously for live preview
DITHERING ALGORITHM (Floyd–Steinberg):
For each pixel from top-left to bottom-right:
1. Quantize current pixel (0 or 255)
2. Calculate quantization error
3. Distribute error to neighboring pixels:
- Right: 7/16 of error
- Bottom-left: 3/16 of error
- Bottom: 5/16 of error
- Bottom-right: 1/16 of error
This code provides a foundation for displaying camera feeds on OLED displays. The Floyd–Steinberg dithering algorithm distributes quantization errors across neighboring pixels, creating smooth gradients and improved visual quality compared to simple thresholding. The code uses grayscale format for efficient processing, and the downsampling step averages multiple camera pixels into each OLED pixel to maintain image clarity.
This implementation serves as a building block for the CircleSquareCamera AR system, where camera feeds are combined with game state overlays. See Ethical AI Use section for ChatGPT co-development transcript.
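For concreteness, a compact C++ implementation of this error-diffusion pass (a sketch, not the exact project code) operating in place on an 8-bit grayscale buffer:

```cpp
#include <cstdint>

void floydSteinberg(uint8_t *img, int w, int h) {
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      int idx = y * w + x;
      int oldPx = img[idx];
      int newPx = (oldPx < 128) ? 0 : 255;           // quantize to 1 bit
      img[idx] = (uint8_t)newPx;
      int err = oldPx - newPx;                       // quantization error
      auto spread = [&](int dx, int dy, int num) {   // push err * num/16 to a neighbor
        int nx = x + dx, ny = y + dy;
        if (nx < 0 || nx >= w || ny >= h) return;
        int v = img[ny * w + nx] + err * num / 16;
        img[ny * w + nx] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
      };
      spread( 1, 0, 7);   // right:        7/16
      spread(-1, 1, 3);   // bottom-left:  3/16
      spread( 0, 1, 5);   // bottom:       5/16
      spread( 1, 1, 1);   // bottom-right: 1/16
    }
  }
}
```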
Complete Arduino code for the CircleSquareCamera AR system, including game board code that broadcasts player states via ESP-NOW, and camera board code that receives game packets and overlays player positions on live camera feeds.
Camera board receives ESP-NOW packets from game boards, captures camera frames, processes them through dithering, and overlays game character positions (square for P1, circle for P2) on the live feed displayed on OLED.
SETUP:
1. Initialize Serial, I²C, OLED display
2. Initialize ESP-NOW in receive-only mode
3. Register ESP-NOW receive callback
4. Initialize camera (QQVGA, GRAYSCALE)
5. Determine camera ID from MAC address (A or B)
6. Display camera ready status
ESPNOW_RECEIVE_CALLBACK:
1. Check if packet is correct size (NetPacket)
2. Extract player state from packet
3. Identify player (P1 or P2) based on sender MAC address
4. Store player state in p1 or p2 variable
5. Set gotP1 or gotP2 flag
LOOP:
1. Capture camera frame (esp_camera_fb_get)
2. Process frame through makeFrame():
a. Downsample camera (160x120) to 64x32
b. Normalize brightness
c. Apply Floyd–Steinberg dithering
d. Rotate 180° for correct orientation
3. Draw camera feed to OLED (2x2 pixel blocks)
4. Overlay game characters if received:
- Draw square at p1.x, p1.y for Player 1
- Draw circle at p2.x, p2.y for Player 2
5. Display frame on OLED
6. Return camera frame buffer
Extended version of the CircleSquareShapeSongSwingalong game that broadcasts player state packets to both peer game boards and camera boards via ESP-NOW. This enables real-time AR visualization of gameplay on camera displays.
SETUP:
1. Initialize Serial, OLED display
2. Set WiFi mode to STA
3. Read self MAC address
4. Initialize ESP-NOW
5. Identify player role (P1 or P2) from MAC address
6. Add peers: peer game board + Camera A + Camera B
7. Register ESP-NOW send/receive callbacks
8. Initialize touch sensors (warm-up readings)
9. Reset player states (left/right positioning)
LOOP:
1. Update touch sensor readings
2. Process player input (left, right, jump)
3. Update player physics (movement, gravity, collision)
4. Create NetPacket with player state and ready flag
5. Broadcast packet to all peers:
- Other game board (peerMac)
- Camera A (camA_MAC)
- Camera B (camB_MAC)
6. Render game state to OLED:
- Draw ground line
- Draw own character (square if P1, circle if P2)
- Draw other player character
- Display "Waiting Player" if other not ready
7. Delay for frame timing (30ms)
This code extends the original CircleSquareShapeSongSwingalong game by adding camera board peers to the ESP-NOW network. The game broadcasts player state packets (position, velocity, onGround flag) to enable AR overlay on camera displays. The camera boards use these packets to draw game characters at the correct positions relative to the camera feed, creating an augmented reality experience. Player identification is done via MAC address comparison, with P1 using square representation and P2 using circle representation. See Ethical AI Use section for ChatGPT co-development transcript.
Key insights and learnings from working with networking and communications protocols.
Acknowledgements and team roles for networking and communications work.
Developed CircleSquareShapeSongSwingalong multiplayer game with ESP-NOW wireless communication during Week 2. See Week 2 documentation for details.
Developed camera livestreaming and Edge AI face detection system during Week 10. See Week 10 documentation for details.
Transparent documentation of AI assistance used in this week's networking and communications work.
Used ChatGPT for collaborative development of the CircleSquareCamera augmented reality system, integrating ESP-NOW communication between game boards and camera boards. The AI assisted with ESP-NOW packet structure design, camera feed dithering algorithms, AR overlay implementation, and real-time synchronization of player states with camera displays. This extends the original CircleSquareShapeSongSwingalong multiplayer game from Week 2 with augmented reality visualization.
ChatGPT Co-Development Transcript:
Cursor AI assisted with comprehensive updates to Week 11 documentation, including ethical AI use section updates, design files documentation with pseudocode and key features, group assignment CircleSquareCamera AR section development, network address tables, and overall page organization. The AI helped refine technical content, create professional code documentation, add media captions and descriptions, and organize information for optimal readability and aesthetic presentation.
The AI was used to:
Complete AI Assistance Transcript:
Cursor AI assisted with creating the complete Week 11 documentation page, including networking protocols overview, group assignment CircleSquareCamera AR documentation, individual assignment sections summarizing ESP-NOW multiplayer game and Wi-Fi camera streaming systems, network address tables, and all formatting and styling. The AI helped refine and organize content from Week 2 and Week 10 into professional, succinct networking documentation while preserving key technical details.
The AI was used to:
Complete AI Assistance Transcript:
Cursor AI assisted with updating the Week 11 recitation notes section with comprehensive system integration content. The AI helped refine and organize raw notes from the recitation session into a well-structured, professional format with proper headings, bullet points, and external links. The updated section includes usability principles from Dr. Jan Borchers's session, failure modes analysis, testing strategies, repair considerations, lifecycle planning, and assignment details.
The AI was used to:
Complete AI Assistance Transcript:
Cursor AI assisted with comprehensive updates to the Week 11 individual assignment section, including MIT Wi-Fi connection instructions, Networked System 3 (Long Range Radio Board Design, Fabrication, and Connection to Meshtastic), and Network Address Tables. The AI helped refine technical content, organize documentation with proper structure, add detailed captions and descriptions to all images and videos, and create professional documentation following the established template format.
The AI was used to:
Complete AI Assistance Transcript:
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Interface & Application Programming
Key captures from the group and individual assignments with links to their sections.
This week focuses on creating user interfaces and applications that connect users with input and output devices. We'll explore interface design patterns, application programming techniques, and how to build interactive systems that provide meaningful user experiences.
Training materials and documentation for interface and application programming.
No recitation this week.
Essential resources for interface and application programming.
Interface and Application Programming - MIT Academy
Comprehensive resource covering interface design, application programming, GUI frameworks, embedded UI development, web interfaces for embedded systems, and user interaction patterns. Includes detailed tutorials, code examples, and implementation guides for various interface programming tools and frameworks.
Write an application that interfaces a user with an input &/or output device that you made. This assignment demonstrates three interface applications: (a) a simple button user interface on OLED, (b) a real-time graphical user interface through OLED and Wi-Fi for pulse oximeter readings, and (c) a Wi-Fi grip strength monitor with a mechanical gripper and age-estimation web dashboard.
A dual-interface health monitoring system that interfaces users with a MAX30102 pulse oximeter sensor through both an OLED display and a WiFi-enabled web interface. The system provides real-time visualization of heart rate (BPM), blood oxygen saturation (SpO2), and raw sensor data (IR and RED LED readings) through multiple output channels. Users can view data locally on the OLED display or remotely via a web browser accessing the ESP32-S3's built-in HTTP server. See Ethical AI Use: I2C Device Not Found for troubleshooting documentation.
The system provides dual-interface access: a local OLED display shows real-time sensor readings (IR, RED, BPM, SpO2) with visual heartbeat indicators, while a WiFi web interface provides comprehensive data visualization including numerical displays and real-time plotting charts. The web interface uses Chart.js for dynamic graph generation and automatically refreshes every 300ms to maintain real-time data updates. Both interfaces run simultaneously, enabling local monitoring and remote access.
Initial OLED display test showing successful I2C communication and display initialization. See Ethical AI Use: Hello World Code for development details.
Real-time pulse oximeter data displayed on OLED showing IR and RED LED readings, heart rate (BPM), and blood oxygen saturation (SpO2) percentage. A heartbeat indicator flashes when a pulse is detected. See Ethical AI Use: MAX30102 Initialization Fix for sensor setup documentation.
WiFi web interface providing remote access to pulse oximeter data through a responsive HTML dashboard. The interface displays real-time numerical values for BPM, SpO2, IR, and RED readings with automatic data refresh. The web server runs on the ESP32-S3, allowing access from any device on the same WiFi network.
Enhanced WiFi web interface with real-time plotting capabilities using Chart.js library. The interface displays live line charts for BPM and SpO2 values, showing trends over time with automatic scrolling and data point management. This visualization enables users to observe patterns in heart rate and blood oxygen levels during monitoring sessions.
Successful I2C device detection after fixing power isolation issues. The system now correctly identifies the OLED display at address 0x3C and the MAX30102 sensor at address 0x57. Initial troubleshooting required isolating the RD (red LED) and IRD (infrared LED) ground terminals from the general ground plane to prevent power issues and enable proper sensor operation.
20:04:48.612 -> Scanning...
20:04:48.644 -> I2C device found at address 0x3C !
20:04:48.644 -> I2C device found at address 0x57 !
20:04:48.644 -> done
A Wi-Fi-connected grip strength meter that interfaces a user with a mechanical hand gripper, an 80 kg load cell, and an HX711 amplifier. The ESP32-S3 continuously measures grip force, streams it to a web dashboard, and uses a published relationship between handgrip strength and age to display an estimated "physiological age" in real time. The design is inspired by the open-source grip strength meter from NeuroTech Hub at WashU and the associated PLOS ONE paper on grip strength and age trajectories (Dodds et al., 2014), adapted to my load cell and handle geometry.
Mechanically, a custom 3D-printed gripper clamps around a CZL601-type 80 kg bar load cell (sourced from Amazon 80 kg load cell). I imported a 100 kg load cell CAD from GrabCAD, checked the mounting dimensions against the CZL601AC datasheet (online, download), and confirmed that the hole spacing and footprint match the commercial part used here.
The age-estimation logic in the firmware is based on the non-linear, sex-specific grip-strength trajectories reported in Dodds et al., 2014 (PLOS ONE). As shown in Figure 1 of that paper, grip strength rises rapidly through adolescence, peaks around ages 30–40, and then declines non-linearly with age, with different magnitudes for males and females.
To reproduce this shape efficiently on the ESP32, the firmware uses inverted quadratic approximations centered at age 35, consistent with the observed peak in the PLOS ONE data. In the forward direction, grip strength is approximated as:
Males: Grip(age) ≈ 52 − 0.0089·(age − 35)²
Females: Grip(age) ≈ 32 − 0.0059·(age − 35)²
On the device, this relationship is analytically inverted so that the measured grip strength can be mapped back to an estimated age:
age = 35 + √((a − Grip)/b), where a is the peak strength (52 kg for males, 32 kg for females) and b is the curvature (0.0089 for males, 0.0059 for females). The firmware clamps grip strength to physiologically meaningful ranges before applying this formula so the square root remains well-defined.
// Invert the male fit: age = 35 + sqrt((a - grip)/b)
float ageFromGripMale(float g) {
  float a = 52.0f, b = 0.0089f;  // peak grip (kg) and curvature of the fit
  if (g > a) g = a;              // cap at the peak so the square root stays real
  if (g < 5) g = 5;              // floor at a physiologically plausible minimum
  return 35.0f + sqrtf((a - g) / b);
}

// Same structure with female-specific peak and curvature
float ageFromGripFemale(float g) {
  float a = 32.0f, b = 0.0059f;
  if (g > a) g = a;
  if (g < 4) g = 4;
  return 35.0f + sqrtf((a - g) / b);
}
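As a worked check of the inversion: a male grip reading of 40 kg maps to 35 + √((52 − 40)/0.0089) ≈ 35 + 36.7 ≈ 72 years, which illustrates how quickly the estimated age rises once grip falls below the fitted peak.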
For full methodological details and original tables/figures, see the open-access paper: Dodds et al., 2014, PLOS ONE (download local PDF). The implementation here is an educational approximation and not intended for clinical or diagnostic use.
Summary of how each interface application connects users with input and output devices.
| Application | How It Interfaces with the User | Input Device(s) | Output Device(s) |
|---|---|---|---|
| (a) Button User Interface | Button press patterns (single, double, triple press, hold) provide mode selection input. OLED display shows current menu state and mode options. Users receive immediate visual feedback for each interaction. | Button (GPIO pin 4) | OLED Display (SSD1306, I²C 0x3C); LED (GPIO pin 3, flashlight mode) |
| (b) Real-Time GUI (OLED & WiFi) | Physical finger placement on MAX30102 sensor provides input. OLED displays real-time sensor data locally. WiFi web interface enables remote monitoring with numerical displays and live plotting charts accessible from any web browser. | MAX30102 Pulse Oximeter Sensor (I²C 0x57) | OLED Display (SSD1306, I²C 0x3C); WiFi Web Interface (HTTP Server with Chart.js visualization) |
| (c) Grip Strength GUI (Load Cell & WiFi) | Users squeeze a mechanical hand gripper that transfers force into an 80 kg bar load cell. The ESP32-S3 measures grip strength through the HX711 amplifier and streams results to a WiFi web dashboard that displays live grip force and an estimated physiological age in real time. | Mechanical gripper linked to 80 kg load cell (CZL601-type); HX711 load cell amplifier | WiFi Web Interface (HTTP Server with Chart.js visualization); Optional Serial Plotter (for calibration traces) |
Compare as many tool options as possible for interface and application programming.
Comprehensive comparison of interface and application programming tools available for embedded systems and IoT development. This analysis covers tools from the MIT Academy Interface & Application Programming class page, evaluating their capabilities, use cases, and trade-offs for different application scenarios.
| Tool/Framework | Description | Pros | Cons | Best For |
|---|---|---|---|---|
| OLED (SSD1306) | Monochrome OLED displays via I²C | Low power, high contrast, simple I²C interface, pixel-level control | Limited resolution (128x64), monochrome only, small display size | Status displays, sensor readouts, simple menus, embedded dashboards |
| LCD Character Displays | Character-based LCD displays (16x2, 20x4) | Simple API, readable in sunlight, low cost | Limited to text/characters, no graphics, requires backlight power | Text-only interfaces, status messages, debugging output |
| TFT Color Displays | Full-color TFT LCD displays | Color graphics, touch support, high resolution | High power consumption, complex drivers, larger footprint | Rich GUIs, image display, interactive touch interfaces |
| Tool/Framework | Description | Pros | Cons | Best For |
|---|---|---|---|---|
| HTTP Server (ESP32) | Built-in HTTP server on ESP32 for web interfaces | Remote access, cross-platform (any browser), no app installation, real-time updates | Requires WiFi, network dependency, limited to local network without port forwarding | Remote monitoring, data visualization, control panels, sensor dashboards |
| WebSockets | Real-time bidirectional communication | Low latency, bidirectional, efficient for frequent updates | More complex than HTTP, requires persistent connection | Real-time streaming, interactive controls, live data feeds |
| Chart.js | JavaScript charting library for web interfaces | Beautiful charts, interactive, responsive, easy integration | Requires JavaScript, CDN dependency or local hosting | Data visualization, time series plots, sensor monitoring dashboards |
| Tool/Framework | Description | Pros | Cons | Best For |
|---|---|---|---|---|
| React Native | Cross-platform mobile app framework | Cross-platform (iOS/Android), native performance, large ecosystem | Complex setup, requires mobile development knowledge | Full-featured mobile apps, production deployments |
| Flutter | Google's UI toolkit for cross-platform apps | Single codebase, high performance, customizable UI | Dart language, larger app size, learning curve | Custom mobile interfaces, platform-specific designs |
| Progressive Web Apps (PWA) | Web apps with native-like features | No app store, cross-platform, offline capability | Limited native API access, browser dependency | Web-based mobile interfaces, simple deployment |
Detailed comparison of the specific interface programming tools implemented in this week's individual assignment, evaluating their performance, ease of use, and suitability for embedded health monitoring applications.
| Aspect | Evaluation |
|---|---|
| Ease of Implementation | Simple I²C interface, well-documented Adafruit library, minimal wiring (SDA, SCL, power, ground) |
| Display Quality | High contrast monochrome display, crisp text rendering, suitable for sensor data visualization |
| Power Consumption | Very low power draw, ideal for battery-powered applications |
| Limitations | 128x64 pixel resolution limits detail, monochrome only, small physical size (0.96" typical) |
| Best Use Cases | Local sensor readouts, menu systems, status indicators, simple data displays |
| Aspect | Evaluation |
|---|---|
| Ease of Implementation | ESP32 WebServer library provides straightforward HTTP server, Chart.js CDN integration requires minimal setup |
| Remote Access | Accessible from any device on WiFi network (phone, tablet, computer), no app installation required |
| Visualization Capabilities | Chart.js enables rich data visualization with real-time plotting, smooth animations, and interactive features |
| Real-Time Updates | JSON API endpoint provides efficient data transfer, 300ms refresh rate maintains responsive feel |
| Limitations | Requires WiFi network, limited to local network without additional configuration, power consumption higher than OLED alone |
| Best Use Cases | Remote monitoring, data logging visualization, multi-user access, professional dashboards |
| Aspect | Evaluation |
|---|---|
| Implementation Approach | Digital GPIO reading with debouncing, press pattern detection (single/double/triple/hold) enables mode navigation |
| User Experience | Intuitive press patterns, immediate visual feedback on OLED, no complex UI navigation required |
| Reliability | Hardware debouncing with INPUT_PULLUP, timeout-based pattern recognition prevents accidental triggers |
| Limitations | Limited to single button interaction, pattern learning required, less flexible than touchscreen interfaces |
| Best Use Cases | Simple mode selection, menu navigation, embedded system controls, low-cost input solutions |
The combination of OLED display and WiFi web interface provides complementary benefits: OLED offers immediate local feedback with minimal latency and power consumption, while the WiFi interface enables rich visualization and remote access. The button-based input provides an intuitive, low-cost navigation method suitable for embedded systems. Together, these tools create a robust interface system that supports both local and remote interaction with sensor data.
Design files and code documentation for interface and application programming projects.
Code files for ESP32-S3 development board interfaces including I2C scanning, OLED display control, and button-based user interface systems. See Week 4 documentation for board design and Week 8 documentation for pinout configurations.
(i2cscan_generic.ino) Utility code for scanning and detecting I2C devices on the bus, essential for troubleshooting connection issues.
SETUP:
Initialize I2C bus
Initialize Serial communication
Print "I2C Scanner" message
LOOP:
Print "Scanning..." message
Initialize device counter to 0
FOR each address from 1 to 126:
Begin I2C transmission to address
Check transmission result
IF transmission successful (error == 0):
Print device found at address (hex format)
Increment device counter
ELSE IF unknown error (error == 4):
Print unknown error at address
IF no devices found:
Print "No I2C devices found"
ELSE:
Print "done"
Wait 5 seconds before next scan
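For reference, a minimal Arduino sketch consistent with this pseudocode; the SDA/SCL pins (5, 6) are taken from the OLED sketches below and may differ on other boards.
#include <Wire.h>

void setup() {
  Wire.begin(5, 6);        // SDA = 5, SCL = 6 (assumed, per the OLED setup)
  Serial.begin(115200);
  Serial.println("I2C Scanner");
}

void loop() {
  Serial.println("Scanning...");
  int count = 0;
  for (byte addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    byte error = Wire.endTransmission();
    if (error == 0) {
      Serial.print("I2C device found at address 0x");
      Serial.println(addr, HEX);
      count++;
    } else if (error == 4) {
      Serial.print("Unknown error at address 0x");
      Serial.println(addr, HEX);
    }
  }
  Serial.println(count == 0 ? "No I2C devices found" : "done");
  delay(5000);             // wait 5 seconds before the next scan
}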
(hello_world.ino) Basic OLED display test code that initializes the display and shows "Hello World" text, confirming I2C communication and display functionality.
SETUP:
Initialize Serial communication at 115200 baud
Wait 500ms for Serial to stabilize
Initialize I2C bus with SDA pin 5, SCL pin 6
Print "Starting..." message
Attempt to initialize OLED display:
IF initialization fails:
Print "OLED not found" error
Enter infinite loop (halt)
Clear display buffer
Set text size to 2
Set text color to white
Set cursor position to (0, 20)
Print "Hello" line
Print "World!" line
Update display (send buffer to hardware)
LOOP:
Empty (runs once, then waits)
Note: call display.display() to send the buffer to the hardware; drawing functions only modify the buffer.
(outputs.ino) Mode-based navigation system using button press patterns (single, double, triple, hold) to control multiple output devices including OLED display, camera, sensors, and LED flashlight.
SETUP:
Initialize OLED display
Initialize camera module
Initialize sensors (MPU6050, HX711)
Initialize button pin (INPUT_PULLUP)
Initialize LED pin (OUTPUT)
Set current mode to MENU
Display menu screen
LOOP:
Read button state
IF button pressed:
Record press start time
Start press detection timer
IF button released:
Calculate press duration
IF duration > HOLD_THRESHOLD:
Activate flashlight mode
ELSE:
Increment press count
Reset press timeout timer
IF press timeout exceeded:
Process accumulated press count:
IF press count == 1: Switch to CAMERA mode
IF press count == 2: Switch to SENSORS mode
IF press count == 3: Switch to MENU mode
Reset press count
IF current mode == MENU:
Display menu screen (button instructions)
IF current mode == CAMERA:
Capture camera frame
Process image with dithering algorithm
Display on OLED
Run Edge Impulse inference (optional)
Overlay results on display
IF current mode == SENSORS:
Read accelerometer/gyroscope data
Read load cell data
Display sensor readings on OLED
Update display
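A condensed sketch of the press-pattern detection described above; the hold and gap thresholds are illustrative assumptions, not the exact firmware values.
const int BTN_PIN = 4;               // button on GPIO 4 (per the interface table)
const unsigned long HOLD_MS = 800;   // assumed hold threshold
const unsigned long GAP_MS  = 400;   // assumed max gap between presses in a pattern

int pressCount = 0;
unsigned long pressStart = 0, lastRelease = 0;
bool wasDown = false;

void setup() {
  pinMode(BTN_PIN, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  bool down = (digitalRead(BTN_PIN) == LOW);  // active-low with INPUT_PULLUP
  unsigned long now = millis();

  if (down && !wasDown) pressStart = now;     // press began
  if (!down && wasDown) {                     // press ended
    if (now - pressStart >= HOLD_MS) {
      Serial.println("HOLD -> flashlight mode");
      pressCount = 0;
    } else {
      pressCount++;
      lastRelease = now;
    }
  }
  wasDown = down;

  // A pattern is complete once the inter-press gap expires
  if (pressCount > 0 && !down && now - lastRelease > GAP_MS) {
    if (pressCount == 1) Serial.println("1 press -> CAMERA mode");
    else if (pressCount == 2) Serial.println("2 presses -> SENSORS mode");
    else Serial.println("3+ presses -> MENU mode");
    pressCount = 0;
  }
}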
Design files and firmware for a Wi-Fi-enabled grip strength meter built around an 80 kg bar load cell, HX711 amplifier, and ESP32-S3. The mechanical design was informed by the open-source grip strength meter from NeuroTech Hub and validated against the CZL601AC bar load cell datasheet (online, download).
All parts were printed in PLA with high infill in the load paths to minimize creep and flex during repeated maximal grip trials.
(loadcell_simple.ino) Minimal firmware to bring up the HX711 amplifier and read calibrated weight values over Serial for debugging and calibration.
SETUP:
Configure HX711 data and clock pins (DT=A1, SCK=A0)
Initialize Serial at 115200 baud
Begin HX711 with DT/SCK pins
Set calibration factor (grams per ADC unit)
Tare the scale so current load is zero
LOOP (every 200 ms):
If HX711 is ready:
Read averaged weight using get_units(4)
Print "weight:<value>" to Serial for Serial Plotter
Else:
Print "HX711 not ready" for troubleshooting
The weight:<value> output format is convenient for quickly visualizing changes while squeezing the handle.
(loadcell_plot.ino) Refined HX711 reader tuned for the Arduino Serial Plotter: it outputs a single numeric value per line for clean plotting of grip strength over time.
SETUP:
Initialize Serial at 115200 baud
Initialize HX711 on DT=A1, SCK=A0
Set calibration factor
Tare the scale
Optionally print debug messages when DEBUG=1
LOOP (every 200 ms):
If HX711 is ready:
Read filtered weight with get_units(4)
Optionally print labeled debug text
Print only the numeric weight to Serial for the Plotter
Else:
Optionally print "not ready" debug text
Print 0 so the plotted line continues gracefully
Pin mapping matches loadcell_simple.ino for easy swapping; keep DEBUG set to 0 so the output is a pure numeric stream.
(loadcell_gui.ino) Full application firmware that exposes grip strength and an estimated age over Wi-Fi using an HTTP server and a Chart.js web dashboard.
SETUP:
Initialize Serial
Initialize HX711 (pins, calibration factor, tare)
Connect ESP32-S3 to WiFi (MIT network in this build)
Start WebServer on port 80
Register handlers:
"/" → serve HTML+JS GUI page from PROGMEM
"/metrics" → serve JSON with current grip and age
LOOP:
Handle incoming HTTP clients
Every 200 ms:
If HX711 is ready:
Read smoothed grip strength (kg) via get_units(3)
Clamp negative values to zero
Store in global gripStrength
AGE ESTIMATION HELPERS:
ageFromGripMale(g):
Clamp g to plausible range
Compute age using inverted quadratic fit derived from normative data
ageFromGripFemale(g):
Same structure with female-specific parameters
HTTP /metrics ENDPOINT:
Read "sex" query parameter (default "male")
Compute ageEstimate = ageFromGripMale/ageFromGripFemale(gripStrength)
Return JSON:
{"grip": <kg>, "age": <years>}
CLIENT-SIDE GUI (HTML + JS):
Render dark-themed dashboard with:
- Sex toggle buttons (male/female)
- Numeric grip strength (kg) and age
- Scrolling Chart.js line plot of recent grip values
Poll /metrics every 200 ms and update DOM + chart
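A condensed server-side sketch under the same structure; the SSID/password are placeholders, and the JSON shape follows the /metrics contract above.
#include <WiFi.h>
#include <WebServer.h>

WebServer server(80);
float gripStrength = 0.0f;  // updated from the HX711 in loop()

// Shared inverted-quadratic helper (peak a and curvature b per sex, as above)
float ageFromGrip(float g, float a, float b) {
  if (g > a) g = a;
  if (g < 4) g = 4;
  return 35.0f + sqrtf((a - g) / b);
}

void handleMetrics() {
  String sex = server.hasArg("sex") ? server.arg("sex") : "male";
  float age = (sex == "female") ? ageFromGrip(gripStrength, 32.0f, 0.0059f)
                                : ageFromGrip(gripStrength, 52.0f, 0.0089f);
  String json = "{\"grip\":" + String(gripStrength, 1) +
                ",\"age\":" + String(age, 1) + "}";
  server.send(200, "application/json", json);
}

void setup() {
  WiFi.begin("your-ssid", "your-password");  // placeholders; set to your network
  while (WiFi.status() != WL_CONNECTED) delay(250);
  server.on("/metrics", handleMetrics);
  server.begin();
}

void loop() {
  server.handleClient();  // keep this loop fast so the dashboard stays responsive
}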
The GUI page is served from PROGMEM. The /metrics endpoint is kept separate for future extensions (e.g., logging clients). Update the ssid and password constants to match your network.
Code files for the Invisible String pulse oximeter interface system, providing real-time health monitoring through OLED display and WiFi web interface. See Week 11 PCB design documentation for hardware details.
(pulse_vo2_reads_fixing.ino) Local OLED display interface showing real-time pulse oximeter readings including heart rate (BPM), blood oxygen saturation (SpO2), and raw IR/RED LED sensor values.
SETUP:
Initialize Serial communication
Initialize I2C bus (SDA=5, SCL=6) at 400kHz
Initialize MAX30102 sensor
Configure sensor:
Set LED mode to RED + IR
Set ADC range to 16384
Set sample rate to 100 Hz
Set pulse width to 411 μs
Set LED brightness to medium (0x3F)
Initialize OLED display
Clear and prepare display
LOOP:
Read IR and RED LED values from sensor
Heart Rate Detection:
Check for heartbeat in IR signal using peak detection
IF heartbeat detected:
Calculate time since last beat
Calculate BPM = 60 / (time_delta / 1000)
Set beat flag for visual indicator
Update last beat timestamp
SpO2 Calculation (every 300ms):
Add IR and RED values to circular buffer
IF buffer has 50 samples:
Calculate DC components (average)
Calculate AC components (RMS)
Calculate R ratio = (AC_RED/DC_RED) / (AC_IR/DC_IR)
Estimate SpO2 = 110 - 25 * R
Clamp SpO2 to 70-100% range
Serial Output:
Print IR, RED, BPM for Serial Plotter visualization
OLED Display Update:
Clear display buffer
Display IR value (small text)
Display RED value (small text)
Display BPM value (large text, size 2)
Display SpO2 percentage (small text)
IF beat detected: Draw heartbeat indicator dot
Update display hardware
Delay 20ms (50 Hz update rate)
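A minimal standalone sketch of the ratio-of-ratios step, assuming a filled 50-sample window; the buffer management from the pseudocode is omitted for clarity.
// SpO2 estimate from IR/RED windows: R = (AC_RED/DC_RED) / (AC_IR/DC_IR)
const int N_SAMPLES = 50;
float irBuf[N_SAMPLES], redBuf[N_SAMPLES];

float meanOf(const float *buf) {
  float s = 0;
  for (int i = 0; i < N_SAMPLES; i++) s += buf[i];
  return s / N_SAMPLES;   // DC component (average)
}

float rmsAC(const float *buf, float dc) {
  float s = 0;
  for (int i = 0; i < N_SAMPLES; i++) s += (buf[i] - dc) * (buf[i] - dc);
  return sqrtf(s / N_SAMPLES);  // AC component (RMS about the DC level)
}

float estimateSpO2() {
  float dcIR  = meanOf(irBuf),  acIR  = rmsAC(irBuf, dcIR);
  float dcRED = meanOf(redBuf), acRED = rmsAC(redBuf, dcRED);
  float R = (acRED / dcRED) / (acIR / dcIR);  // ratio of ratios
  float spo2 = 110.0f - 25.0f * R;            // empirical linear fit
  if (spo2 > 100.0f) spo2 = 100.0f;           // clamp to the 70-100% range
  if (spo2 < 70.0f)  spo2 = 70.0f;
  return spo2;
}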
(pulse_gui.ino) WiFi-enabled web interface providing remote access to pulse oximeter data through HTTP server with JSON API endpoint for real-time data updates.
SETUP:
Initialize Serial communication
Initialize I2C bus
Initialize MAX30102 sensor
Configure sensor (LED mode, sample rate, pulse width)
Initialize OLED display
Initialize WiFi:
Connect to network (SSID, password)
Wait for connection
Print IP address
Initialize HTTP server:
Route "/" → serve GUI HTML page
Route "/gui" → serve GUI HTML page
Route "/metrics" → serve JSON data
Start web server
LOOP:
Handle incoming HTTP client requests
Read IR and RED values from sensor
Heart Rate Calculation:
Check for heartbeat in IR signal
IF heartbeat detected:
Calculate BPM from time delta
Update global BPM variable
SpO2 Calculation:
Add IR/RED to circular buffer
IF buffer full (50 samples):
Calculate SpO2 using ratio-of-ratios
Update global SpO2 variable
Update global variables (g_ir, g_red, g_bpm, g_spo2)
OLED Display Update:
Display IR, RED, BPM, SpO2 values
Update display hardware
Note: call server.handleClient() frequently in loop() for a responsive web interface.
(pulse_gui_2.ino) Enhanced WiFi web interface with real-time plotting capabilities using the Chart.js library, displaying live line charts for BPM and SpO2 values with automatic scrolling and data management.
SETUP:
(Same as pulse_gui.ino)
Initialize WiFi, sensor, OLED, HTTP server
LOOP:
(Same as pulse_gui.ino)
Handle HTTP requests, read sensor, calculate BPM/SpO2
GUI HTML PAGE (embedded in PROGMEM):
Load Chart.js library from CDN
Create two Chart.js line chart instances:
- BPM chart (y-axis: 40-160 BPM)
- SpO2 chart (y-axis: 70-100%)
Initialize data arrays for each chart
JavaScript Update Function (runs every 300ms):
Fetch JSON data from /metrics endpoint
Update numerical displays (BPM, SpO2, IR, RED)
Add new BPM value to BPM data array
Add new SpO2 value to SpO2 data array
IF array length > maxPoints (100):
Remove oldest value (shift)
Update chart labels (array indices)
Update chart datasets with new data
Call chart.update() to refresh display
Set interval to call update function every 300ms
Key insights and learnings from working with interface and application programming.
Acknowledgements and team roles for interface and application programming work.
Transparent documentation of AI assistance used in this week's interface and application programming work.
This section documents all AI assistance used during Week 12 interface and application programming work. ChatGPT was used for code development, troubleshooting, and technical guidance. All transcripts are linked below for transparency.
AI assistance for developing the button-based menu system with mode navigation (Menu, Camera, Sensors) using press pattern detection. The conversation covered button debouncing, press pattern recognition, mode state management, and OLED menu display design.
Related to: Part (a): Simple Button User Interface on OLED
Troubleshooting session for I2C device detection issues with MAX30102 pulse oximeter sensor. The conversation addressed power isolation problems, I2C wiring verification, sensor initialization order, and ground plane isolation requirements for accurate sensor operation.
Related to: Part (b): Real-Time Graphical User Interface through OLED and WiFi
Development assistance for creating the initial OLED "Hello World" test program. The conversation covered I2C initialization, OLED display library setup, text rendering, and basic display configuration for XIAO ESP32-S3 development board.
Related to: Part (b): Real-Time Graphical User Interface through OLED and WiFi (initial OLED test)
Technical guidance for fixing MAX30102 pulse oximeter sensor initialization issues. The conversation addressed sensor configuration parameters, LED brightness settings, sample rate optimization, initialization sequence requirements, and troubleshooting sensor communication problems.
Related to: Part (b): Real-Time Graphical User Interface through OLED and WiFi (pulse oximeter readings)
Code-focused assistance for designing and refining the WiFi grip strength monitor firmware. The conversation covered HX711 bring-up, load cell pin mapping, Serial Plotter debugging sketches, the ESP32-S3 HTTP server, the Chart.js dashboard, and the age-estimation helpers used in loadcell_simple.ino, loadcell_plot.ino, and loadcell_gui.ino.
Related to: Part (c): Real-Time GUI through Mechanical Gripper and WiFi and Grip Strength Monitor design files
Calibration-focused assistance for mapping HX711 counts to kilograms and validating the grip strength readings. The discussion walked through using the Serial Plotter, collecting reference weights, tuning the calibration factor, and sanity-checking the age-estimation model derived from normative grip strength data.
Related to: Part (c): Real-Time GUI through Mechanical Gripper and WiFi calibration workflow and the Grip Strength Monitor sketches.
Full Cursor AI transcript focused on organizing the grip strength monitor section: structuring Part (c), integrating design files, embedding images and videos, and aligning captions, external references, and the Ethical AI Use documentation for the load cell project.
Related to: Part (c): Real-Time GUI through Mechanical Gripper and WiFi, Grip Strength Monitor design files, and this Ethical AI Use section.
Complete Cursor AI transcript for Week 12 interface and application programming project documentation. This comprehensive conversation covers the creation and population of the Week 12 HTML page, including Individual Assignment sections (button UI and WiFi GUI), Group Assignment tool comparisons, Design Files documentation with pseudocode, and Ethical AI Use section organization. The transcript demonstrates the collaborative process of building professional documentation with structured content, embedded media, and comprehensive code documentation.
Related to: Complete Week 12 documentation including Individual Assignment, Group Assignment, Design Files, and overall page structure
Wildcard Week
Key captures from the wildcard week assignment, combining fiber laser circuit engraving with UV laser glass engraving, each linked to their sections.
This week focuses on exploring digital fabrication processes not covered in previous assignments. We'll design and produce something using computer-aided design and manufacturing, documenting requirements and including everything necessary for reproduction. This week also includes reflection on the impact and purpose of making (almost) anything.
Week 13 lecture covering major announcements, week 12 review, system integration reminders, and wildcard week preview.
'Step away from the panic'
Assignment: Design and produce something with a digital process (incorporating computer-aided design and manufacturing) not covered in another assignment.
Recitation exploring the purpose, impact, and scaling of fabrication labs and making infrastructure.
Use them, change them, source them locally (with ingredients found locally), and work out how to distribute them globally.
The first few fab labs were NSF-funded. Funding now comes from corporate CSR, regional programs, labor funding, and other routes (roughly 10 in all). Successful fab labs are heterogeneous, with many funding sources; the ones that fail usually rely on a single sponsor.
Example: DIY Raman spectrometer for water purification (even after purification the water still contains sulfates, and the team is improving that too).
(Besides a lot of meetings!) What is a reasonable way to design a data-collection process around what is needed and how to achieve it?
Timeline of development, cocreation, portable, skin analysis, water testing, biomaterial tech, microscopy.
Focused on rehabilitation from addiction through woodworking (Vermont is a wood-producing state).
The hands-on component matters (no physical robot exists that can do this work), and so does the community component. Inclusion is important, as is addressing unconscious and institutional biases.
People could be doing badly, but tend not to when working within a group of people with shared values.
Clarification: Adding infrastructure to existing fablab infrastructure to help support specific community needs (not specialized fab labs).
July 27-31, 2026 at Fab Boston 2026.
Essential resources for wildcard week digital fabrication processes and PCB design.
Comprehensive resource covering wildcard week assignment requirements and possibilities including grinding, multi-axis machining, electrical discharge machining, waterjet cutting, flatbed cutting, plasma cutting, laser metal cutting, laser micromachining, welding, molding and casting, folding, robotics, electronics, embedded programming, food, materials, biotechnology, textiles, and composites.
Course resources for embedded programming with ATtiny412 microcontroller examples:
Embedded Programming - MIT Academy
Design and produce something with a digital process (incorporating computer-aided design and manufacturing) not covered in another assignment. This assignment demonstrates PCB design and fabrication using a combination of CNC milling (Carvera) for edge cuts and hole machining, and fiber laser engraving (xTool F2 Ultra) for trace creation on copper-clad boards. The project includes designing and fabricating two development boards: a simple ATtiny1626 development board and a complex RP2354A development board.
This project explores PCB fabrication using fiber laser engraving, a process not covered in previous assignments. The workflow combines subtractive manufacturing techniques: CNC milling for mechanical features (edge cuts and holes) and laser engraving for electrical traces. This approach enables rapid prototyping of custom PCBs without requiring chemical etching or specialized PCB manufacturing facilities.
ATtiny1626 is an 8-bit microcontroller whose GPIO pins are organized into ports of up to 8 pins each (some ports have fewer). Pins are specified by port letter and pin number (e.g., PA1 = Port A, Pin 1):
Using mods → mill 2D PCB workflow:
Using xTool F2 Ultra fiber laser:
Using a heat gun (hot-air soldering) for component assembly.
Download design files: ATtiny_QFN.zip and RP2354A-dev-v1.1.zip
Design and produce something with a digital process (incorporating computer-aided design and manufacturing) not covered in another assignment. This assignment demonstrates glass engraving using UV laser technology to create volumetric engravings in K9 glass, enabling both surface and volume engraving techniques with different focal length lenses.
This project explores volumetric glass engraving using UV laser technology, a process not covered in previous assignments. The workflow involves preparing 3D geometry, selecting appropriate lenses for surface or volume engraving, and using scanning mode to create detailed engravings in glass substrates.
Two types of lenses are available for different engraving applications:
The software accepts any 3D file format, including STL files for geometry engraving.
Integration of PCB engraving and glass engraving techniques to create a combined system demonstrating the potential for heart rate visualization through engraved glass from a tiny engraved PCB.
This project combines two wildcard week processes: PCB engraving with fiber laser and glass engraving with UV laser. The engraved PCB contains a blinking LED circuit, while the glass piece features a tiny heart engraving. By centering the engraved heart on top of the blinking engraved PCB, the LED light passes through the volumetric glass engraving, creating a visible blinking effect.
This integration demonstrates the potential for using a back LED to visualize data through glass engravings. For the final project, this technique will be used to demonstrate heart rate readings through a glass engraving from a tiny engraved PCB, creating an elegant visual feedback system that combines digital fabrication processes with embedded electronics.
Design and produce something with a digital process (incorporating computer-aided design and manufacturing) not covered in another assignment. This group assignment explores various digital fabrication processes available during wildcard week.
The wildcard week assignment allows exploration of digital fabrication processes not covered in previous weeks. Possibilities include (but are not limited to):
Various welding techniques for joining materials.
Various composite fabrication techniques.
Design files for PCB engraving projects including ATtiny1626 and RP2354A development boards.
Simple development board for ATtiny1626 microcontroller with connectors, resistors, LED, and capacitor. Designed for breadboard compatibility with 100mil spacing grid.
Complex development board for RP2354A microcontroller with advanced routing and component placement.
UV laser engraving requires a flat surface for optimal focus and engraving quality. Quentin and Jake designed this 3D-printed holder to securely position prisms during the engraving process, ensuring consistent results on curved glass surfaces.
Simple Arduino-compatible test program for the ATtiny1626 development board. The code configures pin PA1 as an output and toggles the LED on and off with 200ms delays, providing a basic functionality test to verify successful board fabrication and programming.
// ATtiny1626 Blinking LED Test Code
// This simple test program verifies board functionality by blinking an LED
// Define the LED pin using ATtiny1626 port/pin notation
// PIN_A1 refers to Port A, Pin 1 (PA1) on the ATtiny1626 microcontroller
#define PIN_LED PIN_A1
// Setup function runs once when the microcontroller starts or resets
void setup() {
// Configure the LED pin as an output pin
// This allows the microcontroller to control the voltage on this pin
pinMode(PIN_LED, OUTPUT);
}
// Loop function runs continuously after setup() completes
void loop() {
// Set the LED pin to HIGH (3.3V or 5V depending on board voltage)
// This turns the LED on by allowing current to flow through it
digitalWrite(PIN_LED, HIGH);
// Wait for 200 milliseconds (0.2 seconds)
// This keeps the LED on for a visible duration
delay(200);
// Set the LED pin to LOW (0V)
// This turns the LED off by stopping current flow
digitalWrite(PIN_LED, LOW);
// Wait for another 200 milliseconds
// This keeps the LED off before turning it on again
delay(200);
// The loop() function automatically repeats, creating a continuous blink pattern
// Total cycle time: 400ms (200ms on + 200ms off)
}
To program the ATtiny1626 microcontroller in Arduino IDE, you need to install the megaTinyCore board package. This core provides support for the ATtiny series microcontrollers, including the ATtiny1626. Installation instructions are available at the megaTinyCore GitHub repository.
Key insights and learnings from this week's wildcard activities, including glass and PCB engraving.
Acknowledgements and team roles for wildcard week PCB engraving work.
Transparent documentation of AI assistance used in this week's wildcard week assignment page creation.
Complete Cursor AI transcript for Week 13 wildcard week assignment page creation. This comprehensive conversation covers the creation and population of the Week 13 HTML page, including lecture and recitation notes, PCB engraving assignment documentation, design files integration, and comprehensive page structure development. The transcript demonstrates the collaborative process of building professional documentation with structured content, embedded media, and comprehensive design file documentation.
Related to: Complete Week 13 documentation including Lecture Notes, Recitation Notes, Assignment, Group Assignment, Design Files, and overall page structure
Complete Cursor AI transcript for Week 13 glass engraving and PCB engraving assignment updates. This conversation covers the addition of glass engraving assignment documentation, updates to PCB engraving process documentation with step-by-step media, design files additions including prism holder and ATtiny1626 test code, highlights reorganization, and image sizing optimizations for better step-by-step documentation flow.
Related to: Glass Engraving Assignment, PCB Engraving Assignment, Design Files, Week Highlights, and Wildcard Integration
Final Project
Final project integration week focused on bringing together all subsystems into a cohesive MirrorAge system, completing documentation, and preparing for final presentation.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation.
Related: See the Development Timeline on the final project page for the overall project schedule.
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
Reflections and key learnings from the final project integration week.
Acknowledgments for contributions and collaborations during the final project week.
Documentation of AI-assisted tasks and tools used during the final project integration week.
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the Gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, you should make rather than buy the parts of your project. Complete breakdown of materials, components, and sourcing information.
Complete list of materials and components:
Running total $216.86 (Week 9). Electronics 54%, mechanical 28%, molding/casting 18%. Detailed line items live in the weekly documentation tables (e.g., ReactionAge BOM CSV) and the midterm Airtable snapshot referenced in the lab-meeting deck.
Strategic decisions on fabrication vs. purchasing:
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
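A minimal sketch of the Floyd-Steinberg step in isolation; buffer sizes assume the 128x64 OLED pipeline, and the capture and display calls are omitted.
// Floyd-Steinberg dithering of a grayscale frame to a 1-bit OLED bitmap
const int W = 128, H = 64;

void floydSteinberg(float *gray, uint8_t *mono) {  // gray: W*H values 0-255; mono: W*H values 0/1
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int i = y * W + x;
      float oldPx = gray[i];
      float newPx = (oldPx < 128.0f) ? 0.0f : 255.0f;  // quantize to black/white
      mono[i] = (newPx > 0.0f) ? 1 : 0;
      float err = oldPx - newPx;
      // Diffuse the quantization error to unprocessed neighbors (7,3,5,1)/16
      if (x + 1 < W)              gray[i + 1]     += err * 7.0f / 16.0f;
      if (x > 0 && y + 1 < H)     gray[i + W - 1] += err * 3.0f / 16.0f;
      if (y + 1 < H)              gray[i + W]     += err * 5.0f / 16.0f;
      if (x + 1 < W && y + 1 < H) gray[i + W + 1] += err * 1.0f / 16.0f;
    }
  }
}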
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
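A condensed sketch of the touch-trigger logic; it assumes ESP32-S3 touchRead() values rise when touched, and the pin mapping and threshold are illustrative.
const int TOUCH_PIN = T1;             // touch channel on GPIO1 (assumed mapping)
const int THRESHOLD = 30000;          // tune per board and finger
const unsigned long INTERVAL_MS = 60000;

unsigned long lastCapture = 0;
bool wasTouched = false;

void setup() { Serial.begin(115200); }

void loop() {
  bool touched = touchRead(TOUCH_PIN) > THRESHOLD;
  bool justPressed = touched && !wasTouched;  // rising edge = new press
  wasTouched = touched;

  if (justPressed || millis() - lastCapture >= INTERVAL_MS) {
    Serial.println("capture!");       // capture, process, and display would go here
    lastCapture = millis();
  }
}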
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
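To ground the ESP-NOW work, here is a minimal sender sketch, assuming the 2.x ESP32 Arduino core; the peer MAC address, packet fields, and readings are illustrative placeholders rather than the project's actual firmware.

```cpp
#include <WiFi.h>
#include <esp_now.h>
#include <string.h>

// Hypothetical receiver MAC address; replace with the peer board's address.
uint8_t peerAddress[] = {0x24, 0x6F, 0x28, 0xAA, 0xBB, 0xCC};

typedef struct {
  float bpm;   // heart rate estimate
  float spo2;  // blood-oxygen estimate
} SensorPacket;

void onSent(const uint8_t *mac, esp_now_send_status_t status) {
  Serial.println(status == ESP_NOW_SEND_SUCCESS ? "delivered" : "failed");
}

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);                   // ESP-NOW runs in station mode
  if (esp_now_init() != ESP_OK) return;  // bail out if the stack fails
  esp_now_register_send_cb(onSent);

  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, peerAddress, 6);
  peer.channel = 0;                      // stay on the current Wi-Fi channel
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void loop() {
  SensorPacket pkt = {72.0f, 98.0f};     // placeholder readings
  esp_now_send(peerAddress, (uint8_t *)&pkt, sizeof(pkt));
  delay(1000);                           // one packet per second
}
```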
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
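As a sketch of how such an interface can expose live readings, the following assumes the WebServer library bundled with the ESP32 Arduino core; the /data endpoint, JSON shape, and sensor stubs are illustrative, with the plotting page polling the endpoint from the browser.

```cpp
#include <WiFi.h>
#include <WebServer.h>

WebServer server(80);

// Stubs standing in for the pulse oximeter driver (hypothetical values).
float readBpm()  { return 72.0f; }
float readSpo2() { return 98.0f; }

void handleData() {
  char body[64];
  snprintf(body, sizeof(body), "{\"bpm\":%.1f,\"spo2\":%.1f}",
           readBpm(), readSpo2());
  server.send(200, "application/json", body);  // polled by the live plot
}

void setup() {
  Serial.begin(115200);
  WiFi.begin("SSID", "password");              // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  Serial.println(WiFi.localIP());
  server.on("/data", handleData);
  server.begin();
}

void loop() { server.handleClient(); }
```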
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Molding & Casting
Snapshot of this week’s molding and casting milestones.
Group SDS review and Anthony’s lion casting demo informed our material tests.
Wax machining, silicone molding, and Drystone casting workflow.
Cross-clamping locked alignment before the Drystone pour.
Parallel pours hit matching fill heights on both domes.
Final demold showcased the matched finish across both domes.
Completed Rotomelt arrow shows full detail capture with clean surface finish.
Caliper measurement confirmed 0.04 in minimum thickness; CAD extrusion adds 0.06 in to arrowhead.
CNC-machined wax arrow with 0.06 in added thickness for improved castability.
Wax machining, silicone casting, gypsum and fusible alloys, and deliberate process control for repeatable parts.
Machine a precise wax mold, pour a silicone tool without flashing, and cast durable Kryptonite arrow parts that hide CAM toolpaths.
Registration design, silicone handling discipline, gypsum mixture prep, low-melt alloy safety, and 3D adaptive CAM tuning.
Group SDS review with material trials, recitation summary, training digest, and an individual wax–silicone–drystone workflow for the Kryptonite arrow mold.
Primary references for molding and casting workflows, materials, and safety.
The MIT Molding & Casting overview consolidates process types (injection, vacuum, blow, rotational), mold materials (wax, foams, silicones, latex, thermosets), additives for performance tuning, and safety expectations around ventilation, PPE, and SDS review (academy.cba.mit.edu).
Refined notes from Quentin Bolsee’s session, anchored to the Slack recap (Slack).
Key reminders from the molding & casting training Slack update (Slack).
Anthony’s Fusion 360 refresher covers feeds, speeds, and tool changes for machining the standard wax block.
Our team benchmarked molding and casting materials by walking through SDS requirements, mixing behavior, and finished-part characteristics.
| Material | Key Specs | Safety & Handling | Docs |
|---|---|---|---|
| Mold Star 15 SLOW | 1A:1B by volume; 50 min pot life; 4 hr demold; Shore A 15. | Low viscosity pour; ventilate; nitrile gloves; mild skin/eye irritant. | Download PDF |
| Mold Star 30 | 1A:1B by volume; 45 min pot life; 6 hr demold; Shore A 30. | Stiffer silicone for higher clamp pressure; same PPE as 15. | Download PDF |
| Material | Mix / Melt | Performance Notes | Docs |
|---|---|---|---|
| USG Hydro-Stone | 32 parts water : 100 parts plaster (by weight). | High compressive strength; handles drops; needs thicker sections for durability. | Download SDS |
| USG Drystone | 20 mL water : 100 g powder (class recipe). | Low-shrink “self-leveling” mix; thin details can chip if dropped; tintable. | Download SDS |
| RotoMetals Roto281 | Melts at 281 °F; 58% Bi, 42% Sn (lead-free). | Sharp detail, reusable alloy; requires face shield, sleeves, and ventilation (catalog). | Download Certificate |
Progressive Drystone trials, captured here, validated the draft, venting, and disposal guidance outlined in Process Observations and informed the individual dome pours in Dome & Coil Casting Studies.
Documented consumables and mixing practices ensured the entire team aligned with the specifications listed in Core Resources and supported individual experiments like Mirror Screen Coating Trials and Mouse Bone Encapsulation Prep.
Quick-reference notes distilled from the manufacturer safety data sheets reviewed this week. Always consult the full SDS before working and keep copies accessible at the workstation.
References: Smooth-On Mold Star 15/30 TB; USG Hydro-Stone SDS; USG Drystone SDS; RotoMetals 281 °F alloy certificate.
| Aspect | Best Practice | Notes |
|---|---|---|
| Wax Stock & Fixturing | Use 1.5"×3"×7" machinable wax; hot-glue three-point fixture; consult Anthony before upsizing. | Feeds/speeds match wood reference video; secure yet removable glue pads prevent flying stock. |
| Registration & Sizing | Calculate alignment features and maintain ≥5 mm walls. | Registration accuracy avoids flash; resizing requires updated CAM and instructor sign-off. |
| Mold Materials | Select Mold Star 15/30 for silicone tooling (450 °F rated). | Mold Star mixes 1A:1B; degas by tapping, avoid overhangs beyond what can demold. |
| Pour Planning | Ensure watertight mold box; pre-measure volume; pour viscous part first. | Mix inside bottles to homogenize; steady stream pour; tap ≤1 min; monitor exotherm. |
| Surface Prep & Assembly | Apply thin beeswax film; clamp or band mold; enlarge pour spouts. | Avoid thick wax layers (lose detail); big pour holes ease Drystone or metal fills. |
| Method | Strengths | Trade-offs |
|---|---|---|
| 3D-Printed Molds | Fastest to fabricate; easy to iterate geometry; can print complex draft-less shapes. | Requires laborious post-processing—epoxy or wax skim coats to hide layer lines; surface quality depends on print settings. |
| CNC-Machined Molds | Excellent surface finish directly off the tool; precise registration; minimal hand cleanup. | Longer setup time and CAM effort; limited by cutter reach and allowable overhangs. |
| Hand-Sculpted Wax | Quick for simple geometries; easy to tweak by hand. | Only suitable for low-temperature casting media (wax deforms with heat); surface quality depends on sculpting skill. |
| Workflow | Steps | When to Use |
|---|---|---|
| Negative Molds (Direct Cast) | 1. Create rigid negative mold. 2. Pour casting material directly into that mold. | Ideal for rigid molds that tolerate casting temperature—e.g., wax dome cast in Hydro-Stone. Fewer steps, but the mold must release the final part. |
| Positive-to-Negative Molds (Flexible Insert) | 1. Machine or sculpt positive master. 2. Cast flexible negative (e.g., silicone) from the master. 3. Pour final casting into the flexible mold. | Best for complex undercuts or when repeated casts are needed—e.g., silicone mold for the Kryptonite arrow. Adds a step but protects the master and simplifies demolding. |
| Casting Material | Mix / Melt | Strength & Finish | Field Notes |
|---|---|---|---|
| Hydro-Stone | 32:100 water to powder (by weight). | Robust, handles drops; needs thicker sections. | Great for load-bearing parts; mix to smooth slurry, avoid over-thinning. |
| Drystone | 20 mL water per 100 g powder. | Captures detail; thin features brittle. | Target peanut-butter texture; keep witness stock to gauge cure without denting final part. |
| Roto281 Alloy | Melt at 281 °F (58% Bi / 42% Sn). | High detail, reusable; requires PPE. | Needs face shield & long sleeves; keep water away; perfect for gravity-fed sprues. |
| Epoxy Resins | Follow bottle-specific ratios; tint with approved dyes. | Durable, customizable color. | Longer cure; dye compatibility varies—check brand guidance before mixing. |
| Sandcasting Metals | Gold or alloys poured into sand or plaster molds. | Suitable for high-temp pours. | Requires dedicated foundry workflow; consider for future metal runs after silicone tooling. |
Reference tables and safety notes established guardrails that were stress-tested in Group Test Casts. Material handling discipline from Group Materials Inventory reinforced the SDS takeaways in Process Observations and fed forward into individual workflows like Arrow Workflow Summary.
Designing, machining, and casting a Kryptonite arrow replica while maintaining crisp mold finishes and production-ready registration features.
Documenting the wax machining, silicone molding, and Drystone casting plan for the week9/designfiles/kryptonite_arrow_merged.stl model.
Machining validated fixture stability and surface quality, providing a master wax that directly enabled the silicone pour documented in Arrow Mold · Mixing & Pouring and ensured alignment with the wax-handling guidelines outlined in Casting Workflow Refinement.
Measured mixes and controlled pours produced void-free silicone negatives that feed into the demolding steps captured in Arrow Mold · Demolding Silicone and align with the pre-wetting approach described in Casting Workflow Refinement · Arrow.
Measured demolding safeguarded the wax master for future use and produced a durable silicone tool that directly supports the Drystone pours captured in Casting Workflow Refinement · Arrow and the dome casting trials in Dome & Coil Casting Studies.
Final casting run captured the full Kryptonite arrow workflow: silicone mold conditioning, a 2,000 g Drystone batch hydrated with 400 mL of water, careful demolding, and precision finishing.
Metal casting workflow using Rotomelt 281 low-temperature alloy: mold preparation with talc release agent, melting setup with toaster oven and crucible, controlled pour into silicone molds, and demolding of the cast arrow.
Analysis of the first casting attempt revealed minimum-thickness limitations. Design improvements include preheating molds, creating Mold Star 60 molds for higher temperature tolerance, and machining a thicker wax positive for improved castability.
The next casting cycle will incorporate three improvements: preheating existing molds to reduce thermal shock, fabricating new Mold Star 60 silicone molds that withstand higher preheating temperatures, and using the thickened wax positive to create new negative molds for casting arrows with improved structural integrity. The thickened design adds 0.06 in to the arrowhead (0.04 in minimum plus 0.02 in safety margin), with corresponding 0.06 in shifts applied to the holder, base, and walls to maintain alignment.
To address the minimum thickness limitation, I extruded the arrowhead by 0.06 in based on caliper measurements showing 0.04 in as the minimum castable thickness for Rotomelt 281, with an additional 0.02 in safety margin. To maintain proper alignment, I shifted the holder, base, and walls by the same 0.06 in offset, ensuring all components remain registered correctly for the next casting iteration.
Wax-coated mold halves, balanced venting, and rehearsed Play-Doh trials produced repeatable Drystone domes, while the end-to-end process captured in Dome Casting Sequence demonstrates disciplined wax prep, staggered hydration, and clamp management that mirror the silicone handling in Casting Workflow Refinement · Domes and the pre-wetting strategy in Arrow Mold · Mixing & Pouring Silicone. The new Dome Demolding sequence codifies the guarded putty-knife and screwdriver choreography that protected both the silicone tool and the cast domes.
PVD setup and plasma verification yielded evenly coated OLED mirrors, providing a surface reference for the silicone casting finish targeted in Arrow Mold · Demolding Silicone and the dome polish seen in Dome & Coil Casting Studies.
Isolating tibia samples from 1-, 15-, and 25-month C57BL/6 female mice to compare age-dependent mechanical properties required controlled epoxy encapsulation.
Systematic mold prep and low-exotherm epoxy mixing set the stage for reliable biomechanical testing, while the cured fixtures in Bone Encapsulation Sequence confirm the mounts are stable enough for downstream mechanical assays and dovetail with the silicone and gypsum workflows documented in Arrow Mold · Mixing & Pouring Silicone and Dome & Coil Casting Studies.
Repurposed the dinosaur silicone mold to cast a toothless “rock” mascot, tuning clamp force to protect the delicate jawline while celebrating the Kryptonite arrow narrative.
Creative reuse of the dinosaur mold produced a durable “toothless” mascot that showcases finishing consistency with the Arrow Casting · Finishing Sequence and extends the storytelling from the dome and bone studies.
The CAD and CAM groundwork in Arrow Mold · CAD and CAM Development produced safe, efficient toolpaths that executed cleanly during Arrow Mold · Machining. Controlled silicone mixing in Arrow Mold · Mixing & Pouring Silicone and careful demolding in Arrow Mold · Demolding Silicone delivered a reusable mold that underpins the pour strategy articulated in Casting Workflow Refinement · Arrow and culminates in the finishing workflow chronicled in Arrow Casting · Finishing Sequence.
| Problem | Solution | Action Taken |
|---|---|---|
| Two-part dome mold relied on friction fit; unclear release plan. | Suggested redesigning as a three-part mold to avoid locking and align with shop workflow. | Shifted to multi-part splits and shared updated files for review. |
| Topology-optimized dome core still captured by side walls. | Flagged negative draft surfaces; recommended filling trouble zones with low-temp wax instead of complex CAD edits. | Planned targeted wax fill and re-ran draft analysis before printing. |
| Unclear difference between positive and negative molds for casting solids. | Clarified the class workflow: machine a positive master, pour silicone negative, then cast final positive. | Reframed dome plan around the three-step positive workflow. |
| Attempted to send gcode for MakerBot queue. | Requested native mesh files (STL/3MF) instead of gcode for lab slicing. | Uploaded finalized 3MF assets to the shared Fusion project. |
| Problem | Solution | Action Taken |
|---|---|---|
| Wanted the through-hole machined during mold making. | Advised leaving the hole as a post-cast drilling operation; 3-axis setup cannot machine it cleanly. | Kept the feature in CAD for reference but planned secondary drilling. |
| Fusion post-processor error: non-numeric program number. | Pointed to the program number field in the Post Process dialog; must be 1–9999. | Entered numeric ID, regenerated tap file, and posted successfully. |
| Asked about using a favorite champagne glass as a wax mold. | Noted that the geometry would be difficult to demold cleanly. | Dropped the glass replica experiment for this cycle. |
| Problem | Solution | Action Taken |
|---|---|---|
| Chocolate casting test left residue in silicone molds. | Warned against eating food from shop tooling because of contamination risk. | Limited tasting to unused melt; treated molds as non-food-safe. |
| Unsure about available Mold Star inventory and removal from wax masters. | Confirmed fresh stock under the bench but emphasized conserving supply. | Used remaining mixed bottles sparingly (~125 mL) and planned next pours accordingly. |
Arrow mold development, distilled in Arrow Workflow Summary, drives the finishing pass documented in Arrow Casting · Finishing Sequence and pairs with the Pantheon explorations captured in Dome Casting Sequence. Mirror finishing benchmarks in Mirror Screen Coating Trials guided the target sheen, while biological encapsulation workflows in Bone Encapsulation Sequence and the playful reuse chronicled in Toothless Tyrannosaurus Rock extend the same discipline across research and narrative castings.
CAD refinements locked the envelope, parting line, and fixtures, while CAM simulations qualified adaptive, flat, and pencil strategies used downstream in Arrow Mold Machining and informed the silicone pour volumes documented in Casting Workflow Refinement.
Green Arrow’s Kryptonite arrow contingency doubles as a personal reminder: “You have failed this city” becomes “I will not fail this class.” The kryptonite arrow reminds us that even the strongest heroes need someone who can keep them in check, and is a symbol for me to always remember to keep moving forward, no matter what!
Summary adapted from ChatGPT lore compilation.
Collective material trials, consolidated in Group Assignment Summary, set the safety and mixing playbook that the individual workflows in Individual Assignment Summary expanded into detailed arrow, dome, mirror, and biomedical casts.
Key insights and lessons learned from this week's molding and casting work.
Huge thanks to Anthony for the detailed molding-and-casting training session and the lion casting demo that informed our safety reminders.
Appreciation to James and the 1.583 course staff—Prof. Josephine Carstensen and TAs Hajin and Dat—for guidance on the Pantheon dome concept.
Shout-out to Omar Al Dajani’s shuriken documentation for sparking the two-part mold plan, and to Anthony for helping translate that into a reliable three-part mold workflow.
ChatGPT helped compile Kryptonite arrow canon and symbolism for the motivation section. Transcript: Kryptonite Arrow Lore.
Documented AI-assisted recap of Quentin Bolsee’s recitation notes, later refined into the highlights and training sections.
Cursor-assisted planning pass that sequenced the Dome Demolding gallery and highlight links while enforcing succinct captions.
AI-assisted planning session for restructuring this site’s documentation; outputs informed the new summaries and navigation links.
Tracked the Cursor-led sprint for sequencing images, captions, and highlights throughout this page; prompts and responses validated every media addition and subsummary.
Cursor-assisted updates to week 9 project documentation, including the addition of metal casting sections (Arrow Metal Casting and Preparing for Second Iteration), highlights row updates, and refinement of the thickening solution documentation.
Consulted ChatGPT about an extrusion profile error, but the suggested fix proved ineffective because the handle was not extruding properly. The solution developed independently was to extrude the arrowhead by 0.06 in (a caliper measurement showed 0.04 in minimum castable thickness, plus a 0.02 in safety margin) and then shift the holder, base, and walls by the same 0.06 in to maintain alignment.
Other AI-assisted tasks from earlier weeks remain documented in their respective pages; no additional generative content was used beyond rewriting notes for clarity.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Mechanical & Machine Design
Snapshot of this week's mechanical design, machine building, and midterm review milestones.
Swiper mechanism and coordinated tapping/swiping automation for phone interaction.
Real-time person tracking with following and stop behaviors for interactive machine control.
Complete actuation and automation system with all subsystems integrated and coordinated.
Spring-loaded phone holder mechanism and 3D-printed components.
Servo gear system and linear actuator stylus mechanism.
Wi-Fi livestreaming and on-device face detection with Edge AI.
Dual servo opposite-direction sweep pattern for synchronized tapping and swiping mechanisms.
Synchronized 4-step motion pattern (0° → 90° → 180° → 90° → 0°) for coordinated actions.
3D-printed tapper and swiper enclosures with integrated servo mounts and motion guides.
Machine building training session with xylophone demonstration.
System diagram and development timeline for midterm review.
Injection molding process overview with Dan covering mold design and machine operation.
Machine building principles, injection molding processes, mechanical design fundamentals, and midterm review preparation for final project documentation.
Design and build a machine with mechanism, actuation, automation, function, and user interface. Prepare comprehensive midterm review documentation.
Mechanical design principles, stepper motor control, real-time motion systems, injection molding workflows, and project planning.
Group machine design and manual operation, recitation notes on machine building kits, injection molding training summary, and individual midterm review documentation.
Primary references for mechanical design, machine building, and midterm review requirements.
The MIT Mechanical Design overview covers stress-strain relationships, materials selection (plastic, metal, rubber, foam, garolite, wood, cement, ceramic), fasteners, framing systems, drive mechanisms (gears, lead screws, belts), guide systems (shafts, rails, slides), bearings, and mechanical principles (academy.cba.mit.edu).
The Machine Design page covers mechanisms, structural loops, sensors, actuators, end effectors, power electronics, motion control (open-loop, closed-loop), control theory (bang-bang, PID, acceleration, model predictive), timing protocols, and machine control systems (academy.cba.mit.edu).
The Midterm page outlines required deliverables for the final project review (academy.cba.mit.edu).
Refined notes from Quentin Bolsee's machine building recitation, anchored to the Slack recap (Slack).
The control system uses a byte-passing protocol for device communication instead of address hopping; a minimal sketch follows these notes.
Stepper motor control involves understanding signals for position, velocity, acceleration, jerk, snap, crackle, and pop. Reference: Stepper Motor Video.
StepDance is a modular real-time motion control system with components for inputs, interfaces, generators, kinematics, recording, outputs, and filters.
See recitation slides for additional references and detailed examples.
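A minimal sketch of the byte-passing idea from the first note above, assuming each node bridges an upstream and a downstream UART; the two-byte framing and node addressing are illustrative, not the actual protocol from the recitation.

```cpp
#include <Arduino.h>

const uint8_t MY_ID = 2;     // hypothetical address of this node in the ring

void setup() {
  Serial1.begin(115200);     // upstream link (pin mapping depends on board)
  Serial2.begin(115200);     // downstream link
}

void loop() {
  // Packets are two bytes: [destination id][payload].
  if (Serial1.available() >= 2) {
    uint8_t dest = Serial1.read();
    uint8_t payload = Serial1.read();
    if (dest == MY_ID) {
      // Consume locally, e.g. update a motor setpoint from the payload.
    } else {
      Serial2.write(dest);   // not ours: pass the bytes along the ring
      Serial2.write(payload);
    }
  }
}
```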
Wednesday presentation: Bring your machine and prepare a 15-minute presentation per machine. Win the presentation!
Design and build a machine that includes mechanism, actuation, automation, function, and user interface. Document the group project and your individual contribution.
Design a machine that includes mechanism + actuation + automation + function + user interface. Build the mechanical parts and operate it manually. Document the group project and your individual contribution.
[Placeholder: Group assignment documentation will be added here]
Actuate and automate your machine. Document the group project and your individual contribution. Prepare a demonstration of your machines for the next class.
[Placeholder: Group assignment documentation will be added here]
Document your individual contribution to group assignment 1 and group assignment 2.
I pitched and developed the initial concept for the group project, which helped initiate collaborative design discussions and whiteboard sessions. The concept evolved from a coin flipper machine to the final BrainrotBot design—a mobile robot that navigates and interacts with smartphones.
The initial design concept focused on a coin flipper machine with the following components:
Lever attached to a loaded spring under a platform flips a coin inserted into a curved box.
Lever pushes the loaded spring platform beyond a stopper to actuate the coin flip.
Button activates a motor to push the lever, automating the coin flip actuation.
Schrödinger's cat coin (minimal), heads or tails, 6-sided dice, 10-sided dice random number generator, magic 8-ball.
After the group settled on the BrainrotBot concept, I contributed to splitting the system into modular subsystems with defined interfaces. This modular approach enabled parallel development and clear integration points.
View subsystem breakdown document → | View subsystem references →
I contributed to key architectural decisions that separated the base chassis from the body, enabling an upgradeable design that could transition from two-wheel drive to omnidirectional drive.
Designed a phone holder with integrated passive amplifier for audio output. The design incorporates a spring-loaded mechanism for secure phone mounting and a horn-shaped amplifier for enhanced sound projection.
Developed multiple iterations of the stylus mechanism for touch screen interaction, progressing from simple manual designs to a linear actuator-driven system for precise control.
Designed a motor-driven system for tapping and swiping gestures using a linear actuator mechanism with servo control for precise horizontal movement.
Developed the camera subsystem with Wi-Fi livestreaming and edge AI inference capabilities for real-time object detection and face recognition.
The camera livestream implementation uses ESP32-S3's built-in camera and HTTP server capabilities to stream JPEG frames over Wi-Fi using MJPEG (Motion JPEG) protocol. The system initializes the camera with optimized settings for frame rate and quality, connects to Wi-Fi, and serves a continuous stream of JPEG images via HTTP multipart response.
For detailed pseudocode and implementation, see the Camera Code section in Design Files.
The Edge AI system uses a FOMO (Faster Objects, More Objects) model from Edge Impulse for real-time face detection. The model was trained on person/face classification data from the Model Zoo, converted to TensorFlow Lite format, and compiled as an Arduino library for deployment on the ESP32-S3.
The system processes camera frames through the on-device inference pipeline, outputs bounding box coordinates for detected faces, converts these coordinates to distance measurements, and sends byte packets to motor microcontroller boards for control. This enables real-time person tracking and machine interaction based on face detection.
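The sketch below illustrates that handoff under stated assumptions: a pinhole-model distance estimate from the bounding-box width and a simple checksummed 4-byte UART packet. The constants, framing, and helper names are hypothetical, not the project's actual code.

```cpp
#include <Arduino.h>

// Rough pinhole-model range estimate from apparent face width:
// distance ≈ real_width × focal_length_px / width_px. Constants are guesses.
float estimateDistanceCm(int bboxWidthPx) {
  const float FACE_WIDTH_CM = 15.0f;
  const float FOCAL_PX = 180.0f;
  return FACE_WIDTH_CM * FOCAL_PX / bboxWidthPx;
}

// Send a 4-byte command to the motor board: header, x center, distance, XOR checksum.
void sendMotorPacket(uint8_t xCenter, uint8_t distanceCm) {
  uint8_t pkt[4] = {0xAA, xCenter, distanceCm,
                    (uint8_t)(0xAA ^ xCenter ^ distanceCm)};
  Serial1.write(pkt, sizeof(pkt));  // UART link to the motor microcontroller
}

void setup() { Serial1.begin(115200); }

void loop() {
  // Example: a detected face 40 px wide centered at x = 96 (hypothetical).
  uint8_t d = (uint8_t)estimateDistanceCm(40);
  sendMotorPacket(96, d);
  delay(100);
}
```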
Edge Impulse Model: View model in Edge Impulse Studio →
Development References: ChatGPT Transcript 1, ChatGPT Transcript 2, ChatGPT Transcript 3, ChatGPT Transcript 4
Designed the v1 GUI for manual control and monitoring of the machine's subsystems.
All design files organized by subsystem component:
Design files for the phone holder with integrated passive amplifier.
phone-holder-print.3mf — Main phone holder 3MF file
phone-stand-amplifier-print.3mf — Amplifier horn 3MF file
References: Spring Loaded Phone Holder (Thingiverse), Phone Amplifier Passive Speaker (Thingiverse)
Design files for the stylus mechanism.
printable_stylus_with_built_in_stand.stl — Stylus with integrated stand
References: Printable Stylus (Thingiverse)
Design files for the linear actuator and servo-driven tapping/swiping mechanism.
linear_motor.3mf — Linear motor assembly
linear_motor_stylus.3mf — Linear motor with stylus mount
Case_R.3mf, Linear_Case_L.3mf — Motor case components
Gear.3mf, Linear_Rack_RL.3mf — Gear and rack components
References: Linear MG90S Micro Servo (Thingiverse), Linear Actuator Design (Thingiverse)
Arduino code for controlling two MG90S servo motors for tapping and swiping mechanisms.
Download Files:
two_servo_spins.zip — Complete project for dual servo sweep test
two_servo_spins.ino — Dual servo opposite-direction sweep control
back_forth_test.zip — Complete project for 4-step motion test
back_forth_test.ino — 4-step synchronized motion pattern (0° → 90° → 180° → 90° → 0°)
Vinyl sticker designs were generated using the VDraw.ai black-and-white image converter to prepare artwork suitable for vinyl cutting.
VDraw_1763512341238.png — "Swiper No Swiping" sticker design converted from original artwork
VDraw_1763514225691.png — "Brainrot9000" logo sticker design generated from Gemini-created artwork
The VDraw.ai converter optimizes images for vinyl cutting by creating clean black-and-white designs with clear edges and minimal detail loss, ensuring successful cutting and weeding operations.
Complete design for the phone holder with integrated swiper and tapper mechanisms, including servo mounts, linear actuators, and motion guides.
phone holder and movement v8.f3z — Fusion 360 design file (v8) for phone holder with integrated swiper and tapper mechanisms
The design includes all mechanical components for the phone holder, servo-driven linear actuators for tapping and swiping, mounting brackets, and protective enclosures for reliable operation.
PCB design files for the speaker/amplifier subsystem circuit board, including Gerber files for fabrication and design documentation.
DFPlayer-F_Cu.gbr — Front copper layer Gerber file for PCB fabrication
DFPlayer-Edge_Cuts.gbr — Edge cuts Gerber file defining board outline
pcb_design.png — PCB layout visualization showing component placement and trace routing
pcb_schematic.png — Circuit schematic diagram showing electrical connections and component relationships
The PCB was milled on the Othermill following the standard operating procedures documented in the Week 5 training documentation.
Arduino code for ESP32-S3 camera livestreaming and Edge AI face detection.
SETUP:
1. Initialize Serial communication (115200 baud)
2. Configure camera pins (from camera_pins.h):
- Data pins (Y2-Y9) for parallel data bus
- Control pins (XCLK, PCLK, VSYNC, HREF)
- I2C pins (SIOD, SIOC) for camera configuration
3. Create camera_config_t structure:
- Set LEDC channel and timer for clock generation
- Map all GPIO pins to camera interface
- Set XCLK frequency to 20MHz
- Set pixel format to JPEG
- Configure frame size (QVGA if PSRAM available, QQVGA otherwise)
- Set JPEG quality to 12 (if PSRAM available)
- Set frame buffer count (2 if PSRAM, 1 otherwise)
4. Initialize camera with esp_camera_init()
5. Connect to Wi-Fi network:
- Begin connection with SSID and password
- Wait until connection established
- Print local IP address
6. Start HTTP server:
- Create HTTP server configuration
- Register URI handler for root path "/"
- Set handler function to stream_handler
- Start server and print access URL
STREAM_HANDLER (HTTP request handler):
1. Set HTTP response type to "multipart/x-mixed-replace; boundary=frame"
2. Enter infinite loop:
a. Capture frame from camera (esp_camera_fb_get())
b. If capture fails, return error
c. Format HTTP multipart header:
- Boundary marker: "--frame"
- Content-Type: "image/jpeg"
- Content-Length: frame buffer length
d. Send header chunk via HTTP response
e. Send frame buffer data chunk
f. Return frame buffer to camera (esp_camera_fb_return())
g. Send boundary terminator "\r\n"
h. If any send operation fails, break loop
3. Return result status
LOOP:
- Minimal delay (10ms) to allow other tasks
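For reference, the stream handler above condenses to roughly the following C++, assuming the esp_camera and esp_http_server components used by the ESP32 Arduino core; error handling is trimmed, and the function slots into the sketch's URI registration from the setup steps.

```cpp
#include "esp_camera.h"
#include "esp_http_server.h"

static esp_err_t stream_handler(httpd_req_t *req) {
  httpd_resp_set_type(req, "multipart/x-mixed-replace; boundary=frame");
  char hdr[96];
  while (true) {
    camera_fb_t *fb = esp_camera_fb_get();            // grab one JPEG frame
    if (!fb) return ESP_FAIL;
    int n = snprintf(hdr, sizeof(hdr),
        "--frame\r\nContent-Type: image/jpeg\r\nContent-Length: %u\r\n\r\n",
        (unsigned)fb->len);
    bool ok = httpd_resp_send_chunk(req, hdr, n) == ESP_OK &&
              httpd_resp_send_chunk(req, (const char *)fb->buf, fb->len) == ESP_OK;
    esp_camera_fb_return(fb);                         // recycle the buffer
    if (!ok || httpd_resp_send_chunk(req, "\r\n", 2) != ESP_OK) break;
  }
  return ESP_OK;
}
```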
Download Files:
camera_stream.zip — Complete camera stream project (includes .ino and .h files)
camera_stream.ino — Main Arduino sketch for camera livestreaming
camera_pins.h — GPIO pin definitions for XIAO ESP32-S3 camera module
Edge Impulse Arduino library for FOMO-based face detection on ESP32-S3.
ei-face-detection--fomo-arduino-1.0.90.zip — Edge Impulse Arduino library (v1.0.90)
Edge Impulse Model: View model in Edge Impulse Studio →
Group Collaboration: All design work was documented in the Slack thread after each working session, ensuring real-time communication and progress tracking throughout the project.
Co-developed servo motor control firmware and electrical connections for the tapper and swiper mechanisms with Hayley Bloch. The system uses two MG90S micro servos connected to GPIO pins on the ESP32-S3 for synchronized tapping and swiping motions. Development transcript →
| Component | Connection | ESP32-S3 Pin |
|---|---|---|
| Servo 1 (Tapper) Signal | PWM Control | GPIO1 |
| Servo 2 (Swiper) Signal | PWM Control | GPIO2 |
| Servo 1 & 2 Power | VCC (5V) | 5V Output |
| Servo 1 & 2 Ground | GND | GND |
SETUP:
1. Initialize Serial communication (115200 baud)
2. Allocate PWM timers for ESP32-S3 (timer 0 and timer 1)
3. Attach servo1 to GPIO1 with pulse range 500-2400μs (MG90S range)
4. Attach servo2 to GPIO2 with pulse range 500-2400μs
LOOP:
1. Sweep forward (0° to 180°):
- servo1: 0° → 180° (incrementing)
- servo2: 180° → 0° (decrementing, opposite direction)
- 10ms delay between steps
2. Sweep backward (180° to 0°):
- servo1: 180° → 0° (decrementing)
- servo2: 0° → 180° (incrementing, opposite direction)
- 10ms delay between steps
3. Repeat continuously
SETUP:
1. Initialize Serial communication (115200 baud)
2. Allocate PWM timers (timer 0 and timer 1)
3. Attach both servos to GPIO1 and GPIO2 with 500-2400μs range
MOVE_BOTH function:
- Set both servos to same angle simultaneously
- Wait 120ms for MG90S to reach position (tunable delay)
LOOP (4-step pattern):
1. Move both servos to 90° (center position)
2. Move both servos to 180° (full extension)
3. Move both servos to 90° (return to center)
4. Move both servos to 0° (full retraction)
5. Repeat pattern
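A minimal runnable version combining both routines might look like the following, assuming the ESP32Servo library and the pin mapping from the table above.

```cpp
#include <ESP32Servo.h>

Servo servo1, servo2;

void moveBoth(int angle) {
  servo1.write(angle);
  servo2.write(angle);
  delay(120);                    // let the MG90S reach position (tunable)
}

void setup() {
  Serial.begin(115200);
  ESP32PWM::allocateTimer(0);    // reserve PWM timers for the servo driver
  ESP32PWM::allocateTimer(1);
  servo1.setPeriodHertz(50);     // standard 50 Hz servo frame
  servo2.setPeriodHertz(50);
  servo1.attach(1, 500, 2400);   // tapper on GPIO1, MG90S pulse range in µs
  servo2.attach(2, 500, 2400);   // swiper on GPIO2
}

void loop() {
  // Opposite-direction sweep (two_servo_spins behavior).
  for (int a = 0; a <= 180; ++a) { servo1.write(a); servo2.write(180 - a); delay(10); }
  for (int a = 180; a >= 0; --a) { servo1.write(a); servo2.write(180 - a); delay(10); }
  // Four-step synchronized pattern (back_forth_test behavior).
  moveBoth(90); moveBoth(180); moveBoth(90); moveBoth(0);
}
```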
For complete code files, see Servo Motor Controls in Design Files.
Collaborated with Hayley Bloch on the mechanical design and 3D printing of tapper and swiper enclosures and actuators. The designs integrate servo mounting points, linear motion guides, and protective casings for reliable operation.
Designed, cut, transferred, and applied custom vinyl stickers to the assembled Brainrot9000 machine. The vinyl graphics enhance the machine's visual identity and provide clear labeling for different subsystems.
The vinyl designs were created using VDraw.ai black-and-white image converter to prepare artwork for vinyl cutting. Two main designs were developed:
Co-designed the tapping and swiping automation system with Hayley Bloch, then assembled and troubleshooted the mechanisms to ensure reliable operation. The system integrates servo-driven actuators with precise motion control for synchronized tapping and swiping actions.
Following the tapping and swiping automation, worked on early iterations of the person follower system. Shared references, helped with code logic, provided implementation code from references, discussed technical issues, and collaborated with programmers on the team to develop the face-tracking and person-following functionality.
Assembled and integrated the complete actuation and automation system with other subsystem teams. This involved coordinating the tapper, swiper, person follower, and camera systems into a unified control architecture.
Assembled the head inner subsystem, which houses the camera, display, and control electronics. Integrated this subsystem with other teams' components to create a cohesive machine head assembly.
Assembled and integrated the complete Brainrot9000 machine, bringing together all subsystem components into a fully functional automated system. Coordinated with multiple teams to ensure proper integration of mechanical, electrical, and software components.
Milled a custom PCB for the speaker/amplifier subsystem using the Othermill machine, creating the circuit board that interfaces the audio output with the phone holder amplifier system. The PCB was designed to integrate with the overall machine electronics and provide reliable audio signal routing. The milling process followed the standard operating procedures documented in Week 5 training documentation.
For complete design files including Gerber files for fabrication, see Speaker PCB in Design Files.
The midterm review was completed: a system diagram was posted on the final project site, remaining tasks were listed, a schedule was made, and a graded review meeting with the instructors was scheduled.
The system diagram for the MirrorAge Intrinsic Capacity Mirror project was posted on the final project page, showing the multimodal sensing stack, on-device inference layers, and real-time feedback channels.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score. View full system diagram →
The remaining tasks for the MirrorAge project were listed on the final project page and organized into five key areas.
A development timeline was created that aligned subsystem sprints with HTMAA milestones from Week 8 through Week 13.
A calendar hold was sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. The meeting was held and the agenda covered subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment.
The meeting slot was referenced in the midterm review schedule.
Midterm Review Completed: All required elements (system diagram, task list, schedule, and instructor meeting) were documented on the final project page midterm review section, which included featured subsystems, completed tasks, execution schedule, and review logistics.
Key concepts and processes from the injection molding training session, anchored to the Slack recap (Slack).
Injection molding is a manufacturing process for producing parts by injecting molten material into a mold. Reference: Schematic diagram of an injection molding machine.
Students can create injection molds using generic mold blanks with core and cavity components.
Reference: Injection molding animation — think of yourself as the plastic pellet traveling through the process.
Key insights and lessons learned from this week's mechanical and machine design work.
[Placeholder: Contributions section will be added here]
This Week 10 documentation was created with the assistance of Cursor AI (Auto). The AI assistant helped with documentation, organization, HTML formatting, and content refinement. All design decisions, technical implementations, and project contributions documented here are my original work.
I maintained full control over all design decisions, technical implementations, content selection, and final review. All machine designs, component contributions, system architecture, and project work documented here represent my original contributions to the group project.
For full transparency, the complete transcript of AI assistance for Week 10 is available in both formats:
The transcript includes detailed documentation of all AI-assisted tasks, file management, content organization, and quality assurance processes.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.