# Final project documentation updates

_Exported on 12/15/2025 at 21:18:07 EST from Cursor (2.1.39)_

---

**User**

General commands for Cursor:

- For each image or video, add a caption and description.
- Make all the text professional and succinct without losing key details.
- Make the information as organized and as aesthetically pleasing as possible.
- Don't just take my notes from the prompt; refine them before putting them on the page.

1. Take a summary of my Week 14 system integration from week14.html and make a System Integration section in finalproject.html (find the best place to put it), adding each day as a subsection. Make sure System Integration is in the table of contents, add a link on finalproject.html to the full details on week14.html, and add links throughout the summary to important spots of the system integration documentation.
2. Add a Final Project Masterpiece section on finalproject.html and embed the video from week14/highlights/finalprojectdemo.mp4 with a brief description. Link it in week14.html and also link it on YouTube: https://youtu.be/4mIPPKHv3AU?si=bMCngMzhDp3IjFNQ
3. Update the final project highlights with new rows (keep the existing highlights as is); just add the same system integration highlights from week14.html as additional rows beneath them. Make sure everything links to where it is.

---

**Cursor**

Reviewing the relevant files to understand the structure.

Searching for specific sections in these files:

Reading specific sections to understand the structure:

Searching for system integration highlights in week14.html:

Reading the day sections to create summaries:

Finding the best place in finalproject.html to add the system integration section:

Creating the system integration section and updating the necessary parts. Starting with a todo list:

Adding the system integration section after week13:

```html
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the Gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
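The comparisons against published curves and desktop baselines reduce to an error metric such as RMSE. A minimal sketch with synthetic numbers (illustrative values, not the actual ReactionAge or golden-reference data):

```python
import numpy as np

def rmse(pred, ref):
    """Root-mean-square error between embedded predictions and a reference."""
    pred, ref = np.asarray(pred, dtype=float), np.asarray(ref, dtype=float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

# Synthetic example: device latencies (ms) vs. a published regression curve.
device_ms    = [212.0, 218.5, 225.0, 231.5, 240.0]
reference_ms = [210.0, 220.0, 224.0, 233.0, 238.0]
error = rmse(device_ms, reference_ms)  # small error -> embedded curve agrees
```

In practice the CSV logs from the ESP32S3 would replace the hard-coded lists before running the same comparison in the validation notebook.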
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 open-source grip strength measurement system. The design provides a strong foundation for integrating grip strength into our intrinsic capacity assessment platform, with room for further cost reduction by multiplexing signals on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
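As a rough sketch of attention-based fusion over per-modality feature vectors (the dimensions, modality list, and scaled dot-product scoring here are illustrative assumptions, not the deployed pipeline):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(modality_vecs, query):
    """Weight per-modality embeddings by query similarity, then combine."""
    keys = np.stack(modality_vecs)                # (n_modalities, d)
    scores = keys @ query / np.sqrt(len(query))   # scaled dot-product scores
    weights = softmax(scores)                     # one weight per modality
    return weights @ keys, weights                # fused (d,) vector + weights

rng = np.random.default_rng(0)
d = 8
# Hypothetical embeddings for grip, voice, face, motion, and reaction time.
vecs = [rng.normal(size=d) for _ in range(5)]
fused, w = attention_fuse(vecs, query=rng.normal(size=d))
```

The fused vector would then feed the downstream intrinsic capacity regressor; learned query/key projections replace the raw vectors in a trained model.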
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Camera subsystem built on the XIAO ESP32S3 Sense with real-time image processing, edge-AI integration, and interactive selfie capture.
The camera system captures images, converts them to bitmaps, and displays them on the OLED screen using Floyd-Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures, processes, and displays a new image every 60 seconds.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
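The processing chain in step 4c (box-average downsample, linear contrast stretch, Floyd-Steinberg dithering) can be sketched in Python/NumPy; the firmware implements the equivalent routine in C++ on the ESP32S3, so the function names and the synthetic test frame below are illustrative, not the firmware API:

```python
import numpy as np

def downsample_box(gray, out_h=64, out_w=128):
    """Box-average a grayscale frame down to the 128x64 OLED resolution."""
    h, w = gray.shape
    bh, bw = h // out_h, w // out_w
    # Trim so the frame divides evenly into blocks, then average each block.
    trimmed = gray[:bh * out_h, :bw * out_w].astype(np.float32)
    return trimmed.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))

def contrast_stretch(img):
    """Linearly rescale pixel values to span the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return (img - lo) * (255.0 / (hi - lo))

def floyd_steinberg(img):
    """Threshold each pixel, diffusing the quantization error to neighbors."""
    img = img.astype(np.float32).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new > 0 else 0
            err = old - new
            # Standard Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Example: a horizontal gradient standing in for a captured 128x256 frame.
frame = np.tile(np.linspace(0, 255, 256), (128, 1))
bitmap = floyd_steinberg(contrast_stretch(downsample_box(frame)))
```

A smooth gradient dithers into a bitmap whose density of lit pixels tracks local brightness, which is why the OLED previews read clearly despite being 1-bit.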
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
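The trigger logic in steps 4b–4d — fire on a rising-edge touch press OR a 60-second timeout, and reset the timer either way — can be modeled in a few lines (a minimal Python sketch of the firmware behavior; the class and parameter names are hypothetical):

```python
class CaptureTrigger:
    """Decide when to capture: on a touch press edge or a 60 s timeout."""

    def __init__(self, interval_s=60.0):
        self.interval_s = interval_s
        self.last_capture = 0.0
        self.prev_touched = False

    def update(self, touched, now):
        # Rising-edge detection mirrors the firmware's justPressed check.
        just_pressed = touched and not self.prev_touched
        self.prev_touched = touched
        timed_out = (now - self.last_capture) >= self.interval_s
        if just_pressed or timed_out:
            self.last_capture = now  # either trigger resets the auto timer
            return True
        return False

trigger = CaptureTrigger()
# Simulated (time, touched) samples: a touch held for two readings,
# then a long quiet period that lets the 60 s timer elapse.
events = [trigger.update(t, s) for s, t in
          [(1, False), (2, True), (3, True), (4, False), (70, False)]]
```

A held touch fires exactly once (at the press edge), and the auto-capture timer only fires again after a full interval with no manual trigger.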
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components and preparing subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the Gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
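On the notebook side, the logged packets can be summarized per biomarker channel. This is a minimal sketch only: the column names (`timestamp_ms`, `grip_kg`, `reaction_ms`) are hypothetical placeholders for illustration, not the firmware's actual packet schema.

```python
import csv
import io
import statistics

def summarize_log(csv_text):
    """Parse device log lines and summarize each biomarker channel.

    Column names (timestamp_ms, grip_kg, reaction_ms) are illustrative
    assumptions; the real firmware defines its own packet schema.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    summary = {}
    for field in rows[0]:
        if field == "timestamp_ms":  # skip the time axis itself
            continue
        values = [float(r[field]) for r in rows]
        summary[field] = {
            "mean": statistics.fmean(values),
            "stdev": statistics.pstdev(values),
        }
    return summary
```

A notebook would call `summarize_log()` on each session's CSV dump before feeding the channel statistics into the fusion model.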
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
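These comparisons reduce to a common metric: root-mean-square error between the embedded prediction and the reference curve. The helper below is a generic illustration of that metric, not the project's validation notebook itself.

```python
import numpy as np

def rmse(predicted, reference):
    """Root-mean-square error between device output and a baseline curve."""
    p = np.asarray(predicted, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((p - r) ** 2)))
```

For example, comparing logged ReactionAge latencies against a published regression evaluated at the same ages yields a single ms-scale figure of the kind quoted above.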
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
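The fusion step above can be sketched as scaled dot-product attention over per-modality embeddings. This is a toy illustration under stated assumptions: the scoring query would be learned in the real pipeline, but is random here, and the embedding dimension is arbitrary.

```python
import numpy as np

def attention_fuse(embeddings):
    """Fuse per-modality feature vectors via attention-style weighting.

    Scores each modality embedding against a query vector, softmaxes the
    scores into weights, and returns the weighted sum. The query is a
    random stand-in for learned parameters (illustration only).
    """
    X = np.stack(embeddings)                 # (n_modalities, d)
    rng = np.random.default_rng(42)
    q = rng.standard_normal(X.shape[1])      # stand-in for a learned query
    scores = X @ q / np.sqrt(X.shape[1])     # scaled dot-product scores
    w = np.exp(scores - scores.max())
    w /= w.sum()                             # softmax attention weights
    return w @ X, w                          # fused vector and weights
```

Each of the six biomarker channels would contribute one embedding; the returned weights indicate how much each modality drove the fused representation.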
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
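The image-processing stage in step 4c can be sketched in Python. This mirrors the described pipeline (box-average downsample, linear contrast stretch, Floyd-Steinberg dithering) but is an offline illustration, not the device firmware; array shapes are assumptions.

```python
import numpy as np

def process_frame(gray, out_w=128, out_h=64):
    """Reduce a grayscale frame to a 128x64 1-bit OLED bitmap."""
    h, w = gray.shape
    # 1. Downsample via box averaging: mean over each source block.
    ys = np.linspace(0, h, out_h + 1, dtype=int)
    xs = np.linspace(0, w, out_w + 1, dtype=int)
    small = np.array([[gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(out_w)] for i in range(out_h)])
    # 2. Contrast stretch: linearly rescale to the full 0-255 range.
    lo, hi = small.min(), small.max()
    img = (small - lo) * (255.0 / max(hi - lo, 1e-6))
    # 3. Floyd-Steinberg dithering: threshold each pixel and diffuse
    #    the quantization error to unvisited neighbors.
    bitmap = np.zeros_like(img, dtype=np.uint8)
    for y in range(out_h):
        for x in range(out_w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            bitmap[y, x] = 1 if new else 0
            err = old - new
            if x + 1 < out_w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < out_h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < out_w:
                    img[y + 1, x + 1] += err * 1 / 16
    return bitmap
```

The firmware performs the equivalent steps in C++ on the raw camera frame buffer before writing the bitmap to the SSD1306.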
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
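The `justPressed` detection in step 4b is a rising-edge check on the raw touch reading, so a held finger triggers only one capture. A minimal sketch follows; the threshold value and its direction (reading rises when touched) are illustrative assumptions, not the firmware's actual calibration.

```python
class TouchButton:
    """Tracks a capacitive touch reading and reports rising edges."""

    def __init__(self, threshold=40000):
        self.threshold = threshold      # illustrative calibration value
        self._was_pressed = False
        self.just_pressed = False

    def update(self, raw_reading):
        pressed = raw_reading > self.threshold
        # Fire only on the released-to-pressed transition, so holding a
        # finger on the pad produces a single capture trigger.
        self.just_pressed = pressed and not self._was_pressed
        self._was_pressed = pressed
        return self.just_pressed
```

In the main loop, `update()` would be called once per iteration with the latest touch reading, and a `True` return (or the 60-second timer) triggers a capture.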
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I adapted the code for Quentin's QPAD PCB design with a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I am also exploring ML model integration, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live-streaming real-time video with ML predictions that update as the observation window varies, and evaluating Edge Impulse models as a faster alternative to manual Python-to-C++ conversion.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components and preparing subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
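The reaction-time comparison above reduces to a root-mean-square error between device-measured latencies and the values predicted by the published regression. A minimal sketch of that check in Python (the function name and data layout are illustrative assumptions, not the project's actual notebook code):

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error between device readings and reference values."""
    assert len(predicted) == len(reference) and predicted
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted)
    )
```

Feeding ReactionAge latencies and the Blomkvist et al. regression values for the same ages into a function like this is how a figure such as 4.6 ms RMSE would be computed.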
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 open-source grip strength measurement system. This design provides a strong foundation for integrating grip strength measurement into the intrinsic capacity assessment platform, with potential for further cost reduction by multiplexing signals on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
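A minimal sketch of the attention-weighted fusion step, assuming each modality has already been embedded as a fixed-length feature vector; the per-modality relevance logits stand in for a learned attention layer and are an illustrative simplification:

```python
import math

def attention_fuse(features, scores):
    """Softmax-weight per-modality feature vectors and sum them into one
    fused embedding. `features` is a list of equal-length vectors (one per
    modality); `scores` holds one relevance logit per modality."""
    m = max(scores)                                  # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]              # softmax over modalities
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]
```

With equal logits, every modality contributes equally; raising one modality's logit shifts the fused vector toward that modality's features.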
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
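The processing stage above (contrast stretch followed by Floyd-Steinberg dithering) can be sketched in Python as a desktop model of the same logic; this is illustrative, not the ESP32S3 C++ firmware, and the flat grayscale-list representation and function names are assumptions:

```python
def contrast_stretch(gray):
    """Linearly rescale grayscale pixel values to span the full 0-255 range."""
    lo, hi = min(gray), max(gray)
    if hi == lo:
        return [0] * len(gray)
    return [(p - lo) * 255 // (hi - lo) for p in gray]

def floyd_steinberg(gray, w, h):
    """1-bit dither: threshold each pixel and diffuse the quantization error
    to unvisited neighbors (7/16 right, 3/16 down-left, 5/16 down, 1/16 down-right)."""
    px = [float(p) for p in gray]
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            old = px[i]
            new = 255.0 if old >= 128 else 0.0
            out[i] = 1 if new else 0
            err = old - new
            if x + 1 < w:
                px[i + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[i + w - 1] += err * 3 / 16
                px[i + w] += err * 5 / 16
                if x + 1 < w:
                    px[i + w + 1] += err * 1 / 16
    return out
```

On a uniform mid-gray patch the error diffusion produces the expected alternating on/off pattern, which is what gives the OLED preview its grayscale appearance despite 1-bit pixels.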
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
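The `justPressed` edge detection in step 4b can be modeled as a small state machine: report a press only on the untouched-to-touched transition, so holding a pad does not retrigger capture. A minimal Python sketch (the class name and greater-than threshold direction are assumptions; on the ESP32S3 touch readings rise when a pad is touched, the opposite of the original ESP32):

```python
class TouchButton:
    """Edge-detects a raw touch reading against a threshold, reporting
    just_pressed only on the frame where the pad first becomes touched."""

    def __init__(self, threshold):
        self.threshold = threshold
        self._was_touched = False
        self.just_pressed = False

    def update(self, reading):
        touched = reading > self.threshold
        self.just_pressed = touched and not self._was_touched
        self._was_touched = touched
```

In the main loop, a capture is triggered when `just_pressed` is true or the 60-second timer has elapsed, matching the pseudocode above.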
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I adapted the code for Quentin's QPAD PCB design, which uses a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps using the image processing algorithms described above, and displays them on the OLED screen. I am also exploring integration with ML models, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components and preparing subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
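The latency comparison behind figures like the 4.6 ms ReactionAge RMSE can be sketched as a simple NumPy computation; the sample values below are hypothetical placeholders, not measured data.

```python
import numpy as np

def rmse(predicted, reference):
    """Root-mean-square error between embedded predictions and a baseline curve."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Hypothetical latency samples (ms): device log vs. published regression values
device_ms = [212.0, 305.5, 287.0, 251.2]
baseline_ms = [210.0, 301.0, 290.5, 248.0]
error_ms = rmse(device_ms, baseline_ms)
```

The same helper applies unchanged to FaceTTD and VoiceAge regression outputs logged to CSV.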
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
An open-source $50 grip strength meter that provides a strong foundation for integrating grip measurement into the intrinsic capacity assessment platform, with room for further cost reduction by multiplexing signals onto a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
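A minimal sketch of the attention-based fusion step, assuming each modality yields an equal-length feature vector; the norm-based scorer here is a simple stand-in for a learned attention module, and all names and values are illustrative.

```python
import numpy as np

def attention_fuse(features):
    """Fuse per-modality feature vectors with scalar attention weights.

    features: dict of modality name -> 1-D feature vector (equal length).
    Weights are a softmax over each vector's L2 norm (a placeholder for
    a learned attention scorer).
    """
    names = sorted(features)
    mat = np.stack([features[n] for n in names])  # (modalities, dim)
    scores = np.linalg.norm(mat, axis=1)          # one score per modality
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over modalities
    fused = weights @ mat                         # weighted sum -> (dim,)
    return fused, dict(zip(names, weights))

fused, w = attention_fuse({
    "grip":  np.array([0.8, 0.1, 0.3]),
    "voice": np.array([0.2, 0.9, 0.4]),
    "face":  np.array([0.5, 0.5, 0.5]),
})
```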
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Camera system built on the XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture.
The camera system captures images, converts them to bitmaps, and displays them on the OLED screen using Floyd–Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays an image every 60 seconds using box-average downsampling, contrast stretching, and Floyd–Steinberg dithering.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
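The image-processing stage of the loop above (step 4c) can be sketched off-device in NumPy; this mirrors the firmware's box-average downsample, contrast stretch, and Floyd–Steinberg dither, with a random frame standing in for the camera buffer.

```python
import numpy as np

def downsample_box(img, out_h, out_w):
    """Box-average downsample: mean over equal blocks (assumes divisibility)."""
    h, w = img.shape
    return img.reshape(out_h, h // out_h, out_w, w // out_w).mean(axis=(1, 3))

def contrast_stretch(img):
    """Linear scaling of pixel values to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return (img - lo) * 255.0 / (hi - lo)

def floyd_steinberg(img):
    """1-bit Floyd-Steinberg dithering; returns a 0/1 bitmap for the OLED."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new  # diffuse quantization error to neighbors
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return (out >= 128).astype(np.uint8)

# Stand-in for a camera frame; the real pipeline targets a 128x64 OLED
frame = np.random.default_rng(0).integers(0, 256, (256, 512)).astype(float)
bitmap = floyd_steinberg(contrast_stretch(downsample_box(frame, 64, 128)))
```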
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
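The trigger logic in steps 4b–4d (a touch press edge OR-ed with the 60-second timer) can be sketched as a small testable class; time is passed in explicitly so the logic runs off-device, and the class name is illustrative rather than taken from the firmware.

```python
class CaptureTrigger:
    """Fire a capture on a touch press event or when the interval elapses."""

    def __init__(self, interval_ms=60_000):
        self.interval_ms = interval_ms
        self.last_capture_ms = 0
        self.prev_touched = False

    def update(self, touched, now_ms):
        # "justPressed": low -> high transition of the touch reading
        just_pressed = touched and not self.prev_touched
        self.prev_touched = touched
        timer_elapsed = (now_ms - self.last_capture_ms) >= self.interval_ms
        if just_pressed or timer_elapsed:
            self.last_capture_ms = now_ms  # reset the auto-capture timer
            return True
        return False

trigger = CaptureTrigger()
trigger.update(True, 100)  # press edge -> capture fires
```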
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps, and displays them on the OLED screen. I'm also exploring ML model integration, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and acquiring a CAD design backbone for watch-band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components and preparing subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
System Integration
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation.
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
Related: See the Development Timeline on the final project page for the overall project schedule.
Initial system integration work focused on subsystem validation, CAD model consolidation, and acquiring a CAD design backbone for watch-band integration.
Conducted comprehensive testing of all subsystems to ensure proper functionality. Identified and resoldered any defective joints, verifying that each subsystem operates correctly before proceeding with full system integration.
Exported CAD models of all PCBs in the final design and imported them into a unified CAD assembly. This consolidated model provides a complete view of the system's mechanical integration and spatial relationships.
Acquired CAD design files for Apple Watch form factor integration, enabling compatibility with existing band designs and standardized watch components. This provides a proven mechanical foundation for the wearable subsystem.
Resuming the daily schedule tomorrow with parallel work streams prioritized according to the critical path. Tasks that cannot be completed during scheduled days will be shifted to buffer days on Saturday and Sunday to maintain project momentum.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation.
Mechanical design integration now includes the complete demo table with precise placement of the MirrorAge handgrip subsystem and the aging clock subsystem. The integrated assembly provides a unified platform for system demonstration and testing.
All 3D additive mechanical components were printed for rapid iterative development. The fabrication process required specific print settings and support strategies for different components:
The first 2D subtractive mechanical component was debugged and fabricated. The display face component required exporting a DXF of the full face to laser cut an acrylic display that fits the casing precisely. The original display part face was too small because it was filleted, so the edges of the aging clock body were traced and exported for accurate laser cutting.
Electrical design integration combines multiple subsystems into unified board designs, with progressive upgrades incorporating TFT displays for enhanced visual feedback. The integration follows a spiral development approach, starting with base designs and iteratively adding features like TFT displays as design constraints are validated.
Both the aging clock watch board and the MirrorAge load cell-speaker board have been upgraded to include TFT displays for enhanced visual feedback. The displays use SPI communication for straightforward integration with the ESP32-S3 XIAO microcontroller.
Based on the Seeed Studio XIAO ESP32-C3 ILI9341 wiring guide, adapted for ESP32-S3 XIAO:
| TFT Pin | ESP32-S3 XIAO Pin |
|---|---|
| VCC | 3V3 or 5V |
| GND | GND |
| CS | D0 |
| RST | D2 |
| DC/RS | D1 |
| SDI (MOSI) | D10 |
| SCK | D8 |
| LED (Backlight) | 3V3 |
Note: For both 1.8" and 2.4" TFT displays, the LED backlight pin connects to 3V3. Additional references: 1.8" TFT Guide, 2.8" ILI9341 Guide, ESP32-S3 ILI9341 Guide
For future iterations incorporating touchscreen functionality, the ILI9341 TFT with integrated touch controller requires additional pin connections. The touch controller shares the SPI bus with the TFT display and uses a separate chip select pin.
| Touch Pin | ESP32 XIAO Pin |
|---|---|
| T_CLK | SCK (GPIO8/D8) |
| T_CS | GPIO21/D21 |
| T_DIN | MOSI (GPIO10/D10) |
| T_DO | MISO (GPIO9/D9) |
Important Notes:
Reference Resources:
The MirrorAge amplifier board integrates a real-time audio amplifier with load cell interface and TFT display. The base design includes the amplifier circuit and HX711 load cell interface, with an upgraded version adding the 2.4" ILI9341 TFT display.
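Converting the HX711's raw counts into a grip-force reading reduces to a tare offset and a calibration slope; the sketch below assumes hypothetical count values and a single-point calibration with a known test mass.

```python
def counts_to_kg(raw_counts, tare_counts, counts_per_kg):
    """Convert a raw HX711 reading to kilograms.

    tare_counts: reading with no load (zero offset)
    counts_per_kg: slope from a known-mass calibration
    (both values below are hypothetical placeholders)
    """
    return (raw_counts - tare_counts) / counts_per_kg

# Example calibration: a 5.00 kg test mass raised the reading by 420_000 counts
COUNTS_PER_KG = 420_000 / 5.00
print(round(counts_to_kg(8_500_000, 8_080_000, COUNTS_PER_KG), 2))  # → 5.0
```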
1.8" TFT LCD: ST7735 controller, 128×160 pixels. CAD model reference: GrabCAD ST7735 Model
The aging clock watch board combines pulse oximeter sensors, accelerometer, and camera functionality. The base design uses an OLED display, with an upgraded version incorporating a 1.8" ST7735 TFT display for enhanced visual feedback.
The first version of the aging clock board was milled. The initial board did not fit the casing, requiring either adjusting and reprinting the casing, or milling a new miniaturized board with a more compact component layout. Both approaches are being evaluated. The MirrorAge board design has been completed for both base and TFT-integrated versions, ready for milling and testing.
Comprehensive fabrication integration work combining 2D laser cutting, 3D printing, and PCB milling to create fully integrated subsystems ready for assembly and testing.
The laser-cut acrylic display component was updated in Day 2 and successfully integrated into the watch assembly. The acrylic display provides a clear protective face for the aging clock watch, precisely fitted to the casing dimensions.
The integrated watch assembly demonstrates successful integration of 2D subtractive manufacturing with 3D printed components, creating a cohesive mechanical structure for the wearable subsystem.
All 3D printed components were fabricated using both rigid and flexible materials. The bands were printed using TPU (Thermoplastic Polyurethane) for flexibility, while rigid components used standard PLA/PETG materials. Additional print details and configuration requirements are documented in Day 2.
The case should also be printed from TPU to improve interlock part fit. The spring-loaded interlock mechanism enables secure band closure while maintaining flexibility for comfortable wear.
The aging clock watch board was successfully milled. The base design with OLED display worked correctly, and the design has been upgraded to include a TFT display. The new TFT-integrated design will be milled and tested alongside the base OLED version to validate both display options.
The base MirrorAge amplifier board milling initially failed due to polygon pour isolation settings. The default isolation distance was set to 0 mil, which caused routing issues. After changing the back-side polygon pour isolation to 32 mil, the milling operation succeeded. When the shop closed, the design was upgraded to include a TFT screen. Both the base design and TFT-integrated versions will be milled together and tested.
Physical integration of fabricated components demonstrates successful coordination between mechanical and electrical subsystems. The watch board fits within the watch case, and the acrylic display integrates properly. Additional work remains for drilling mounting holes for electronics and routing power connections.
Both board designs have been upgraded to include TFT displays, and integration testing confirms proper fit. The TFT watch board fits correctly with the TFT display, and both the base and TFT-integrated MirrorAge load cell-speaker boards fit within the gripper enclosure. Additional work is needed for drilling mounting holes and adding hooks for load cell wire routing to the HX711 mounted on the board, as well as routing power connections.
Comprehensive subsystem integration work combining 2D fabrication, 3D printing, PCB milling, and molding/casting to complete all physical components for final assembly and testing.
With the milling machine unavailable, the time was used to upgrade board designs, requiring component rotations and layout optimizations to ensure all components fit properly. As Quentin noted, "you have to get used to rotating it in your head"—a valuable skill for PCB design where spatial reasoning is essential for component placement and routing.
"you have to get used to rotating it in your head" — Quentin
The MirrorAge TFT board design was upgraded to version 13, with component rotations and layout optimizations to improve fit and routing efficiency.
The aging clock TFT board design was upgraded to version 16, incorporating component rotations and layout refinements to ensure proper fit within the watch casing constraints.
Subsystem integration also focused on making the shared I²C bus predictable and debuggable. During flashing, an intermittent serial exception from esptool indicated that the USB device was reporting readiness but returning no data—a classic symptom of a loose cable, port contention, or transient power issue rather than a firmware bug. After power-cycling the board, checking that only one serial monitor was attached, and reseating the USB cable, flashing succeeded reliably.
With stable flashing restored, an I²C scanner sketch confirmed that all devices were present on the bus: the MAX30102 pulse oximeter at address 0x57, the MPU6050 accelerometer at 0x68, and the OLED display at 0x3C once it was wired in. Multiple scans over time reproduced the same addresses, validating bus integrity and confirming that the SDA/SCL routing and pull-up strategy were correct after earlier trace and enclosure fit issues.
Tip: record the confirmed bus addresses (MAX30102 0x57, MPU6050 0x68, SSD1306 OLED 0x3C) and document them close to the schematic and firmware. Reference: esptool Troubleshooting Guide for interpreting serial upload errors and recommended recovery steps.
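With the addresses fixed and documented, firmware (or a host script reading scanner output) can fail fast when a device drops off the bus. A small sketch of that check, using the address map confirmed by the scans above (the helper is hypothetical, not part of the actual firmware):

```python
# Documented bus map from the I2C scans above.
EXPECTED = {0x57: "MAX30102", 0x68: "MPU6050", 0x3C: "SSD1306 OLED"}

def check_bus(found_addresses) -> dict:
    """Compare a scan result against the documented address map.

    Returns {"missing": [...], "unexpected": [...]} so a startup check can
    report exactly which device is absent instead of failing silently.
    """
    found = set(found_addresses)
    return {
        "missing": sorted(EXPECTED[a] for a in EXPECTED.keys() - found),
        "unexpected": sorted(hex(a) for a in found - EXPECTED.keys()),
    }
```

A healthy scan returns empty lists for both keys; a scan missing 0x3C reports the OLED by name, which matches the debugging workflow described above.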
Laser-cut thinner acrylic (1.35mm) was fabricated to improve the fit of the clear display cover for the aging clock subsystem. The thinner material provides better dimensional tolerance and improved integration with the watch casing.
Band clips were printed using TPU (Thermoplastic Polyurethane) material for improved fit and flexibility. The flexible material provides better mechanical compliance and secure attachment to the watch body.
The TFT aging clock board design was milled, requiring iterative refinement due to initial design issues. The first mill had traces that were too close together, and the rivets were positioned under the microcontroller, requiring manual fixes with an ultrasonic knife before remilling the corrected board.
The initial milling process revealed design issues that required correction. The board was milled on both front and back sides, with careful tool location and fixturing procedures.
After milling, inspection revealed traces that were too close together and rivets positioned under the microcontroller. These issues were corrected manually using a two-step cutting process: the ultrasonic knife was used to initiate precise cuts on both ends of each problematic trace, establishing the correct penetration direction. A utility knife was then used to cut a straight line between these initiation points, isolating the upper and lower wiring. The challenge with manual trace cutting lies in controlling the initial penetration with the utility knife—the ultrasonic knife's precision in establishing the cut direction made the subsequent utility knife cut smoother and easier to complete from end to end. After these manual corrections, the board design was updated and remilled with the fixed layout.
After manual corrections, the board design was updated to move rivets away from the microcontroller and improve trace spacing. The corrected board was successfully milled with proper trace clearances and component placement.
The base amplifier board was remilled after fixing polygon pour isolation settings (changed from 0 to 32 mil). The upgraded MirrorAge board with TFT display was also successfully milled, ready for soldering and testing.
Starting from Anthony's sketch in the midterm review, 3D-printable open molds were designed to cast watch bands out of MoldStar 30. The process involved printing PLA molds, waxing for release, mixing and pouring the mold material, leveling, curing under a heat lamp, and demolding. The center section of the printed molds required cutting with an ultrasonic knife before pouring to ensure part removability. A backup TPU band was also printed in case of casting failure.
The molds required cutting out the center section with an ultrasonic knife before waxing and casting. Wax was melted, painted onto the mold surfaces for release, and thinned as needed for proper coverage.
MoldStar 30 was mixed according to specifications, poured into the prepared molds, and leveled to ensure proper distribution and surface quality.
The filled molds were moved under a heat lamp for curing, then demolded and tested to ensure proper fit between the two band halves.
The TFT aging clock board was successfully assembled into the watch enclosure, demonstrating successful integration of all fabricated components including the milled PCB, TFT display, and watch casing.
Final integration day focused on closing the watch subsystem as far as mechanically feasible, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested.
I attempted to fully close the aging clock watch subsystem using the laser-cut acrylic top plate and printed enclosure. In practice, finite solder height and stacked components meant the acrylic pressed directly on the SDA/SCL traces; under load, the pressure ripped the I²C traces off the board even after successful end-to-end testing. For the final demo, I kept the subsystem open for reliability and prepared a backup pulse oximeter module to rework the board after the full system presentation.
Future electrical spirals will set stricter keep-out zones under fasteners and enclosure pinch points, lower component stack-up height in CAD, and route critical buses away from mechanical interfaces. The current integration still validates the fit envelope and cable-routing strategy, and will be re-closed once replacement boards and sensors are available.
For future iterations, see the consolidated electrical design and integration notes on the final project fabrication section and the updated Bill of Materials for enclosure and sensor stack-up choices.
To mount the bands inside the watch clip, I drilled precise holes and inserted inward-facing bolts that clamp into printed TPU clips. TPU provides just enough flexibility and toughness for this fastening strategy, whereas a similar approach would easily tear a silicone band. In the next spiral, the mechanics of the clip will be redesigned so the silicone band experiences distributed compression instead of point loads from metal hardware.
The integrated system view below shows how the TPU band, watch body, and electronics stack come together around the wrist, and serves as the mechanical reference for a future fully cast silicone band with embedded reinforcement.
Both the aging clock and MirrorAge boards are double-sided with plated through-rivets. I followed a consistent, compact assembly order: (1) install and flare rivets, (2) solder the microcontroller, (3) place and solder passives, then (4) populate and test input/output devices. The media below document that workflow step by step.
After through-connection, I placed the microcontroller and passives as flat as possible to preserve headroom inside the enclosure. Headers were soldered last, from both the top and side, to keep cable strain away from delicate pads.
To keep the XIAO footprint perfectly flat over the rivets, I used Kapton tape as a temporary stencil: the tape masked exposed copper around the footprint so I could drag-solder the castellated edges without flooding nearby vias. Once the joints were complete, the tape peeled away cleanly, leaving the board ready for final inspection.
After soldering, I brought each sensor up sequentially to isolate any issues. The accelerometer, pulse oximeter, OLED, and TFT were each verified individually over serial before enabling the full integrated sketch.
All boards passed short-circuit checks and per-sensor serial validation. Remaining electrical risks are limited to load cell hardware quality and long-term connector strain, which are documented for post-demo spirals.
The cast bands are mechanically promising but still need finishing. The edges require additional trimming and sanding, and one edge showed an incomplete fill due to insufficient waxing of the mold. Compared to the TPU prints, the casting material has better compliance and skin feel, and will be the preferred long-term solution once the fastening geometry is iterated.
For this week, I used the printed TPU bands for reliability, while documenting the cast band behavior as a future spiral that aligns with the existing mold designs and material choices.
The aging clock subsystem was finalized by cutting and aligning the pulse oximeter opening so the sensor makes direct skin contact while the watch is worn. That opening simultaneously sets the board position inside the enclosure, constraining the stack according to the system-integration CAD.
With the TFT watch board, sensors, and acrylic window all integrated, the subsystem mechanically mirrors the final form factor even though the top remains open for this demo to protect the repaired traces.
The grip-strength electronics from Week 12 were mounted on the calibrated working load cell and integrated with the new MirrorAge electronics. Wires from the load cell and amplifier are routed along the structure and strain relieved with a moderate amount of glue to balance robustness with serviceability. The boards sit in their dedicated pockets according to the system-integration CAD, keeping the mechanical and electrical reference frames aligned.
With both the aging clock and grip-strength subsystems assembled, I brought the entire MirrorAge demo together on the tensegrity table. The videos below show the system running after final debugging, including sensor readouts and synchronized display behavior across the boards.
Final presentation demonstration of the complete MirrorAge system, showcasing the integrated aging clock and grip-strength subsystems working together in real-time.
This final presentation video demonstrates the complete MirrorAge system in action, featuring both the aging clock watch subsystem and the grip-strength measurement subsystem operating together. The demo showcases real-time sensor readings, synchronized display updates, and the full integration of mechanical, electrical, and software components developed throughout the semester.
Note: A compressed local version is also available below for offline viewing. For the highest quality and full resolution, please view the video on YouTube.
Comprehensive table of all design files used in the system integration, organized by category with descriptions and links to their usage locations.
📁 Fusion 360 Files: All Fusion 360 design files are available in the HTMA Fusion team directory under HTMA/2025/EECS/Saleem, organized by the same weeks as the website structure.
| File Name | Description | Category | Used In | Download |
|---|---|---|---|---|
| PCB Design Files | ||||
| agingclock_tft.brd | Aging clock watch board with TFT display integration (pulse oximeter, accelerometer, camera) | PCB Design | Day 2, Day 4 | Download |
| mirrorage.brd | Base MirrorAge amplifier board with load cell interface (HX711) and real-time audio amplifier | PCB Design | Day 2, Day 3 | Download |
| mirrorage_tft.brd | MirrorAge amplifier board with TFT display integration (2.4" ILI9341) | PCB Design | Day 2, Day 3, Day 4 | Download |
| design_xiao.kicad_pcb | KiCad PCB design for XIAO ESP32-S3 base board (non-touch version) | PCB Design | Day 2 | Download |
| design_xiao.kicad_pcb (touch) | KiCad PCB design for XIAO ESP32-S3 with touchscreen support | PCB Design | Day 2 | Download |
| QPAD21.kicad_pcb | KiCad PCB design for QPAD21 board (reference design) | PCB Design | Reference | Download |
| design_micro.kicad_pcb | KiCad PCB design for microcontroller board (reference) | PCB Design | Reference | Download |
| 3D Design Files | ||||
| WatchBody.3mf | Main watch body casing for aging clock subsystem | 3D Model | Day 2, Day 3 | Download |
| Dock.3mf | Dock component for board mounting and electrical connection | 3D Model | Day 2, Day 3 | Download |
| Display Glass.3mf | Display cover component for watch face protection | 3D Model | Day 2 | Download |
| Bands.3mf | Watch band components for wearable attachment | 3D Model | Day 2, Day 3 | Download |
| band-a-scaled.3mf | Scaled band component A for casting molds | 3D Model | Day 4 | Download |
| band-b-scaled.3mf | Scaled band component B for casting molds | 3D Model | Day 4 | Download |
| watchband_clip.3mf | Watch band clip component for secure attachment | 3D Model | Day 3, Day 4 | Download |
| watch v2.5-whole v1.3mf | Complete watch assembly model version 2.5 | 3D Model | Day 1 | Download |
| band-a-mold.3mf | 3D printable mold A for band casting | 3D Model | Day 4 | Download |
| band-b-mold.3mf | 3D printable mold B for band casting | 3D Model | Day 4 | Download |
| band-a-mold-print-10p.3mf | Optimized mold A print file with 10% infill | 3D Model | Day 4 | Download |
| band-b-mold-print-10p.3mf | Optimized mold B print file with 10% infill | 3D Model | Day 4 | Download |
| 2D Design Files | ||||
| square_2_final.svg | Final laser cutting design for display cover (1.35mm acrylic) | 2D Design | Day 4 | Download |
| square_2.svg | Intermediate laser cutting design iteration | 2D Design | Day 2 | Download |
| square.svg | Initial laser cutting design for display cover | 2D Design | Day 2 | Download |
| square_2.dxf | DXF format for laser cutting (intermediate iteration) | 2D Design | Day 2 | Download |
| square.dxf | DXF format for laser cutting (initial design) | 2D Design | Day 2 | Download |
| Code Files | ||||
| blink_ESP32S3.ino | Basic blink test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| test_display_ESP32S3.ino | TFT display test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| test_touch_ESP32S3.ino | Touchscreen test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| test_serial_ESP32S3.ino | Serial communication test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| Reference Design Files | ||||
| design_xiao.step | STEP file for XIAO ESP32-S3 mechanical reference | 3D Model | Reference | Download |
| Seeed Studio XIAO-ESP32-S3-Sense.step | STEP file for Seeed Studio XIAO ESP32-S3 Sense board | 3D Model | Reference | Download |
| TFT LCD 2.4 inch ILI9341 touch.step | STEP file for 2.4" ILI9341 TFT display with touch | 3D Model | Reference | Download |
| SSD1306_OLED_Display(128x64).step | STEP file for SSD1306 OLED display reference | 3D Model | Reference | Download |
| MPU6050.stp | STEP file for MPU6050 accelerometer reference | 3D Model | Reference | Download |
| 3006 MAX98357.step | STEP file for MAX98357A amplifier reference | 3D Model | Reference | Download |
| max98357a adafruit.SLDPRT | SolidWorks part file for MAX98357A amplifier reference | 3D Model | Reference | Download |
| apple-watch-se-2nd-gen-band-adapter20251028-1-u4zeat.zip | Apple Watch band adapter design files (ZIP archive) | 3D Model | Reference | Download |
Consolidated bill of materials for the integrated MirrorAge system used in this Week 14 system-integration build. This table mirrors the final project Bill of Materials and is repeated here so readers can cross-reference parts directly from the integration workflows.
For the narrative, project-wide context, see the final project BOM and the per-week line items in the weekly documentation (for example the ReactionAge BOM CSV in Week 2).
Project management view of the integration schedule, showing task dependencies, critical milestones, and current progress status.
Subsystem testing and validation, CAD model consolidation, demo table integration design, aging clock PCB design (accelerometer/camera + pulse oximeter), TFT-integrated aging clock board design, MirrorAge board design (base and TFT versions), 3D printing of mechanical components (casing with organic support, dock with brim, TPU bands and clips), laser-cut acrylic display (1.35mm thin version), aging clock board milling (with corrections and remilling), MirrorAge board milling (base and TFT versions), band mold design and casting (MoldStar 30), TFT display wiring documentation with touchscreen integration methods, aging clock subsystem assembly, Day 4 fabrication integration documentation
Refining physical integration (extra mounting, mirror film), extending firmware from working demo to networked IC pipeline, and documenting data-integration methods (serial, WiFi, MQTT) for future spirals.
WiFi/MQTT network setup, final IC scoring integration, one-minute video, and last presentation/documentation touches.
Tasks that cannot be completed during scheduled weekdays will be shifted to Saturday and Sunday buffer days. This ensures the critical path remains on schedule while allowing flexibility for unexpected delays or additional refinement work.
Progressive development approach for integrating sensor data streams from multiple subsystems, starting with simple serial communication and evolving to wireless network-based architectures.
Direct serial communication through USB cable from computer to microcontroller, using PySerial to stream data. This provides the simplest initial integration path for rapid prototyping and debugging.
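A minimal host-side sketch of this path, assuming the firmware prints comma-separated `key=value` telemetry lines (the exact line format emitted by the sketches may differ):

```python
def parse_line(line: str) -> dict:
    """Parse one 'key=value,key=value' telemetry line into a dict of floats."""
    fields = {}
    for pair in line.strip().split(","):
        key, _, value = pair.partition("=")
        fields[key] = float(value)
    return fields

# Typical PySerial read loop (requires `pip install pyserial`; port name varies):
# import serial
# with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
#     while True:
#         raw = port.readline().decode("ascii", errors="ignore")
#         if raw.strip():
#             print(parse_line(raw))
```

Keeping the parser separate from the serial loop makes it easy to unit-test against captured log lines before a board is even connected.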
Configure all microcontrollers as WiFi clients that stream data to a Flask web server. This enables wireless communication while maintaining a centralized data-aggregation point.
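The server's core job is aggregation: keep the latest reading from each board. A framework-agnostic sketch of that store (in practice it would sit behind a web route; the class and method names are illustrative):

```python
import time

class ReadingStore:
    """Latest-reading-per-board store for the central aggregation server."""

    def __init__(self):
        self._latest = {}

    def ingest(self, board_id: str, payload: dict, now: float = None) -> None:
        """Record the newest payload from a board, stamped with arrival time."""
        self._latest[board_id] = {
            "payload": payload,
            "received_at": time.time() if now is None else now,
        }

    def snapshot(self) -> dict:
        """Return the latest reading from every board that has reported."""
        return dict(self._latest)

# Each ESP32 client would POST its JSON payload to a route that calls
# store.ingest(board_id, payload); a dashboard route serves store.snapshot().
```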
Message Queuing Telemetry Transport (MQTT) provides a lightweight, publish-subscribe messaging protocol ideal for IoT device communication. This approach enables decentralized data exchange between all MirrorAge subsystems, allowing each board to publish sensor data and subscribe to relevant topics for coordinated system behavior.
Initial development and testing will use a laptop-hosted MQTT broker. In a later development spiral, the server will be migrated to a dedicated Raspberry Pi Zero or Raspberry Pi 5 for standalone operation and improved system portability. Paho MQTT library has been successfully tested for ESP32-S3 implementation.
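One way to organize the bus is a `mirrorage/<subsystem>/<metric>` topic scheme (the names are illustrative). The matcher below implements standard MQTT wildcard semantics — `+` spans one level, `#` the remainder — so subscription patterns can be checked offline before wiring them into Paho:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Standard MQTT wildcard matching: '+' spans one level, '#' the rest."""
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":  # multi-level wildcard; matches the remainder (or nothing)
            return True
        if i >= len(t_parts):
            return False
        if p != "+" and p != t_parts[i]:
            return False
    return len(p_parts) == len(t_parts)
```

For example, a display board subscribing to `mirrorage/+/heartrate` receives heart-rate messages from any subsystem, while a logger subscribing to `mirrorage/#` receives everything.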
The core firmware used during Week 14 for the aging clock watch and grip-strength subsystem is available below as both .ino source files and downloadable .zip archives.
These sketches currently run over simple serial/I²C links and will be extended in future spirals to stream data over WiFi/MQTT to the multimodal IC pipeline. The XIAO boards and TFT displays
also include SD card support for local backup storage once that code path is integrated.
aging_clock.ino initializes the MAX30102 pulse oximeter and IMU on the I²C bus, reads heart-rate and motion data, and prints structured serial lines for debugging.
The main loop continuously samples heart-rate and motion data and prints each reading over Serial for logging or GUI visualization.
Download:
view aging_clock.ino ·
download aging_clock.zip
aging_clock_oled.ino builds on the base sketch by adding an SSD1306 OLED UI: it renders the live readings on the display each cycle while still printing to Serial for logging.
Download:
view aging_clock_oled.ino ·
download aging_clock_oled.zip
aging_clock_tft.ino migrates the watch UI to a 1.8″ ST7735 TFT, rendering the same readings on the larger color display.
Download:
view aging_clock_tft.ino ·
download aging_clock_tft.zip
aging_clock_tft_slow.ino is a debug variant that intentionally slows down screen refresh to make timing and flicker issues visible, printing timing information to Serial to tune update rates.
Download:
view aging_clock_tft_slow.ino ·
download aging_clock_tft_slow.zip
loadcell_gui.ino drives the CZL601AC + HX711 grip-strength subsystem, reading calibrated force values and streaming them to a serial/GUI plot.
Download:
view loadcell_gui.ino ·
download loadcell_gui.zip
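The HX711 reports raw counts, so the sketch must convert them to force using a tare offset and a scale factor derived from a known weight. A minimal sketch of that two-point calibration (the constants below are illustrative, not the actual calibration values used):

```python
def calibrate(raw_tare: float, raw_known: float, known_kg: float) -> tuple:
    """Derive (offset, scale) from a tare reading and one known-weight reading."""
    offset = raw_tare
    scale = (raw_known - raw_tare) / known_kg  # counts per kilogram
    return offset, scale

def to_kg(raw: float, offset: float, scale: float) -> float:
    """Convert a raw HX711 count to kilograms using the calibration."""
    return (raw - offset) / scale
```

For instance, if the unloaded cell reads 8400 counts and a 5 kg reference reads 58400, the scale is 10000 counts/kg, and subsequent readings map linearly to force.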
Additional sketches (e.g., WiFi/MQTT clients, SD-card logging for the XIAO ESP32S3 and TFT shields) will be integrated in a future spiral once the serial and WiFi baselines are fully stabilized.
Development will progress through these methods in order of complexity, starting with serial communication for rapid prototyping, then moving to WiFi client architecture for wireless operation, and finally implementing MQTT for advanced decentralized communication. Each subsystem board (pulse oximeter, OLED/accelerometer, speaker/amplifier, load cell) will be integrated progressively, enabling incremental validation of each data path before the next layer of complexity is added.
Reflections and key learnings from the final project integration week.
Acknowledgments for contributions and collaborations during the final project week.
Documentation of AI-assisted tasks and tools used during the final project integration week.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI assistance for Week 14 Day 1 project integration updates, including design integration documentation and initial system integration planning.
Cursor AI assistance for Week 14 project integration updates, including Day 2 electrical and mechanical integration documentation, data integration methods refinement, and critical path analysis updates.
Cursor AI assistance for Week 14 Day 3 electrical integration and fabrication documentation, including TFT display wiring documentation, board design updates, fabrication integration methods, and Day 3 content updates.
Cursor AI assistance for Week 14 project integration updates, including Day 4 electrical and mechanical integration documentation, TFT display wiring documentation, board design updates, fabrication integration methods, design files table creation, and comprehensive Day 4 content updates.
Cursor AI assistance for final MirrorAge documentation and system integration, including Day 5 full-system integration write-up, Bill of Materials synchronization between Week 14 and the final project page, critical path updates, and code documentation for the aging clock and load-cell subsystems.
Cursor AI assistance for Day 5 and Day 6 system integration documentation, final system assembly, and presentation preparation.
ChatGPT assistance for firmware development and debugging of the aging clock and load cell subsystems, including sensor integration, display drivers, and data processing algorithms.
ChatGPT assistance for developing the aging clock firmware, including MAX30102 pulse oximeter integration, MPU6050 accelerometer readings, TFT display drivers, and sensor data processing algorithms.
💬 View Conversation
ChatGPT assistance for developing the load cell firmware, including HX711 amplifier integration, calibration routines, force measurement algorithms, and TFT display visualization for grip-strength data.
💬 View Conversation
System Integration
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation.
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
Related: See the Development Timeline on the final project page for the overall project schedule.
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration.
Conducted comprehensive testing of all subsystems to ensure proper functionality. Identified and resoldered any defective joints, verifying that each subsystem operates correctly before proceeding with full system integration.
Exported CAD models of all PCBs in the final design and imported them into a unified CAD assembly. This consolidated model provides a complete view of the system's mechanical integration and spatial relationships.
Acquired CAD design files for Apple Watch form factor integration, enabling compatibility with existing band designs and standardized watch components. This provides a proven mechanical foundation for the wearable subsystem.
Resuming the daily schedule tomorrow with parallel work streams prioritized according to the critical path. Tasks that cannot be completed during scheduled days will be shifted to buffer days on Saturday and Sunday to maintain project momentum.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation.
Mechanical design integration now includes the complete demo table with precise placement of the MirrorAge handgrip subsystem and the aging clock subsystem. The integrated assembly provides a unified platform for system demonstration and testing.
All 3D additive mechanical components were printed for rapid iterative development. The fabrication process required specific print settings and support strategies for different components:
The first 2D subtractive mechanical component was debugged and fabricated. The display face component required exporting a DXF of the full face to laser cut an acrylic display that fits the casing precisely. The original display part face was too small because it was filleted, so the edges of the aging clock body were traced and exported for accurate laser cutting.
Electrical design integration combines multiple subsystems into unified board designs, with progressive upgrades incorporating TFT displays for enhanced visual feedback. The integration follows a spiral development approach, starting with base designs and iteratively adding features like TFT displays as design constraints are validated.
Both the aging clock watch board and the MirrorAge load cell-speaker board have been upgraded to include TFT displays for enhanced visual feedback. The displays use SPI communication for straightforward integration with the ESP32-S3 XIAO microcontroller.
Based on the Seeed Studio XIAO ESP32-C3 ILI9341 wiring guide, adapted for ESP32-S3 XIAO:
| TFT Pin | ESP32-S3 XIAO Pin |
|---|---|
| VCC | 3V3 or 5V |
| GND | GND |
| CS | D0 |
| RST | D2 |
| DC/RS | D1 |
| SDI (MOSI) | D10 |
| SCK | D8 |
| LED (Backlight) | 3V3 |
Note: For both 1.8" and 2.4" TFT displays, the LED backlight pin connects to 3V3. Additional references: 1.8" TFT Guide, 2.8" ILI9341 Guide, ESP32-S3 ILI9341 Guide
For future iterations incorporating touchscreen functionality, the ILI9341 TFT with integrated touch controller requires additional pin connections. The touch controller shares the SPI bus with the TFT display and uses a separate chip select pin.
| Touch Pin | ESP32 XIAO Pin |
|---|---|
| T_CLK | SCK (GPIO8/D8) |
| T_CS | GPIO21/D21 |
| T_DIN | MOSI (GPIO10/D10) |
| T_DO | MISO (GPIO9/D9) |
Important Notes:
Reference Resources:
The MirrorAge amplifier board integrates a real-time audio amplifier with load cell interface and TFT display. The base design includes the amplifier circuit and HX711 load cell interface, with an upgraded version adding the 2.4" ILI9341 TFT display.
1.8" TFT LCD: ST7735 controller, 128×160 pixels. CAD model reference: GrabCAD ST7735 Model
The aging clock watch board combines pulse oximeter sensors, accelerometer, and camera functionality. The base design uses an OLED display, with an upgraded version incorporating a 1.8" ST7735 TFT display for enhanced visual feedback.
The first version of the aging clock board was milled. The initial board did not fit the casing, requiring either casing adjustment and reprinting, or milling a new miniaturized board with shrunken electronics. Both approaches are being evaluated. The MirrorAge board design has been completed for both base and TFT-integrated versions, ready for milling and testing.
Comprehensive fabrication integration work combining 2D laser cutting, 3D printing, and PCB milling to create fully integrated subsystems ready for assembly and testing.
The laser-cut acrylic display component was updated in Day 2 and successfully integrated into the watch assembly. The acrylic display provides a clear protective face for the aging clock watch, precisely fitted to the casing dimensions.
The integrated watch assembly demonstrates successful integration of 2D subtractive manufacturing with 3D printed components, creating a cohesive mechanical structure for the wearable subsystem.
All 3D printed components were fabricated using both rigid and flexible materials. The bands were printed using TPU (Thermoplastic Polyurethane) for flexibility, while rigid components used standard PLA/PETG materials. Additional print details and configuration requirements are documented in Day 2.
The case should also be printed from TPU to improve interlock part fit. The spring-loaded interlock mechanism enables secure band closure while maintaining flexibility for comfortable wear.
The aging clock watch board was successfully milled. The base design with OLED display worked correctly, and the design has been upgraded to include a TFT display. The new TFT-integrated design will be milled and tested alongside the base OLED version to validate both display options.
The base MirrorAge amplifier board milling initially failed due to polygon pour isolation settings: the default isolation distance was 0 mil, which caused routing issues. After changing the back-side polygon pour isolation to 32 mil, the milling operation succeeded. While the shop was closed, the design was upgraded to include a TFT screen; both the base and TFT-integrated versions will be milled together and tested.
Physical integration of fabricated components demonstrates successful coordination between mechanical and electrical subsystems. The watch board fits within the watch case, and the acrylic display integrates properly. Additional work remains for drilling mounting holes for electronics and routing power connections.
Both board designs have been upgraded to include TFT displays, and integration testing confirms proper fit. The TFT watch board fits correctly with the TFT display, and both the base and TFT-integrated MirrorAge load cell-speaker boards fit within the gripper enclosure. Additional work is needed for drilling mounting holes and adding hooks for load cell wire routing to the HX711 mounted on the board, as well as routing power connections.
Comprehensive subsystem integration work combining 2D fabrication, 3D printing, PCB milling, and molding/casting to complete all physical components for final assembly and testing.
With the milling machine unavailable, the time was used to upgrade board designs, requiring component rotations and layout optimizations to ensure all components fit properly. As Quentin noted, "you have to get used to rotating it in your head"—a valuable skill for PCB design where spatial reasoning is essential for component placement and routing.
"you have to get used to rotating it in your head" — Quentin
The MirrorAge TFT board design was upgraded to version 13, with component rotations and layout optimizations to improve fit and routing efficiency.
The aging clock TFT board design was upgraded to version 16, incorporating component rotations and layout refinements to ensure proper fit within the watch casing constraints.
Subsystem integration also focused on making the shared I²C bus predictable and debuggable. During flashing, an intermittent serial exception from esptool indicated that the USB device was reporting readiness but returning no data—a classic symptom of a loose cable, port contention, or transient power issue rather than a firmware bug. After power-cycling the board, checking that only one serial monitor was attached, and reseating the USB cable, flashing succeeded reliably.
With stable flashing restored, an I²C scanner sketch confirmed that all devices were present on the bus: the MAX30102 pulse oximeter at address 0x57, the MPU6050 accelerometer at 0x68, and the OLED display at 0x3C once it was wired in. Multiple scans over time reproduced the same addresses, validating bus integrity and confirming that the SDA/SCL routing and pull-up strategy were correct after earlier trace and enclosure fit issues.
Tip: record the known-good I²C addresses (MAX30102 0x57, MPU6050 0x68, SSD1306 OLED 0x3C) and document them close to the schematic and firmware. Reference: esptool Troubleshooting Guide for interpreting serial upload errors and recommended recovery steps.
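The repeated bus scans described above lend themselves to an automated check. The sketch below is a hypothetical host-side helper (not part of the project firmware) that compares a list of detected 7-bit addresses against the expected device map:

```python
# Expected I2C device map for the aging clock bus (addresses from the scans above).
EXPECTED = {
    0x57: "MAX30102 pulse oximeter",
    0x68: "MPU6050 accelerometer",
    0x3C: "SSD1306 OLED",
}

def check_bus(detected):
    """Compare detected 7-bit addresses against the expected map."""
    missing = [f"{name} (0x{addr:02X})" for addr, name in EXPECTED.items()
               if addr not in detected]
    extra = [f"0x{addr:02X}" for addr in detected if addr not in EXPECTED]
    return missing, extra

missing, extra = check_bus([0x57, 0x68, 0x3C])
print(missing, extra)  # → [] []
```

Running this after each scan makes a dropped pull-up or miswired device immediately visible as a "missing" entry rather than a silent absence in the scanner log.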
Laser-cut thinner acrylic (1.35mm) was fabricated to improve the fit of the clear display cover for the aging clock subsystem. The thinner material provides better dimensional tolerance and improved integration with the watch casing.
Band clips were printed using TPU (Thermoplastic Polyurethane) material for improved fit and flexibility. The flexible material provides better mechanical compliance and secure attachment to the watch body.
The TFT aging clock board design was milled, requiring iterative refinement due to initial design issues. The first mill had traces that were too close together, and the rivets were positioned under the microcontroller, requiring manual fixes with an ultrasonic knife before remilling the corrected board.
The initial milling process revealed design issues that required correction. The board was milled on both front and back sides, with careful tool location and fixturing procedures.
After milling, inspection revealed traces that were too close together and rivets positioned under the microcontroller. These issues were corrected manually in a two-step cutting process: the ultrasonic knife initiated precise cuts at both ends of each problematic trace, establishing the correct penetration direction, and a utility knife then cut a straight line between those initiation points to isolate the upper and lower wiring. Controlling the initial penetration is the hardest part of manual trace cutting, and the ultrasonic knife's precision made the subsequent utility-knife cut smooth and easy to complete from end to end. After these manual corrections, the board design was updated and remilled with the fixed layout.
After manual corrections, the board design was updated to move rivets away from the microcontroller and improve trace spacing. The corrected board was successfully milled with proper trace clearances and component placement.
The base amplifier board was remilled after fixing polygon pour isolation settings (changed from 0 to 32 mil). The upgraded MirrorAge board with TFT display was also successfully milled, ready for soldering and testing.
Starting from Anthony's sketch in the midterm review, 3D-printable open molds were designed to cast watch bands out of MoldStar 30. The process involved printing PLA molds, waxing for release, mixing and pouring the mold material, leveling, curing under a heat lamp, and demolding. The center section of the printed molds required cutting with an ultrasonic knife before pouring to ensure part removability. A backup TPU band was also printed in case of casting failure.
The molds required cutting out the center section with an ultrasonic knife before waxing and casting. Wax was melted, painted onto the mold surfaces for release, and thinned as needed for proper coverage.
MoldStar 30 was mixed according to specifications, poured into the prepared molds, and leveled to ensure proper distribution and surface quality.
The filled molds were moved under a heat lamp for curing, then demolded and tested to ensure proper fit between the two band halves.
The TFT aging clock board was successfully assembled into the watch enclosure, demonstrating successful integration of all fabricated components including the milled PCB, TFT display, and watch casing.
Final integration day focused on closing the watch subsystem as far as mechanically feasible, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested.
I attempted to fully close the aging clock watch subsystem using the laser-cut acrylic top plate and printed enclosure. In practice, the combined height of solder joints and stacked components meant the acrylic pressed directly on the SDA/SCL traces; under load, that pressure ripped the I²C traces off the board even after successful end-to-end testing. For the final demo, I kept the subsystem open for reliability and prepared a backup pulse oximeter module to rework the board after the full system presentation.
Future electrical spirals will set stricter keep-out zones under fasteners and enclosure pinch points, lower component stack-up height in CAD, and route critical buses away from mechanical interfaces. The current integration still validates the fit envelope and cable-routing strategy, and will be re-closed once replacement boards and sensors are available.
For future iterations, see the consolidated electrical design and integration notes on the final project fabrication section and the updated Bill of Materials for enclosure and sensor stack-up choices.
To mount the bands inside the watch clip, I drilled precise holes and inserted inward-facing bolts that clamp into printed TPU clips. TPU provides just enough flexibility and toughness for this fastening strategy, whereas a similar approach would easily tear a silicone band. In the next spiral, the mechanics of the clip will be redesigned so the silicone band experiences distributed compression instead of point loads from metal hardware.
The integrated system view below shows how the TPU band, watch body, and electronics stack come together around the wrist, and serves as the mechanical reference for a future fully cast silicone band with embedded reinforcement.
Both the aging clock and MirrorAge boards are double-sided with plated through-rivets. I followed a consistent, compact assembly order: (1) install and flare rivets, (2) solder the microcontroller, (3) place and solder passives, then (4) populate and test input/output devices. The media below document that workflow step by step.
After through-connection, I placed the microcontroller and passives as flat as possible to preserve headroom inside the enclosure. Headers were soldered last, from both the top and side, to keep cable strain away from delicate pads.
To keep the XIAO footprint perfectly flat over the rivets, I used Kapton tape as a temporary stencil: the tape masked exposed copper around the footprint so I could drag-solder the castellated edges without flooding nearby vias. Once the joints were complete, the tape peeled away cleanly, leaving the board ready for final inspection.
After soldering, I brought each sensor up sequentially to isolate any issues. The accelerometer, pulse oximeter, OLED, and TFT were each verified individually over serial before enabling the full integrated sketch.
All boards passed short-circuit checks and per-sensor serial validation. Remaining electrical risks are limited to load cell hardware quality and long-term connector strain, which are documented for post-demo spirals.
The cast bands are mechanically promising but still need finishing. The edges require additional trimming and sanding, and one edge showed an incomplete fill due to insufficient waxing of the mold. Compared to the TPU prints, the casting material has better compliance and skin feel, and will be the preferred long-term solution once the fastening geometry is iterated.
For this week, I used the printed TPU bands for reliability, while documenting the cast band behavior as a future spiral that aligns with the existing mold designs and material choices.
The aging clock subsystem was finalized by cutting and aligning the pulse oximeter opening so the sensor makes direct skin contact while the watch is worn. That opening simultaneously sets the board position inside the enclosure, constraining the stack according to the system-integration CAD.
With the TFT watch board, sensors, and acrylic window all integrated, the subsystem mechanically mirrors the final form factor even though the top remains open for this demo to protect the repaired traces.
The grip-strength electronics from Week 12 were mounted on the calibrated working load cell and integrated with the new MirrorAge electronics. Wires from the load cell and amplifier are routed along the structure and strain relieved with a moderate amount of glue to balance robustness with serviceability. The boards sit in their dedicated pockets according to the system-integration CAD, keeping the mechanical and electrical reference frames aligned.
With both the aging clock and grip-strength subsystems assembled, I brought the entire MirrorAge demo together on the tensegrity table. The videos below show the system running after final debugging, including sensor readouts and synchronized display behavior across the boards.
Final presentation demonstration of the complete MirrorAge system, showcasing the integrated aging clock and grip-strength subsystems working together in real-time. View on final project page →
This final presentation video demonstrates the complete MirrorAge system in action, featuring both the aging clock watch subsystem and the grip-strength measurement subsystem operating together. The demo showcases real-time sensor readings, synchronized display updates, and the full integration of mechanical, electrical, and software components developed throughout the semester.
Note: A compressed local version is also available below for offline viewing. For the highest quality and full resolution, please view the video on YouTube.
Comprehensive table of all design files used in the system integration, organized by category with descriptions and links to their usage locations.
📁 Fusion 360 Files: All Fusion 360 design files are available in the HTMA Fusion team directory under HTMA/2025/EECS/Saleem, organized by the same weeks as the website structure.
| File Name | Description | Category | Used In | Download |
|---|---|---|---|---|
| **PCB Design Files** | | | | |
| agingclock_tft.brd | Aging clock watch board with TFT display integration (pulse oximeter, accelerometer, camera) | PCB Design | Day 2, Day 4 | Download |
| mirrorage.brd | Base MirrorAge amplifier board with load cell interface (HX711) and real-time audio amplifier | PCB Design | Day 2, Day 3 | Download |
| mirrorage_tft.brd | MirrorAge amplifier board with TFT display integration (2.4" ILI9341) | PCB Design | Day 2, Day 3, Day 4 | Download |
| design_xiao.kicad_pcb | KiCad PCB design for XIAO ESP32-S3 base board (non-touch version) | PCB Design | Day 2 | Download |
| design_xiao.kicad_pcb (touch) | KiCad PCB design for XIAO ESP32-S3 with touchscreen support | PCB Design | Day 2 | Download |
| QPAD21.kicad_pcb | KiCad PCB design for QPAD21 board (reference design) | PCB Design | Reference | Download |
| design_micro.kicad_pcb | KiCad PCB design for microcontroller board (reference) | PCB Design | Reference | Download |
| **3D Design Files** | | | | |
| WatchBody.3mf | Main watch body casing for aging clock subsystem | 3D Model | Day 2, Day 3 | Download |
| Dock.3mf | Dock component for board mounting and electrical connection | 3D Model | Day 2, Day 3 | Download |
| Display Glass.3mf | Display cover component for watch face protection | 3D Model | Day 2 | Download |
| Bands.3mf | Watch band components for wearable attachment | 3D Model | Day 2, Day 3 | Download |
| band-a-scaled.3mf | Scaled band component A for casting molds | 3D Model | Day 4 | Download |
| band-b-scaled.3mf | Scaled band component B for casting molds | 3D Model | Day 4 | Download |
| watchband_clip.3mf | Watch band clip component for secure attachment | 3D Model | Day 3, Day 4 | Download |
| watch v2.5-whole v1.3mf | Complete watch assembly model version 2.5 | 3D Model | Day 1 | Download |
| band-a-mold.3mf | 3D printable mold A for band casting | 3D Model | Day 4 | Download |
| band-b-mold.3mf | 3D printable mold B for band casting | 3D Model | Day 4 | Download |
| band-a-mold-print-10p.3mf | Optimized mold A print file with 10% infill | 3D Model | Day 4 | Download |
| band-b-mold-print-10p.3mf | Optimized mold B print file with 10% infill | 3D Model | Day 4 | Download |
| **2D Design Files** | | | | |
| square_2_final.svg | Final laser cutting design for display cover (1.35mm acrylic) | 2D Design | Day 4 | Download |
| square_2.svg | Intermediate laser cutting design iteration | 2D Design | Day 2 | Download |
| square.svg | Initial laser cutting design for display cover | 2D Design | Day 2 | Download |
| square_2.dxf | DXF format for laser cutting (intermediate iteration) | 2D Design | Day 2 | Download |
| square.dxf | DXF format for laser cutting (initial design) | 2D Design | Day 2 | Download |
| **Code Files** | | | | |
| blink_ESP32S3.ino | Basic blink test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| test_display_ESP32S3.ino | TFT display test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| test_touch_ESP32S3.ino | Touchscreen test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| test_serial_ESP32S3.ino | Serial communication test code for ESP32-S3 XIAO | Arduino Code | Testing | Download |
| **Reference Design Files** | | | | |
| design_xiao.step | STEP file for XIAO ESP32-S3 mechanical reference | 3D Model | Reference | Download |
| Seeed Studio XIAO-ESP32-S3-Sense.step | STEP file for Seeed Studio XIAO ESP32-S3 Sense board | 3D Model | Reference | Download |
| TFT LCD 2.4 inch ILI9341 touch.step | STEP file for 2.4" ILI9341 TFT display with touch | 3D Model | Reference | Download |
| SSD1306_OLED_Display(128x64).step | STEP file for SSD1306 OLED display reference | 3D Model | Reference | Download |
| MPU6050.stp | STEP file for MPU6050 accelerometer reference | 3D Model | Reference | Download |
| 3006 MAX98357.step | STEP file for MAX98357A amplifier reference | 3D Model | Reference | Download |
| max98357a adafruit.SLDPRT | SolidWorks part file for MAX98357A amplifier reference | 3D Model | Reference | Download |
| apple-watch-se-2nd-gen-band-adapter20251028-1-u4zeat.zip | Apple Watch band adapter design files (ZIP archive) | 3D Model | Reference | Download |
Consolidated bill of materials for the integrated MirrorAge system used in this Week 14 system-integration build. This table mirrors the final project Bill of Materials and is repeated here so readers can cross-reference parts directly from the integration workflows.
For the narrative, project-wide context, see the final project BOM and the per-week line items in the weekly documentation (for example the ReactionAge BOM CSV in Week 2).
Project management view of the integration schedule, showing task dependencies, critical milestones, and current progress status.
Subsystem testing and validation, CAD model consolidation, demo table integration design, aging clock PCB design (accelerometer/camera + pulse oximeter), TFT-integrated aging clock board design, MirrorAge board design (base and TFT versions), 3D printing of mechanical components (casing with organic support, dock with brim, TPU bands and clips), laser-cut acrylic display (1.35mm thin version), aging clock board milling (with corrections and remilling), MirrorAge board milling (base and TFT versions), band mold design and casting (MoldStar 30), TFT display wiring documentation with touchscreen integration methods, aging clock subsystem assembly, Day 4 fabrication integration documentation
Refining physical integration (extra mounting, mirror film), extending firmware from working demo to networked IC pipeline, and documenting data-integration methods (serial, WiFi, MQTT) for future spirals.
WiFi/MQTT network setup, final IC scoring integration, one-minute video, and last presentation/documentation touches.
Tasks that cannot be completed during scheduled weekdays will be shifted to Saturday and Sunday buffer days. This ensures the critical path remains on schedule while allowing flexibility for unexpected delays or additional refinement work.
Progressive development approach for integrating sensor data streams from multiple subsystems, starting with simple serial communication and evolving to wireless network-based architectures.
Direct serial communication through USB cable from computer to microcontroller, using PySerial to stream data. This provides the simplest initial integration path for rapid prototyping and debugging.
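As a sketch of this first method, the snippet below reads and parses structured serial lines with PySerial; the port name, baud rate, and `key=value` line format are assumptions for illustration, not the project's final protocol:

```python
# Hypothetical structured line format: "hr=72,ax=0.01,ay=-0.02,az=0.98"
def parse_line(line):
    """Parse a comma-separated key=value serial line into a dict of floats."""
    fields = {}
    for pair in line.strip().split(","):
        key, _, value = pair.partition("=")
        fields[key] = float(value)
    return fields

def stream(port="/dev/ttyACM0", baud=115200):
    """Continuously read and parse lines from the microcontroller."""
    import serial  # PySerial: pip install pyserial
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            raw = ser.readline().decode(errors="ignore")
            if raw.strip():
                print(parse_line(raw))
```

Keeping the parser separate from the port handling makes it reusable later when the same line format arrives over WiFi or MQTT instead of USB.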
Configure all microcontrollers as WiFi clients that stream data to a flash web server. This enables wireless communication while maintaining a centralized data aggregation point.
Message Queuing Telemetry Transport (MQTT) provides a lightweight, publish-subscribe messaging protocol ideal for IoT device communication. This approach enables decentralized data exchange between all MirrorAge subsystems, allowing each board to publish sensor data and subscribe to relevant topics for coordinated system behavior.
Initial development and testing will use a laptop-hosted MQTT broker. In a later development spiral, the server will be migrated to a dedicated Raspberry Pi Zero or Raspberry Pi 5 for standalone operation and improved system portability. Paho MQTT library has been successfully tested for ESP32-S3 implementation.
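A minimal publish-side sketch of this approach, assuming a `mirrorage/<subsystem>/<sensor>` topic namespace and the paho-mqtt 1.x client API (both illustrative choices, not the final design):

```python
import json
import time

def make_message(subsystem, sensor, value):
    """Build a topic and JSON payload for the (assumed) MirrorAge namespace."""
    topic = f"mirrorage/{subsystem}/{sensor}"
    payload = json.dumps({"value": value, "ts": time.time()})
    return topic, payload

def publish_once(broker="192.168.1.10"):
    """Connect to the broker, publish one reading, and disconnect."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt (1.x API shown)
    client = mqtt.Client()
    client.connect(broker, 1883)
    topic, payload = make_message("watch", "heart_rate", 72)
    client.publish(topic, payload)
    client.disconnect()
```

A subsystem subscribes to the topics it needs (e.g. the mirror display subscribing to `mirrorage/+/+`), which is what makes the architecture decentralized compared to the flash-web-server approach.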
The core firmware used during Week 14 for the aging clock watch and grip-strength subsystem is available below as both .ino source files and downloadable .zip archives.
These sketches currently run over simple serial/I²C links and will be extended in future spirals to stream data over WiFi/MQTT to the multimodal IC pipeline. The XIAO boards and TFT displays
also include SD card support for local backup storage once that code path is integrated.
aging_clock.ino initializes the MAX30102 pulse oximeter and IMU on the I²C bus, reads heart-rate and motion data, and prints structured serial lines for debugging.
The main loop continuously samples heart-rate and motion data, then streams the readings over Serial for logging or GUI visualization.
Download:
view aging_clock.ino ·
download aging_clock.zip
aging_clock_oled.ino builds on the base sketch by adding an SSD1306 OLED UI: it renders the live heart-rate and motion readouts on the display while continuing to stream over Serial for logging.
Download:
view aging_clock_oled.ino ·
download aging_clock_oled.zip
aging_clock_tft.ino migrates the watch UI to a 1.8″ ST7735 TFT, rendering the same sensor readouts on the color display.
Download:
view aging_clock_tft.ino ·
download aging_clock_tft.zip
aging_clock_tft_slow.ino is a debug variant that intentionally slows down screen refresh to make timing and flicker issues visible, logging frame timings over Serial to tune update rates.
Download:
view aging_clock_tft_slow.ino ·
download aging_clock_tft_slow.zip
loadcell_gui.ino drives the CZL601AC + HX711 grip-strength subsystem, reading load cell samples, converting them to force values, and streaming the data to a serial/GUI plot.
Download:
view loadcell_gui.ino ·
download loadcell_gui.zip
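A typical HX711 flow tares at zero load and scales from a single known mass; the host-side sketch below illustrates that arithmetic with made-up raw counts (the function names and values are assumptions, not taken from loadcell_gui.ino):

```python
def tare(samples):
    """Average raw HX711 readings with no load to get the zero offset."""
    return sum(samples) / len(samples)

def calibrate_scale(raw_known, offset, known_mass_kg):
    """Counts per kilogram, derived from one reading under a known mass."""
    return (raw_known - offset) / known_mass_kg

def to_kg(raw, offset, scale):
    """Convert a raw reading to kilograms using the calibration constants."""
    return (raw - offset) / scale

offset = tare([8402, 8398, 8400])            # zero-load counts
scale = calibrate_scale(50400, offset, 2.0)  # one reading under a 2 kg weight
print(to_kg(29400, offset, scale))           # ≈ 1.0 kg
```

The same two constants (offset and scale) are what the firmware stores after calibration, so the serial stream can carry either raw counts or converted kilograms.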
Additional sketches (e.g., WiFi/MQTT clients, SD-card logging for the XIAO ESP32S3 and TFT shields) will be integrated in a future spiral once the serial and WiFi baselines are fully stabilized.
Development will progress through these methods in order of complexity, starting with serial communication for rapid prototyping, then moving to WiFi client architecture for wireless operation, and finally implementing MQTT for advanced decentralized communication. Each subsystem board (pulse oximeter, OLED/accelerometer, speaker/amplifier, load cell) will be integrated progressively, enabling incremental validation of the complete data pipeline.
Reflections and key learnings from the final project integration week.
Acknowledgments for contributions and collaborations during the final project week.
Documentation of AI-assisted tasks and tools used during the final project integration week.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI assistance for Week 14 Day 1 project integration updates, including design integration documentation and initial system integration planning.
Cursor AI assistance for Week 14 project integration updates, including Day 2 electrical and mechanical integration documentation, data integration methods refinement, and critical path analysis updates.
Cursor AI assistance for Week 14 Day 3 electrical integration and fabrication documentation, including TFT display wiring documentation, board design updates, fabrication integration methods, and Day 3 content updates.
Cursor AI assistance for Week 14 project integration updates, including Day 4 electrical and mechanical integration documentation, TFT display wiring documentation, board design updates, fabrication integration methods, design files table creation, and comprehensive Day 4 content updates.
Cursor AI assistance for final MirrorAge documentation and system integration, including Day 5 full-system integration write-up, Bill of Materials synchronization between Week 14 and the final project page, critical path updates, and code documentation for the aging clock and load-cell subsystems.
Cursor AI assistance for Day 5 and Day 6 system integration documentation, final system assembly, and presentation preparation.
ChatGPT assistance for firmware development and debugging of the aging clock and load cell subsystems, including sensor integration, display drivers, and data processing algorithms.
ChatGPT assistance for developing the aging clock firmware, including MAX30102 pulse oximeter integration, MPU6050 accelerometer readings, TFT display drivers, and sensor data processing algorithms.
💬 View Conversation
ChatGPT assistance for developing the load cell firmware, including HX711 amplifier integration, calibration routines, force measurement algorithms, and TFT display visualization for grip-strength data.
💬 View Conversation
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the Gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
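As an illustration of the comparison step, RMSE between device readings and a published regression can be computed as below (the latency values are synthetic stand-ins, not the actual ReactionAge or Blomkvist et al. data):

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error between device readings and a reference curve."""
    assert len(predicted) == len(reference)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted))

# Hypothetical reaction-time latencies (ms) vs. a published regression
device = [251.0, 248.5, 260.2, 255.1]
reference = [250.0, 250.0, 256.0, 252.0]
print(round(rmse(device, reference), 2))  # 2.76
```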
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 open-source grip strength measurement system. This design offers a solid foundation for integrating grip strength measurement into our intrinsic capacity assessment platform, with further cost reduction possible by multiplexing signals on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
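A minimal NumPy sketch of attention-weighted fusion is shown below. This is illustrative only: the embedding dimension, the six-modality ordering, and the random scoring vector are assumptions standing in for the learned attention head, not the deployed pipeline.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, w_score):
    """Weight each modality embedding by an attention score, then sum.

    features: (n_modalities, d) matrix of per-modality embeddings
    w_score:  (d,) scoring vector (stand-in for a learned attention head)
    """
    scores = features @ w_score          # one scalar score per modality
    weights = softmax(scores)            # attention weights sum to 1
    return weights @ features, weights   # fused (d,) vector plus the weights

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))           # grip, voice, face, video, reaction, wearable
fused, w = attention_fuse(emb, rng.normal(size=16))
print(fused.shape, round(w.sum(), 6))    # (16,) 1.0
```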
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Camera subsystem built on the XIAO ESP32S3 Sense, featuring real-time image processing, Edge AI integration, and interactive selfie capture.
The camera system captures images, converts them to bitmaps, and displays them on the OLED screen using Floyd-Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays an image every 60 seconds using box-average downsampling, contrast stretching, and Floyd-Steinberg dithering.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
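The image-processing steps above can be sketched host-side in Python/NumPy. This is a minimal illustration of the same pipeline, not the device firmware; the 128x256 synthetic input frame and the 2x2 averaging blocks are assumptions.

```python
import numpy as np

def box_downsample(gray, out_h=64, out_w=128):
    """Average non-overlapping blocks to shrink the frame to OLED size."""
    h, w = gray.shape
    bh, bw = h // out_h, w // out_w
    g = gray[:bh * out_h, :bw * out_w]
    return g.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))

def contrast_stretch(img):
    """Linearly rescale pixel values to span the full 0-255 range."""
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / (hi - lo) if hi > lo else img

def floyd_steinberg(img):
    """1-bit dither: diffuse quantization error onto unvisited neighbors."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new else 0
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

frame = np.tile(np.linspace(0, 255, 256), (128, 1))   # synthetic 128x256 gradient
bitmap = floyd_steinberg(contrast_stretch(box_downsample(frame)))
print(bitmap.shape)  # (64, 128)
```

The resulting 0/1 bitmap maps directly onto the 128x64 OLED pixel grid.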
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
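The capture-trigger condition above (touch press edge OR elapsed timer) can be modeled host-side as follows. This is a sketch with simulated timestamps and touch readings, not the ESP32 firmware; the millisecond values are arbitrary.

```python
class CaptureTrigger:
    """Fire on a touch press edge (justPressed) or every `interval_ms`."""
    def __init__(self, interval_ms=60_000):
        self.interval_ms = interval_ms
        self.last_capture = 0
        self.prev_touched = False

    def update(self, now_ms, touched):
        just_pressed = touched and not self.prev_touched   # rising edge only
        self.prev_touched = touched
        timer_elapsed = now_ms - self.last_capture >= self.interval_ms
        if just_pressed or timer_elapsed:
            self.last_capture = now_ms                      # reset timer on any capture
            return True
        return False

trig = CaptureTrigger()
events = [trig.update(t, touched) for t, touched in
          [(0, False), (1_000, True), (1_050, True), (2_000, False), (61_500, False)]]
print(events)  # [False, True, False, False, True]
```

Note that a held touch fires only once (the rising edge), and any capture resets the 60-second timer, matching the pseudocode above.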
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components and preparing subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
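Validating sensor addresses on a shared I²C bus typically starts with an address scan. The sketch below simulates one in Python; the `probe` callback abstracts the bus write, and the example addresses (0x3C OLED, 0x57 pulse oximeter, 0x18 accelerometer) are illustrative guesses, not the verified MirrorAge address map.

```python
def scan_i2c(probe):
    """Return the 7-bit addresses that ACK; `probe(addr)` abstracts the bus write."""
    return [addr for addr in range(0x08, 0x78) if probe(addr)]

# Simulated bus with three hypothetical devices present
present = {0x18, 0x3C, 0x57}
found = scan_i2c(lambda a: a in present)
print([hex(a) for a in found])  # ['0x18', '0x3c', '0x57']
```

On hardware, the same loop runs over the real bus (e.g. a zero-length write per address), confirming each sensor responds before higher-level drivers are brought up.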
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
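The fusion step described above can be illustrated with a toy attention-style weighting over per-modality feature vectors (a schematic only; the scores, vectors, and dimensionality are placeholders, not the trained model):

```python
import math

def softmax(xs):
    """Normalize relevance scores into attention weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(modality_vectors, scores):
    """Weight each modality's feature vector by its attention weight and sum."""
    weights = softmax(scores)
    dim = len(modality_vectors[0])
    fused = [0.0] * dim
    for w, vec in zip(weights, modality_vectors):
        for i, v in enumerate(vec):
            fused[i] += w * v
    return fused
```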
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using Floyd-Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using box-average downsampling, contrast stretching, and Floyd-Steinberg dithering.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
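The three processing stages in step 4c can be sketched in plain Python (a minimal illustration of the algorithms, not the firmware's C++ implementation; image dimensions and the dither threshold are assumptions):

```python
def box_downsample(img, out_w, out_h):
    """Shrink a grayscale image by averaging each source block (box filter)."""
    in_h, in_w = len(img), len(img[0])
    bw, bh = in_w // out_w, in_h // out_h
    return [
        [
            sum(img[oy * bh + y][ox * bw + x] for y in range(bh) for x in range(bw)) / (bw * bh)
            for ox in range(out_w)
        ]
        for oy in range(out_h)
    ]

def contrast_stretch(img):
    """Linearly rescale pixel values to span the full 0-255 range."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1
    return [[(p - lo) * 255 / span for p in row] for row in img]

def floyd_steinberg(img, threshold=128):
    """1-bit dither: diffuse each pixel's quantization error onto unvisited neighbours."""
    h, w = len(img), len(img[0])
    px = [row[:] for row in img]  # working copy accumulates diffused error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 255 if px[y][x] >= threshold else 0
            err = px[y][x] - new
            out[y][x] = 1 if new else 0
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out
```

On the device the same steps run over the camera frame buffer before the 128x64 bitmap is pushed to the OLED.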
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
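The trigger logic in steps 4b-4d can be modelled on the host in Python (a sketch of the firmware's justPressed/timer behaviour, not the C++ source; the interval and timestamps are illustrative):

```python
class CaptureTrigger:
    """Fire on a touch-press rising edge OR when the auto-capture interval elapses."""

    def __init__(self, interval_ms=60_000):
        self.interval_ms = interval_ms
        self.last_capture_ms = 0
        self.prev_touched = False

    def update(self, now_ms, touched):
        # justPressed: only the transition from not-touched to touched counts.
        just_pressed = touched and not self.prev_touched
        self.prev_touched = touched
        timer_elapsed = (now_ms - self.last_capture_ms) >= self.interval_ms
        if just_pressed or timer_elapsed:
            self.last_capture_ms = now_ms  # any capture resets the auto timer
            return True
        return False
```

A held touch does not retrigger; only a new press or an elapsed timer starts the next capture.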
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, itself based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion, and I adapted the code to Quentin's QPAD PCB design, which carries a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps, and displays them on the OLED screen. I am also exploring ML-model integration, either through online API calls or by exporting TinyML model parameters from Python into C++.
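As a sketch of that Python-to-C++ path, trained weights can be serialized into a constant C array that the firmware compiles in (the function and array names here are hypothetical, not the project's actual export script):

```python
def weights_to_c_array(name, weights, per_line=8):
    """Serialize trained model weights as a constant C float array."""
    lines = [f"const float {name}[{len(weights)}] = {{"]
    for i in range(0, len(weights), per_line):
        chunk = ", ".join(f"{w:.6f}f" for w in weights[i:i + per_line])
        lines.append(f"    {chunk},")
    lines.append("};")
    return "\n".join(lines)

# Hypothetical VoiceAge parameters exported for inclusion in a firmware header.
header = weights_to_c_array("voiceage_weights", [0.125, -0.5, 2.0])
print(header)
```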
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and acquiring the design backbone for band integration. Comprehensive testing confirmed each subsystem's functionality, defective solder joints were identified and resoldered, and CAD models of all PCBs were exported into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Assembled the demo table with precise placement of the MirrorAge handgrip and aging clock subsystems, providing a unified platform for system demonstration.
Continued fabrication across all manufacturing processes, completing physical components and preparing subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
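The sensor-address validation amounts to checking a bus scan against the expected device map, which can be expressed as a small host-side sketch (the addresses and names below are placeholders, not the verified MirrorAge bus layout):

```python
def validate_i2c_bus(found, expected):
    """Flag expected sensors missing from a scan, and unexpected addresses present."""
    missing = {name: addr for name, addr in expected.items() if addr not in found}
    unknown = sorted(set(found) - set(expected.values()))
    return missing, unknown

# Placeholder address map for the sensor bus.
EXPECTED = {"accelerometer": 0x19, "pulse_oximeter": 0x57, "display": 0x3C}
missing, unknown = validate_i2c_bus([0x19, 0x3C, 0x48], EXPECTED)
print(missing)   # sensors that did not respond to the scan
print(unknown)   # responding addresses with no entry in the map
```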
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete design files, code, and step-by-step documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
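As a hedged illustration of this fusion step, a minimal attention-weighted combination of per-modality feature vectors might look like the sketch below. The scoring inputs, vector dimensions, and function names here are placeholders for exposition, not the actual pipeline implementation:

```python
import math

def softmax(scores):
    """Convert raw per-modality scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(features, scores):
    """Weight each modality's embedding by softmax(score), then sum."""
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features))
            for d in range(dim)]
```

With equal scores this reduces to a plain average of the modality embeddings; a modality with a much higher score dominates the fused vector.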
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system captures images, converts them to bitmaps, and displays them on an OLED screen using Floyd-Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures an image every 60 seconds and renders it on the OLED after downsampling, contrast stretching, and dithering.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
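The processing chain in step 4c can be sketched in Python. The deployed firmware runs C++ on the ESP32S3; this is an illustrative model of the same three algorithms (box-average downsampling, linear contrast stretch, Floyd-Steinberg dithering), not the shipped code:

```python
def box_downsample(img, out_w, out_h):
    """Average non-overlapping blocks to shrink img to out_w x out_h."""
    bh, bw = len(img) // out_h, len(img[0]) // out_w
    return [[sum(img[y * bh + j][x * bw + i]
                 for j in range(bh) for i in range(bw)) // (bh * bw)
             for x in range(out_w)]
            for y in range(out_h)]

def contrast_stretch(img):
    """Linearly rescale pixel values to span the full 0..255 range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:
        return [row[:] for row in img]
    return [[(p - lo) * 255 // (hi - lo) for p in row] for row in img]

def floyd_steinberg(img):
    """Binarize to 0/255, diffusing quantization error to neighbors."""
    h, w = len(img), len(img[0])
    px = [[float(p) for p in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 255 if px[y][x] >= 128 else 0
            err = px[y][x] - new
            out[y][x] = new
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out
```

Error diffusion is what lets the 1-bit OLED preserve apparent gray levels: the density of on-pixels in each region tracks the original local brightness.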
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
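The trigger logic above can be modeled in a few lines of Python: rising-edge detection on the touch reading (the "justPressed" event) plus the 60-second fallback timer, with the timer reset on any capture. Thresholds and values here are illustrative; the real firmware is Arduino C++ reading the ESP32S3 touch pins:

```python
class TouchButton:
    """Fires once on the rising edge of a capacitive-touch reading."""
    def __init__(self, threshold):
        self.threshold = threshold
        self._held = False

    def just_pressed(self, raw):
        pressed = raw > self.threshold
        edge = pressed and not self._held   # True only on press, not hold
        self._held = pressed
        return edge

class CaptureTrigger:
    """Capture on a touch press or when the auto interval elapses."""
    def __init__(self, interval_s, now_s):
        self.interval_s = interval_s
        self._last_s = now_s

    def should_capture(self, now_s, touch_edge):
        if touch_edge or (now_s - self._last_s) >= self.interval_s:
            self._last_s = now_s   # reset timer on any capture
            return True
        return False
```

Resetting the timer on manual captures prevents a touch-triggered photo from being immediately followed by an automatic one.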
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, itself based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion, and I adapted the code for Quentin's QPAD PCB design, which carries a camera-equipped ESP32S3.
The system captures photos, converts them to bitmaps, and displays them on the OLED screen. I am also exploring ML model integration, either through online API calls or by porting TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Plug in the XIAO ESP32S3, click "Deploy Model" to flash the firmware, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components including 2D laser cutting, 3D printing of rigid and flexible components, and PCB milling to prepare subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
MirrorAge Intrinsic Capacity Mirror · HTMAA 2025
XIAO ESP32S3 camera successfully captures and displays images on OLED screen using Floyd-Steinberg dithering
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
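The train/holdout checks described above reduce to a short sketch: split once, fit on the training portion, and report mean absolute error (in years, for VoiceAge) on the held-out samples. The split fraction and seed below are illustrative; the actual analysis lives in the midterm Jupyter notebook:

```python
import random

def mean_absolute_error(y_true, y_pred):
    """Average absolute prediction error, e.g. in years for age models."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def train_holdout_split(samples, holdout_frac=0.2, seed=0):
    """Shuffle indices once, then carve off holdout_frac as the holdout."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(samples) * (1 - holdout_frac))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```

Fixing the seed makes the split reproducible, so embedded predictions and desktop baselines are always scored against the same holdout set.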
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Camera system implementation on the XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture.
The camera system captures images, converts them to bitmaps, and displays them on an OLED screen using Floyd-Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures, processes, and displays an image every 60 seconds.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
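The processing stage in step 4c can be sketched as host-side C++. This is a simplified illustration, not the exact firmware: sizes are parameters so the same logic covers the camera frame and the 128x64 OLED buffer.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified sketch of the OLED preview pipeline: box-average
// downsample, linear contrast stretch, Floyd-Steinberg dithering.
// Returns one byte per pixel (0 or 1); the firmware would pack these
// into the display buffer. Assumes sw/sh are multiples of dw/dh.
std::vector<uint8_t> ditherPreview(const std::vector<uint8_t>& src,
                                   int sw, int sh, int dw, int dh) {
    assert(sw % dw == 0 && sh % dh == 0);
    int bx = sw / dw, by = sh / dh;
    // 1. Box-average each bx x by block down to one grayscale value.
    std::vector<float> gray(dw * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x) {
            int sum = 0;
            for (int j = 0; j < by; ++j)
                for (int i = 0; i < bx; ++i)
                    sum += src[(y * by + j) * sw + x * bx + i];
            gray[y * dw + x] = float(sum) / float(bx * by);
        }
    // 2. Linear contrast stretch to span the full 0..255 range.
    float lo = 255.0f, hi = 0.0f;
    for (float v : gray) { lo = std::min(lo, v); hi = std::max(hi, v); }
    float span = (hi > lo) ? (hi - lo) : 1.0f;
    for (float& v : gray) v = (v - lo) * 255.0f / span;
    // 3. Floyd-Steinberg error diffusion to 1-bit output.
    std::vector<uint8_t> bits(dw * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x) {
            float old = gray[y * dw + x];
            uint8_t on = old >= 128.0f ? 1 : 0;
            bits[y * dw + x] = on;
            float err = old - (on ? 255.0f : 0.0f);
            if (x + 1 < dw) gray[y * dw + x + 1] += err * 7.0f / 16.0f;
            if (y + 1 < dh) {
                if (x > 0) gray[(y + 1) * dw + x - 1] += err * 3.0f / 16.0f;
                gray[(y + 1) * dw + x] += err * 5.0f / 16.0f;
                if (x + 1 < dw) gray[(y + 1) * dw + x + 1] += err * 1.0f / 16.0f;
            }
        }
    return bits;
}
```

Diffusing the quantization error to neighboring pixels is what preserves gradients on a 1-bit display, which is why the OLED previews stay recognizable.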
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
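The trigger condition in steps 4b-4d amounts to rising-edge detection plus an interval check. A minimal sketch, with pin reading and thresholding left to the caller:

```cpp
#include <cstdint>

// Capture-trigger logic from the pseudocode: fire on a touch rising
// edge ("justPressed") OR when 60 s have elapsed since the last
// capture. Touch sampling and threshold detection happen upstream.
struct CaptureTrigger {
    static constexpr uint32_t kIntervalMs = 60000;
    bool lastTouch = false;
    uint32_t lastCaptureMs = 0;

    // Returns true when a capture should fire at time nowMs.
    bool update(bool touchActive, uint32_t nowMs) {
        bool justPressed = touchActive && !lastTouch;  // rising edge only
        lastTouch = touchActive;
        if (justPressed || nowMs - lastCaptureMs >= kIntervalMs) {
            lastCaptureMs = nowMs;  // reset the auto-capture timer
            return true;
        }
        return false;
    }
};
```

Tracking the previous touch state keeps a held finger from retriggering capture every loop iteration; only a new press or the elapsed timer fires.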
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps, and displays them on the OLED screen. I'm also exploring ML model integration, either through online API calls or by embedding TinyML model parameters exported from Python into C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
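The Python-to-C++ parameter-embedding route boils down to exporting trained weights as constant arrays and re-implementing the forward pass in firmware. A minimal single-neuron sketch, with made-up placeholder weights rather than a trained model:

```cpp
#include <cstddef>

// Hypothetical weights exported from a trained Python model (e.g.
// printed from a NumPy array into a header file). Placeholder values
// only, not a real model.
constexpr float kWeights[3] = {0.5f, -0.25f, 0.125f};
constexpr float kBias = 1.0f;

// Forward pass of a single linear unit: y = w.x + b.
float linearForward(const float* x, std::size_t n) {
    float y = kBias;
    for (std::size_t i = 0; i < n; ++i) y += kWeights[i] * x[i];
    return y;
}
```

A real network repeats this pattern per layer with activation functions in between; Edge Impulse automates exactly this export step, which is why it is attractive for faster deployment.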
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components including 2D laser cutting, 3D printing of rigid and flexible components, and PCB milling to prepare subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system successfully captures images, processes them into bitmaps, and displays them on an OLED screen using advanced Floyd-Steinberg dithering algorithms.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
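The processing chain in step 4c can be sketched as a desktop Python prototype of the firmware logic (frame dimensions and input values below are illustrative; the firmware operates on camera frame buffers in C++):

```python
import numpy as np

def process_frame(gray, out_w=128, out_h=64):
    """Box-average downsample, contrast-stretch, and Floyd-Steinberg dither
    an 8-bit grayscale frame into a 1-bit bitmap for a 128x64 OLED."""
    h, w = gray.shape
    bh, bw = h // out_h, w // out_w
    # 1. Downsample via box averaging: mean over each bh x bw block
    small = gray[:out_h * bh, :out_w * bw].reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))
    # 2. Contrast stretch: linearly rescale to the full 0-255 range
    lo, hi = small.min(), small.max()
    img = (small - lo) * (255.0 / max(hi - lo, 1e-6))
    # 3. Floyd-Steinberg dithering: quantize to 0/255, diffusing the
    #    quantization error to the right and lower neighbors
    out = np.zeros((out_h, out_w), dtype=np.uint8)
    for y in range(out_h):
        for x in range(out_w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new else 0
            err = old - new
            if x + 1 < out_w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < out_h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < out_w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Synthetic 480x640 gradient standing in for a camera frame
frame = np.linspace(0, 255, 480 * 640).reshape(480, 640)
bitmap = process_frame(frame)
```

Error diffusion is what lets a 1-bit OLED approximate grayscale: each pixel's rounding error nudges its unprocessed neighbors, so local on/off density tracks the original intensity.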
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
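The trigger logic in steps 4b–4d (fire on a touch press edge OR when the 60-second timer elapses) reduces to a small state machine. This Python sketch mirrors the firmware behavior with hypothetical names and caller-supplied timestamps in place of `millis()`:

```python
class CaptureTrigger:
    """Fire a capture on a touch press edge or when the interval elapses."""

    def __init__(self, interval_ms=60_000):
        self.interval_ms = interval_ms
        self.last_capture_ms = 0
        self.prev_touched = False

    def update(self, now_ms, touched):
        # Edge detection ("justPressed"): fire only on released -> pressed
        just_pressed = touched and not self.prev_touched
        self.prev_touched = touched
        timer_elapsed = (now_ms - self.last_capture_ms) >= self.interval_ms
        if just_pressed or timer_elapsed:
            self.last_capture_ms = now_ms  # reset timer after any capture
            return True
        return False

trig = CaptureTrigger()
# (time_ms, touch state) samples: one manual tap, then the timer firing
events = [trig.update(t, touched) for t, touched in
          [(0, False), (100, True), (200, True), (300, False), (70_000, False)]]
# events == [False, True, False, False, True]
```

Edge detection is what keeps a finger held on the pad from retriggering every loop iteration, and resetting the timer on manual captures prevents a back-to-back automatic capture right after a tap.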
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps with the downsampling, contrast-stretch, and dithering pipeline described above, and displays them on the OLED screen. I'm also exploring ML integration, either through online API calls or by embedding TinyML model parameters exported from Python into C++.
Future work includes streaming live video with ML predictions that update as observation time grows, and adopting Edge Impulse models as a faster alternative to manual Python-to-C++ conversion.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication across all manufacturing processes, completing the remaining physical components: 2D laser cutting, 3D printing of rigid and flexible parts, and PCB milling to ready the subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
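The two workhorse comparisons in that validation loop, RMSE against a desktop baseline and a confusion matrix on a holdout split, are simple to reproduce. The numbers below are illustrative stand-ins, not the actual bench measurements:

```python
import numpy as np

def rmse(pred, ref):
    """Root-mean-square error between two equal-length sequences."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Illustrative only: embedded ReactionAge latencies vs. a desktop baseline (ms)
embedded = [212.0, 245.5, 301.2, 188.9]
baseline = [210.0, 248.0, 297.5, 192.0]
latency_rmse = rmse(embedded, baseline)

# Illustrative 3-class FaceTTD labels on a holdout split
truth = [0, 1, 2, 1, 0, 2]
preds = [0, 1, 1, 1, 0, 2]
cm = confusion_matrix(truth, preds, 3)
accuracy = np.trace(cm) / cm.sum()
```

Logging embedded predictions to CSV and running exactly this comparison in the analysis notebook is what backs claims like the 4.6 ms RMSE figure for ReactionAge.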
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems including tapping and swiping mechanisms for phone interaction, integrated camera with Edge AI face detection and Wi-Fi livestreaming, designed and 3D-printed phone holder with amplifier, and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components including 2D laser cutting, 3D printing of rigid and flexible components, and PCB milling to prepare subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting and vinyl cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
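The latency comparison described above reduces to an RMSE between measured readings and a published regression. A minimal sketch, with hypothetical regression coefficients standing in for the published curve:

```python
import math

def predicted_latency_ms(age_years, intercept=250.0, slope=1.5):
    """Linear age regression; intercept/slope here are illustrative,
    not the published Blomkvist et al. coefficients."""
    return intercept + slope * age_years

def rmse(measured, predicted):
    """Root-mean-square error between device readings and the reference curve."""
    assert len(measured) == len(predicted)
    return math.sqrt(
        sum((m - p) ** 2 for m, p in zip(measured, predicted)) / len(measured)
    )
```

The same structure applies to the other checks: swap in the Edge Impulse cloud classifier outputs or the desktop MFCC regression as the reference, and the embedded predictions as the measurements.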
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
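A minimal sketch of this fusion step, assuming dot-product attention over per-modality embeddings (dimensions, names, and the learned query are illustrative, not the deployed model):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    """Numerically stable softmax over modality scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(query, modality_vectors):
    """Score each modality embedding (grip, voice, face, ...) against a
    learned query, softmax the scores, and return the weighted sum."""
    weights = softmax([dot(query, v) for v in modality_vectors])
    dim = len(modality_vectors[0])
    fused = [sum(w * v[i] for w, v in zip(weights, modality_vectors))
             for i in range(dim)]
    return fused, weights
```

The attention weights double as an interpretability signal: they show which modality dominated a given intrinsic capacity estimate.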
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Completed the Fusion 360/KiCad schematic/PCB layout for the ESP32S3 carrier tying together force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems, including tapping and swiping mechanisms for phone interaction; integrated the camera with Edge AI face detection and Wi-Fi livestreaming; designed and 3D-printed a phone holder with amplifier; and prepared comprehensive midterm review documentation with system diagrams and timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated ATtiny1626 tiny development board with fiber laser PCB engraving, created glass engraving system with UV laser including blinking heart integration for visual feedback, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. Integrated the complete demo table with precise placement of the MirrorAge handgrip subsystem and aging clock subsystem, providing a unified platform for system demonstration.
Continued fabrication work integrating all manufacturing processes. Focused on completing physical components including 2D laser cutting, 3D printing of rigid and flexible components, and PCB milling to prepare subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Conceptual visualization of the multimodal intrinsic capacity assessment system integrating grip strength, voice analysis, facial recognition, video motion capture, reaction time measurement, and wearable accelerometer data.
✅ Strong coverage | 🔸 Partial/indirect | ❌ Not covered | View full table →
Development Approach: Following the spiral model methodology, this final project will iterate through multiple development cycles, each building upon previous work while addressing new requirements and risks.
Key moments from the final system integration week, showcasing the complete MirrorAge system assembly, testing, and presentation.
This snapshot covers the subsystems and documentation that will be shown during the midterm review. Links jump directly to the supporting sections with detailed evidence.
Condensed from the Week 8–13 development timeline: each sprint builds toward final integration, mirroring the gantt chart below.
Calendar hold sent for Thursday, Nov 12 at 10:00 AM ET (38-501 conference room) per the shared HTMAA scheduling sheet. Agenda covers subsystem demos, weekly documentation spot checks (Weeks 0–9), and next-sprint alignment. Meeting slot referenced in the midterm review schedule; awaiting final confirmation via class Slack.
Updated block diagram highlighting the multimodal sensing stack (grip, voice, face, motion, wearables), on-device inference layers, and real-time feedback channels that feed the intrinsic capacity score.
Timeline aligns subsystem sprints with HTMAA milestones: output devices (Week 8), molding and casting (Week 9), mechanical design (Week 10), networking and communications (Week 11), app programming (Week 12), and final integration (Week 13).
Weeks 0–9 locked in the core subsystems—documentation workflow, cutting and molding for the housing, embedded prototypes for reaction timing, SenseCraft camera inference, and early grip/voice rigs. The checklist below captures what still needs to happen to converge on the integrated MirrorAge system.
Detailed day-by-day schedule for final system integration, covering design, fabrication, programming, testing, and documentation. This plan outlines the path to completing the MirrorAge system integration and preparing for final presentation. View on Week 14 page →
The act of bringing something to a conclusion or ending in a decisive manner. The ability to efficiently finish tasks and projects with a goal-oriented mindset.
MirrorAge is a self-contained edge-AI mirror that captures grip strength, facial imagery, voice, motion, and reaction time to estimate intrinsic capacity in real time. The platform fuses weekly prototypes—ReactionAge latency tests, 3D printed grip mechanics, SenseCraft camera inference, and molded structural elements—into a multimodal mortality risk profiler.
Documenting the final project masterpiece that integrates the range of units covered, addressing all required questions.
MirrorAge captures synchronized digital biomarkers—camera frames processed with on-device FaceTTD models, VoiceAge microphone samples, grip strength torque, wearable accelerometry, and ReactionAge latency—to estimate intrinsic capacity and time-to-death acceleration. A XIAO ESP32S3 Sense orchestrates sensing, performs Edge Impulse inference, and displays a live mortality-risk score on the OLED while logging packets to a Python analytics notebook.
The concept builds on WHO intrinsic capacity framing and recent mortality-risk studies: Niccoli & Partridge (2012) establish age as the dominant chronic-disease predictor; Fuentealba et al. (Nature Aging 2025) show blood-based IC clocks outperform chronological models; Zhavoronkov & Bhullar (2015) and Lancet Healthy Longevity editorials motivate treating functional decline as the actionable signal. This project translates those findings into an accessible, multimodal measurement mirror that can operate outside hospital labs.
Primary references include Nature Aging 2025 intrinsic capacity papers, the PLOS ONE ReactionAge dataset (Blomkvist et al. 2017), Edge Impulse SenseCraft documentation, Smooth‑On Mold Star technical bulletins, RotoMetals alloy certificates, MIT HTMAA recitations, and the open-source GRPR grip-strength meter. Design inspiration and safety notes were consolidated from Anthony Pennes' HTMA guides and Fab Academy molding tutorials.
• Laser-cut cardboard origami mirror frame and tensegrity-inspired floating mount (Weeks 1 & 6)
• ReactionAge firmware + enclosure with statistical post-processing dashboards (Week 2)
• 3D printed torsional spring grip module tuned for ±40 kg ranges (Week 3)
• KiCad/Fusion carrier PCB for the ESP32S3 Sense with OLED, force, and BLE breakouts (Week 5)
• Edge Impulse deployment pipeline with grayscale dithering overlay and live inference UX (Weeks 7–8)
• CAM toolpaths, silicone molds, and Drystone casts for structural packaging (Week 9)
Seeed XIAO ESP32S3 Sense module with OV2640 camera and PDM mic, SparkFun Qwiic button and force sensors, SSD1306 OLED, wearable IMU node (Bosch BHI260), laser-cut cardboard/birch sheets, PLA+/Onyx filament, Mold Star 30 silicone, Drystone gypsum, Roto281 fusible alloy, and embedded fasteners/heat-set inserts.
Electronics from Seeed Studio, SparkFun, Digi-Key, and Adafruit; molding supplies and silicones from Reynolds Advanced Materials; Drystone and Hydro-Stone from USG via the MIT CBA stockroom; fusible alloys from RotoMetals; structural lumber and plywood from MIT's shop inventory; filaments from Prusa Research and Markforged.
Current spend: $96.34 for ReactionAge components (Week 2 BOM) + $78.42 for SenseCraft camera stack (XIAO ESP32S3 Sense, OLED, cabling) + $42.10 for molding media (Mold Star 30 quart, Drystone, release agents) = $216.86 to date. Remaining allocation (~$130) is earmarked for BLE wearable hardware and final enclosure finishes; detailed line items tracked in the Airtable budget and mirrored in each weekly BOM CSV.
Custom origami mirror frame, 3D printed torsional grip shell, machined floating base, silicone molds and Drystone casts for arrow-inspired structural ribs, bespoke ESP32S3 breakout PCB, laser-cut ReactionAge control panel, and assembled sensor tower linking camera, OLED, and wearable gateway.
Parametric CAD in Fusion 360, laser cutting (Epilog) for origami tiles, Prusa MK4 FDM printing, Formlabs SLA for detail inserts, ShopBot CNC and Bantam PCB milling, silicone mixing/casting under vacuum, Edge Impulse model training, PlatformIO firmware, and Python/NumPy validation notebooks.
• Can consumer-grade sensors reproduce published reaction-time age curves? (Yes—ReactionAge matched Blomkvist et al. regression within 4.6 ms RMSE.)
• Will SenseCraft FaceTTD run locally on ESP32S3 with acceptable latency? (Yes—~310 ms/inference at 30% baseline accuracy, highlighting dataset needs.)
• Does molded packaging improve sensor placement repeatability? (Yes—silicone nests held camera ±0.5 mm, reducing alignment drift seen in cardboard prototypes.)
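The reaction-time comparison above can be reproduced with a few lines of Python. The intercept and slope below are placeholders for a published age-vs-latency regression, not the actual Blomkvist et al. coefficients, and the bench data are hypothetical:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Illustrative reference curve: mean simple reaction time (ms) as a linear
# function of age. Coefficients are placeholders, not published values.
def reference_rt(age, intercept=190.0, slope=1.1):
    return intercept + slope * age

ages = [25, 40, 55, 70]
measured = [218.0, 235.2, 249.9, 268.1]          # hypothetical bench data
predicted = [reference_rt(a) for a in ages]
print(f"RMSE vs reference curve: {rmse(predicted, measured):.1f} ms")
```

With the real coefficients and logged ReactionAge data substituted in, the same two-function comparison yields the RMSE figure reported above.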
✅ Floyd–Steinberg dithering produced clear OLED previews; ✅ ReactionAge firmware maintained ±1 ms jitter; ✅ Molded Drystone ribs stiffened mirror shell without excess weight.
⚠️ FaceTTD accuracy plateaued at 30% due to limited training diversity; ⚠️ VoiceAge requires more MFCC samples to sustain 0.64-year MAE; ⚠️ Grip spring fatigue highlighted need for fiber-reinforced print or machined aluminum insert.
Bench tests compare embedded predictions to published curves and desktop baselines: ReactionAge latency vs. Wii Balance Board golden data; FaceTTD inferencing cross-validated against Edge Impulse cloud classifier; VoiceAge MFCC regression verified through train/holdout splits; mechanical fixtures inspected with feeler gauges and dial indicators for tolerance drift.
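A train/holdout error check of the kind used for the VoiceAge regression might look like the following sketch; the age/prediction pairs are synthetic stand-ins, not project data:

```python
import random

def mae(pred, obs):
    """Mean absolute error between predictions and observations."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

random.seed(7)
# Hypothetical (true_age, predicted_age) pairs standing in for model output.
pairs = [(a, a + random.gauss(0, 0.5)) for a in range(20, 80, 3)]
random.shuffle(pairs)
split = int(0.8 * len(pairs))
train, holdout = pairs[:split], pairs[split:]   # model fitting would use `train`
err = mae([p for _, p in holdout], [a for a, _ in holdout])
print(f"holdout MAE: {err:.2f} years")
```

Only the holdout split contributes to the reported MAE, which is what guards the 0.64-year figure against overfitting.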
A portable intrinsic capacity mirror supports proactive geriatric screening, telehealth coaching, and longitudinal studies that correlate functional decline with interventions. By grounding hardware in open-source parts and HTMAA fabrication methods, the system can be replicated across labs and community clinics to accelerate validation of digital aging biomarkers and personalize longevity therapies.
Your project should incorporate 2D and 3D design, multiple additive and subtractive fabrication processes, electronics design and production, embedded microcontroller design, interfacing, and programming, system integration and packaging.
2D design work for the multimodal intrinsic capacity assessment system:
Tools Used: Inkscape, Fusion 360, KiCad, Adobe Illustrator, Figma
3D design work for device components and integration:
Tools Used: Fusion 360, FreeCAD, OpenSCAD, PrusaSlicer
Where possible, I fabricated parts rather than buying them. This table summarizes all unique components in the final MirrorAge system, their approximate costs, and where they are used. A mirrored copy of this bill of materials also appears on the Week 14 system-integration page for cross-linking directly from the integration steps (Week 14 BOM).
Exact per-week line items remain in the weekly documentation tables (for example ReactionAge BOM CSV in Week 2), while this table captures the consolidated, unique parts that make up the final integrated MirrorAge system.
Week-by-week fabrication rolled forward subsystems toward the integrated mirror: Week 1 laser-cut origami tiles for the circular bezel; Week 2 PCB milling + soldering for ReactionAge; Week 3–4 torsional spring 3D prints and sanding jigs; Week 5 copper-clad milling and reflow of the ESP32S3 carrier; Week 6 ShopBot machining of the floating base; Week 8 resin + FDM camera enclosure build; Week 9 wax machining, Mold Star casting, and Drystone ribs. Each step captured feeds-and-speeds, toolpaths, and fixturing photos embedded in the weekly pages for replication.
Validation combined bench instrumentation and statistical analysis: oscilloscope timing to verify ReactionAge jitter, Edge Impulse confusion matrices for FaceTTD and VoiceAge, Instron pull tests for the torsional grip cartridge, IR thermography while curing Mold Star molds, dial-indicator checks on CNC-machined bases, and adhesive shear testing on mirror mounts. Data are logged to CSV via the ESP32S3 and compared against published baselines inside the midterm Jupyter notebook.
Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable.
Demonstration of individual skills across all course units:
Project operates independently without external dependencies:
Present your final project, weekly and group assignments, and documentation.
Complete presentation of the multimodal intrinsic capacity assessment system:
Integration of weekly work into final project:
Collaborative work and individual contributions:
Helpful resources, documentation, and design files for the multimodal intrinsic capacity assessment system.
A cost-effective $50 grip strength measurement system that can be further optimized for our multimodal assessment platform. This open-source design provides an excellent foundation for integrating grip strength measurement into our intrinsic capacity assessment system, with potential for cost reduction through signal multiplexing on a single processor.
Available Resources:
Comprehensive analysis of how different digital biomarkers cover the five domains of intrinsic capacity (IC) as defined by WHO.
Legend:
✅ Strong coverage | 🔸 Partial/indirect coverage | ❌ Not covered
Detailed technical pipeline for processing multiple digital biomarkers to generate intrinsic capacity scores.
Features converted to vector representations for multimodal fusion
Combines multimodal features using attention mechanisms
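A minimal sketch of attention-based fusion, assuming each modality has already been embedded into an equal-length feature vector; the modality names, logits, and feature values below are illustrative, not the deployed pipeline:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(modal_feats, attn_logits):
    """Attention-weighted fusion of per-modality feature vectors.

    modal_feats: dict of modality name -> equal-length feature vector
    attn_logits: dict of modality name -> scalar relevance score
    Returns the fused vector and the normalized attention weights.
    """
    names = list(modal_feats)
    weights = softmax([attn_logits[n] for n in names])
    dim = len(next(iter(modal_feats.values())))
    fused = [0.0] * dim
    for w, n in zip(weights, names):
        for i in range(dim):
            fused[i] += w * modal_feats[n][i]
    return fused, dict(zip(names, weights))

feats = {"grip": [0.8, 0.1], "voice": [0.3, 0.6], "face": [0.5, 0.5]}
logits = {"grip": 1.0, "voice": 0.2, "face": 0.5}
fused, attn = fuse(feats, logits)
```

The fused vector then feeds the intrinsic capacity scoring head; in the full system the logits themselves are learned rather than fixed.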
How each week of HTMAA 2025 builds toward the complete multimodal intrinsic capacity assessment system.
Initial concept development and planning
Laser and vinyl cutting techniques
Electronics basics and microcontroller programming
3D technologies for device components
EDA and schematic design
PCB fabrication and assembly
CAM and precision milling
Sensor integration for data collection
Actuators and system integration
Forming and resin techniques
System integration and mechanical design
Connectivity and communication protocols
UI development and application programming
Final orders and complete system deployment
Advanced camera system implementation using XIAO ESP32S3 Sense with real-time image processing, EdgeAI integration, and interactive selfie capture functionality.
The camera system captures images, processes them into bitmaps, and displays them on an OLED screen using Floyd-Steinberg dithering.
Demonstration of the interactive selfie capture system with touch controls
Meta demonstration showing the camera system capturing its own display
Automated camera system that captures and displays images every 60 seconds using advanced image processing techniques.
1. Initialize camera with PSRAM frame buffers
2. Configure OLED display (128x64 pixels)
3. Set up 60-second capture interval timer
4. In main loop:
a. Check if 60 seconds have elapsed
b. Capture image from camera
c. Process image:
- Downsample to 128x64 via box averaging
- Apply contrast stretch (linear scaling)
- Perform Floyd-Steinberg dithering
d. Display processed bitmap on OLED
e. Release frame buffer
5. Repeat process
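The processing chain in step 4c can be sketched in Python as follows, assuming a row-major grayscale frame with 0-255 pixel values. The deployed firmware is C++; this is an illustrative model of the algorithm, not the production code:

```python
def dither_for_oled(pix, w, h, ow=128, oh=64):
    """Box-average downsample to ow x oh, contrast-stretch linearly,
    then Floyd-Steinberg dither to a 1-bit image (rows of 0/1)."""
    bx, by = w // ow, h // oh
    # Box averaging: each output pixel is the mean of a bx-by-by input block.
    gray = [[sum(pix[y * by + j][x * bx + i] for j in range(by) for i in range(bx))
             / (bx * by) for x in range(ow)] for y in range(oh)]
    # Linear contrast stretch to span the full 0-255 range.
    lo = min(min(r) for r in gray)
    hi = max(max(r) for r in gray)
    scale = 255.0 / (hi - lo) if hi > lo else 1.0
    g = [[(v - lo) * scale for v in row] for row in gray]
    # Floyd-Steinberg error diffusion with the standard 7/3/5/1 kernel.
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            old = g[y][x]
            new = 255.0 if old >= 128 else 0.0
            out[y][x] = 1 if new else 0
            err = old - new
            if x + 1 < ow:
                g[y][x + 1] += err * 7 / 16
            if y + 1 < oh:
                if x > 0:
                    g[y + 1][x - 1] += err * 3 / 16
                g[y + 1][x] += err * 5 / 16
                if x + 1 < ow:
                    g[y + 1][x + 1] += err * 1 / 16
    return out
```

Diffusing the quantization error to unprocessed neighbors is what preserves apparent gray levels on the 1-bit OLED.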
Interactive camera system with touch controls allowing manual capture triggers in addition to automatic timing.
1. Initialize camera and OLED display
2. Set up touch pins (GPIO1 & GPIO2) with threshold detection
3. Configure 60-second auto-capture timer
4. In main loop:
a. Update touch sensor readings
b. Detect touch press events (justPressed)
c. Check for capture trigger:
- Touch press OR 60-second timer elapsed
d. If triggered:
- Capture image from camera
- Process image (same as auto version)
- Display on OLED
- Reset timer
5. Continue monitoring for next trigger
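The trigger condition in step 4c (an edge-detected touch press OR the 60-second timeout) can be modeled behaviorally; GPIO pins and touch thresholds are omitted, so this is a sketch of the logic rather than the ESP32 firmware:

```python
class CaptureTrigger:
    """Fires on a touch press edge (justPressed) or when the
    auto-capture interval has elapsed since the last capture."""

    def __init__(self, interval_ms=60_000):
        self.interval = interval_ms
        self.last_capture = 0
        self.prev_touch = False

    def update(self, now_ms, touch_down):
        # Edge detection: fire only on the False -> True transition.
        just_pressed = touch_down and not self.prev_touch
        self.prev_touch = touch_down
        timed_out = now_ms - self.last_capture >= self.interval
        if just_pressed or timed_out:
            self.last_capture = now_ms   # reset the auto-capture timer
            return True                  # caller captures and displays a frame
        return False
```

Resetting the timer on every capture, manual or automatic, prevents a touch press from being immediately followed by a redundant timed capture.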
The camera system implementation began with code from Charles Lu's electronics production weekly assignment, which was based on the official XIAO tutorial. Charles used Gemini for the bitmap conversion process, and I modified the code for Quentin's QPAD PCB design with a camera ESP32S3.
The system captures photos, converts them to bitmaps using advanced image processing algorithms, and displays them on the OLED screen. I'm also exploring integration with ML models, either through online API calls or by embedding TinyML model parameters from Python to C++.
Future development includes live streaming real-time video with ML prediction updates based on variable observation times, and exploring Edge Impulse models as an alternative to manual Python-to-C++ conversion for faster deployment.
Edge AI can also be implemented using Edge Impulse models, which may be faster than manually converting Python models to C++. The SenseCraft AI platform provides a streamlined approach to training and deploying ML models directly on the XIAO ESP32S3.
Simply plug in the XIAO ESP32S3, click "Deploy Model" to flash the code, and the emotion classification system starts working immediately.
Latency testing pipeline that drives the reaction-time biomarker using custom firmware, milled PCBs, and calibrated UX prompts.
Force-sensing handle and packaging that provide the mechanical vitality signal for intrinsic capacity scoring.
Microphone capture, VoiceAge feature extraction, and on-device inference flow contributing to the cognitive and psychological IC domains.
Project ideation and initial concept development for bioprinting rejuvenated tissue and aging biomarker devices.
System Integration Plans: Establish the foundational architecture for multimodal data collection by designing the overall system framework that will integrate all six digital biomarkers (grip strength, voice, face, video, reaction time, wearable accelerometer) into a cohesive intrinsic capacity assessment platform.
Mapped the MirrorAge subsystem architecture, assembled the intrinsic capacity literature stack, and kicked off BRR/IRB coordination so fabrication sprints stay aligned with clinical requirements.
Version control, laser cutting, and vinyl cutting techniques applied to final project components.
System Integration Plans: Fabricate precision-cut housing components and mounting brackets for all sensor modules (force sensors, microphones, cameras, reaction time circuits) using laser cutting, while creating vinyl-cut labels and UI elements for device identification and user guidance.
Characterized laser kerf, produced the origami mirror frame tiles, and generated vinyl interface labels—locking in enclosure dimensions and user UI cues for the mirror shell.
Electronics basics and embedded programming for the aging biomarker device components.
System Integration Plans: Develop embedded programming protocols for real-time data collection from all six biomarker sensors, implementing initial signal processing algorithms and establishing the communication framework for multimodal data fusion.
Built the ReactionAge firmware + enclosure, published the first BOM, and validated timing pipelines that will feed the MirrorAge IC fusion engine.
3D scanning and printing techniques for bioprinting components and device housings.
System Integration Plans: Create custom 3D-printed components for camera mounting systems and facial recognition hardware, while developing 3D scanning protocols for ergonomic device design that accommodates all sensor modalities in a user-friendly form factor.
Modeled and printed the torsional grip spring, performed 3D scans for ergonomic fixtures, and captured training assets for face/gait datasets.
EDA and schematic design for the aging biomarker device electronics.
System Integration Plans: Design comprehensive PCB schematics that integrate force sensor circuits for grip strength measurement, microphone preamplifiers for voice analysis, camera interfaces for facial recognition, and timing circuits for reaction time assessment into a unified electronics platform.
Completed the Fusion 360/KiCad schematic and PCB layout for the ESP32S3 carrier, tying together the force, audio, camera, reaction, and wearable interfaces.
PCB fabrication, debugging, and assembly for the biomarker device.
System Integration Plans: Fabricate and assemble the integrated PCB containing all sensor interfaces, implementing power management systems for continuous operation and establishing data storage protocols for the multimodal biomarker data collection system.
Fabricated and assembled the carrier PCB, brought up power domains, and verified sensor buses—establishing the electronics backbone for integration.
CAM and milling for precision components and device housings.
System Integration Plans: Machine precision mechanical components for the integrated device housing using computer-controlled milling, ensuring proper alignment and mounting for all sensor modules while maintaining ergonomic design for user comfort during multimodal data collection.
Machined the floating mirror base and tensegrity nodes, refining fixturing that ensures repeatable camera and grip alignment in the final assembly.
Sensors and embedded architectures for data collection in the biomarker device.
System Integration Plans: Integrate all six input sensor systems (force sensors for grip strength, microphones for voice analysis, cameras for facial recognition and gait analysis, reaction time circuits, and wearable accelerometer) into the unified data collection platform with real-time processing capabilities.
Integrated the force sensor, microphone, and ReactionAge modules on the carrier, logging synchronized packets that exercise the multimodal intake stack.
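Logging synchronized packets means every sensor sample ships with a shared timestamp in a fixed binary layout. A sketch of one way to frame such a packet (the field layout and names below are illustrative, not the actual firmware format):

```python
import struct

# Illustrative packet layout (not the project's actual wire format):
# uint32 timestamp_ms, float32 force_N, int16 mic_sample, uint8 reaction_flag
PACKET_FMT = "<IfhB"  # little-endian, no padding: 4 + 4 + 2 + 1 = 11 bytes

def pack_sample(t_ms, force_n, mic, reaction):
    """Serialize one multimodal sample into a fixed-size packet."""
    return struct.pack(PACKET_FMT, t_ms, force_n, mic, reaction)

def unpack_sample(buf):
    """Recover (timestamp_ms, force_N, mic_sample, reaction_flag)."""
    return struct.unpack(PACKET_FMT, buf)

pkt = pack_sample(1234, 182.5, -312, 1)
```

A shared `timestamp_ms` field is what lets the downstream fusion stage align force, audio, and reaction events captured by different modules.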
Actuators and system integration for the biomarker device outputs.
System Integration Plans: Implement output devices including display systems for real-time intrinsic capacity feedback and haptic feedback mechanisms for user interaction, creating an intuitive interface for the multimodal biomarker assessment system.
Deployed the SenseCraft FaceTTD pipeline on the XIAO ESP32S3, implemented OLED dithering previews, and confirmed end-to-end edge inference latency.
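The OLED preview renders grayscale camera frames on a 1-bit display using Floyd-Steinberg error-diffusion dithering. A minimal Python sketch of the algorithm (the device runs this in C++ on the XIAO ESP32S3; the list-of-lists frame representation here is purely illustrative):

```python
def floyd_steinberg(pixels, width, height):
    """Dither a grayscale image (values 0-255) to 1-bit via error diffusion."""
    img = [row[:] for row in pixels]        # work on a copy
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            old = img[y][x]
            new = 255 if old >= 128 else 0  # quantize to black/white
            out[y][x] = 1 if new else 0
            err = old - new
            # Push the quantization error onto unvisited neighbors
            if x + 1 < width:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch should dither to roughly 50% on-pixels
gray = [[128] * 8 for _ in range(8)]
bits = floyd_steinberg(gray, 8, 8)
density = sum(map(sum, bits)) / 64
```

Error diffusion preserves average brightness, which is why mid-gray regions render as a near-checkerboard rather than collapsing to solid black or white on the monochrome OLED.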
Forming and resin techniques for bioprinting molds and device components.
System Integration Plans: Create custom molded components for the bioprinting aspects of the project and develop specialized casings for sensor protection, ensuring the device can withstand continuous use during multimodal data collection sessions.
Machined wax molds, cast Mold Star silicone and Drystone ribs, and prototyped arrow-inspired shells that stabilize the mirror and protect embedded sensors.
Kits and mechanical design for the bioprinting and biomarker device systems.
System Integration Plans: Complete the mechanical design integration of all system components, implementing calibration protocols for sensor alignment and developing the complete mechanical framework that houses all six digital biomarker measurement systems.
Developed mechanical actuation systems, including tapping and swiping mechanisms for phone interaction; integrated the camera with Edge AI face detection and Wi-Fi livestreaming; designed and 3D-printed a phone holder with an amplifier; and prepared comprehensive midterm review documentation with system diagrams and a timeline.
BLE, Wi-Fi, and communication protocols for the biomarker device connectivity.
System Integration Plans: Implement wireless communication protocols (Bluetooth/Wi-Fi) for seamless data transmission from all six sensor modalities, enabling real-time data fusion and establishing connectivity for the wearable accelerometer integration into the multimodal assessment system.
Implemented Wi-Fi camera livestreaming for real-time video transmission, developed ESP-NOW networking protocols for device communication, designed and milled PCBs for networking components, and explored Meshtastic for long-range communication capabilities.
UI and application development for the biomarker device interface.
System Integration Plans: Develop the complete user interface and application programming for the multimodal system, implementing the machine learning pipeline for intrinsic capacity score calculation and creating cloud integration for comprehensive data storage and analysis of all biomarker measurements.
Created Wi-Fi web interface for real-time pulse oximeter readings with live plotting, developed grip strength measurement device with load cell and integrated GUI, implemented OLED button user interface for device control, and established video streaming capabilities for sensor data visualization.
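The grip-strength device converts raw load-cell counts to force through a two-point calibration (zero load plus a known reference weight). A sketch of that conversion (the counts and 100 N reference below are illustrative, not the device's calibration data):

```python
def calibrate(raw_zero, raw_ref, ref_newtons):
    """Return a raw-counts -> Newtons converter from a two-point calibration.

    raw_zero: ADC counts with no load applied
    raw_ref:  ADC counts under a known reference force
    """
    scale = ref_newtons / (raw_ref - raw_zero)
    def to_newtons(raw):
        return (raw - raw_zero) * scale
    return to_newtons

# e.g. 8,400 counts at no load, 151,200 counts under a 100 N reference
to_n = calibrate(8_400, 151_200, 100.0)
```

The GUI then plots `to_n(raw)` over time, so the live trace reads directly in Newtons instead of raw amplifier counts.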
Final orders and wildcard week activities for project completion.
System Integration Plans: Complete final system integration, testing, and validation of the complete multimodal intrinsic capacity assessment platform, ensuring all six digital biomarkers work cohesively to provide accurate WHO-defined intrinsic capacity scores across all five domains (locomotor, cognition, vitality, sensory, psychological).
Designed and fabricated an ATtiny1626 tiny development board with fiber-laser PCB engraving, created a UV-laser glass engraving system with an integrated blinking-heart visual indicator, milled mounting holes and completed board assembly, and prepared final board designs for system integration.
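The WHO intrinsic capacity score mentioned above combines the five domain scores into a single composite. A deliberately simple sketch of such a fusion (the equal weights and 0-100 normalization are placeholders for illustration only, not the project's validated scoring model):

```python
# Illustrative fusion only: weights and normalization are assumptions,
# not the project's validated intrinsic capacity model.
DOMAINS = ["locomotor", "cognition", "vitality", "sensory", "psychological"]

def intrinsic_capacity_score(domain_scores, weights=None):
    """Combine per-domain scores (each 0-1) into a single 0-100 IC score."""
    if weights is None:
        weights = {d: 1.0 for d in DOMAINS}  # equal weighting by default
    total_w = sum(weights[d] for d in DOMAINS)
    fused = sum(domain_scores[d] * weights[d] for d in DOMAINS) / total_w
    return 100.0 * fused

score = intrinsic_capacity_score(
    {"locomotor": 0.8, "cognition": 0.9, "vitality": 0.7,
     "sensory": 1.0, "psychological": 0.85}
)
```

In the real pipeline each domain score would itself come from the corresponding biomarker model (grip strength, voice, face, gait, reaction time, accelerometry) rather than being supplied directly.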
Final integration week focused on bringing all subsystems together, completing hardware fabrication, firmware integration, and documentation for the MirrorAge system presentation. View full details →
Initial system integration work focused on subsystem validation, CAD model consolidation, and design backbone acquisition for band integration. Conducted comprehensive testing of all subsystems to ensure proper functionality, identified and resoldered defective joints, and exported CAD models of all PCBs into a unified assembly model.
Physical integration of subsystems into the complete demo table assembly, including mechanical component fabrication and electrical board consolidation. The MirrorAge handgrip and aging clock subsystems were placed precisely on the table, providing a unified platform for system demonstration.
Continued fabrication across all manufacturing processes, completing physical components including 2D laser cutting, 3D printing of rigid and flexible parts, and PCB milling to prepare the subsystems for final assembly and testing.
Comprehensive subsystem integration combining 2D fabrication (laser cutting), 3D printing (rigid and flexible components), PCB milling (single and double-sided boards), and molding/casting to complete all physical components. Upgraded MirrorAge TFT board to v13 and aging clock TFT board to v16 with optimized component placement and routing. Established reliable I²C bus communication and validated all sensor addresses.
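Validating sensor addresses after an I²C scan reduces to comparing the discovered address set against the expected bus map. A sketch of that check (the addresses and device names here are common defaults used for illustration; the actual bus map lives in the Week 14 firmware):

```python
# Illustrative bus map; actual addresses are defined in the firmware.
EXPECTED = {
    0x3C: "OLED display",
    0x57: "pulse oximeter",
    0x19: "accelerometer",
}

def check_bus(found_addrs):
    """Report missing and unexpected devices after an I2C address scan."""
    found = set(found_addrs)
    missing = sorted(set(EXPECTED) - found)  # expected but not responding
    extra = sorted(found - set(EXPECTED))    # responding but not in the map
    return missing, extra

# Scan found the display and accelerometer, but the oximeter is silent
missing, extra = check_bus([0x3C, 0x19])
```

On the device the `found_addrs` list comes from probing addresses 0x08-0x77 over the Wire/I²C peripheral; flagging missing devices at boot is what caught the defective solder joints noted above.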
Final integration day focused on closing the watch subsystem, soldering and verifying all double-sided boards, refining molded bands, and assembling the complete MirrorAge demo with both the aging clock and grip-strength subsystems fully wired and tested. Completed rivet installation for plated through-holes, sensor bring-up (accelerometer, pulse oximeter, OLED/TFT displays), and full system integration on the tensegrity demo table. The integrated system successfully demonstrated synchronized sensor readouts and display behavior across all boards.
Final project presentation and demonstration of the complete MirrorAge system. Showcased the integrated aging clock and grip-strength subsystems working together on the demo table, demonstrating real-time sensor data collection and display synchronization. View final presentation demo →
Full Documentation: For complete details, design files, code, and comprehensive documentation of the system integration process, see Week 14: System Integration.
Complete demonstration of the integrated MirrorAge system showcasing all subsystems working together in real-time.
Final Presentation Demo: Complete MirrorAge system demonstration showing the integrated aging clock and grip-strength subsystems operating together on the demo table, with synchronized sensor readouts and real-time display updates across all boards.
Complete repository of all design files, CAD models, schematics, PCB layouts, firmware, and fabrication files from Week 0 through Week 14, organized by week and component type.
Note: Additional design files, firmware, and documentation can be found in the respective week pages. Some files are part of larger archives (ZIP files) that contain multiple related design files.
Comprehensive reflections on the MirrorAge Intrinsic Capacity Mirror project development journey.
Acknowledgements and contributions that made this project possible.
Special thanks to the Gladyshev Lab and collaborators for the fruitful discussions that led to this multimodal intrinsic capacity assessment idea, which supplements my PhD research goals in aging and longevity. The conceptual framework for integrating multiple digital biomarkers to assess intrinsic capacity domains emerged from collaborative research discussions on aging biomarkers and healthspan assessment.
Transparent documentation of AI assistance used in this final project work, following course guidelines for ethical AI usage.
📋 General Guidelines: See General Commands for Cursor on the homepage for standard guidelines and commands used consistently throughout documentation development.
Comprehensive Cursor AI assistance for outlining the project plan and weekly schedule, including documentation structure, content organization, systematic updates across all week pages and the final project page, media integration, and summary generation workflows.
Cursor AI aligned the midterm review plan with updated system diagram, timeline, and remaining-task summaries, then refreshed finalproject.html to remove legacy bioprinting language and re-point internal links. The full transcript and generated HTML are available for review.
Cursor AI assistance for adding a white background behind the photo in the Final Project Spiral Development Model section on finalproject.html. The image container was updated with white background, padding, and rounded corners for improved visual presentation.
Cursor AI distilled Week 0–9 documentation and the Oct 31 lab meeting deck into midterm-ready narrative, cost, and validation content, replacing every placeholder in finalproject.html.
Transcript archived in markdown and HTML (generated via scripts/md_to_html_converter.py) for transparency.
ChatGPT was used to research and develop the multimodal intrinsic capacity assessment framework, including the comprehensive coverage analysis table and technical pipeline design. The AI assisted with structuring the WHO-defined intrinsic capacity domains, identifying appropriate digital biomarkers, and designing the fusion architecture for multimodal data processing.
Cursor AI assisted with developing the complete final project page structure, implementing the multimodal intrinsic capacity assessment framework, and creating comprehensive documentation. The AI helped with HTML structure, responsive design, weekly system integration plans, and organizing the technical documentation for the complete biomarker assessment system.
Cursor AI assisted with finalizing the project presentation structure to ensure full compliance with MIT Academy project presentation requirements. The AI helped implement all required sections including answering questions, design documentation, bill of materials, individual mastery requirements, course presentation structure, and spiral model development approach visualization.
ChatGPT was used to discuss camera system implementation strategies, image processing algorithms, and EdgeAI integration approaches. The AI assisted with understanding Floyd-Steinberg dithering implementation, touch sensor integration, and exploring TinyML deployment options for the XIAO ESP32S3 platform.
Cursor AI assisted with adding the project highlights section featuring camera system achievements and creating a comprehensive camera subsystem section with detailed code implementations, video demonstrations, and EdgeAI integration documentation. The AI helped with HTML structure, responsive design, image processing explanations, and organizing the technical documentation for the complete camera system showcase.
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License