
Week 3 - Embedded Programming

I spent this week on both the week’s assignment and my final project. We’ll look at both below.

This week’s assignment

Here is the assignment as per the course website.

group assignment:

• demonstrate and compare the toolchains and development workflows
  for available embedded architectures

individual assignment:

• browse through the data sheet for a microcontroller
• write and test a program for an embedded system using a microcontroller
  to interact (with local input &/or output devices)
  and communicate (with remote wired or wireless connections)
• extra credit: try different languages &/or development environments

Group Assignment

Ben, Jaqueline and I talked about different microcontrollers and ways to program them. Jaqueline told us about how she’s using the Arduino IDE to program her SAMD21 board in C++. Ben is using both a Teensy 4.0 and an Adafruit Feather. He’s programming them with PlatformIO (a VS Code extension) and using CircuitPython (with the board mounted as a file system). I shared my experience with micropython and two ways I’ve used in the past to program ESP32 dev boards:

  1. Using VS Code and PyMakr (I don’t like this approach; it’s not reliable and I spend more time debugging than programming)
  2. Attaching the board to WSL and using mpremote to either use the REPL on the board directly or push python files onto it. I’ll outline that approach in more detail further down, in the section on my final project progress.

Ben put together a PDF outlining our conversation: PDF - Group Assignment


Individual Assignment

This week was, again, a lot of fun. I made a board (or rather multiple boards) that can communicate with a web page to imitate message sending from Stranger Things' Upside Down. The video above gives you a first impression.

I had experience with both soldering and embedded programming from before my time at the Media Lab - more so with the latter than the former, though. I specifically did not have any experience with soldering very small components, so when it came to choosing a board for this week, I opted for the board with a SAMD21 footprint, as it would allow me to practice soldering fairly small joints.

Soldering

Admittedly, soldering Quentin’s QPad board was a bit of an emotional rollercoaster. I was very frustrated initially when I used too much solder and essentially created one big short on each of the four sides of the microcontroller. It gave me an opportunity to learn how to use the desoldering wick and pump, though. Initially it felt like all I was accomplishing was soldering the desoldering wick onto the board. Eventually I got the hang of it and successfully soldered the SAMD21 microcontroller onto the board. The other components were very easy in comparison - with the exception of the micro-USB receptacle. It has a few very small contacts, but having learned from previous mistakes, I was able to solder it with less solder and a good amount of flux.

Quentin then helped me with flashing the required firmware onto the chip. Loading his code examples showed that the assembly of the board was successful and all components worked as expected. The initial frustration was gone and I left the lab very happy. I’m very keen on improving my soldering skills in the coming weeks and will look for opportunities to practice. Thank you, Quentin, for designing the practice board for us and helping me with questions and the final tests. Thank you, Alan, for your help with the equipment and soldering advice. I appreciate both of you.

Assembled board

Programming

I felt much more at home when it came to programming the board. My idea for this week leans on a pop culture reference. In the first season of the show “Stranger Things”, one of the characters, Will Byers, gets lost in a parallel dimension. This place is later named “The Upside Down” by the characters in the show. His mom never gives up on him, and they find a way to communicate with each other: his mom hangs up a string of lights with a letter attached to each light bulb. Will is then able to switch on these light bulbs from the Upside Down and can communicate letters and words to his mom that way.

I had hoped that I could use an ESP32 for this week’s assignment, but at the time of soldering none was available. The narrative of my idea is a little stronger if we imagine a wireless Bluetooth connection (ESP32s support Bluetooth; the SAMD21 I ended up using does not), but I think it still works and is a cute idea.

Imagine getting lost in the Upside Down and you could whip out your handy QPad board to communicate with your home dimension! Wouldn’t that be great?

For the assignment I made use of AI for image generation. Below you can see a) the reference image I used, b) the image ChatGPT gave me, and c) the resulting image I made myself by adding additional light bulbs (not every letter had one), adding more lights for decoration, and adjusting the image brightness for mood.

Results: a) reference image, b) ChatGPT output, c) final edited image
Prompt used for image transformation, ChatGPT with GPT-5

Can you please look at the attached image and transform the image for me in the following ways:

  • I want a orthographic frontal view of the wall with the letters and the lights.
  • The lights need to be switched off
  • Details like the shape of the light bulbs and the pattern on the wallpaper etc should be preserved.
  • The shape and order of the letters also needs to be preserved.

I then also used AI to give me a starting point for the HTML and JS code required to make the light bulb message board work.

Prompt Ok, cool. Can you give me some basic HTML that loads this image and then also places little blurry dots over the light bulbs and uses add or screen or so as a combination method? I want to be able to “switch on” the individual light bulbs.

The resulting setup was very decent already and helped a lot with getting this up and running fairly quickly.

Result
<!doctype html>
<meta charset="utf-8" />
<title>Stranger Wall – bulb toggles</title>
<style>
  html,body{height:100%;margin:0;background:#111;display:grid;place-items:center}
  #stage{position:relative;image-rendering:auto}
  #wall{display:block;max-width:min(90vw,1100px);height:auto}
  canvas{position:absolute;inset:0;pointer-events:none}
  /* optional: a quick legend */
  .hint{position:fixed;bottom:10px;left:10px;color:#aaa;font:12px system-ui}
</style>

<div id="stage">
  <!-- Replace src with your exported ortho, lights-off image -->
  <img id="wall" src="wall_ortho_off.jpg" alt="Alphabet wall" />
  <canvas id="fx"></canvas>
</div>
<div class="hint">Click bulb to toggle • Alt-click logs coords for adding bulbs</div>

<script>
const img   = document.getElementById('wall');
const cvs   = document.getElementById('fx');
const ctx   = cvs.getContext('2d', { alpha: true, desynchronized: true });

// Example bulb list (x,y in image pixels). Add as many as you want.
// You can mix colors per bulb.
const bulbs = [
  {x:120, y:115, r:36, color:'#ffd27a', on:false},
  {x:270, y:130, r:36, color:'#7ad6ff', on:false},
  {x:420, y:120, r:36, color:'#ffa0ff', on:false},
  {x:565, y:135, r:36, color:'#ffd27a', on:false},
  {x:720, y:120, r:36, color:'#7ad6ff', on:false},
  {x:870, y:135, r:36, color:'#ffd27a', on:false},

  {x:140, y:255, r:36, color:'#7ad6ff', on:false},
  {x:310, y:260, r:36, color:'#ffd27a', on:false},
  {x:490, y:270, r:36, color:'#ffa0ff', on:false},
  {x:660, y:260, r:36, color:'#ffd27a', on:false},
  {x:840, y:270, r:36, color:'#7ad6ff', on:false},

  {x:160, y:405, r:36, color:'#ffd27a', on:false},
  {x:360, y:410, r:36, color:'#7ad6ff', on:false},
  {x:560, y:415, r:36, color:'#ffd27a', on:false},
  {x:760, y:410, r:36, color:'#ffa0ff', on:false}
];

let scale = 1, dpr = Math.max(1, window.devicePixelRatio || 1);

// size canvas to the image on load/resize
function fit() {
  const rect = img.getBoundingClientRect();
  cvs.style.width  = rect.width + 'px';
  cvs.style.height = rect.height + 'px';
  cvs.width  = Math.round(rect.width  * dpr);
  cvs.height = Math.round(rect.height * dpr);
  scale = rect.width / img.naturalWidth;
  draw();
}

function draw() {
  ctx.clearRect(0,0,cvs.width,cvs.height);
  ctx.save();
  ctx.scale(dpr, dpr);
  ctx.globalCompositeOperation = 'screen';        // additive-like
  for (const b of bulbs) if (b.on) glow(b);
  ctx.restore();
}

function glow(b) {
  // position in CSS pixels
  const x = b.x * scale, y = b.y * scale;
  const r = (b.r || 36) * scale;

  // outer soft aura
  let g = ctx.createRadialGradient(x, y, 0, x, y, r*2.4);
  g.addColorStop(0, hexA(b.color, 0.36));
  g.addColorStop(0.35, hexA(b.color, 0.14));
  g.addColorStop(1, hexA(b.color, 0));
  ctx.fillStyle = g; ctx.beginPath(); ctx.arc(x,y,r*2.4,0,Math.PI*2); ctx.fill();

  // inner bright core
  g = ctx.createRadialGradient(x, y, 0, x, y, r);
  g.addColorStop(0, hexA('#ffffff', 0.85));
  g.addColorStop(1, hexA(b.color, 0));
  ctx.fillStyle = g; ctx.beginPath(); ctx.arc(x,y,r,0,Math.PI*2); ctx.fill();
}

// click to toggle nearest bulb
img.addEventListener('click', (e) => {
  const rect = img.getBoundingClientRect();
  const x = (e.clientX - rect.left) / scale;
  const y = (e.clientY - rect.top ) / scale;

  if (e.altKey) { console.log(`{x:${x|0}, y:${y|0}, r:36, color:'#ffd27a', on:false},`); return; }

  let iBest = -1, dBest = 1e9;
  bulbs.forEach((b,i) => {
    const dx = b.x - x, dy = b.y - y;
    const d2 = dx*dx + dy*dy;
    if (d2 < dBest) { dBest = d2; iBest = i; }
  });
  if (iBest >= 0) { bulbs[iBest].on = !bulbs[iBest].on; draw(); }
});

img.decode ? img.decode().then(fit) : img.complete ? fit() : img.addEventListener('load', fit);
window.addEventListener('resize', fit);

// helper: hex -> rgba with alpha
function hexA(hex, a=1){
  const h = hex.replace('#','');
  const [r,g,b] = h.length===3
    ? [...h].map(s => parseInt(s+s,16))
    : [h.slice(0,2),h.slice(2,4),h.slice(4,6)].map(s=>parseInt(s,16));
  return `rgba(${r},${g},${b},${a})`;
}
</script>

This setup allowed me to quickly identify the locations of the light bulbs for initial testing. I then refactored the code manually to make it easier to address specific light bulbs that should be dynamic (as opposed to the static bulbs on the purely decorative strings). I also played with different blending modes and sizes of the lights. Additionally, I added an animation loop that allowed for a more dynamic feel. I was quite happy with the final look and feel of the page.
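
For a rough idea of that loop: the sketch below redraws the canvas every frame with a per-bulb sinusoidal intensity. The flicker field is an illustrative addition (my actual parameters differ), and glow() would multiply it into its gradient alphas:

function animate(now) {
  const t = now / 1000;
  for (const b of bulbs) {
    // per-bulb phase offset (based on x) so the bulbs don't pulse in sync
    b.flicker = b.on ? 0.85 + 0.15 * Math.sin(t * 7 + b.x) : 0;
  }
  draw(); // glow() would scale its gradient alphas by b.flicker
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);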

I then proceeded to add the JS Web Serial API to my setup so that I could send commands from the SAMD21 microcontroller to activate individual lights. Note that this API is currently only supported in Chromium-based browsers such as Chrome and Edge. The communication across this interface is very simple: the board sends the character (letter) that needs switching on, and the JavaScript on the page figures out which of the bulbs needs to be addressed. The logic on the page also makes it so that switching on one light automatically switches off all the others.
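
The receiving side only needs a few lines. Below is a minimal sketch of how the page can read characters over Web Serial; setBulb() stands in for my page’s actual bulb-switching function:

async function connectBoard() {
  // requestPort() must be triggered by a user gesture (e.g. a button click)
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 115200 });
  // decode incoming bytes as text and hand each character to the wall
  const reader = port.readable.pipeThrough(new TextDecoderStream()).getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    for (const ch of value) setBulb(ch); // setBulb also switches all other bulbs off
  }
}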

The code for the board was largely straightforward. I used the Arduino IDE and wrote C++ code for the logic on the board. I based my code on Quentin’s examples for both the screen and touch pad handling.

The only unintuitive part of this process was building a menu for the screen. I wanted a menu that allowed the following:

  1. Send pre-defined messages for demos
  2. Send individual letters for when someone is actually lost in the Upside Down and needs to send a custom message
  3. A settings section that lets the user change the speed at which the characters are sent in the case of pre-defined messages

I used ArduinoMenu for the menu definition and ran into a couple of issues. The menu declarations are based on C macros, which obfuscate errors a little: if you make a mistake, you will most likely receive error messages that have more to do with the macro system than with your actual mistake. I eventually figured out (with the help of ChatGPT) that each menu declaration needs a minimum of 2 entries and can have a maximum of 16. I violated both of these in my tests and was presented with unhelpful messages about certain symbols not being declared. So, if you use ArduinoMenu, make sure you add at least two menu entries, even if you’re just testing. I wanted to have all letters in a single menu, but due to the size constraints I had to split the letters across two menus. Now the user first needs to select whether they want to send a letter in the range A-M (first menu) or N-Z (second menu). I’m not particularly happy with that setup, but it works for now. There are probably much better ways to handle the letter selection, though. A minimal sketch of the menu structure follows below.
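
For illustration, here is that sketch - not my exact code. It follows the macro style from the ArduinoMenu examples, with serial IO standing in for the screen and touch pad; names and values are illustrative:

#include <menu.h>
#include <menuIO/serialOut.h>
#include <menuIO/serialIn.h>
using namespace Menu;

#define MAX_DEPTH 2
int sendDelay = 500; // ms between characters, adjustable via the settings field

result sendHelpMe(eventMask e) {
  // stream a pre-defined message character by character
  for (const char* c = "helpme"; *c; ++c) { Serial.write(*c); delay(sendDelay); }
  return proceed;
}

// each MENU needs at least two entries - EXIT counts as one
MENU(presetMenu, "Presets", doNothing, noEvent, wrapStyle
  ,OP("helpme", sendHelpMe, enterEvent)
  ,EXIT("<Back")
);

MENU(mainMenu, "Upside Down", doNothing, noEvent, wrapStyle
  ,SUBMENU(presetMenu)
  ,FIELD(sendDelay, "Speed", "ms", 100, 2000, 100, 10, doNothing, noEvent, noStyle)
  ,EXIT("<Back")
);

MENU_OUTPUTS(out, MAX_DEPTH
  ,SERIAL_OUT(Serial)
  ,NONE // MENU_OUTPUTS also needs at least two entries
);
serialIn serial(Serial);
NAVROOT(nav, mainMenu, MAX_DEPTH, serial, out);

void setup() { Serial.begin(115200); }
void loop() { nav.poll(); }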

Below is a video of the final result. I’m sending the message “howtomakealmostanything” in this case - which is unlikely to save anyone from the Upside Down, but there are two additional messages stored on the board: “iamlost” and “helpme”. The speed of the message can be controlled with a simple setting accessible via the menu.

To interactively test the final result you can visit the Message Wall page. You can use your own board to send these messages, but to make things easier for you, I left in the code that switches lights on based on mouse clicks.

Update

Quentin was able to give me an ESP32-S3 board so I could test a bluetooth connection to my web page. Adding the respective JS code was easy enough. I’m using the Web Bluetooth API to connect to the board and have a simple eventListener set up that receives characters and passes them over to my function that handles the light bulbs. I have written JS code for these sorts of applications before, so I used ChatGPT to produce the code and save me some time.

Prompt Can you give me some basic JS code that allows for making a web bluetooth connection to receive some data from an esp32 board?
Result
// Replace with your GATT UUIDs (example: Nordic UART Service)
const SERVICE_UUID = '6e400001-b5a3-f393-e0a9-e50e24dcca9e';
const RX_CHAR_UUID = '6e400003-b5a3-f393-e0a9-e50e24dcca9e'; // notifications from ESP32

let charRx;

async function connectESP32() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [SERVICE_UUID] }]
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(SERVICE_UUID);
  charRx = await service.getCharacteristic(RX_CHAR_UUID);

  // receive notifications
  await charRx.startNotifications();
  charRx.addEventListener('characteristicvaluechanged', (e) => {
    const v = e.target.value;           // DataView
    // decode as UTF-8 (or parse bytes as needed)
    const text = new TextDecoder().decode(v.buffer);
    console.log('ESP32>', text);
  });
}

// call connectESP32() from a user gesture (e.g., button click)

I did the same for the micropython side of things. Again, I’ve done this sort of setup before and since this was supposed to be just a quick test of the board, I enlisted ChatGPT for help.

Prompt Ok, let’s also add some micropython code that can connect to that bluetooth receiver we just made and then send some characters like ‘a’ to it.
Result
# MicroPython on ESP32: BLE UART-style peripheral that notifies the browser.
import bluetooth, time
from micropython import const

_IRQ_CENTRAL_CONNECT    = const(1)
_IRQ_CENTRAL_DISCONNECT = const(2)
_IRQ_GATTS_WRITE        = const(3)

# Nordic UART Service UUIDs (your values)
UART_SERVICE_UUID = bluetooth.UUID('6e400001-b5a3-f393-e0a9-e50e24dcca9e')
UART_TX_UUID      = bluetooth.UUID('6e400003-b5a3-f393-e0a9-e50e24dcca9e')  # notify: ESP32 -> browser
UART_RX_UUID      = bluetooth.UUID('6e400002-b5a3-f393-e0a9-e50e24dcca9e')  # write: browser -> ESP32

# Simple advertising payload (name + 128-bit service UUIDs)
def adv_payload(name=None, services=None):
    p = bytearray()
    if name:
        nb = name.encode()
        p += bytes((len(nb)+1, 0x09)) + nb                  # Complete Local Name
    if services:
        for u in services:
            b = bytes(u)
            p += bytes((len(b)+1, 0x07)) + b                # Complete List of 128-bit Service UUIDs
    return p

ble = bluetooth.BLE()
ble.active(True)

# GATT table: one service with TX(notify) and RX(write) chars
UART_SERVICE = (
    UART_SERVICE_UUID,
    (
        (UART_TX_UUID, bluetooth.FLAG_NOTIFY),
        (UART_RX_UUID, bluetooth.FLAG_WRITE),
    ),
)
(tx_handle, rx_handle) = ble.gatts_register_services((UART_SERVICE,))[0]
conn_handle = None

def on_ble(event, data):
    global conn_handle
    if event == _IRQ_CENTRAL_CONNECT:
        conn_handle, _, _ = data
        print("Central connected:", conn_handle)
    elif event == _IRQ_CENTRAL_DISCONNECT:
        print("Central disconnected")
        conn_handle = None
        start_advertising()
    elif event == _IRQ_GATTS_WRITE:
        conn, attr_handle = data  # data is (conn_handle, value_handle)
        if attr_handle == rx_handle:
            msg = ble.gatts_read(rx_handle)
            print("RX from browser:", msg)

ble.irq(on_ble)

def start_advertising():
    name = "ESP32-UART"
    ble.gap_advertise(100_000, adv_payload(name=name, services=[UART_SERVICE_UUID]))
    print("Advertising as", name)

start_advertising()

# Demo: once connected (and after the browser enables notifications),
# send 'a' every second.
while True:
    if conn_handle is not None:
        try:
            ble.gatts_notify(conn_handle, tx_handle, b'a')
        except Exception as e:
            # If notifications aren't enabled yet, this may no-op or raise; keep trying.
            pass
        time.sleep(1)
    else:
        time.sleep(0.2)

I manually extended the micropython code it produced to send the string “helpme” instead of just a single letter.
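
The change boils down to replacing the demo loop at the bottom of the generated script. A minimal sketch, reusing the names from the code above:

# Notify one character of "helpme" per second instead of a constant 'a'.
MESSAGE = b'helpme'
i = 0
while True:
    if conn_handle is not None:
        try:
            ble.gatts_notify(conn_handle, tx_handle, MESSAGE[i:i+1])
            i = (i + 1) % len(MESSAGE)
        except Exception:
            pass  # notifications may not be enabled yet; keep trying
        time.sleep(1)
    else:
        time.sleep(0.2)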

Setting up the ESP32-S3 board with micropython was done using esptool. You can grab a micropython release from here and then run:

python -m esptool --chip esp32s3 --port COM9 --baud 460800 write_flash -z 0x0 <Path To>\ESP32_GENERIC_S3-20250911-v1.26.1.bin
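
If the board had other firmware on it before, it’s usually a good idea to erase the flash first; this uses esptool’s standard erase command:

python -m esptool --chip esp32s3 --port COM9 erase_flash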

To then use the board in WSL, I ran this in PowerShell:

usbipd bind --busid 2-2
usbipd attach --wsl --busid 2-2

Where the busid can be obtained via:

usbipd list

In WSL, I connected to the board via mpremote and pushed the bluetooth script onto it as main.py using:

mpremote connect auto fs cp ./bluetooth_sender.py :main.py

Now, when I connect a power bank to the board, it starts advertising and is available for pairing. Once the web page pairs with the board, the board starts sending the “helpme” string character by character. It works as expected and is a much better way to venture out into the Upside Down!

Progress on the Final Project

E-ink screen

For my Final Project I need to send images (like QR codes) to an e-ink screen. I want to use micropython for the logic in my final project, so I was looking for a micropython driver I could use for the board. I ran into some problems: the only micropython driver I could find targeted V1 of the Waveshare board and didn’t work on the V2 screen that I purchased. I couldn’t find a V2 driver written in micropython, but I ended up playing with a combination of C++ libraries (GxEPD2 and the Adafruit GFX Library) that allowed me to verify that the screen itself actually worked fine. This video was very helpful when narrowing down which libraries to try.

Obviously, I still didn’t have a micropython driver, and the C++ libraries were obfuscated enough (lots of inheritance from templated base classes to support a wide range of screen models) that deriving the byte sequence needed to initialize and operate the screen was a little outside of what I could realistically achieve this week.

The capabilities of modern LLMs astound me on a weekly basis. To solve my “problem”, I opened the GxEPD2 library in Visual Studio Code and asked GPT-5-mini to help me figure out how the communication with the screen is handled for my particular screen model.

Prompt #codebase I’d love for you to analyze the exact control seuqence needed to init a waveshare 1inch54 board. I know that the code here works on my display so let’s figure out how exactly that works. Afterwards I’d like to translate it to micropython, but let’s figure out how the communication with the display is done and what’s required to show pixels on the display please.

It came back giving me a few options for a few different display types. I clarified which screen I want to use with the following prompt.

Prompt I know that the one we want to use is the D67 variant. I ran that on the display (via ESP32 dev boad) and it worked well. Let’s use what we know about that variant and produce a micropython equivalent to how the c++ code works with that display, please.

It then produced micropython driver code that worked flawlessly for my test:

Driver
"""
MicroPython driver for GDEH0154D67 (1.54" 200x200 SSD1681) based on GxEPD2 Arduino sequences.
Target: ESP32 (adjust SPI init pins for other boards)

Wiring (example):
- CS  -> any GPIO (active low)
- DC  -> any GPIO
- RST -> any GPIO
- BUSY-> any GPIO (input)
- SCK -> HSPI SCK
- MOSI-> HSPI MOSI
- MISO-> optional (not used)

Notes:
- Panel requires 3.3V for VCC and IO. Do NOT use 5V data lines.
- Busy polarity for this variant: HIGH when busy.

Usage:
  import machine, time
  from micropython_gdeh0154d67 import EPD
  epd = EPD(spi_id=1, sck=14, mosi=13, cs=15, dc=27, rst=26, busy=25)
  epd.init()
  buf = bytearray([0xFF]*(200*200//8))  # white
  epd.write_image(0,0,200,200,buf)
  epd.update_full()
  epd.sleep()

"""

import time
from machine import Pin, SPI

# Display parameters
WIDTH = 200
HEIGHT = 200
BUSY_ACTIVE_LEVEL = 1  # busy HIGH for SSD1681

class EPD:
    def __init__(self, spi_id=1, sck=14, mosi=13, miso=-1, baudrate=4000000, cs=15, dc=27, rst=26, busy=25):
        # pins
        self.cs = Pin(cs, Pin.OUT, value=1)
        self.dc = Pin(dc, Pin.OUT, value=1)
        self.rst = Pin(rst, Pin.OUT, value=1)
        self.busy = Pin(busy, Pin.IN)
        # SPI
        if miso >= 0:
            self.spi = SPI(spi_id, baudrate=baudrate, polarity=0, phase=0, sck=Pin(sck), mosi=Pin(mosi), miso=Pin(miso))
        else:
            self.spi = SPI(spi_id, baudrate=baudrate, polarity=0, phase=0, sck=Pin(sck), mosi=Pin(mosi))
        # internal
        self._inited = False

    # low-level helpers
    def _cs_low(self):
        self.cs.value(0)
    def _cs_high(self):
        self.cs.value(1)
    def _dc_command(self):
        self.dc.value(0)
    def _dc_data(self):
        self.dc.value(1)

    def send_command(self, cmd):
        self._dc_command()
        self._cs_low()
        self.spi.write(bytes([cmd]))
        self._cs_high()
        self._dc_data()

    def send_data(self, data):
        # data can be int or bytes/bytearray
        self._dc_data()
        self._cs_low()
        if isinstance(data, int):
            self.spi.write(bytes([data]))
        else:
            # assume bytes-like
            self.spi.write(data)
        self._cs_high()

    def _start_data(self):
        self._dc_data()
        self._cs_low()

    def _end_data(self):
        self._cs_high()

    def reset(self):
        # follow Waveshare style: drive RST high, then low, then high
        self.rst.value(1)
        time.sleep_ms(10)
        self.rst.value(0)
        time.sleep_ms(10)
        self.rst.value(1)
        time.sleep_ms(10)

    def wait_while_busy(self, timeout_ms=10000):
        start = time.ticks_ms()
        while self.busy.value() == BUSY_ACTIVE_LEVEL:
            time.sleep_ms(1)
            if time.ticks_diff(time.ticks_ms(), start) > timeout_ms:
                raise OSError('EPD busy timeout')

    # partial ram area like in Arduino driver
    def set_partial_ram_area(self, x, y, w, h):
        # _writeCommand(0x11); _writeData(0x03);
        self.send_command(0x11)
        self.send_data(0x03)
        # 0x44 x start/end in bytes
        self.send_command(0x44)
        self.send_data(x // 8)
        self.send_data((x + w - 1) // 8)
        # 0x45 y start/end
        self.send_command(0x45)
        self.send_data(y & 0xFF)
        self.send_data((y >> 8) & 0xFF)
        yend = y + h - 1
        self.send_data(yend & 0xFF)
        self.send_data((yend >> 8) & 0xFF)
        # 0x4E ram x address
        self.send_command(0x4E)
        self.send_data(x // 8)
        # 0x4F ram y address
        self.send_command(0x4F)
        self.send_data(y & 0xFF)
        self.send_data((y >> 8) & 0xFF)

    # init sequence mirrored from _InitDisplay() in GxEPD2_154_D67.cpp
    def init(self):
        if self._inited:
            return
        self.reset()
        time.sleep_ms(10)
        # soft reset
        self.send_command(0x12)
        time.sleep_ms(10)
        # Driver output control
        self.send_command(0x01)
        self.send_data(0xC7)
        self.send_data(0x00)
        self.send_data(0x00)
        # Border Waveform
        self.send_command(0x3C)
        self.send_data(0x05)
        # Read built-in temp sensor
        self.send_command(0x18)
        self.send_data(0x80)
        # set full ram area
        self.set_partial_ram_area(0, 0, WIDTH, HEIGHT)
        self._inited = True

    # power on sequence (PowerOn in Arduino code)
    def power_on(self):
        # _writeCommand(0x22); _writeData(0xe0); _writeCommand(0x20); _waitWhileBusy
        self.send_command(0x22)
        self.send_data(0xE0)
        self.send_command(0x20)
        self.wait_while_busy(5000)

    def power_off(self):
        # _writeCommand(0x22); _writeData(0x83); _writeCommand(0x20); _waitWhileBusy
        self.send_command(0x22)
        self.send_data(0x83)
        self.send_command(0x20)
        self.wait_while_busy(2000)

    # write whole buffer (current) to RAM at specified rectangle and optionally do not refresh
    def write_image(self, x, y, w, h, buf):
        if not self._inited:
            self.init()
        # initial write handling in Arduino ensures previous/full buffers are managed; here we just write current
        self.set_partial_ram_area(x, y, w, h)
        self.send_command(0x24)  # write RAM (current)
        # stream data
        self._start_data()
        # buf should be bytes/bytearray length w*h/8
        self.spi.write(buf)
        self._end_data()

    # helper similar to Arduino's _Update_Full/_Update_Part
    def update_full(self):
        # full update: 0x22 0xF7, 0x20 then wait
        self.send_command(0x22)
        self.send_data(0xF7)
        self.send_command(0x20)
        # full refresh time in Arduino was relatively long; wait until busy releases
        self.wait_while_busy(20000)
        # after full update the Arduino sets power_is_on false; we keep state open for simplicity

    def update_partial(self):
        # partial update: 0x22 0xFC, 0x20 then wait
        self.send_command(0x22)
        self.send_data(0xFC)
        self.send_command(0x20)
        self.wait_while_busy(5000)

    def sleep(self):
        # deep sleep: 0x10 0x01 per Arduino
        self.power_off()
        self.send_command(0x10)
        self.send_data(0x01)


# Simple test function to draw a checker pattern (for quick visible test)
def example_test():
    epd = EPD(spi_id=1, sck=14, mosi=13, cs=15, dc=27, rst=26, busy=25)
    epd.init()
    epd.power_on()
    # white buffer
    buf = bytearray([0xFF] * (WIDTH * HEIGHT // 8))
    # paint a small black rectangle in top-left 40x40 (black = 0x00)
    for row in range(40):
        for col_byte in range(40//8):
            idx = (row * (WIDTH//8)) + col_byte
            buf[idx] = 0x00
    epd.write_image(0, 0, WIDTH, HEIGHT, buf)
    epd.update_full()
    epd.sleep()


if __name__ == '__main__':
    example_test()

I then extended the python code to test images. Again, I used ChatGPT to give me a quick tool that converts png files (I downloaded a QR code that I wanted to use) into a .bin file whose contents I can send directly to the Waveshare screen.

Prompt If I want to get a bytearray in python of an image, what’s the best way for me to store that image so it’s easiest to read it as a bytearray? Can you give me the code to store a png as .bin?
Python
# png2bin.py
# pip install pillow
from PIL import Image
import argparse, sys

def pack_1bpp(im, invert=False, msb_first=True):
    im = im.convert("1")  # 1-bit BW
    w, h = im.size
    px = im.getdata()
    # Pillow "1" gives 0 or 255. Map to 0/1 bits.
    bits = [(0 if p else 1) if invert else (1 if p else 0) for p in px]
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i+8]
        if len(chunk) < 8:
            chunk += [0]*(8 - len(chunk))
        b = 0
        if msb_first:
            for bit in chunk: b = (b << 1) | bit
        else:
            for k, bit in enumerate(chunk): b |= (bit & 1) << k
        out.append(b)
    return out, w, h

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("input_png")
    ap.add_argument("output_bin")
    ap.add_argument("--size", type=str, help="WxH (e.g. 200x200)")
    ap.add_argument("--invert", action="store_true", help="flip black/white bits")
    ap.add_argument("--lsbfirst", action="store_true", help="pack LSB-first in each byte")
    args = ap.parse_args()

    im = Image.open(args.input_png)
    if args.size:
        w, h = map(int, args.size.lower().split("x"))
        im = im.resize((w, h))
    buf, w, h = pack_1bpp(im, invert=args.invert, msb_first=not args.lsbfirst)

    # sanity: bytes should be w*h/8
    expected = (w*h + 7)//8
    if len(buf) != expected:
        print(f"Warning: got {len(buf)} bytes, expected {expected}", file=sys.stderr)

    with open(args.output_bin, "wb") as f:
        f.write(buf)

    print(f"Wrote {args.output_bin} ({len(buf)} bytes) for {w}x{h}")

if __name__ == "__main__":
    main()
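
To tie the two together, here is roughly how the .bin ends up on the screen, reusing the driver from above (the module name comes from its docstring; whether you need --invert depends on your source image):

# On the board: load the converted QR code and push it to the display.
# Converted beforehand with: python png2bin.py qr.png qr.bin --size 200x200
from micropython_gdeh0154d67 import EPD

epd = EPD(spi_id=1, sck=14, mosi=13, cs=15, dc=27, rst=26, busy=25)
epd.init()
epd.power_on()
with open('qr.bin', 'rb') as f:
    buf = f.read()  # 200*200//8 = 5000 bytes, one bit per pixel
epd.write_image(0, 0, 200, 200, buf)
epd.update_full()
epd.sleep()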

In one of the next weeks, we’ll learn how to use a Logic Analyzer. I will then use it to look at the communication between microcontroller and screen to verify a) that I understand what’s going on and b) that the communication is equivalent to the established C++ library. Once I’ve gone through those steps, I’m planning on open sourcing the above driver code on GitHub so others can easily drive their V2 screens, as well.

Bonus: Notes on using micropython on ESP32 dev boards (on Windows, via WSL)

In a previous life I had already played with ESP32 boards attached to my Windows machine. At the time it took me a while to figure out how to make the boards show up as resources for mpremote in WSL. My notes from when I first figured out how to connect the board came in handy this week and I’m reproducing them here in case someone else runs into similar issues. I used the ESP32 dev board to connect to the Waveshare screen in my experiments.

Windows

In PowerShell, make sure the board is connected and shows up. Note its bus id for the next command.

usbipd list

Then, make sure that WSL has access to this board (as identified through its bus id). If the board has never been shared with WSL before, you may need to run usbipd bind --busid 2-1 first (as shown earlier in this post).

usbipd attach --wsl --busid 2-1

WSL

Make sure that the board shows up now.

ls /dev/ttyUSB*

Load the CP210x USB serial driver. Without it, you will probably not be able to access the board.

sudo modprobe cp210x

Confirm that the driver is loaded.

lsmod | grep cp210x

Temporarily fix permissions so non-root users can access the serial port.

sudo chmod 666 /dev/ttyUSB0
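
If you don’t want to repeat this after every reconnect, a more permanent option is to add your user to the group that owns the device (usually dialout) and then log out and back in:

sudo usermod -aG dialout $USER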

Connect the board.

mpremote connect /dev/ttyUSB0

You can also use

mpremote connect auto mount . 

to establish the connection and automatically make the contents of the current directory available to the interpreter running on the board. I use this a lot for testing: I mount the directory and then just import whatever python test file I want to play with.

Note on Web Development

I’m documenting the following in case anyone else uses Hugo and needs to host custom HTML/JS for their assignments.

I produced some custom HTML with more or less complicated JavaScript this week (the wall with the light bulbs). Piping that through Hugo in a way that works when hosted through Neil’s setup on the CBA server is a little tricky. Generally, the trick to making static files work with his setup (which hosts student pages at a fairly nested path on the official CBA website) is to use Hugo shortcodes (so that you can run Hugo functions like absURL and rewrite file paths) and to then use those shortcodes in .md files (so that the resulting HTML is actually built and included in /public/).

For this week, I created a shortcode in a file called messagewall.html that contains the custom HTML and JS I developed. Putting it in the shortcode allows me to create a variable for the background image at the top of the file:

  {{ $img := "week3/board_background_clean.jpg" | absURL }}

and to then include it in the html code like so:

  <img id="wall" src="{{ $img }}" alt="Alphabet wall">

All that’s left to do now is to create a .md file that includes nothing but a reference to this shortcode:

---
title: "Message Wall"
---

{{< messagewall >}}

The final build will then include an HTML page with an absolute file path to the image.

#electronics #soldering #embedded #programming