How to Make Something That Makes (almost) Anything

Spring 2021
Erik Strand

A little more about me. This website is made with Jekyll. Source lives here.

Summer Plans

Due to research and funding requirements this past semester, I haven't made as much progress as intended on my projects for this class, so I will continue to pursue them over the summer. My primary goal is to develop and compare two similar physical layers for machine networks: one based on Manchester encoding implemented on FPGAs, and one based on commercially available Ethernet PHYs.
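To make the first option concrete, here is a minimal sketch of the encoding itself in C (the real implementation lives in HDL on the FPGA, so this is illustrative only). It uses the IEEE 802.3 convention, in which a 0 is sent as a high-to-low transition and a 1 as a low-to-high transition; the guaranteed mid-bit transition is what lets a receiver recover the clock from the data stream.

```c
#include <stdint.h>

// Manchester-encode one byte, MSB first. Each data bit becomes two
// line symbols: under the IEEE 802.3 convention, a 0 is sent as
// high-then-low (10) and a 1 as low-then-high (01), so every bit
// cell contains a mid-bit transition for clock recovery.
// Returns 16 line symbols, MSB transmitted first.
uint16_t manchester_encode(uint8_t byte) {
    uint16_t out = 0;
    for (int i = 7; i >= 0; --i) {
        out <<= 2;
        out |= ((byte >> i) & 1) ? 0x1 : 0x2; // 1 -> 01, 0 -> 10
    }
    return out;
}
```

As a sanity check, 0xFF encodes to 0x5555 (all 01 symbols) and 0x00 to 0xAAAA (all 10 symbols); either way the line toggles in the middle of every bit cell.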

The first option is conceptually simpler and retains more control over all levels of the networking stack. The second involves heavyweight networking standards that were developed with a far wider range of use cases in mind than machine networks; but because Ethernet is so widely adopted, the commercially available hardware is cheap, plentiful, and powerful. I aim to compare the data transfer latencies and throughputs of these systems with each other, and with the simpler physical/data link layers that CBA commonly uses for machine networks at this time (e.g. the UART peripheral on an MCU with an RS-485 transceiver).
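For a rough sense of the baseline: a UART running at 1 Mbaud with standard 8N1 framing spends 10 bit times per byte (one start bit, 8 data bits, one stop bit), so its effective payload throughput tops out around 800 kbit/s even before any protocol overhead. (These numbers are illustrative; actual baud rates on CBA machine networks vary.)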

Both systems should enable much faster data transfers. This would allow higher bandwidth signals, like real-time video, to be shared over the same network as, say, temperature sensor data without saturating the network. That could have implications for the capabilities of Machines that Make, which have generally distributed more intelligence throughout the machine network. It could also expand the capabilities of systems like Urumbu that take the opposite approach, in which the actuators have no intelligence and simply process a (necessarily somewhat higher bandwidth) stream of instructions.

Initial work on this project is documented in the PHY for Modular Machines repo, which I will continue to update as I make progress.

End of Summer Update

Over the summer I continued my exploration of networks and FPGAs. I developed a data link layer based on a subset of the DICE token passing model, and implemented it for both MCUs and FPGAs. (The MCU implementation is adapted from existing DICE code, so it doesn't represent new work; the FPGA implementation is new.) I also tested the performance of both implementations.
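As a rough illustration of the token passing idea (not the actual DICE protocol, whose details differ), the core discipline is that a node transmits only while it holds the token, then passes the token along so access to the shared link is serialized without a central arbiter. A minimal sketch in C, with hypothetical helper names standing in for the real link layer:

```c
#include <stdbool.h>

// Hypothetical helpers standing in for the real link layer.
bool rx_token_available(void); // has the token arrived at this node?
bool tx_queue_empty(void);     // anything waiting to send?
void send_next_packet(void);   // transmit one queued packet
void forward_token(void);      // pass the token to the next node

// Called from the node's main loop. A node may transmit only while
// it holds the token; sending at most one packet per token hold
// bounds the latency seen by the other nodes on the link.
void node_poll(void) {
    if (!rx_token_available()) {
        return; // not our turn; keep listening
    }
    if (!tx_queue_empty()) {
        send_next_packet();
    }
    forward_token(); // release the medium to the next node
}
```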

On top of this data link layer, I also designed a networking layer for line and loop topologies. I have a basic MCU implementation of this network that I used for initial performance tests, and am working on a hybrid FPGA/MCU implementation that should be much faster. It will rely on FPGAs as the network “backbone”, which will allow a lot of work to be offloaded from the MCUs, and much faster base bitrates to boot. In particular, the FPGAs will be able to inspect packet headers and start forwarding relevant packets before they have been completely received. This allows an EtherCAT style of fast data “trains” around a loop, where each node only reads or modifies the data addressed to it (see the sketch below).

This FPGA/MCU hybrid design requires new custom circuitry. Jake, Zach, and I collaboratively specced out a dev board that would be of interest for Jake's machines, DICE, and my new networking infrastructure. Jake designed the board and sent it to a fab house; we just need to assemble it, and then I can begin tests. Ultimately I aim to retrofit a Clank or another of Jake's machines with an FPGA based network and compare network throughput and jitter.
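To illustrate the cut-through forwarding mentioned above, here is a minimal per-node sketch in C. The helper names and the two-byte header format are hypothetical, and the real implementation is FPGA logic that also handles framing, clocking, and error checking omitted here. The point is that a node buffers only the header before deciding whether to start retransmitting, so per-hop latency is a few byte times rather than a full packet time:

```c
#include <stdint.h>

#define HEADER_BYTES 2 // hypothetical header: [0] destination, [1] payload length

extern uint8_t my_address;

// Hypothetical helpers standing in for the real link hardware.
uint8_t receive_byte(void);               // blocking read from the upstream link
void transmit_byte(uint8_t b);            // write to the downstream link
void handle_local(const uint8_t *header); // consume a packet addressed to us

void forward_packet(void) {
    uint8_t header[HEADER_BYTES];
    for (int i = 0; i < HEADER_BYTES; ++i) {
        header[i] = receive_byte();
    }
    if (header[0] == my_address) {
        handle_local(header); // read the rest of the packet locally
        return;
    }
    // Not addressed to us: start forwarding immediately, while the
    // tail of the packet is still arriving upstream.
    for (int i = 0; i < HEADER_BYTES; ++i) {
        transmit_byte(header[i]);
    }
    uint8_t remaining = header[1]; // hypothetical payload length field
    while (remaining--) {
        transmit_byte(receive_byte());
    }
}
```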

I did not end up exploring the use of Ethernet hardware. I think that avenue would have been less educational for me, since it would have been more an exercise in system integration than in understanding and designing networking layers from scratch. I admit I was also reluctant to return to the drawing board after realizing that all the Ethernet PHYs I had specced out during the semester were no longer available.

Reviews

Components

Systems

Weekly Updates