Week 12: machine building week

Overview

In progress

My individual role was the z-axis mechanism and the initial site setup on the software side.

Writeup below by Ozgun:

The mayhem week. All of us in the CBA section had to build something together. There were some interesting ideas on the table:

We all voted, and the sand garden won. After some brainstorming, we divided ourselves into the following teams.

I volunteered for the software team. Our goal was to let the user draw on a canvas in the browser, convert the drawing to gcode, and send the gcode to the XY plotter.

Node.js and Fastify

There was less than a week for the whole project, so we needed to move as fast as possible. Luckily Node.js, a runtime for running JavaScript outside the browser, and Fastify, a web framework for serving it over HTTP, came to the rescue.
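A minimal server along these lines looked roughly like the sketch below (the file layout, static plugin, and port are assumptions for illustration, not the exact project setup):

```js
// Minimal Fastify sketch: serve the drawing page from a ./public folder.
const path = require('path');
const fastify = require('fastify')({ logger: true });

// Serve index.html, the canvas JS, etc. as static files
fastify.register(require('@fastify/static'), {
  root: path.join(__dirname, 'public'),
});

fastify.listen({ port: 3000 }, (err) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});
```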

The first version (plain HTML, sans CSS) understandably did not look good, but it was functional.

Gcode generation

This was the part I was most involved in. There was a JavaScript package, img2gcode, for converting an image to gcode. It took a PNG image, so I converted the canvas to a PNG, ran the conversion, and used this viewer to look at the result. The gcode was complicated, and it was important to set the canvas size and the other parameters appropriately.
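Getting the PNG out of the canvas is standard browser API; the img2gcode call is only stood in for by a hypothetical wrapper here, since its exact interface isn't reproduced in this writeup:

```js
// Export the user's drawing as a PNG data URL (standard canvas API).
const canvas = document.getElementById('drawing-canvas'); // assumed element id
const pngDataUrl = canvas.toDataURL('image/png');

// convertPngToGcode is a hypothetical wrapper around the img2gcode package,
// not its real API.
convertPngToGcode(pngDataUrl).then((gcode) => {
  console.log(gcode);
});
```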

It was still not good. Additionally, we quickly realized that even if the image contained a single-line drawing (which was necessary for sand garden purposes), the generated gcode was not necessarily continuous.

Therefore, we had to think of something different…

Then we realized that gcode is just a plain-text format that literally lists the points one by one. We could generate them ourselves while the user draws on the canvas, using event listeners.
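Roughly, the idea is to emit one gcode move per sampled pointer position, so the path is continuous by construction. A minimal sketch (the element id, feed rate, and units are assumptions, not the project's actual values):

```js
// Build gcode directly from canvas pointer events.
const canvas = document.getElementById('drawing-canvas');
const gcodeLines = ['G90', 'G21']; // absolute positioning, millimetres
let drawing = false;

canvas.addEventListener('pointerdown', (e) => {
  drawing = true;
  // Rapid move to the start of the stroke
  gcodeLines.push(`G0 X${e.offsetX} Y${e.offsetY}`);
});

canvas.addEventListener('pointermove', (e) => {
  if (!drawing) return;
  // One linear move per sampled pointer position keeps the path continuous
  gcodeLines.push(`G1 X${e.offsetX} Y${e.offsetY} F1000`);
});

canvas.addEventListener('pointerup', () => {
  drawing = false;
});
```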

AI generation

We pushed it further by letting the user enter a prompt, generating an AI drawing made of a single line, and then converting that to gcode.

To generate our own gcode from that image, we did some image processing.

Here is an example generated for a ‘Corgi’:

Image processing

  1. Threshold the image
  2. Get the contours of the image
  3. Get the contour with the largest perimeter
  4. Use the points of that contour to generate the gcode for it (a rough sketch of the whole pipeline follows below)
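A sketch of those four steps, assuming opencv.js is loaded as the `cv` global and the AI-generated image is on a canvas (the threshold value, contour flags, and feed rate are illustrative, not the exact values we used):

```js
// Contour-to-gcode sketch using opencv.js; `imageCanvas` holds the drawing.
function imageToGcode(imageCanvas) {
  const src = cv.imread(imageCanvas);           // RGBA Mat from the canvas
  const gray = new cv.Mat();
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);

  // 1. Threshold the image (dark strokes become white foreground)
  const bin = new cv.Mat();
  cv.threshold(gray, bin, 128, 255, cv.THRESH_BINARY_INV);

  // 2. Get the contours of the image
  const contours = new cv.MatVector();
  const hierarchy = new cv.Mat();
  cv.findContours(bin, contours, hierarchy, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE);

  // 3. Keep the contour with the largest perimeter
  let best = null;
  let bestLen = 0;
  for (let i = 0; i < contours.size(); i++) {
    const c = contours.get(i);
    const len = cv.arcLength(c, false);
    if (len > bestLen) { bestLen = len; best = c; }
  }

  // 4. Walk that contour's points and emit one G1 move per point
  const lines = ['G90', 'G21'];
  if (best) {
    const pts = best.data32S;                   // flat [x0, y0, x1, y1, ...]
    for (let i = 0; i < pts.length; i += 2) {
      lines.push(`G1 X${pts[i]} Y${pts[i + 1]} F1000`);
    }
  }

  src.delete(); gray.delete(); bin.delete(); contours.delete(); hierarchy.delete();
  return lines.join('\n');
}
```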

This is how the gcode looked after the image processing.

Final project results

The whole thing came together quite well, with a lot of effort from everyone on the team. We showed it in class, and while the demo did not go as smoothly as we hoped, the machine itself looked great.