I’ve been having fun printing with the various colors of ABS that MakerBot offers, but have always been somewhat envious of folks who have been printing successfully with PLA. I bought a 5lb roll of the original 4032D that MakerBot sold, but ended up putting it on the shelf after reports from other operators that it was destroying their MK4 extruders.
Getting the new PLA printing was surprisingly easy, given the challenges involved: I was using a new extruder (which needed temperature, PID, and flow rate calibration), and PLA is the first non-ABS plastic I’ve printed, so it has different optimal printing temperatures and other settings to discover.
I haven’t carefully calibrated the thermistor on my MK5, and I wasn’t sure of the right temperature to extrude PLA, so I started by setting the temperature to 180ºC and attempting to push some filament through by hand. I raised the temperature slowly until it became easy to push through by hand, around 195ºC. I had not yet locked down my PID settings, so I was getting some wild temperature swings. To be safe, I set the temp to 200ºC and started printing my favorite bottle opener from Thingiverse.
It turns out the flow rate for the MK5 is a lot higher than it was for the MK4. After putting down the raft, I was having trouble with the filament stripping inside the MK5 due to backpressure. Still, by paying attention to the print and tightening the thumbwheel whenever the filament slipped, I was able to get a completed bottle opener.
It was then that I noticed two things:
The top two layers of the object sagged deep into the honeycomb fill layers below, giving a terrible finish on top.
PLA has no give, so there was no way that a penny would fit into the slot. I have some nice bruises from trying to make it fit.
To fix the stripping and sagging problems, I figured that I should increase my Feedrate – the speed at which the platform moves to catch the extruded plastic. I figured that a too-low Feedrate would cause some back pressure when printing the raft (leading to stripping), and would contribute to sagging overhangs. I also guessed that the sagging is due in part to the high thermal mass of liquid PLA allowing it to sag before it cooled, so a lower extrusion temperature would let it solidify sooner, leading to less sagging. I still use skeinforge-0006, so these settings are in “raft.csv” (various temperatures) and “speed.csv” (Feedrate), respectively.
So, some calibration prints:
Starting with my first successful print in the upper-left corner, at a temperature of 200ºC and a feedrate of 26.5mm/s (which was working for my MK4), I slowly lowered the temperature and increased the feedrate. At 180ºC I had a failed print due to the PLA freezing up, so I am going to stick with 185ºC going forward. Increasing the feedrate by 25% immediately solved my filament stripping problem, but still left a pretty nasty top layer. Increasing beyond that smoothed out the top pretty well, and left clean enough slots that I could actually insert some coins, albeit dimes rather than pennies.
I may try increasing my feedrate further, but I found an odd result when going from 36.4375mm/s (slower, should have thicker walls) to 38.26mm/s (faster, should have thinner walls). Namely, they both seem like very solid objects, but the dime slid nicely into the slower-printed version using the edge of a desk, while I had to take a hammer to the faster-printed version, and actually ended up bending the dime rather than driving it into the plastic (PLA is tough stuff!). I would have expected the opposite.
Anyway, I hope these results are useful for some folks. I hope to improve my calibration a bit more, and to try out the MK5 with my old roll of PLA 4032D.
Actually, all sorts of new and exciting nonsense has happened to MakerBot #131.
I was excited to get my MK5 Plastruder kit and join all of the cool people who have left the pinch-wheel and nichrome behind. Unfortunately, I ran into some problems early on, and after buying some cool thermocouple parts to try calibrating everything, finally determined that I had a bad thermistor.
It’s been a while, but I’ve finally gotten around to creating a WordPress plugin that I’m calling Thingiverse Embed.
The plugin has two features. First, you can embed a little “wallet-sized” view of a Thing into an individual blog post or page, with the Thing’s title, creator, image, description, and links back to Thingiverse with a simple shortcode:
The plugin also includes a Thingiverse Stream widget, for embedding streams like “Things I’ve Made” as a simple sidebar widget. It just needs to be configured with the title, the type of stream you want to use, the Thingiverse username (for certain streams), and the maximum number of Things to display.
It’s been a fun weekend developing this plugin, as it’s my first WordPress plugin, and the first “serious” PHP I’ve written. Of course, it is filled with nasty HTML parsing and XPath tricks, and could use lots of cleanup, so please give me feedback if you use it!
[ReplicatorG] is the software that will drive your CupCake CNC, RepRap machine, or generic CNC machine. You can give it a GCode or STL file to process, and it takes it from there. It’s cross-platform, easily installed, and based on the familiar Arduino / Processing environments.
For my purposes, ReplicatorG provides two things. First, RepG is a user interface for controlling the MakerBot hardware:
Second, RepG reads G-code files describing how to build an object, and transmits them to the MakerBot over USB:
Of course, ReplicatorG is open source, and the code is available on GitHub! So, it was simple to clone their repository and start hacking on it myself.
Camera Control via ReplicatorG
While it was relatively simple to update the extruder controller firmware to make it camera-aware, ReplicatorG is a bit more complicated. My first goal was to hack a new “Camera” checkbox into the control panel. Whenever the box was checked, the camera would take pictures. Whenever the box was unchecked, the camera would be idle.
You can find the code required for these changes in this commit on GitHub, but I will try to briefly break them down here:
Define a new machine. In the machines.xml.dist file, I defined an experimental MakerBot configuration named “EXPERIMENTAL – Cupcake CNC w/HBP and remote camera”. It is essentially a copy of the typical MakerBot configuration with a heated build platform, but in the <tool> definition, I also added a camera="true" attribute.
Update the tool model. In ToolModel.java, I added code to represent whether the tool has an attached camera, whether the camera is activated, and how to parse the camera attribute out of machines.xml.
Update the machine driver model. In Driver.java, DriverBaseImplementation.java, and Sanguino3GDriver.java, I added the definitions and implementations to triggerCamera() and stopTriggeringCamera(). This is the code that actually sends the TOGGLE_CAMERA serial command to the extruder controller, which I also defined in ToolCommandCode.java.
Update the control panel interface. In ExtruderPanel.java, I added the code to draw a new label and checkbox named “Camera”, if the machine is configured for a camera, and to respond to check/uncheck events by calling triggerCamera() or stopTriggeringCamera().
Compiling and Running the new ReplicatorG
Compiling ReplicatorG is pretty simple, so long as you have a reasonable JDK environment and have Ant on your path. There are basically two steps:
Copy machines.xml.dist to machines.xml.
Run the proper dist-linux.sh, dist-mac.sh, or dist-windows.sh.
ReplicatorG will be compiled and packaged up into the dist/ directory in two forms: an installable package for the chosen platform, and an unpacked version that you can run directly.
Opening up my modified version of ReplicatorG, I selected the “EXPERIMENTAL – Cupcake CNC w/HBP and remote camera” profile from the Machine -> Driver menu, opened up the control panel, and was happy to see this:
After hooking up my camera to the extruder controller’s D9 port, and starting the Remote Button script on the camera, I was able to take pictures by quickly toggling the camera checkbox on and off. I could also leave the checkbox activated to make the camera take pictures continuously.
Automatic Triggering with G-codes
Being able to trigger the camera by hand is all well and good, but my goal was to take pictures automatically at the end of every layer. To do this, I needed to be able to embed camera trigger commands in the G-code for building each individual object.
I settled on M150 to trigger the camera and M151 to stop triggering it. I may have to change these in the future, as the main ReplicatorG development team has claimed G- and M-codes for other features, but these work for now.
Modifying ReplicatorG to accept these M-codes (GitHub commit here) was straightforward: update GCodeParser.java to recognize the codes, and call the appropriate triggerCamera() and stopTriggeringCamera() methods.
I could now construct a G-code file which, when “built” in ReplicatorG, would take a picture on demand:
M150 (trigger the camera)
G4 P700 (wait 0.7 seconds for the camera to activate)
M151 (stop triggering)
G4 P1300 (wait 1.3 seconds for the camera to finish)
Finally, it was time to edit up the G-code for the models I want to photograph.
Typically, G-code is generated by taking a 3D object in STL format and running it through the Skeinforge tool. Skeinforge is a set of Python scripts, which means it is not too difficult to insert your own code.
For now, however, I decided to make a simple hack using a Perl script I wrote called add_camera_events.pl. It works by looking for (</layer>) comments, which signal the end of a layer of printing, and inserting the four camera-trigger lines above after each one.
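The script itself is Perl, but the transformation is simple enough to sketch in Python (the names here are illustrative, not the actual script):

```python
# Sketch of the G-code post-processing step: after every (</layer>)
# comment, insert the camera trigger sequence from above. The real
# script (add_camera_events.pl) is Perl; this is just the same idea.

CAMERA_SEQUENCE = [
    "M150 (trigger the camera)",
    "G4 P700 (wait 0.7 seconds for the camera to activate)",
    "M151 (stop triggering)",
    "G4 P1300 (wait 1.3 seconds for the camera to finish)",
]

def add_camera_events(gcode_lines):
    """Return a new list of G-code lines with the camera sequence
    inserted after each end-of-layer comment."""
    out = []
    for line in gcode_lines:
        out.append(line)
        if "(</layer>)" in line:
            out.extend(CAMERA_SEQUENCE)
    return out
```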
And with that, the computer aspect of this system was finally done!
Phew! So far I’ve hacked a camera, wired it to the MakerBot, updated the MakerBot firmware to trigger it, updated ReplicatorG to trigger it, and written a script to update G-code files with camera triggers at the end of each layer.
So… does it work? You bet! Stay tuned for more examples and a breakdown video of this whole project in the final post in this series!
While I know I should be finishing my MakerBot time-lapse camera series, I took some time for another project to play with some Processing. The above image was rendered in Processing, in real time, in just a couple of minutes!
Basically, I wanted to take a simple shape, defined by an SVG path, and fill it with images of 3D objects loaded from STL files. Specifically, many wonderful MakerBot-printable objects from Thingiverse!
After some Googling around, I found out that this problem is basically a space-filling problem, similar to an excellent Processing sketch named Scattered Letters by Algirdas Rascius, but with a twist.
The basic algorithm is:
Load an SVG and render it to an off-screen buffer
Set curr_size, the size at which STLs should be rendered, to max_size
Choose a random STL model, give it a random orientation, and render it at the current size to an off-screen buffer
Try several times to place this model by giving it a random x,y position and checking it for a good fit:
Each non-background pixel of the model’s off-screen image should fit within the non-background pixels of the SVG’s off-screen image.
Each non-background pixel of the model’s off-screen image should NOT overlap with any non-background pixel of the main display.
If a fitting position is found, render the model to the display.
Otherwise, shrink curr_size by a step and choose a new model.
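The fit test at the heart of the steps above can be sketched in Python with boolean pixel masks (the actual sketch checks pixels of Processing off-screen buffers; this is just the logic, with illustrative names):

```python
# Sketch of the placement test: a model mask fits at (x, y) if all of its
# filled pixels land inside the SVG shape and none overlap pixels already
# drawn on the canvas. Masks are simple 2D lists of booleans here.
import random

def fits(model, target, canvas, x, y):
    for j, row in enumerate(model):
        for i, filled in enumerate(row):
            if not filled:
                continue
            if not target[y + j][x + i]:   # outside the SVG shape
                return False
            if canvas[y + j][x + i]:       # overlaps a placed model
                return False
    return True

def try_place(model, target, canvas, attempts=100):
    """Try random positions; stamp the model onto the canvas on success."""
    h, w = len(model), len(model[0])
    H, W = len(target), len(target[0])
    for _ in range(attempts):
        x = random.randrange(W - w + 1)
        y = random.randrange(H - h + 1)
        if fits(model, target, canvas, x, y):
            for j, row in enumerate(model):
                for i, filled in enumerate(row):
                    if filled:
                        canvas[y + j][x + i] = True
            return True
    return False
```

When try_place fails after all its attempts, the outer loop shrinks curr_size and picks a new model, exactly as in the list above.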
A note on STL files: unlekkerLib only loads STL files in the binary format. It chokes dramatically on ASCII STL files, such as those exported from OpenSCAD. I was able to use Zaggo’s excellent Pleasant3D to load ASCII STLs and re-save them, which converts them to binary STLs. As a bonus, Pleasant3D also allows you to orient objects in a way that will make them look most interesting when they are rendered down to 2D in the final image.
An example M.svg, as well as several objects from Thingiverse are included with the code to get started. To use your own SVGs, I have had good luck using Inkscape to draw or import shapes, and save them as the native “Inkscape SVG” or “Plain SVG” formats. Some files might require hand-tweaking; for example, if the width and height header values are something like "100%" instead of a pixel value.
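For that particular tweak, a little script can do the patching for you. This is a minimal sketch, and it assumes the SVG carries a viewBox of the form "min-x min-y width height" to borrow pixel dimensions from:

```python
# Minimal sketch: replace percentage width/height in an SVG header with
# pixel values taken from the viewBox. Assumes a viewBox is present.
import xml.etree.ElementTree as ET

def fix_svg_dimensions(svg_text):
    root = ET.fromstring(svg_text)
    width = root.get("width", "")
    height = root.get("height", "")
    if width.endswith("%") or height.endswith("%"):
        # viewBox is "min-x min-y width height" in user units
        _, _, w, h = root.get("viewBox").split()
        root.set("width", w)
        root.set("height", h)
    return ET.tostring(root, encoding="unicode")
```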
There is also some simple configuration in the sketch to allow the export of PDF files. This is nice because the resulting PDF has full vector data, making it easily rescaled to any size you wish. Unfortunately, the current PDF renderer for Processing renders each triangle of each STL model as a separate path, generating very complicated vector output, which tends to bring Inkscape to its knees. I have had some luck with importing those files, rastering them out to PNG at a high resolution (e.g. 600 dpi), and using Inkscape’s “Trace Bitmap” functionality to re-vectorize them, though this requires some cleanup by hand.
Anyway, this has been a fun little diversion for me for the last couple of days. I hope that you folks find it useful! Post your awesome pictures in the comments, here!
The MakerBot electronics ecosystem comprises three parts: your computer, the MakerBot’s motherboard, and the extruder controller board. Your computer talks to the motherboard via a USB<->TTL interface (such as this FTDI cable from SparkFun). In turn, the motherboard communicates with the extruder using another serial protocol, RS-485, over an ethernet cable. Finally, the extruder triggers the camera via the custom cable I made in the previous post.
To send a message to the extruder – in this case, to activate or deactivate the camera – we must create a packet for the motherboard. The HOST_CMD_TOOL_QUERY code allows us to send the motherboard a packet which it will then pass along to the extruder controller.
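As I understand the Gen3 serial protocol, each packet is a start byte (0xD5), a payload length byte, the payload itself, and a CRC over the payload (the Maxim/Dallas iButton CRC-8). Here’s a rough Python sketch of wrapping a tool query; treat the exact constants (including HOST_CMD_TOOL_QUERY = 10, from my reading of the firmware sources) as assumptions rather than gospel:

```python
# Sketch of Gen3 packet framing, based on my reading of the firmware.
# Framing: 0xD5, payload length, payload bytes, iButton CRC-8 of payload.

def crc_ibutton(data):
    """Maxim/Dallas iButton CRC-8 over a byte sequence."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if crc & 1 else crc >> 1
    return crc

def wrap_packet(payload):
    """Frame a payload for the motherboard."""
    return bytes([0xD5, len(payload)]) + bytes(payload) + bytes([crc_ibutton(payload)])

def tool_query(tool_index, tool_command, args=b""):
    """Ask the motherboard to forward a command to a tool (the extruder).
    HOST_CMD_TOOL_QUERY = 10 is an assumption from the firmware sources."""
    HOST_CMD_TOOL_QUERY = 10
    payload = bytes([HOST_CMD_TOOL_QUERY, tool_index, tool_command]) + bytes(args)
    return wrap_packet(payload)
```

The camera toggle would then be a tool_query carrying whatever command index TOGGLE_CAMERA was given in ToolCommandCode.java.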
That’s great, because it means the motherboard part of this software hack is done!
In fact, we’ve already hacked the camera, as well, so we’re halfway there!
Hacking a camera into the extruder controller
Since the motherboard already does everything we need (passes along packets from the computer to the extruder controller), we only need to update the ArduinoSlaveExtruder code.
If you followed the 4 links above, you’ll notice that they go to my own G3Firmware GitHub repository. You can download it yourself to play along by cloning the repository and checking out the ECv2.3rc0-camera branch.
To build the firmware and upload it to the extruder controller, we need some common development tools (make, in this case), and the Arduino development environment. With those things installed, we can compile everything by setting the ARDUINO_HOME environment variable to the path to our Arduino install’s java directory (e.g. on OS X this would be /Applications/Arduino.app/Contents/Resources/Java/), and simply run make.
Once the firmware has been compiled, we can upload it to the extruder controller by using the USB<->TTL cable that usually connects the motherboard to our computer. Plug the cable into the extruder controller, and run the make upload command. You’ll need to make sure that ARDUINO_HOME is set, and you will probably need to alter the Makefile to specify the correct serial port, and maybe to update the call to avrdude to include the path to the Arduino avrdude config file. You can see an example of that in this commit.
Once the firmware is uploaded to the extruder controller, the MakerBot is all set to take pictures!
… Of course, we still have no way to tell the MakerBot to take a picture, so stay tuned for that information in the next update!
Like many MakerBot owners, I feel compelled to help spread desktop 3D printing throughout the world. So, for the past several months, MakerBot #131 has been hard at work printing parts in 3D to make another 3D printer!
The Mendel is the second (and current) design for the RepRap project, whose goal is to create rapid-prototyping machines that can replicate themselves. As an Open Source Hardware project, everything about the Mendel’s design is available online via Subversion, from the mechanical parts to the electronics schematics, to the source code for the device and its host machine. Additionally, there is a fantastic community of very smart people who are constantly improving the design, trying new things, and helping others get their RepRaps working!
While the Mendel requires various hardware bits such as motors, electronics, nuts and bolts, etc., its structure is about 51% 3D-printed parts. This works out to about 98 individual pieces that need to be printed, and represents a huge number of printing hours.
To get started, I used a .zip file full of the 3D STL files for these parts that someone very nicely prepared and uploaded to the MakerBot Operators group. These files were from the 1.0 release of Mendel, so some of them ended up being out of date, and a few had issues that made them unprintable. Thankfully, another kind MakerBot operator uploaded a fully prepared set to Thingiverse, so I could go there for a replacement whenever I found a part that wouldn’t print.
Here, column N47 contains the “completeness” of the Mendel as a value from 0.0 to 1.0: the number of hours printed so far divided by the expected total. This data could be used on an HTML page with an “update_mendel_progress” function by loading it with a script tag.
At any rate, after a lot of tweaking, many hair-raising moments, a required upgrade with the MakerBot Heated Build Platform v2.0, and hours and hours of printing, the parts were finally complete! I gave them to Matt Mets, a member of HackPittsburgh, and you can see the photos he took of the parts, above!
Despite some of the parts belonging to a slightly out-of-date design, Matt has been making progress on getting everything together!
On Thursday, March 25th, I spoke about desktop fabrication and the MakerBot Cupcake CNC 3D printer at Dorkbot Pittsburgh. After some slides, I gave a printing demo with my Cupcake, Makerbot #131. You can find my slides above. I’ll post the video when it becomes available.
Like many enthusiastic makers, I recently got my hands on a copy of the excellent Make: Electronics book from O’Reilly. It’s a great bottom-up, experiment-based introduction to electronics, but sourcing all of the parts required to complete each experiment can be an adventure.
Thankfully, the Maker Shed is now offering two components packs to help you work through the book without having to order from a half-dozen parts vendors!
Tonight at HackPittsburgh, Matt Mets and I made a little unboxing video for the first components pack, which you can see embedded above. Let me know what you think!