Presentation at GAFFTA Creative Code Meetup VII

I gave a talk at the GAFFTA Creative Code Meetup back in November. It was a brief overview of my recent work including Zen Photon Garden, High Quality Zen, the Ardent Mobile Cloud Platform, and Fadecandy.

It includes a live demo of Fadecandy, starting at 18:43.

(Video: Creative Code Meetup VII, Micah Elizabeth Scott, from GAFFTA on Vimeo)


Fadecandy Controller available from Adafruit


The Fadecandy controller (initially announced here) is a new USB interface board for making more expressive art installations using the widely available WS2811 “NeoPixel” LED strips. It controls up to 8 strips of 64 LEDs, and it includes a unique dithering algorithm to help you quickly get the best quality color from each of your LEDs.

I’m happy to announce that the Fadecandy controller board is now being manufactured and sold by Adafruit! You can get your own from the Adafruit store.

What can you do with it? Here’s an LED triangle running a Processing sketch based on chaotic gravitational attraction:

Or make something a bit larger! The Ardent Mobile Cloud Platform is a Burning Man project using a Raspberry Pi and five Fadecandy controller boards:


Fadecandy: Easier, tastier, and more creative LED art


I’ve been working on a project lately that I’m really eager to share with the world: A kit of hardware and software parts to make LED art projects easier to build and better-looking, so sculptors and makers and multimedia artists can concentrate on building beautiful things instead of reinventing the wheel. I call it Fadecandy.


Fadecandy isn’t a one-size-fits-all solution. It’s an easy way to get started and an advanced tool for professionals. It’s a collection of simple parts that work well together:

  • Firmware that uses unique dithering and color correction algorithms to raise the bar for quality while getting out of the way of your creativity.
  • Open source hardware for connecting cheap and popular WS2811 based LEDs to a laptop, desktop, or Raspberry Pi over USB.
  • The Fadecandy server software, which communicates with one Fadecandy board or dozens. It runs on Windows, Linux, and Mac OS, and on embedded platforms like Raspberry Pi.
  • The Open Pixel Control protocol, a simple way of getting pixel data from your creative tools into the Fadecandy server. (There’s a quick sketch of the wire format just after this list.)
  • Libraries and examples for popular languages. We have Python and Processing already, with JavaScript and Max coming soon.
  • And of course, the LEDs themselves! Fadecandy works with popular WS2811/WS2812 LEDs available from Adafruit, SparkFun, and AliExpress. Each controller board supports up to 512 LEDs, arranged as 8 strips of 64 each.
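To make the Open Pixel Control wire format concrete, here’s a minimal client sketched in Python. This is an illustration rather than one of the official examples; it assumes a local fcserver on its default port (7890) and a single 64-LED strip on channel 0:

    import math, socket, struct, time

    NUM_LEDS = 64

    def send_pixels(sock, pixels, channel=0):
        # An OPC message is: channel byte, command byte (0 = set pixel
        # colors), a 16-bit big-endian payload length, then 3 bytes of
        # RGB data per pixel.
        data = bytes(c for rgb in pixels for c in rgb)
        sock.sendall(struct.pack('>BBH', channel, 0, len(data)) + data)

    sock = socket.create_connection(('127.0.0.1', 7890))
    while True:
        # A slow blue "breathing" wave along the strip, as a placeholder
        t = time.time()
        pixels = [(0, 0, int(127 + 127 * math.sin(t + i * 0.2)))
                  for i in range(NUM_LEDS)]
        send_pixels(sock, pixels)
        time.sleep(1.0 / 60)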

Fadecandy is designed to enable art that is subtle, interactive, and playful, exploring the interplay between light, form, and shadow. If you’re tired of seeing project after project with frenetic blinky rainbow fades, you’ll appreciate how easy it is to create expressive lighting with Fadecandy.

Fadecandy is battle-tested. The firmware was originally developed to run the Ardent Mobile Cloud Platform, a Burning Man project which used 2500 LEDs to project ever-changing rolling cloud patterns onto the interior of a translucent plastic sculpture. It used five Fadecandy boards, a single Raspberry Pi, and effects written in a mixture of C and Python. The lighting on this project blew people away, and it made me realize just how much potential there is for creative lighting. But it takes significant technical drudgery to get beyond frenetic-rainbow-fade into territory where the lighting can really add to an art piece instead of distracting from it.


Fadecandy is designed to make good-looking effects easy to build. Here’s a simple example of what you can do with only a few lines of Processing code:

OPC opc;
PImage dot;

void setup()
{
  size(640, 360);
  dot = loadImage("dot.png");

  // Connect to the local instance of fcserver
  opc = new OPC(this, "127.0.0.1", 7890);

  // Map an 8x8 grid of LEDs to the center of the window
  float spacing = height / 16.0;
  opc.ledGrid8x8(0, width/2, height/2, spacing, 0);

  // Put two more 8x8 grids to the left and to the right of that one.
  opc.ledGrid8x8(64, width/2 - spacing * 8, height/2, spacing, 0);
  opc.ledGrid8x8(128, width/2 + spacing * 8, height/2, spacing, 0);
}

void draw()
{
  background(0);

  // Change the dot size as a function of time, to make it "throb"
  float dotSize = height * 0.6 * (1.0 + 0.2 * sin(millis() * 0.01));

  // Draw it centered at the mouse location
  image(dot, mouseX - dotSize/2, mouseY - dotSize/2, dotSize, dotSize);
}

You can help!

Fadecandy is still in its infancy. I’ve been building it as fast as I can, but what it really needs now is community. This is you!

Ways you can help:

Where is this going? I’m currently polishing the software, making examples, and writing documentation. I have a small number of prototype boards at the moment, but my plan is to do a larger manufacturing run soon and retail the boards online. Maybe this will be a Kickstarter, or maybe they’ll show up in popular hobbyist electronics shops. Time will tell, and I need everyone’s help.


Slides from Arse Elektronika 2013


Arse Elektronika is a yearly conference that focuses on the intersection of sex and technology. It’s almost too San Francisco, but it actually comes from the weird Austrian art group monochrom. And it’s a lot of fun!

This year I was invited to give a talk on my sonar-controlled vibrator. I even won a really weird award for it, the Golden Kleene!

Slides from the talk are now available:


The Ardent Mobile Cloud Platform

This year I had the pleasure of working on a big art project for Burning Man with a wonderfully talented group of artists and engineers in my community. It started with a simple idea: Let’s bring a variable reach forklift to Burning Man, and put a cloud on it. The project’s tongue-in-cheek Kickstarter video illustrates the concept we were going for:

(View on YouTube)

In only about two months, we worked hard and this vision took shape. The skin of the cloud was made from thousands of squares of HDPE plastic zip-tied together. I love projects that import a simple and rigid digital aesthetic into the imprecise real world, where the constraints of manufacturing and assembly leave their own unique marks on the result. This was especially true at Burning Man, where we assembled our art in the desert heat, contorting our bodies to reach inaccessible internal bolts, soldering upside-down, and dealing with unexpected equipment failures.

(Photo: Making final adjustments on the Ardent Mobile Cloud Platform, by Aaron Muszalski)

I’m used to projects that involve some level of technical complexity that leaves me feeling like I just accomplished something challenging. This feeling eventually subsides, and leaves me wondering why it was I that actually built the thing at all. This project did still involve some technical challenge, but it was about so much more than that. Art projects like this bring my community together, and we all enjoy using the art to create weird and delightful experiences for those around us. For me, the peak of this experience was the Tuesday night False Profit party: riding the cloud 40 feet in the air with my friends and collaborators, taking in the spectacle of Burning Man around us, people far below us dancing around an art project I helped create.

(Photo: The Ardent Mobile Cloud Platform rains on the DPW Parade, by Neil Girling)

As is the tradition of Ardent Heavy Industries, we took a dumb idea way too far. This wasn’t just a static sculpture, it sported an interactive control panel via WiFi and an iPad running TouchOSC. This cloud was equipped with a water pump and pressure vessel, two computer controlled valves, a sound system, and 2368 individually addressable RGB LEDs.


I was in charge of the electronics and software for the project, and I designed the volumetric lighting. The effect is a little like mapping a 3D light texture onto the outside of our cloud. I built modules with addressable WS2812 LED strips aimed outward at the translucent cloud skin. These were controlled by a Raspberry Pi and five of my own Fadecandy LED controller boards. The Fadecandy boards used high-speed temporal dithering and gamma correction to very smoothly interpolate between keyframes generated by the Raspberry Pi. The Pi’s software has a rough idea of where each LED is in 3D space, and it uses that location to sample from a 4D fractional Brownian motion function.
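Here’s a rough sketch of that sampling step in Python. It isn’t the actual AMCP code; the LED layout, the scale factors, and the use of snoise4 from the “noise” package are all stand-ins:

    from noise import snoise4   # simplex noise with a 4th dimension

    # Hypothetical LED layout: each entry is one LED's (x, y, z) in meters.
    led_positions = [(x * 0.1, 0.0, z * 0.1) for x in range(16) for z in range(8)]

    def fbm(x, y, z, w, octaves=4):
        # Fractional Brownian motion: sum octaves of noise at doubling
        # frequencies and halving amplitudes.
        total, amplitude, frequency = 0.0, 1.0, 1.0
        for _ in range(octaves):
            total += amplitude * snoise4(x * frequency, y * frequency,
                                         z * frequency, w * frequency)
            amplitude *= 0.5
            frequency *= 2.0
        return total

    def cloud_frame(t):
        # Sample the 4D field at each LED, with time as the fourth axis.
        frame = []
        for x, y, z in led_positions:
            v = max(0.0, min(1.0, 0.5 + 0.5 * fbm(x, y, z, t * 0.1)))
            frame.append((int(v * 255),) * 3)   # gray "cloud density"
        return frame

Each keyframe like this is handed to the Fadecandy boards, which interpolate and dither between them far more smoothly than the Pi could generate them.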

The result was really beautiful at night:

(Video: J Gingold)

Working on AMCP has been a great experience, and I’m looking forward to doing more projects that create weird experiences and bring people together.


Zen photon garden

Lately I’ve been spending more time working on creative projects. I would say focusing on creative projects, though sometimes focus seems counterproductive. As a project hurtles toward one goal, it leaves a trail of sharp edges and unanswered questions that lead to completely new projects.

This happened recently. In the course of designing a triangular pixel grid prototype for a larger project, I found myself building a little raytracing toy in Processing. I wanted to optimize the shape of my reflectors in order to get the smoothest and most uniform LED light. Before I could even use this tool for its intended purpose, I got distracted with using it to create art. Then I got distracted with making it run in web browsers, so anyone could use it to create art.

This became Zen photon garden:

There are plenty of other JavaScript raytracers out there. Raytracing makes a great tech demo; an impressive way to show that web browsers can be a serious platform for even the most CPU-intensive applications. And, no doubt, zenphoton needed to invoke some very modern web technologies to run as fast as it does.

But I didn’t write it as a tech demo, I wrote it to try out an interaction model: sculpting with light.

For example, a screencast:

And the fully rendered result:


I’ve been enjoying using Zen photon garden myself, and I’ve been posting most of my work on Flickr:

Nuts and Bolts

Zen photon garden isn’t a typical raytracer. Normal raytracers operate in 3D space, and they simulate the paths taken by light as it travels from lamp to camera. Like in the real world, you only see light once it reaches the camera.

In Zen photon, we trace rays in flat 2D space, but the entire path of each ray is visible. It’s as if you’re looking down at a table, a lamp in the center, in a room covered with fog. But unlike real fog which would absorb and scatter light, Zen photon traces the path of light rays through free space without disturbing them.

This gives you a sense for how light behaves everywhere in the scene, not just where we can see it. In fact, the image you create can be thought of as a histogram of probabilities. Each pixel represents a small square of space. Brighter pixels are squares of space that, on average, are likely to contain more photons than darker pixels.

Zen photon garden is entirely probabilistic. When it’s too complicated to find an exact solution to a problem, sometimes it’s easier to find a large number of terrible solutions. That’s the basis behind the Monte Carlo method. For each light ray, I simulate just a single way that this ray could have bounced. The light source generates rays in random directions, and diffuse reflections also randomize a ray’s angle.

Each ray by itself is a really lousy approximation of the full raytraced scene, but if you collect enough of these bad solutions and average them together, you can get a pretty good solution. Since it’s very difficult to find a deterministic mathematical solution to the raytracing problem, Zen photon is built to cast many rays really quickly, and display the average solution in real-time.
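As a toy illustration of that averaging process, here’s the core accumulation loop in Python, stripped of walls and reflections:

    import math, random

    W = H = 64
    histogram = [[0] * W for _ in range(H)]   # photon counts per cell

    def cast_ray():
        # One "terrible solution": a single ray from a central lamp in a
        # uniformly random direction, marched until it leaves the grid.
        x, y = W / 2.0, H / 2.0
        angle = random.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(angle), math.sin(angle)
        while 0 <= x < W and 0 <= y < H:
            histogram[int(y)][int(x)] += 1
            x += 0.5 * dx
            y += 0.5 * dy

    for _ in range(100000):
        cast_ray()

    # Brightness at each pixel is its count divided by the total number of
    # rays; the more rays you average, the closer you get to the answer.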

This job of repeatedly sending out random rays into the scene is what Zen photon spends almost all of its time doing. It uses several modern web technologies to keep this process fast:

  • Web Workers, for rendering on multiple CPU cores.
  • Canvas, for drawing the results.
  • Typed Arrays, for efficiently operating on large blocks of data.
  • asm.js when available, for compiling a subset of JavaScript to fast native code.

Zen photon garden is open source. Find it on GitHub.

Where to?

There are a bunch of directions I can imagine taking Zen photon garden in.

If it gets popular, I’d love to create an iOS app. The web site works on iOS in a minimal capacity, but it’s slow and not at all fun to use. Sculpting with light on the iPad seems like a natural fit.

The programmer in me would love to keep adding features. More shapes, like circles or polygons. Color, by simulating the wavelength of each light ray. A resizable canvas. I’ve been trying to keep this site simple, though, in an effort to make it approachable and keep it focused on aesthetic exploration rather than tools and features.

I’d like to track upcoming web technologies. I’m excited about asm.js, which as of now is available only in nightly builds of Firefox. This is exactly the kind of HTML5 feature that really shines for creating art apps like this, especially when WebGL wouldn’t necessarily help.

If you’re at all interested in hacking on Zen photon garden, please fork it on GitHub, post your art, and post your patches!


AVR RFID, Optimized and Ported to C

Way back in 2008, I posted a writeup about using an AVR microcontroller as an RFID tag. Since then, it’s been great to see many people pick up this code and build their own DIY RFID tags.

In my original project, I was just interested in using an AVR as a way of emulating any tag protocol I wanted, even proprietary protocols like the HID cards that are so common for door entry. But a general purpose microcontroller really lends itself to making even more interesting tags. For example, imagine an action figure that has different poses which trigger microswitches that can be read by the AVR. It could report a different RFID code depending on which pose the action figure is in. This kind of very low-power physical computing is really interesting to me.

Trammell Hudson recently took a big step in this direction, in the name of creating a “multipass” card which could stay in his pocket and pretend to be any number of other cards. His original idea didn’t quite work out, due to limitations in the HID readers. But along the way, he created an optimized version of the AVRFID firmware which uses much less flash memory, and he ported it to C so that it can be more easily extended and modified.

He made this possible by very carefully choosing the instructions in the inner loops, creating a state machine that just barely fits within the available clock cycles:

One issue with programming HID Prox compatible cards is that the AVR’s RCALL and RET instructions are quite slow — 3 and 4 clocks respectively — so making a function call and returning from it requires seven clocks and would cause errors in the RF waveform. To get around this, Beth expanded all of the code inline to produce a single function that bit-bangs the coil loading with NOPs between each cycle. The 20-bit manufacturer ID (0x01002), 8-bit facility code and 16-bit unique ID, all Manchester encoded, required 80 instructions per bit for a total of 3700 instructions out of the Tiny85’s maximum of 4096. Supporting 34-bit cards would not be possible with this design, much less multiple card IDs!

While RCALL/RET are out of the question, I noticed that IJMP is only 2 clocks. This means that the CPU can do an indirect jump to the value in the 16-bit Z register in enough time to be ready for the next FSK cycle. If we know where to go, that is… The LPM instruction takes three cycles to read a byte from flash into a register, which just barely fits during the idle time of an FSK baseband one. Loading the Z register for LPM takes at least two clocks (since it is really the two 8-bit registers r31:r30), which means the pgm_read_word() macro in avr/progmem.h won’t work. While the rest of the firmware is in mostly normal C, I resorted to writing assembly to interleave the coil toggling with the operations to determine the next output state and make the appropriate jump. If you want to follow along, the source for the RFID firmware is available in rfid/avrfid2.c.

His post covers a lot of ground, including how to connect an off-the-shelf HID card reader to a computer, and how to repeatedly program the AVR using a Bus Pirate.

Go check out the full article already!


How we built a Super Nintendo out of a wireless keyboard

I wrote a guest article for Adafruit about the story behind the new Sifteo cubes:

In today’s world, video game consoles have become increasingly complex virtual worlds unto themselves. Shiny, high polygon count, immersive, but ultimately indirect. A video game controller is your gateway to the game’s world—but the gateway itself can be a constant reminder that you’re outside that world, looking in.

Likewise, the technology in these game consoles has become increasingly opaque. Decades ago, platforms like the Commodore 64 encouraged tinkerers and do-it-yourselfers of all kinds. You could buy commercial games, sure, but the manual that shipped with the C-64 also told you what you needed to know to make your own games, tools, or even robots. The manual included a full schematic, the components were in large through-hole packages, and most of them were commonly-available chips with published data sheets.

Fast forward three decades. Today’s video game consoles are as powerful and as complex as a personal computer, with elaborate security systems designed specifically to keep do-it-yourselfers out. They contain dozens of customized or special-purpose parts, and it takes some serious wizardry to do anything with them other than exactly what the manufacturer intended. These systems are discouragingly complicated. It’s so hard to see any common link between the circuits you can build at home, and the complex electrical engineering that goes into an Xbox 360 or Playstation 3.

We wanted to build something different. Our platform has no controller, no television. The system itself is the game world. To make this happen, we had to take our engineering back to basics too. This is a game platform built using parts that aren’t fundamentally different from the Arduino or Maple boards that tens of thousands of makers are using right now.

This is the story of how we built the hardware behind the new Sifteo Cubes, our second generation of a gaming platform that’s all about tactile sensation and real, physical objects.

Read the full article at Adafruit.


Hacking My Vagina

Making a Vibrator That Listens to Your Body

This project has been an astonishing little journey. Many of my previous projects were characterized by an amazing outpouring of effort to build something highly intricate and ultimately invisible.

This is the opposite kind of project. A little bit of work and a little custom design to create something new and exciting that I can immediately use in my everyday life. It also happens to be a sex toy.

In other words, I wanted to hack something I actually use: my vagina.

The Inspiration

I love feedback loops. Servo motors, thermostats, op-amps, DC-DC converters, social networks, flocking behavior. Our bodies. Massages, cuddling, sex: these are all ways of bringing a partner into your body’s most fun closed-loop systems.

To me, a good sex toy helps form feedback loops. It doesn’t get in the way. A good toy gives you simple ways of exchanging signals with a partner or with your own body. It acts as a conduit. A good sex toy is analog.

I was in the market for a remote-controlled vibrator recently, and I ended up with LELO’s Lyla vibe:

(Update: LELO was nice enough to send me their updated model, the Lyla 2, which also works with this remote. The Lyla, Lyla 2, and Tiani 2 are all known to use the same radio protocol.)

There are some things I really like about this toy. The vibrator itself is reasonably strong, rechargeable, waterproof, and quite comfortable. I was much less happy with the remote. The radio range was rather lackluster, and using the controls made me feel more like I was programming a VCR (remember those?) than having sex.

The optional accelerometer input on the remote was a good idea, but I feel like the execution leaves much to be desired. The controls are laggy, and controlling the vibrator by tilting the controller never really felt right to me.

So, naturally, I wanted to see if I could do better. I really had no idea where the project would go.

Choosing a Sensor

My early prototype used a simple knob attached to a variable resistor, and that already seemed like a big improvement over the original LELO remote. I wanted to build a simple proof-of-concept remote that would just demonstrate the improved radio range and responsiveness, without doing anything particularly fancy. After that, I planned to dabble in more esoteric input devices. Audio spectra, conductive fabric, capacitive sensors embedded in lingerie, and so on. The Arduino library I ended up writing for this project offers some great opportunities for further tinkering.

But I was rummaging through my sensor drawer, and thought: Why not sonar? A few quick tests, and it seemed more than responsive enough. And for myself, often lazy about mechanical things, a small sensor with no moving parts was much easier to build a robust prototype around.

Invisible Walls

At first, the benefits seemed pretty easy to understand. It was hands-free. This meant that, unlike the original remote, it doesn’t need a bright pink silicone jacket to stay clean. Well, that sounds convenient. Then I started to play with it more, and discovered something really unique about this configuration.

Something about this toy really does become more than the sum of its parts. More than a simple remote control, it starts to feel a little like virtual reality. Haptic technology, it’s called. Interfacing with computers through our sense of touch.

This toy serves as a kind of analog bridge between two remote spaces: the column of ultrasonically-oscillating air in front of the remote, and whatever body part happens to be in contact with the vibrator. Touch that invisible space above the remote, and the vibrator touches you.

In fact, it does start to feel like there’s a palpable object in space above the remote’s sensors. Move your body close to it, and it reacts. Press into it lightly, or tease the edges. Flick your hand through it, or make graceful waves back and forth. You can use your whole body to touch it, almost like a big fuzzy vibrating cone floating in air.

If the sensor can see your body’s rhythms, it responds in kind, effortlessly synchronizing to them. This is exactly the sort of closed-loop control I was after.

You can even use multiple vibrators. There’s no unique address or channel assigned to a particular vibrator, so any vibes that are turned on and within radio range will respond.

So, what does it look like?

The two black circles are ultrasonic transducers. One of them transmits short “chirps” at a frequency too high for humans to hear. The other listens for echoes. The 4-digit display gives another satisfying bit of feedback, in visceral high-contrast blue LED light. The external antenna gives it quite a bit more radio range than the original remote, and the exposed serial port on the left makes it easy to reprogram the remote using the Arduino IDE.

You can get an idea for how the prototype works in this short video:

That’s the end result, for now. The rest of this post will share the journey I took in building this toy. Perhaps it will inspire you to follow along, or to build something unique.

Reverse Engineering

In order to replace the original remote control, first I had to understand it. My first stop was the FCC ID database, to see if they had any info that would help me know if it was even worthwhile to crack the remote open. I was in luck. The internal photos clearly showed an MSP430 microcontroller and CC2500 radio. Hackability was looking good so far.

The CC2500 is a really nice configurable 2.4 GHz radio and modem chip with an SPI interface. It’s part of a line of highly integrated radios made by Chipcon, now owned by Texas Instruments. They’re quite similar to the competing nRF24L01 radio made by Nordic Semiconductor. The CC2500 also has a popular sub-1 GHz sibling, the CC1100. This chip was featured in the ToorCon 14 badge, and the eminently hackable IM-ME toy.

I wouldn’t even bother with trying to read or reprogram the original microcontroller. By sniffing this SPI bus, I could reverse engineer the proper radio settings and protocol to use. Then I could wire up any CC2500 to any microcontroller I want, and control the vibrator over the air.

Unfortunately, opening the remote turned out to be somewhat messier. The pink silicone jacket is glued to the white plastic shell, and I failed to remove it without tearing the fragile silicone. I soon discovered that the shell itself was also glued shut, and it required quite a lot of cutting and prying to open. I was willing to sacrifice this remote for science, but I really wouldn’t advise ever opening one of these if you want it to stay nice and watertight.

Once I had the remote open and the circuit board extracted, I started making a test jig. I’ll often do this by using a short length of snappable 0.1″ headers as a mechanical anchor. I solder it to some big sturdy pads on the PCB. In this case, I used the pads for the battery contacts. Then, I break out the microscope and run thin AWG 32 magnet wire from the headers to whatever I want to probe. In this case, I wanted the SPI bus. I also replaced the original remote’s small vibrator motor with an LED, so I could see when it was on without it shaking my whole setup.

At this point there are all sorts of options for snooping on the communications between the radio and the microcontroller. If you have room in your budget and toolbox for a special-purpose device, Total Phase makes a pretty sweet little SPI and I2C sniffer device. There are logic analyzers like the Saleae Logic, and open source tools like the Logic Shrimp.

Unfortunately, I had none of these handy. My Saleae Logic was far away, and my trusty Bitscope isn’t really that helpful for protocol reverse engineering once you get above the physical layer. So, I improvised. Often in this situation I’ll break out something like the Saxo board, with an FPGA and a thick USB 2.0 pipe. In this case, I was dealing with low enough data rates that I could do something even simpler. I plugged the remote into my Propeller Demo Board and wrote a quick program to capture the SPI traffic and send it back in ASCII over the serial port.

The full traces are available in Git. I was looking for two kinds of traffic: an initialization sequence for the radio, and SPI transfers which actually transmitted packets over the radio. I was also keeping my eye out for any clues as to how complex the protocol was. Do I need to pair with the vibrator? Exchange keys? Search for a radio channel to use?

When the remote turns “off”, it’s actually entering a low-power standby mode. It doesn’t fully shut down the radio in this mode, it just sends a standby command. As a result, waking up the remote doesn’t fully initialize the radio. To capture a complete init sequence, I would cut and reapply power, as if fresh batteries were just inserted. When I did this, I was greeted with a nice burst of configuration traffic:

    300F            SRES        Strobe: Soft reset
    0B0F 0A0F       FSCTRL1     IF frequency of 253.9 kHz
    0C0F 000F       FSCTRL0     No frequency offset (default)
    0D0F 5D0F       FREQ2       FREQ = 0x5d13b1 = 2420 MHz
    0E0F 130F       FREQ1
    0F0F B10F       FREQ0
    100F 2D0F       MDMCFG4     CHANBW = 541.666 kHz
    110F 3B0F       MDMCFG3     DRATE = 249.94 kBaud
    120F 730F       MDMCFG2     MSK modulation, 30/32 sync word bits
    130F 220F       MDMCFG1     FEC disabled, 2 preamble bytes
    140F F80F       MDMCFG0     CHANSPC = 199.951 kHz

The hex numbers on the left come from the SPI sniffer. The text to the right is my own annotation, starting with the name of the radio register in question. Each line is one SPI transaction, and each group of four digits represents one byte going across the SPI bus. The first two digits are command data from the microcontroller (MOSI), the last two digits are the radio’s simultaneous response byte (MISO).

The log continues on like this a bit longer, but you can already see the most important radio parameters: a base frequency of 2.420 GHz, MSK modulation, and a 250 kBaud data rate.
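Incidentally, the log format is easy to pull apart mechanically. A quick Python sketch, assuming the register-name annotations have been stripped off:

    def parse_trace_line(line):
        # Each 4-digit group is one byte time on the SPI bus: the first two
        # hex digits are MOSI, the last two are the simultaneous MISO byte.
        words = line.split()
        mosi = [int(w[:2], 16) for w in words]
        miso = [int(w[2:], 16) for w in words]
        return mosi, miso

    mosi, miso = parse_trace_line("0D0F 5D0F")
    # mosi == [0x0D, 0x5D]: write 0x5D to register 0x0D (FREQ2)
    # miso == [0x0F, 0x0F]: the CC2500's status byte during each transfer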

When I looked at the SPI trace for waking the remote up from sleep, I was greeted with a rather large red herring. It spends some time scanning ten different channels, numbered 0 through 9. This represents 2 MHz of spectrum. On each channel, it spends some time polling the Received Signal Strength Indication (RSSI) register. Is it listening to see if the channel is clear? Is it searching for other remotes? Listening for an initialization sequence from the vibrator?

As far as I can tell, none of these things are true. I’ve never seen the vibrator transmit or the remote receive, which rules out any kind of pairing sequence. To verify this, you can turn on a vibrator while the remote is already transmitting. The vibrator picks up the signal and starts moving, without any change to the remote’s routine. The original remote will even control multiple vibrators happily. There seems to be no pairing sequence at all: the address and radio channel are both hardcoded. Why scan through channels, then? It feels like either a vestige of some library code the folks at LELO adopted, or scaffolding for future functionality. Either way, it doesn’t seem to matter for the vibrators I have.

How about the actual packets? Well, they also seem to have a lot of vestigial content. It’s possible this is LELO being super clever and leaving room for future products to be protocol-compatible. Or it’s possible they just hacked together all the example code they could find until they had a working product. It’s hard to say.

The original remote transmits packets at a measly 9 Hz. This low update rate most definitely contributes to the laggy feeling and lackluster radio range of the original remote. Why so slow? It was probably a battery life tradeoff. The original remote ran off of two AAAs, whereas my replacement is going to have significantly more power available. By transmitting about 10x as often, my remote can achieve much better responsiveness, plus it can tolerate more radio noise by virtue of having a lot more redundancy. Even if many packets are corrupted, quite a few packets are likely to make it through the air unharmed.

Each packet contains a motor strength update, as an 8-bit number which seems to have a usable range of 0 through 128. The packets themselves always have 9 bytes of payload provided by the microcontroller, plus a CRC and header which is generated by the CC2500 itself. Here’s the payload of a typical packet:

    01 00 A5 28 28 00 00 00 05
               Motor Strength

In this example, the motor strength is 0x28, or about 30% of full power. I’m not sure what the other bytes are for. They seem to stay constant, and simply replaying these packets back to the vibrator always seems to work. I’m also not sure why there are always two copies of the motor power byte. It seems most likely that this is for added redundancy, so that even if the radio is very noisy and there’s a CRC collision, the vibrator is unlikely to accept a corrupted motor strength byte.
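Put together, here’s how I’d assemble that payload in a few lines of Python; the constant bytes are copied verbatim from my captures, and only the duplicated strength byte changes:

    def make_payload(strength):
        # 9-byte payload as observed on the air. Bytes 3 and 4 both carry
        # the motor strength (0-128); everything else stayed constant.
        assert 0 <= strength <= 128
        return bytes([0x01, 0x00, 0xA5, strength, strength,
                      0x00, 0x00, 0x00, 0x05])

    payload = make_payload(0x28)   # about 30% of full power, as above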

At this point, I had enough information about the protocol to try and build my own emulation of the original remote. This was before I had ordered any CC2500 breakout boards, so I used the remote itself as a dumb CC2500 radio by holding the original MSP430 microcontroller in reset. This was the setup I used to develop an Arduino library that could configure the CC2500 radio correctly and send packets like the ones above.

Now that the yak was bald, I could get on to the really fun part of the project: Designing a better, stronger, faster remote using commonly available parts.

Power Exchange

At this point, the project was seeming pretty straightforward: Off-the-shelf Arduino, CC2500 breakout board, sonar sensor, and LED display. But how would I power all of this? The sonar and LEDs are both pretty power-hungry, and I would be using the radio much more heavily than the original remote. I would need to budget nearly 100 mA, with 5v rails for the sonar and LED and 3.3v for the radio and Arduino.

I certainly could have used AA or AAA batteries. I wanted the mechanical design to be simple and compact, though. Designing my own battery holder would not have been simple, and an off-the-shelf plastic battery holder would have been bulky. I even thought about using a flashlight body as a battery holder, but I didn’t see an elegant way to attach my own mechanical parts to it. Rechargeable batteries come in much more friendly shapes. But now you need a charger.

This was getting complicated fast. Lithium polymer battery, a boost converter to raise the voltage to 5V for the sonar module, charging circuit, “fuel gauge” indicator. All of this work goes into every commercial product that runs on batteries, and we often take it for granted. As far as I’m aware, though, there isn’t a great equivalent for quick DIY prototyping. The Arduino Fio board is close to what I want: an Arduino with a built-in LiPo battery charger. But it doesn’t have the 5V boost converter or any way of monitoring the battery’s charge.

Without designing my own PCB, I’d need several separate components: battery, fuel gauge, charge/boost. All told, that’s over $45 and a lot more bulk and complexity than I wanted. I was really hoping there was a better option.

It so happens that this sort of amalgamation of parts is already pretty commonplace in the form of portable cell-phone chargers. These devices are very little more than a boost converter, charger, lithium battery, and a very basic fuel gauge. Best of all, thanks to economy of scale, they’re really inexpensive. The 3200 mAh battery I used in this project was only $22, and it’s something I can reuse for multiple projects… or even to charge my phone.

Bill of Materials

This lists the exact parts I used in my prototype, but nearly everything here is commonly available from multiple manufacturers. In particular, many different vendors on eBay usually carry CC2500 breakout boards. Some of these have special features such as nicer antennas or power amplifiers. Keep an eye out for those.

That’s it for the list of vitamins we’ll need to make the remote. There is no “main” PCB for this project. The small boards are all held in place by a custom-designed plastic enclosure.

At this point, I did a little bit of preparation on the Arduino by soldering the FTDI-compatible serial header, and disabling the pin-13 LED by desoldering its current limiting resistor. If it was still attached, its glow would be visible through the plastic enclosure. You may also want to remove the LED on the Ping module. I forgot to do this on my prototype, and its blinking is just barely visible through the plastic.


It’s easy to neglect the non-electronic parts of any electronics project. I’ve certainly built my share of prototypes that were little more than a bare circuit board. In this project, though, the plastic parts serve multiple purposes: providing a slot for the battery pack, holding the individual circuit boards in place, and providing a smooth exterior that’s a little less unfriendly to use in the bedroom.

This seemed like a perfect job for 3D printing. I could make a plastic enclosure that exactly fits each of the circuit boards, and presents the LEDs and sonar transducers nicely while hiding all of the sharp and uncuddly circuitry inside.

I chose to model the enclosure in Blender, a wonderfully powerful (if somewhat intimidating) open source tool. It takes some care to use Blender for CAD, but I enjoy having such diverse functionality integrated into one package, and the price is right.

To keep the model as parametric as possible, I modeled the shell and the negative space separately. First, I created meshes for each of the internal parts: the circuit boards, bolts, antenna hole, battery pack, and USB connector. These meshes would become the negative space inside the enclosure. I separated them into two layers, for items that would mount to the top and to the bottom half of the case. I extruded the top-half items downward and the bottom-half items upward, which leaves the space toward the inside of the case hollow.

I modeled the outer shell using subdivision surfaces. A chain of boolean modifiers will non-destructively split the case into top and bottom, then carve out all holes. The downside to this technique is that all of the boolean operations can bog down Blender quite a bit. This can be mitigated by keeping your source meshes and your final meshes in different layers. With the final layers hidden, Blender won’t continuously recalculate them as you edit the source meshes.
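For the curious, the same non-destructive boolean setup can also be scripted from Blender’s Python console. A rough sketch, with hypothetical object names standing in for my actual meshes:

    import bpy

    shell = bpy.data.objects['shell']        # the subdivision-surface case
    for name in ['negative_top', 'negative_bottom']:
        mod = shell.modifiers.new(name=name, type='BOOLEAN')
        mod.operation = 'DIFFERENCE'         # carve this negative space out
        mod.object = bpy.data.objects[name]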

The end result of all this 3D modeling was a set of polygonal meshes in STL format for the top and bottom halves of the case:

I used Slic3r to convert the polygon models into G-code, a simple language that defines the actual tool path used by the printer. This program slices the 3D model into many thin layers. The 3D printer will draw out each layer with a thin filament of molten plastic. After each layer, the nozzle moves up a little and the process repeats.

This is a visualization of the tool path for just one of these thin layers. The tight zig-zag pattern is a solid fill, but much of the interior is filled with a light-weight honeycomb pattern that saves material and time:

Over the course of several hours, my printer builds each part up from nothing.

A tiny bit of post-printing cleanup helps the parts assemble smoothly. A hobby knife makes quick work of the “brim” that is added along the bottom of each part to help it stick to the build surface. Since this will be a handheld project, it’s important to sand any sharp corners.

I also sanded the entire top face of each model, to eliminate any small bumps that would prevent the two halves from sitting flat against each other. This is a good time to test-fit the pieces with a pair of bolts and nuts. The bolts should go in easily, the nuts should be held in place by the hexagonal holes on the bottom half, and there should be only a very tiny gap between the two halves.


Each component sits in its own custom-fit hole, with some epoxy to hold it in place. All of the components on the bottom half need to be robustly anchored. The radio’s antenna connector, the battery pack USB plug, and the Arduino’s serial header will all be subject to some mechanical force during normal use.

I used a liberal dose of epoxy both under and above the USB plug. The Arduino and Radio boards both have flat backsides, so I put a dab of epoxy underneath both. On the radio, I added some additional dabs of epoxy at the corners near the SMA connector.

It’s probably a good idea to avoid getting any epoxy on the RF components on the front side of the radio, as it may affect the circuit’s impedances and lead to reduced radio range. Also try to keep it off of anything you’ll be soldering. Solder won’t stick to it or burn through it; you’ll just be left with a mess that you have to scrape off with a hobby knife.

I used the battery pack itself to help line up the USB connector while the epoxy sets. You’ll notice a little bit of wiggle when the connectors are mated. To make sure the battery pack plugs and unplugs smoothly, make sure that you seat the USB plug such that it’s pressed all the way to the back of the socket. This will ensure the plug isn’t stuck at a funny angle.

The top half assembles in a similar manner. The sonar and LED modules both press into their respective slots. When they’re all the way in, the front face of the LED module and the front of each sonar transducer should be approximately flush with the front of the enclosure.

Since these components are largely held in place by the shape of the enclosure, they only need a small amount of epoxy to keep them from sliding out of their holes. Note that the underside of the sonar module is exposed to the battery compartment. Make sure you go easy on the epoxy. Any blobby epoxy or messy wiring could get in the way of the battery pack.

After the epoxy set, I soldered everything up point-to-point style with wire-wrapping wire:

  • USB Plug +5v → Arduino RAW
  • Arduino VCC (+3.3v) → CC2500 VCC
  • Arduino 10 → CC2500 CSN (SS)
  • Arduino 11 → CC2500 SI (MOSI)
  • Arduino 12 → CC2500 SO (MISO)
  • Arduino RAW (+5v) → LED VCC
  • Arduino RAW (+5v) → Ping 5V

There will be six wires running between the top and bottom half. It’s okay to leave these a little bit long; there will be room to fold them up in the hollow space above the LED module.

Firmware Time

Bolt the enclosure together, making sure the wires end up in the hollow area instead of pinched in the edges of the controller. Now it’s ready for some firmware! The Arduino sketch is in the project’s GitHub repository.

I designed the enclosure to work with common 3.3v FTDI cables. The programming slot is probably a bit too narrow for the FTDI Basic, but perhaps it works. I already had a Prop Plug handy, so I’ve been using that with a simple passive adaptor.

The firmware is currently pretty basic. It takes sonar measurements as fast as it can, feeding those into a median filter. Median filters are a little bit magical when it comes to discarding outliers from noisy-ish data. There’s a smidgen of state machinery to manage the “lock” mode. Finally, it scales the distance reading to a motor power level and sends packets to the radio and LED modules about 80 times a second.
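Here’s the gist of that filter-and-scale pipeline, sketched in Python for readability; the real firmware is an Arduino sketch, and these distance and power ranges are made-up placeholders:

    from collections import deque

    class MedianFilter:
        # Keep the last few samples; the median discards single-sample
        # outliers without smearing real motion the way an average would.
        def __init__(self, size=5):
            self.window = deque(maxlen=size)

        def update(self, sample):
            self.window.append(sample)
            return sorted(self.window)[len(self.window) // 2]

    def distance_to_power(cm, near=10.0, far=80.0, max_power=128):
        # Closer objects mean stronger vibration, clamped to a usable range.
        t = (far - cm) / (far - near)
        return int(max_power * max(0.0, min(1.0, t)))

    sonar = MedianFilter()
    power = distance_to_power(sonar.update(42.0))   # -> radio + LED display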

The Next Thing

Where to go from here? Well, there is certainly room for improvement in the firmware. For different kinds of play, it may make sense to have different scaling algorithms for converting distance to intensity. I’m interested in making the firmware more versatile, but not at the expense of reducing its intuitive quality. The “Lock” mode is already way too unintuitive for my tastes. Perhaps an additional flavor of user interface, via an accelerometer or SoftPot, would help.

But honestly, the thing I’m most excited about improving isn’t even technical. In some ways, this kind of toy feels like a musical instrument. It is a simple machine with very few inputs, but it interacts with your body in such a way that it opens up a broad array of techniques that can each be mastered. I’m looking forward to spending more time with it and learning how to play.

I’ve already been thinking about the vast array of other sensing technologies that may be applicable for sex toys that support your own feedback loops instead of obstructing them. What if you could use a Kinect camera to remotely detect even your body’s subtler rhythms? Imagine using a phase-locked loop to not just synchronize with your motion, but predict it. The PLL could compensate for all of the system’s lag, including the mechanical lag in the vibrator motor.

There could be a lot more to electronic sex toys than just a battery and a motor. I want the future to be full of toys that know how to play.