Hardware hackery and the Dual Shock protocol

The hardware-tinkering mood I’ve been in lately shows no sign of subsiding. The theme this time: Everything over cat5.

Backing up a bit… It’s summer! Hooray. So, how does one make the most of the season without actually doing something crazy like going outdoors? Of course the answer is to play video games in a different room which has more fresh air and a better view of the day-star.

Some people would pick up the Playstation and move it to the other room, but my solution is to run the video over inexpensive cat5 cable. This isn’t a new idea, or even a complicated one, but component-video-over-cat5 baluns still sell for around $80 each! I recently made my own pair out of a couple more Altoids tins and some parts from the junk drawer.

These work really well. There is a little bit of ghosting visible on a black screen, but during normal use the quality looks perfect, or at least no worse than the quality of the TV it’s connected to. Unfortunately, when I started testing it with a little Megaman 2, I quickly noticed an unrelated problem: the distance is a little too much for our wireless controllers. Time to build a Playstation-controller-over-cat5 extender.

I can think of three different approaches for building such an extender:

  1. Purely electrical: Each PSX signal gets an RS-422 driver and a separate twisted pair. This is very simple, but it would require two cat5 cables per controller! I want at least two controllers running over a single cable.
  2. Low-level multiplexing: Compress multiple signals onto each twisted pair by multiplexing below the protocol level. This would be bit-for-bit identical to plugging a controller in locally, but it has a few big disadvantages. It requires quite a fast clock on the twisted pair, since several bits have to cross the wire for every bit in the original protocol. It also doesn’t add any flexibility: you couldn’t, for example, add a crossbar switch that lets you route one controller to any of several consoles via a button combination.
  3. High-level multiplexing: The console-end of the link includes an FPGA which fully emulates one or more Dual Shock controllers. The controller-end consists of a microcontroller that polls a bank of controller ports and sends updates over an RS-485 link to the FPGA. This is very powerful, and it’s the method used by my old Unicone (Universal Controller Emulator) project. Unfortunately, it’s also relatively complex and it introduces a few milliseconds of latency.
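The controller-end of option (3) has to put each controller’s state into some kind of frame before it goes out over the RS-485 link. As a rough sketch of what that framing might look like (the sync byte, layout, and XOR checksum here are my own assumptions for illustration, not the actual Unicone wire format):

```python
import struct

SYNC = 0x7E  # hypothetical frame delimiter


def encode_frame(controller_addr, state_bytes):
    """Pack one controller-state update for the RS-485 link.

    Frame layout (an assumption, not the real Unicone format):
    sync byte, controller address, payload length, payload, XOR checksum.
    """
    payload = bytes(state_bytes)
    checksum = controller_addr ^ len(payload)
    for b in payload:
        checksum ^= b
    return (struct.pack("BBB", SYNC, controller_addr, len(payload))
            + payload + bytes([checksum]))


def decode_frame(frame):
    """Inverse of encode_frame; raises ValueError on a corrupt frame."""
    sync, addr, length = struct.unpack("BBB", frame[:3])
    if sync != SYNC or len(frame) != 4 + length:
        raise ValueError("malformed frame")
    payload = frame[3:3 + length]
    checksum = addr ^ length
    for b in payload:
        checksum ^= b
    if checksum != frame[-1]:
        raise ValueError("bad checksum")
    return addr, payload
```

The point of a checksummed, addressed frame is that a bank of controller ports can share one cable, and a corrupted update gets dropped rather than glitching a button press.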

I’m currently working on (3), but I haven’t ruled out (2) completely. First step, I need to know how to emulate a Dual Shock controller. The internet is full of pages that describe the very basics of the Playstation controller protocol, but I couldn’t find a single document that described the protocol in anywhere near enough detail to emulate a controller properly. Time for some reverse engineering…

This was quite a lot like the job I did a while ago to reverse engineer the Nintendo 64 controller protocol enough to build a microcontroller that could emulate a controller. Instead of an HP logic analyzer borrowed from work, this time I have a Saxo board from fpga4fun acting as a real-time Playstation packet sniffer. I should be able to use the same packet sniffer setup to finish reverse-engineering the Playstation 2 memory card protocol so that I can add PS2 support to my psxcard USB adaptor. 🙂

A Verilog core to emulate the Dual Shock controller will hopefully be coming up soon. In the meantime, I published my Notes on the Playstation 2 Dual Shock controller protocol. It’s quite a bit more complex than I expected. There are several backward-compatibility features, two rumble motors (one with 8-bit speed control), analog buttons… The protocol has several features that the Dual Shock controller doesn’t take advantage of, like support for 254 different force-feedback actuators that you can arbitrarily map into your polling packets. Crazy.
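To give a flavor of the basic poll exchange: the console sends 0x01 (controller address), 0x42 (poll), and padding; the controller answers with an idle 0xFF, a mode/ID byte (0x41 digital, 0x73 analog), 0x5A, then the button data. A sketch of decoding that reply, using the commonly documented active-low button-bit ordering (treat the exact bit assignments here as an assumption, not gospel):

```python
BUTTONS = [  # active-low bits, low data byte first
    "SELECT", "L3", "R3", "START", "UP", "RIGHT", "DOWN", "LEFT",
    "L2", "R2", "L1", "R1", "TRIANGLE", "CIRCLE", "CROSS", "SQUARE",
]


def decode_poll_reply(reply):
    """Decode the controller's reply to a 0x42 poll command."""
    if reply[0] != 0xFF or reply[2] != 0x5A:
        raise ValueError("not a controller reply")
    mode = reply[1] >> 4          # 0x4 = digital, 0x7 = analog
    buttons = reply[3] | (reply[4] << 8)
    pressed = [name for bit, name in enumerate(BUTTONS)
               if not (buttons >> bit) & 1]   # a 0 bit means pressed
    axes = list(reply[5:9]) if mode == 0x7 else []  # RX, RY, LX, LY
    return pressed, axes
```

The full protocol layers escape commands, configuration modes, and the actuator mapping on top of this, which is where the real emulation work is.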

Mobile map inspiration

So, I’ve been itching to do something really cool with Python on Symbian Series 60. The first thought was a way to upload images directly from the phone to Gallery. Well, it still needs some polishing, but I wrote most of that at the last SVLUG hackfest. Right now it takes a picture with the phone’s camera, saves it locally, then beams it off directly to a Gallery server over GPRS, via an HTTP proxy and the gallery-remote protocol. Unfortunately, the Python module for the camera doesn’t give you a lot of control. A much more practical (hah!) solution would be to have the script send everything it finds in an ‘outbox’ directory- so you just save images there with any camera app, then upload them at your convenience by running a simple script.
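A minimal sketch of that outbox idea (the function names and directory handling are hypothetical, and the actual gallery-remote upload is abstracted behind an `upload` callable since its details are beyond the scope of a sketch):

```python
import os

IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png")


def pending_uploads(outbox_dir):
    """Return image files waiting in the outbox, oldest first."""
    names = [n for n in os.listdir(outbox_dir)
             if n.lower().endswith(IMAGE_EXTENSIONS)]
    names.sort(key=lambda n: os.path.getmtime(os.path.join(outbox_dir, n)))
    return [os.path.join(outbox_dir, n) for n in names]


def drain_outbox(outbox_dir, upload):
    """Upload every pending image, deleting each one on success.

    'upload' is whatever callable actually speaks gallery-remote
    over HTTP; anything it raises leaves the file in the outbox
    for the next run.
    """
    for path in pending_uploads(outbox_dir):
        upload(path)
        os.remove(path)
```

The nice property is that the camera app and the uploader are completely decoupled: any app that can save a JPEG into the directory becomes an upload source.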

Anyway, while that was kinda fun to write, it wasn’t really as interesting as I’d hoped. This might just be due to the extreme suckiness of phone cameras. Yesterday I found something much cooler. For a while now I’ve been interested in getting maps on my mobile devices. Google maps, of course, seem the obvious solution. Mobile web browsers aren’t fancy enough yet to support the latest AJAX applications, but I’d want a small-screen-tweaked UI anyway.

Well, MGmaps to the rescue right? It’s a pretty spiffy app. Unfortunately, being in Java it’s kinda sluggish and not readily hackable. I’d like to have it make use of my phone’s 512MB MMC card to keep a disk cache of map tiles. Doing all the browsing over a slow GPRS link with very little cache is hardly fun or useful.

Yesterday I stumbled across a Nokia forum post with a literally 100-line Python app to browse Google Maps online. It has a lot of rough edges- drawing artifacts while it’s loading, no HTTP proxy support (I had to hack that in myself), and a ‘cache’ which will use an unbounded amount of RAM. But, it makes a great proof-of-concept and a great inspiration. I’d love to write a similar app with better cache management, a more extensible and maintainable architecture, and better responsiveness while downloading images.
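The cache-management piece is the easy part to sketch. Something like a small LRU layer in RAM, with a bigger disk layer on the MMC card underneath it, would bound memory use no matter how far you scroll. A sketch of the RAM layer in modern Python (the `(zoom, x, y)` keying and the size limit are my assumptions; the Series 60 interpreter of the day would need an older-style implementation):

```python
from collections import OrderedDict


class TileCache:
    """A bounded LRU cache for map tiles, so memory use stays fixed.

    Keys are (zoom, x, y) tile coordinates; values are the raw
    image bytes fetched over GPRS.  A second, larger layer backed
    by the phone's MMC card could sit underneath this one.
    """

    def __init__(self, max_tiles=64):
        self.max_tiles = max_tiles
        self._tiles = OrderedDict()

    def get(self, key):
        tile = self._tiles.get(key)
        if tile is not None:
            self._tiles.move_to_end(key)  # mark as recently used
        return tile

    def put(self, key, tile):
        self._tiles[key] = tile
        self._tiles.move_to_end(key)
        while len(self._tiles) > self.max_tiles:
            self._tiles.popitem(last=False)  # evict least recently used
```

On a cache miss you’d fall through to the disk layer, and only then to GPRS, which is exactly the ordering that makes browsing over a slow link tolerable.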

If I do go through with writing such an app, I’ll finally be using Python to bring a keyhole-like system to devices you always have handy. I shall call it “pyhole”.

I’ll procrastinate after I pick your browser up off the floor

I’ve been making slow progress on packing today- got all my books boxed up, along with many of my less fragile electro-widgets and such. This type of behaviour leads to procrastination, naturally.

I’ve been running the Deer Park Alpha 2 release of Firefox for a couple days. It does seem to be faster at DHTML, though I don’t have any of the really heavy-duty Javascript I wrote for Destiny M&M handy for benchmarking purposes. The coolest features destined to end up in Firefox 1.1, from my point of view, are SVG support and the “canvas” element.

Canvas is a very misunderstood HTML extension. It’s a new element that Apple invented mostly to make it easier to implement Dashboard. That part of the story is a little silly, and results in a lot of SVG advocacy and a lot of potential users suggesting to Apple places where they might shove their nonstandard hacks.

Well, it turns out that Canvas is indeed a standard- or at least a standard draft. Furthermore, it’s been implemented by the Gecko team, and happily runs in Deer Park. If you read the API, you notice that Canvas and SVG are really solutions to two completely different problems. SVG is a very complicated and very featureful scene graph built on XML, whereas Canvas looks more like a minimal vector drawing and raster compositing library for JavaScript. Canvas uses a simple immediate-mode interface for rendering to a bitmap, which makes it ideal for games or special effects, or for client-side image manipulation applications.

Canvas is so cool I had to abuse it. A while back I tried to render the Peter de Jong map in Javascript, basically making a very slow and crippled but very portable version of Fyre. Anything scene-graph-like, such as your usual DHTML tactics, would be disgustingly slow and memory-intensive. I ended up using Pnglets, a Javascript library that encodes PNG images entirely client-side. This worked, but was only slightly less disgustingly slow.
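The map itself is just two coupled trig equations iterated over and over: x' = sin(a*y) - cos(b*x), y' = sin(c*x) - cos(d*y). A sketch of the iteration plus a toy version of the histogram accumulation, in Python rather than Javascript for brevity (the parameter values in the test are just one classic example set):

```python
import math


def de_jong_points(a, b, c, d, n, x=0.0, y=0.0):
    """Iterate the Peter de Jong map and yield each point.

    x' = sin(a*y) - cos(b*x)
    y' = sin(c*x) - cos(d*y)
    Both coordinates always land in [-2, 2].
    """
    for _ in range(n):
        x, y = (math.sin(a * y) - math.cos(b * x),
                math.sin(c * x) - math.cos(d * y))
        yield x, y


def histogram(points, size=64):
    """Accumulate hit counts per pixel -- the histogram-rendering
    trick mentioned above, on a tiny grid for illustration."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        px = int((x + 2.0) * (size - 1) / 4.0)
        py = int((y + 2.0) * (size - 1) / 4.0)
        grid[py][px] += 1
    return grid
```

In the Canvas demo the equivalent of `grid` is composited straight into the bitmap each frame, which is what an immediate-mode API makes cheap and a scene graph makes painful.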

Anyway, the result of porting this little demo to Canvas was pretty neat. It’s still no speed demon, but it’s very impressive compared to the Pnglets version. It’s fast enough to be somewhat interactive, and it has at least basic compositing of the variety that Fyre had before we discovered histogram rendering. If you have Deer Park, Firefox 1.1, or a recent version of Safari you should be able to run the demo yourself.