Mobile map inspiration

So, I’ve been itching to do something really cool with Python on Symbian Series 60. The first thought was a way to upload images directly from the phone to Gallery. Well, it still needs some polishing, but I wrote most of it at the last SVLUG hackfest. Right now it takes a picture with the phone’s camera, saves it locally, then beams it off to a Gallery server over GPRS, via an HTTP proxy and the gallery-remote protocol. Unfortunately, the Python camera module doesn’t give you a lot of control. A much more practical (hah!) solution would be to have the script send everything it finds in an ‘outbox’ directory: you save images there with any camera app, then upload them at your convenience by running a simple script.
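
Just to sketch the outbox idea, here’s roughly what I have in mind, written as plain desktop Python rather than anything PyS60-specific. The directory, URL, proxy address, and the lazy raw-POST upload are all placeholders; the real gallery-remote protocol wants a proper form-encoded request.

    import os
    import urllib.request

    # Placeholders only; none of these are the real gallery-remote endpoints.
    OUTBOX = "E:\\Images\\outbox"                       # hypothetical folder on the MMC card
    UPLOAD_URL = "http://gallery.example.com/upload"    # hypothetical upload endpoint
    PROXIES = {"http": "http://proxy.example.com:8080"} # the GPRS link goes through a proxy

    def upload_outbox():
        """Send every image waiting in the outbox, clearing the ones that make it."""
        opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXIES))
        for name in sorted(os.listdir(OUTBOX)):
            path = os.path.join(OUTBOX, name)
            with open(path, "rb") as f:
                data = f.read()
            # The real protocol is a form-encoded POST; a raw upload keeps the sketch short.
            request = urllib.request.Request(
                UPLOAD_URL,
                data=data,
                headers={"Content-Type": "application/octet-stream", "X-Filename": name},
            )
            with opener.open(request) as response:
                if response.status == 200:
                    os.remove(path)  # only clear it from the outbox on success

    if __name__ == "__main__":
        upload_outbox()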

Anyway, while that was kinda fun to write, it wasn’t really as interesting as I’d hoped. This might just be due to the extreme suckiness of phone cameras. Yesterday I found something much cooler. For a while now I’ve been interested in getting maps on my mobile devices. Google Maps, of course, seems the obvious solution. Mobile web browsers aren’t fancy enough yet to support the latest AJAX applications, but I’d want a small-screen-tweaked UI anyway.

Well, MGmaps to the rescue, right? It’s a pretty spiffy app. Unfortunately, being written in Java, it’s kinda sluggish and not readily hackable. I’d like to have it make use of my phone’s 512MB MMC card to keep a disk cache of map tiles. Doing all the browsing over a slow GPRS link with very little cache is hardly fun or useful.

Yesterday I stumbled across a Nokia forum post with a Python app, literally 100 lines long, that browses Google Maps online. It has a lot of rough edges: drawing artifacts while it’s loading, no HTTP proxy support until I hacked some in, and a ‘cache’ that will use an unbounded amount of RAM. But it makes a great proof of concept and a great inspiration. I’d love to write a similar app with better cache management, a more extensible and maintainable architecture, and better responsiveness while downloading images.
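
The cache management piece is the part I keep sketching in my head: a small LRU cache in RAM that spills evicted tiles onto the memory card instead of throwing them away. Something vaguely like this, where the paths, the RAM limit, and the (zoom, x, y) keying are all made up for illustration:

    import collections
    import os

    class TileCache:
        """A bounded tile cache: a small LRU in RAM, spilling evictions to the memory card.

        The paths, the RAM limit, and the (zoom, x, y) keying are illustrative;
        a real app would fetch cache misses over GPRS and feed them back in with put().
        """

        def __init__(self, disk_dir="E:\\maps\\cache", ram_limit=32):
            self.disk_dir = disk_dir
            self.ram_limit = ram_limit
            self.ram = collections.OrderedDict()  # key -> tile bytes, in LRU order
            os.makedirs(disk_dir, exist_ok=True)

        def _disk_path(self, key):
            zoom, x, y = key
            return os.path.join(self.disk_dir, "%d_%d_%d.png" % (zoom, x, y))

        def get(self, key):
            if key in self.ram:                   # RAM hit: mark as most recently used
                self.ram.move_to_end(key)
                return self.ram[key]
            path = self._disk_path(key)
            if os.path.exists(path):              # disk hit: promote back into RAM
                with open(path, "rb") as f:
                    data = f.read()
                self.put(key, data)
                return data
            return None                           # miss: caller downloads, then calls put()

        def put(self, key, data):
            self.ram[key] = data
            self.ram.move_to_end(key)
            if len(self.ram) > self.ram_limit:
                old_key, old_data = self.ram.popitem(last=False)  # evict the LRU tile
                with open(self._disk_path(old_key), "wb") as f:
                    f.write(old_data)             # keep it on the card instead of losing it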

If I do go through with writing such an app, I’ll finally be using Python to bring a Keyhole-like system to devices you always have handy. I shall call it “pyhole”.

Fighting entropy

I’m still continuing the gradual process of acquiring furniture and dispersing any large piles of junk from my living room. I finally got to pick up my coffee table yesterday, which I ordered about two weeks ago. I also found a cabinet to hold my server and A/V equipment. It’s garage-grade furniture, but still fairly nice. It’s not something I mind taking a hole saw to so I can run cables 😉

So, I obviously still need a TV stand. I’m waiting until I’m a little less disgusted with the prices and suboptimal dimensions of the ones I’ve seen. Don’t let the pictures fool you too much; there’s still a good-sized junk pile behind the camera.

Let the pixels flow

My posts have been pretty light on photos recently. This is mostly just because I haven’t been taking a whole lot of pictures, but even when I had plenty, they were safely quarantined behind Gallery rather than flowing freely over my blog and all the sites that aggregate it.

I admit, this photo has been slightly retouched. It was necessary to carefully adjust the gamma and color levels on this image to accurately capture reality, as my poor camera is just not capable of registering the sheer depth of this house’s color. I would not be surprised in the least if this house inspired the entire visual style of Edward Scissorhands.

I crossed paths with an older man just after taking this picture. He mentioned to me that the couple responsible for building this lovely little monstrosity died within just a few months of each other, and the house is for sale for a mere $1.3M or so. Of course, nobody’s buying it, because it sits just north of the intersection of Sunnyvale Ave. and El Camino Real, not a particularly quiet road.

In other news, I’m at the Intel Developer Forum today, yesterday, and tomorrow. VMware had some extra tickets, and apparently my job isn’t time-critical enough that anyone thought twice about sending me. I apologize for taking almost no pictures of this event. There’s been a lot of cool hardware and a lot of slides with timing diagrams and artists’ conceptions of CPU architectures. Compared to SIGGRAPH, it’s quite visually dull. My only photo to report at the moment is of the expo floor, during their geekiness contest that mostly involved a time trial at assembling and configuring various computers using Intel ™ Technologies ™.

The highlight of my experience yesterday was the pair of sessions on Wireless USB. I’ve heard mention of the WUSB standard for a while now, but this was the first time I’ve heard details on the architecture. It’s great seeing them preserve nearly all software compatibility with USB, while changing some fundamental aspects of the protocol to keep radio link utilization high. Best of all, there was a whole section on WUSB on the expo floor featuring a lot of very preliminary but functional hardware. I can’t wait for my $50 wireless EZUSB development kit in a few years.

I spent most of my time today in virtualization sessions. For those of you who haven’t been following the latest hype, Intel’s upcoming processors include an extension called VT-x, which basically handles virtualization of the CPU itself in hardware. This means that instead of doing lots of very complicated dynamic binary translation, like VMware traditionally does, the hardware is engineered to place the virtual machine monitor software in a completely new privilege level and fairly quickly swap out the entire CPU state when switching VMs.

It’s neat stuff, no doubt, but it’s only a small piece of the pie. It’s just a little annoying to see nearly all of Intel’s demo machines running VMware, but very little mention of the nontrivial tasks the VMM still has to perform using software like ours. Even with hardware support for CPU virtualization, the VMM still needs to virtualize memory and some, if not all, I/O devices. Intel has plans for virtualization beyond just the CPU itself, but it will be a long, long time before that comes close to the level of support VMware already provides in software.

Ah well. I’m curious what the virtual machine monitor folks at VMware have been doing with all this. I’d like to say I have a lot of inside information that I can’t share with you, but I really don’t know much at this point that didn’t become public at IDF today. I’ve been over in I/O-emulation-land where little of this is going to matter for at least a couple years.

I’ll procrastinate after I pick your browser up off the floor

I’ve been making slow progress on packing today: all my books are boxed up, along with many of my less fragile electro-widgets and such. This type of behaviour leads to procrastination, naturally.

I’ve been running the Deer Park Alpha 2 release of Firefox for a couple days. It does seem to be faster at DHTML, though I don’t have any of the really heavy-duty JavaScript I wrote for Destiny M&M handy for benchmarking purposes. The coolest features destined to end up in Firefox 1.1, from my point of view, are SVG support and the “canvas” element.

Canvas is a very misunderstood HTML extension. It’s a new element that Apple invented mostly to make it easier to implement Dashboard. That part of the story is a little silly, and results in a lot of SVG advocacy and a lot of potential users suggesting to Apple places where they might shove their nonstandard hacks.

Well, it turns out that Canvas is indeed a standard, or at least a draft standard. Furthermore, it’s been implemented by the Gecko team, and it happily runs in Deer Park. If you read the API, you’ll notice that Canvas and SVG are really solutions to two completely different problems. SVG is a very complicated and very featureful scene graph built on XML, whereas Canvas looks more like a minimal vector drawing and raster compositing library for JavaScript. Canvas uses a simple immediate-mode interface for rendering to a bitmap, which makes it ideal for games, special effects, or client-side image manipulation applications.

Canvas is so cool I had to abuse it. A while back I tried to render the Peter de Jong map in JavaScript, basically making a very slow and crippled but very portable version of Fyre. Anything scene-graph-like, such as your usual DHTML tactics, would be disgustingly slow and memory-intensive. I ended up using Pnglets, a JavaScript library that encodes PNG images entirely client-side. This worked, but was only slightly less disgustingly slow.

Anyway, the result of porting this little demo to Canvas was pretty neat. It’s still no speed demon, but it’s very impressive compared to the Pnglets version. It’s fast enough to be somewhat interactive, and it has at least basic compositing of the variety that Fyre had before we discovered histogram rendering. If you have Deer Park, Firefox 1.1, or a recent version of Safari, you should be able to run the demo yourself.
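
For anyone wondering what the demo actually computes: the Peter de Jong map is just a pair of trig recurrences iterated from an arbitrary starting point. Here’s a little Python sketch of the idea; the parameter values are arbitrary, and where Fyre accumulates the points into a histogram, the Canvas demo just composites them straight into the bitmap.

    import math

    def de_jong_points(a, b, c, d, n=100000):
        """Iterate the Peter de Jong map, yielding each visited point."""
        x, y = 0.0, 0.0
        for _ in range(n):
            x, y = (math.sin(a * y) - math.cos(b * x),
                    math.sin(c * x) - math.cos(d * y))
            yield x, y

    def render(a=1.4, b=-2.3, c=2.4, d=-2.1, size=64):
        """Accumulate hits into a coarse histogram, the way Fyre's renderer does."""
        grid = [[0] * size for _ in range(size)]
        for x, y in de_jong_points(a, b, c, d):
            # Every iterate lands in [-2, 2] on both axes, so scale onto the grid.
            col = int((x + 2.0) / 4.0 * (size - 1))
            row = int((y + 2.0) / 4.0 * (size - 1))
            grid[row][col] += 1
        return grid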