Tempest for ATSC

Recently I have become intrigued by the little 75-ohm coax plug on my plasma TV. To me, television programming isn’t worth nearly what Comcast charges for cable. I mostly watch DVDs, or television shows downloaded over the interweb. After reading about the fairly strong digital television coverage in the Bay Area, I decided to plonk down $35 on a little amplified UHF antenna.

It’s a bit finicky at times, but I can receive at least 3 or 4 channels in crisp digital-cable-like HDTV quality. It’s really amazing how much data you can pump over the airwaves with the right compression and modulation. What’s more amazing, though, is that the ATSC tuner gives me a much sharper picture than I get over component video. Sure, HDMI would be ideal, but my TV doesn’t seem content with your average DVI signal.

So, how hard would it be to generate my own ATSC signal? I’d effectively be piping MPEG-2 data directly into the television’s logic board. If it could be done cheaply, it could be a really revolutionary way for HTPC hobbyists to get high quality video into any modern digital television. After thumbing through the ATSC spec, it looks like nearly the entire modulation job could be done by a sufficiently powerful CPU. Indeed, the GNU Radio project already has code for a software-only ATSC modulator and demodulator. All you’d need would be a 3-bit or higher D/A converter running at the 10.76 MHz symbol rate, a modulator, and a couple of oscillators at very particular frequencies. Heck, this could be a $20 USB dongle.
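Out of curiosity, here’s a toy sketch of the very last stage of that pipeline. It skips everything the spec spends most of its pages on (the randomizer, Reed-Solomon coding, interleaving, trellis coding, and segment/field syncs) and just shows the basic idea: pack the bit stream into 3-bit symbols, map each symbol onto one of eight levels, and add the small DC offset that becomes the pilot tone.

    # Toy 8VSB symbol mapping, per my reading of ATSC A/53. The real
    # pipeline also needs randomization, Reed-Solomon coding,
    # interleaving, trellis coding, and sync insertion.

    SYMBOL_RATE = 10_762_238               # ~10.76 Msymbols/sec
    LEVELS = (-7, -5, -3, -1, 1, 3, 5, 7)  # one level per 3-bit symbol
    PILOT_OFFSET = 1.25                    # DC offset that forms the pilot

    def bytes_to_symbols(data):
        """Split a byte stream into 3-bit symbols, MSB first."""
        bits = nbits = 0
        for byte in data:
            bits = (bits << 8) | byte
            nbits += 8
            while nbits >= 3:
                nbits -= 3
                yield (bits >> nbits) & 0b111

    def modulate(data):
        """Baseband levels; an analog front end would filter these to a
        vestigial sideband and mix the result up to the channel."""
        return [LEVELS[s] + PILOT_OFFSET for s in bytes_to_symbols(data)]

    print(modulate(b"\xc5"))  # 0b11000101 -> symbols 6 and 1 -> [6.25, -3.75]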

Then, I happened upon what might be the coolest hack I’ve ever seen. Fabrice Bellard, the guy behind QEMU, wrote a software modulator for PAL, SECAM, and DVB-T. Sure, the GNU Radio project has had that for a while. The really novel part about this hack is that it uses the video card as both a DAC and an RF output stage!

Here on the other side of the pond from Fabrice, I only have NTSC and ATSC television equipment, but I felt like giving this hack a try nevertheless. No source code, unfortunately, but the images that result from the modulation process are available. I set up the PAL test image and stick an oscilloscope probe into the VGA port. Hey, this looks like it might be a valid video signal. Then I cart in a little 13″ television and start tuning around. I hit channel 7, and there’s a greyscale rendition of Lena! The vertical hold is almost nonexistent, but it sure ain’t bad for an NTSC television trying to display a PAL signal coming straight out of a GeForce 6200’s VGA port.

Then, I realize I haven’t even connected the other end of the coax cable. This whole time, my cheap scope probe has been acting as a transmitter antenna and the half-connected coax as a receive antenna. Lena has been travelling over about a foot of air. I plug the other end in, and the signal gets a whole lot less noisy.

It’s unfortunate that Fabrice didn’t include source, but I’m sure he has his reasons. The demo has inspired me, though. This isn’t the first time I’ve seen video outputs used in strange and unusual ways: long ago I coerced the LCD controller on a 68EZ328 (2nd generation Palm Pilot CPU) to generate a VGA signal. The horizontal and vertical blanking signals were painted right into the framebuffer. What really tickles me about this demo is that it’s exploiting the EMI given off by the card to modulate the finished RF signal.
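Boiled down, the trick works like this: pick a video mode whose dot clock and geometry line up with the target signal’s timing, and then sync pulses, porches, and picture data are just byte values you compute into the framebuffer for the card’s DAC to play back. Here’s a rough Python sketch of building such a buffer for a PAL-like greyscale signal. Every constant is an approximation, and a real implementation like Fabrice’s also has to handle vertical sync and the color subcarrier.

    # Rough sketch: each framebuffer row holds 64 us of composite video,
    # sampled at the VGA dot clock. All timing and level constants are
    # approximations for a PAL-like signal.

    DOT_CLOCK = 14_000_000          # assumed VGA pixel clock, samples/sec
    LINE = int(DOT_CLOCK * 64e-6)   # one 64 us scanline, in samples
    SYNC = int(DOT_CLOCK * 4.7e-6)  # ~4.7 us horizontal sync pulse
    PORCH = int(DOT_CLOCK * 8e-6)   # back porch before active video

    SYNC_LEVEL, BLANK_LEVEL = 0, 64  # 8-bit DAC codes; 255 would be white

    def scanline(luma):
        """One framebuffer row: sync tip, back porch, then picture."""
        active = LINE - SYNC - PORCH
        row = [SYNC_LEVEL] * SYNC + [BLANK_LEVEL] * PORCH
        # Stretch the source pixels to fill the active region.
        row += [luma[i * len(luma) // active] for i in range(active)]
        return row

    # A field is just stacked rows (vertical sync omitted in this toy);
    # point a suitable VGA mode at the buffer and the DAC does the rest.
    ramp = list(range(0, 256, 4))                 # greyscale test bars
    frame = [scanline(ramp) for _ in range(304)]  # roughly one PAL field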

Anyway, I think I’ll try my hand at an NTSC version of the demo… and eventually ATSC. I’m curious how much CPU power it would take to play back an MPEG stream in real-time this way. Worst case, it might require pre-processing to turn the MPEG stream into a raw stream of 8VSB codes. Could the modulation process be accelerated with fragment programs? This would definitely be the first time I’ve heard of OpenGL used to accelerate a radio.

Metadata as personal information

Hopefully all of you have seen this spiffy little tagging library that Christian and David are working on. One of the comments we hear about it is that tags should be system-wide, rather than per-user. Some people view tags as a global attribute of a file, much like a MIME type or its permissions. We view tags as something a user should be able to personally apply to any resource, whether they “own” it or not.

Well, the recent article on metadata in Vista would seem to support our position. Sure, the article is your typical alarmist journalism tripe, but within it is a kernel of truth: people are likely to use this type of metadata for assigning their own personal categories to a file. This data fundamentally belongs to the user. When the same file ends up in another user’s hands, the tags may no longer make sense. As the article points out, they may even be harmful.

Obviously it all depends on how you define “metadata”. MIME type, file size, an icon… these are all things that have traditionally been thought of as “belonging” to the file itself even if they aren’t stored with the file. This goes all the way back to the resource fork in MacOS, maybe further. Today this flavor of metadata is trying to reemerge through Linux extended attributes and Windows’ new filesystem. But why? MacOS has had it for decades. It was useful for a while, but in the internet age the flat file dominates. Everything has to fit into an HTTP session or a zip file, or it’s useless on the ‘net. Out-of-band metadata like file type, image size, and such has become redundant out of necessity: that information must be encoded within the file itself, or you can’t transmit it.

Where does this leave out-of-band metadata? Well, it’s perfect for attributes that don’t need to permanently follow a file around the internet. It’s perfect for personal data, and that’s exactly what this project is using it for.
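To make that concrete, here’s a little sketch of what personal tags layered on Linux extended attributes could look like. The user.tags.<username> naming scheme is something I made up for illustration; it’s not what the tagging library actually does.

    # Sketch of per-user tags stored in Linux extended attributes. The
    # attribute name scheme (user.tags.<username>) is invented here for
    # illustration only.

    import os
    import getpass

    def _attr_name():
        return "user.tags." + getpass.getuser()

    def set_tags(path, tags):
        """Attach this user's tags to a file as one comma-separated xattr."""
        os.setxattr(path, _attr_name(), ",".join(sorted(tags)).encode())

    def get_tags(path):
        """Read this user's tags back, or an empty set if none are stored."""
        try:
            return set(os.getxattr(path, _attr_name()).decode().split(","))
        except OSError:  # attribute not present, or xattrs unsupported
            return set()

    # set_tags("photo.jpg", {"vacation", "family"})
    # get_tags("photo.jpg")  ->  {'family', 'vacation'}

Amusingly, the user.* namespace requires write access to the file, so this scheme can’t tag resources you don’t own. That limitation is itself a decent argument for keeping per-user tags in the user’s own database rather than in the file.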

I ♥ apple

I just took my first pie out of the oven! It doesn’t really look like much. I left it in a tiny bit too long, so a little of the turbinado sugar topping burned, and I made the bottom crust a little small in places. But it smells great. I’ll know how it tastes tomorrow, when it’s cooled.

Mobile map inspiration

So, I’ve been itching to do something really cool with Python on Symbian Series 60. The first thought was a way to upload images directly from the phone to Gallery. Well, it still needs some polishing, but I wrote most of it at the last SVLUG hackfest. Right now it takes a picture with the phone’s camera, saves it locally, then beams it off directly to a Gallery server over GPRS, via an HTTP proxy and the gallery-remote protocol. Unfortunately, the Python module for the camera doesn’t give you a lot of control. A much more practical (hah!) solution would be to have the script send everything it finds in an ‘outbox’ directory: you just save images there with any camera app, then upload them at your convenience by running a simple script, sketched below.
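In sketch form, the outbox idea is tiny. The upload endpoint, headers, and proxy address below are all placeholders rather than the real gallery-remote protocol, but the shape is right: walk the directory, POST each file through the proxy, and delete it once the server accepts it.

    # Sketch of the 'outbox' uploader. The Gallery endpoint and headers
    # are placeholders; the real gallery-remote protocol has its own
    # commands and form fields.

    import os
    import urllib.request

    OUTBOX = "E:/Images/outbox"               # MMC card path on the phone
    UPLOAD_URL = "http://example.com/upload"  # hypothetical endpoint
    PROXY = {"http": "http://10.0.0.1:8080"}  # GPRS operators often need one

    opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXY))

    def flush_outbox():
        for name in os.listdir(OUTBOX):
            path = os.path.join(OUTBOX, name)
            with open(path, "rb") as f:
                req = urllib.request.Request(UPLOAD_URL, data=f.read(),
                    headers={"Content-Type": "application/octet-stream",
                             "X-Filename": name})  # placeholder header
                opener.open(req)                   # raises on HTTP errors
            os.remove(path)                        # uploaded OK; clear it

    # flush_outbox()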

Anyway, while that was kinda fun to write, it wasn’t really as interesting as I’d hoped. This might just be due to the extreme suckiness of phone cameras. Yesterday I found something much cooler. For a while now I’ve been interested in getting maps on my mobile devices. Google Maps, of course, seems the obvious solution. Mobile web browsers aren’t fancy enough yet to support the latest AJAX applications, but I’d want a small-screen-tweaked UI anyway.

Well, MGmaps to the rescue, right? It’s a pretty spiffy app. Unfortunately, being written in Java, it’s kinda sluggish and not readily hackable. I’d like to have it make use of my phone’s 512MB MMC card to keep a disk cache of map tiles; doing all the browsing over a slow GPRS link with very little cache is hardly fun or useful.

Yesterday I stumbled across a Nokia forum post with a literally 100-line Python app to browse Google Maps online. It has a lot of rough edges: it leaves drawing artifacts while loading, I had to hack it a bit to support HTTP proxies, and its ‘cache’ will use an unbounded amount of RAM. But it makes a great proof of concept and a great inspiration. I’d love to write a similar app with better cache management, a more extensible and maintainable architecture, and better responsiveness while downloading images.
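The cache layer is the part I keep sketching in my head: a small bounded LRU of tiles in RAM, backed by an unbounded store on the MMC card, so panning back over an area never costs a second GPRS round trip. Something like this, with the tile server URL and paths as placeholders:

    # Two-level tile cache sketch: bounded LRU in RAM, unbounded disk
    # cache on the MMC card. The tile URL and paths are placeholders.

    import os
    from collections import OrderedDict
    from urllib.request import urlopen

    DISK_CACHE = "E:/maps"  # MMC card mount point on the phone
    RAM_TILES = 16          # keep only a handful of tiles in RAM

    class TileCache:
        def __init__(self):
            self.ram = OrderedDict()  # (zoom, x, y) -> PNG bytes, LRU order
            os.makedirs(DISK_CACHE, exist_ok=True)

        def get(self, zoom, x, y):
            key = (zoom, x, y)
            if key in self.ram:            # RAM hit: refresh LRU position
                self.ram.move_to_end(key)
                return self.ram[key]
            path = os.path.join(DISK_CACHE, "%d_%d_%d.png" % key)
            if os.path.exists(path):       # disk hit: no network needed
                with open(path, "rb") as f:
                    data = f.read()
            else:                          # miss: one GPRS round trip
                url = "http://example.com/tile?z=%d&x=%d&y=%d" % key
                data = urlopen(url).read()
                with open(path, "wb") as f:
                    f.write(data)
            self.ram[key] = data
            if len(self.ram) > RAM_TILES:  # evict least-recently-used tile
                self.ram.popitem(last=False)
            return data

    # cache = TileCache()
    # png = cache.get(12, 655, 1583)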

If I do go through with writing such an app, I’ll finally be using Python to bring a Keyhole-like system to devices you always have handy. I shall call it “pyhole”.

Fighting entropy

I’m still continuing the gradual process of acquiring furniture and dispersing any large piles of junk from my living room. I finally got to pick up my coffee table yesterday, which I ordered about two weeks ago. I also found a cabinet to hold my server and A/V equipment. It’s garage-grade furniture, but still fairly nice. It’s not something I mind taking a hole saw to so I can run cables 😉

So, I obviously still need a TV stand. I’m waiting until I’m a little less disgusted with the prices and suboptimal dimensions of the ones I’ve seen. Don’t let the pictures fool you too much: there’s still a good-sized junk pile behind the camera.

I’ll procrastinate after I pick your browser up off the floor

I’ve been making slow progress on packing today: got all my books boxed up, along with many of my less fragile electro-widgets and such. This type of behaviour leads to procrastination, naturally.

I’ve been running the Deer Park Alpha 2 release of Firefox for a couple days. It does seem to be faster at DHTML, though I don’t have any of the really heavy-duty Javascript I wrote for Destiny M&M handy for benchmarking purposes. The coolest features destined to end up in Firefox 1.1, from my point of view, are SVG support and the “canvas” element.

Canvas is a very misunderstood HTML extension. It’s a new element that Apple invented mostly to make it easier to implement Dashboard. That part of the story is a little silly, and results in a lot of SVG advocacy and a lot of potential users suggesting to Apple places where they might shove their nonstandard hacks.

Well, it turns out that Canvas is indeed a standard, or at least a standard draft. Furthermore, it’s been implemented by the Gecko team, and happily runs in Deer Park. If you read the API, you notice that Canvas and SVG are really solutions to two completely different problems. SVG is a very complicated and very featureful scene graph built on XML, whereas Canvas looks more like a minimal vector drawing and raster compositing library for JavaScript. Canvas uses a simple immediate-mode interface for rendering to a bitmap, which makes it ideal for games, special effects, or client-side image manipulation applications.

Canvas is so cool I had to abuse it. A while back I tried to render the Peter de Jong map in Javascript, basically making a very slow and crippled but very portable version of Fyre. Anything scene-graph-like, such as your usual DHTML tactics, would be disgustingly slow and memory-intensive. I ended up using Pnglets, a Javascript library that encodes PNG images entirely client-side. This worked, but was only slightly less disgustingly slow.
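For reference, the iteration at the heart of all this is tiny. Here’s the Peter de Jong map sketched in Python, accumulating hit counts on a grid (essentially the histogram rendering Fyre later moved to); the parameters are arbitrary, and exploring them is half the fun.

    # The Peter de Jong map: each iteration maps one point to the next,
    # and plotting enough visited points traces out the attractor.

    import math

    def de_jong(a, b, c, d, n=200_000, size=256):
        hits = [[0] * size for _ in range(size)]
        x = y = 0.0
        for _ in range(n):
            x, y = (math.sin(a * y) - math.cos(b * x),
                    math.sin(c * x) - math.cos(d * y))
            # Both coordinates stay in [-2, 2]; scale them to pixels.
            px = int((x + 2.0) * (size - 1) / 4.0)
            py = int((y + 2.0) * (size - 1) / 4.0)
            hits[py][px] += 1
        return hits

    # Parameters picked arbitrarily; Fyre lets you explore this space.
    grid = de_jong(1.4, -2.3, 2.4, -2.1)
    print(sum(map(sum, grid)))  # total plotted points: 200000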

Anyway, the result of porting this little demo to Canvas was pretty neat. It’s still no speed demon, but it’s very impressive compared to the Pnglets version. It’s fast enough to be somewhat interactive, and it has at least basic compositing of the variety that Fyre had before we discovered histogram rendering. If you have Deer Park, Firefox 1.1, or a recent version of Safari, you should be able to run the demo yourself.