Keep Working, Worker Bee!

12.23.2005

Ruby on Rails, macros, and code generation

Now that the quarter is over and my paper is submitted, I've had a little more breathing room, and one of the things I've been doing is looking into Ruby on Rails. I can hardly claim to be a Rails expert after the few hours I've spent tinkering with it, but one thing really struck me: Rails does an awful lot of code generation for you. One of the first things the beginners' tutorial tells you to do is run a battery of scripts that generate several Ruby files across several directories, all of them produced pretty much systematically but with minor variations based on the particulars of your application.
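For the curious, the commands in question look something like this (I'm going from memory of the Rails 1.0-era tutorial here, so the details may be slightly off):

    rails blog
    cd blog
    ruby script/generate controller Posts
    ruby script/generate model Post

The first command lays out the whole application skeleton; the generate scripts then drop stub files into app/controllers, app/models, app/helpers, and test/, each named and placed according to Rails's conventions.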

Of course, my immediate Schemey reaction was to think that Rails had yet again confirmed that those without macro systems are doomed to reinvent them (probably poorly). But after delving into it a little more, I'm not so sure I was right about that. I more or less randomly came across a blog post by David Heinemeier Hansson about why he thinks Rails deserves to be held in higher esteem than your average code-generation wizard. His basic argument: the Rails code generators only create stubs, not any real code, but they create the right stubs. New subclasses derive from the right base classes, even though they have no methods yet; they're put in the right directories; and they give you a good factoring from the beginning. He doesn't view it as a panacea, but he does think it's useful.
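To make that concrete, here's roughly what the generated stubs look like (again from memory, so treat this as a sketch rather than the generators' exact output):

    # app/controllers/posts_controller.rb, as generated:
    class PostsController < ApplicationController
    end

    # app/models/post.rb, as generated:
    class Post < ActiveRecord::Base
    end

Not a method in sight, but the superclasses and the file locations are exactly the ones Rails expects, which is Hansson's point.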

In essence, if I understand the argument correctly, the RoR scripts aren't there so much to take away the drudgery of writing repetitive code as to serve as executable documentation of proper Rails use. That sounds like a much more intriguing idea to me.

12.17.2005

Paper submitted

Well, it was looking a little unlikely at the end, but I'm happy to report that The Ins and Outs of type dynamic and Fine-Grained Interoperability by Matthews, Findler, Gray, and Flatt has been submitted. The name actually has a double meaning. One thing about working on a paper with Robby and Matthew is that if you come up with a dumb pun for the name of something, they will insist that it stay in — these are the guys who named MrEd, after all :).

This marks the first time I've submitted anything that includes my interoperability work. Exciting!

Anyway, I'm about to head out, so if I don't get back to you before then, I wish you all a very happy holiday!

[Yes, that was just for Bill O'Reilly's benefit.]

12.05.2005

Summing up computer science

So I've been pretty busy working on an ECOOP paper recently — more about that after it's actually done — but I thought I'd report on my class, now that I've given all the lectures and am just waiting for people's final projects. I got through all the material I had wanted to cover with a spare lecture to go, so I used it for a lecture I've wanted to give for a long time: building up all of computer science from transistors all the way to the Halting Problem.

I had an hour and a half to get through all that, so obviously I had to, err, summarize large portions of the material, but I still covered some of my favorite topics in a fair amount of detail. I showed how you'd build a NAND gate out of transistors (skipping the messy electrical bits like resistance, because I'm an electricity imbecile), then built up NOT, AND, OR, and XOR from NAND alone, and from there did latches and briefly touched on clocks and timing. From there we moved quickly to compilers and operating systems (both of which got only a glance), and then to applications like networking and databases.
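If you want to play along at home, the gate-building step translates directly into code. Here's a little Ruby sketch with booleans standing in for voltages:

    # The one primitive everything else is built from.
    def nand(a, b)
      !(a && b)
    end

    # NOT is NAND with its inputs tied together.
    def not_(a)
      nand(a, a)
    end

    # AND is NAND followed by NOT.
    def and_(a, b)
      not_(nand(a, b))
    end

    # OR, via De Morgan: a OR b = NOT(NOT a AND NOT b).
    def or_(a, b)
      nand(not_(a), not_(b))
    end

    # XOR in the classic four-NAND arrangement.
    def xor_(a, b)
      c = nand(a, b)
      nand(nand(a, c), nand(b, c))
    end

    # Quick truth-table check:
    [[false, false], [false, true], [true, false], [true, true]].each do |a, b|
      puts "#{a} XOR #{b} = #{xor_(a, b)}"
    end

(The trailing underscores are just to dodge Ruby's not/and/or keywords.)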

After that I pointed out that all these areas have a lot in common, and that database writers and operating-systems writers and so on shouldn't each have to independently figure out how fast their algorithms are or how to verify that they work. That let me start talking about algorithms and big-O notation. From there you can ask whether some problems inherently require a certain amount of time, which let me skim the proof that sorting a list of numbers takes Θ(n log n) comparisons when all you have is a comparison function. From there I was able to explain P vs. NP (though I think I sort of botched this part) and then ask whether there might be problems you can't solve at all, no matter how much time you have. That led me to the Halting Problem, and that's where I left it.
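Since the Halting Problem punchline compresses so nicely, here's the diagonalization sketched in Ruby. The halts? decider is hypothetical, of course; the whole argument is that nobody can actually write it:

    # Suppose, for contradiction, someone hands us a halting decider:
    # halts?(prog, input) returns true iff prog eventually halts on input.
    def halts?(prog, input)
      raise NotImplementedError # pretend this always returns true or false
    end

    # Build a program that does the opposite of whatever the decider
    # predicts it will do when fed its own source:
    confound = lambda do |prog|
      if halts?(prog, prog)
        loop { }  # predicted to halt? run forever instead
      else
        :done     # predicted to run forever? halt immediately
      end
    end

    # confound.call(confound) halts exactly when halts? says it doesn't,
    # so no correct halts? can exist.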

I think these sorts of big-picture lectures are really important in an intro class, particularly for a subject like computer science, where the shape of the discipline isn't obvious to outsiders. I'm sure the students came away with only a very superficial understanding of the subjects I was talking about, but that's what the rest of the major is for.