The Megahertz Challenge
I'm not normally one for New Year's resolutions, but I think I've finally come up with a good one.
Over Christmas I did some Python prototyping for one of my personal projects. I repeatedly caught myself using silly, lazy algorithms that burnt massive amounts of CPU time (for example, 20% of one i7 core just to redraw the screen of a trivial textmode program). Now, I knew I was writing the thing naively and understood that it was going to be a bit wasteful, but I didn't realize how inefficient my approach was until I explicitly opened a CPU monitor to check. Good thing I did, because I couldn't "feel" the performance hit whatsoever. My laptop just soaked up the abuse and kept trucking.
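To make the "silly, lazy algorithm" concrete: the lazy version repaints every screen cell on every tick, while a diff against the previous frame touches only what changed. This is a toy sketch (hypothetical names, not my actual prototype) that just counts cell writes per frame under each strategy:

```python
# Toy model: count how many cell writes each redraw strategy performs
# per frame. Not my actual program -- just an illustration of the waste.

ROWS, COLS = 25, 80  # a classic textmode screen

def naive_redraw(old, new):
    """Repaint every cell, every frame. Returns number of cell writes."""
    return ROWS * COLS

def dirty_redraw(old, new):
    """Repaint only the cells that differ from the previous frame."""
    return sum(1 for o, n in zip(old, new) if o != n)

# Simulate a frame where only a one-row status bar changed.
old_frame = [" "] * (ROWS * COLS)
new_frame = list(old_frame)
for i in range(COLS):  # one row of changed cells
    new_frame[i] = "#"

naive_redraw(old_frame, new_frame)  # 2000 writes, frame after frame
dirty_redraw(old_frame, new_frame)  # 80 writes for the same visual result
```

Run that in a loop at 30 fps and the naive version does 25x the work for an identical picture, which is roughly the kind of abuse my laptop was silently soaking up.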
This bothered me.
Hardware has gotten INCREDIBLY FAST: even cheap hardware, but especially the kind of machines that software developers use. The faster the code-compile-test cycle, the more productive a developer becomes, and we select our hardware accordingly: even the ultrabooks of choice have i7s and SSDs these days. At software companies (at least the good ones, where management is well aware that dev time is expensive and hardware is cheap), things get even more INSANELY OVERSPECCED, and soon there's a multi-Xeon'd baby-server behemoth on every desk and that silly laptop looks like a Tamagotchi by comparison.
Is this good? Well, yes. But if you're not careful, it bites you. You code-compile-test, notice everything seems nice and snappy, and commit it. Later, it is the user, with his dusty old Celeron, who is left to wonder why desktop and web applications are becoming so curiously slow.
(...Or your cow-orker catches it on code review, or automatic performance tests flag it, or any number of things, yeah, yeah. That's fine for work, where you should have all that tooling set up. But is your workplace really that organized? Do you think the open-source projects you use every day have that kind of development rigor? And what about your personal projects?)
Now, "the sky is falling" is not a new or particularly unique sentiment. People have been griping about software bloat for as long as software has existed. Yes, there are other factors, and no, I don't want to change the world. But perhaps it's time for a nice, arbitrary, line in the sand.
Remember that "20% of an i7 core" that my slow program was wasting? THAT'S MORE THAN AN ENTIRE RASPBERRY PI 2. For context, we're talking a 900MHz quadcore ARM here. Simply wasted.
Surely 999 MHz should be enough for anyone?
Without further ado, THE MEGAHERTZ CHALLENGE.
If the clock speed is measured in GHz, I CAN'T USE IT.
I want to prove (to myself) that a little elbow grease can defeat Wirth's Law: that "software gets slower faster than hardware gets faster".
The challenge is difficult enough to be satisfying, but not impossible. It will force me to learn new (and old) technologies if I want to keep doing the things I take for granted today.
I may also be a little inspired by this guy.
How long will it last?
As long as I can take it.
When do you start?
Tentatively, 2016-01-11 (Mon).
What are the exceptions?
- My job, obviously.
- I don't have a landline, so I can't avoid using my (>2 GHz) cell phone. I will partially compensate by disabling mobile data and pretending it's a dumb "feature phone".
- 10 minutes of "cheating" per day. Long enough for transferring files that would otherwise be "marooned" on forbidden devices, but not much else.
I have no idea if this is a good idea. That's what makes it a good idea.
I love my job because it doesn't let me get away with this shit. Developing for an embedded platform keeps you honest. You can't just tell the customer to get a better CPU.
I don't claim to be good at benchmarking, but here goes. I found a Python program that computes the first 1,000,000 primes and ran it on my i7 laptop, then compared the result to published timings for the same program running on the rpi2.
- The rpi2 takes 490 seconds.
- My laptop did it in 10 seconds.
This program is single-threaded and only exercises one core. But even if we charitably divide the Pi's time across all four of its cores, 490/4 ≈ 122 seconds is still far more than 10/0.2 = 50 seconds, the time the job would take running on just the 20% of an i7 core I was wasting.
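For reference, this is the kind of program in question: a stand-in I wrote for illustration (not the exact script I benchmarked, so timings will differ), using a sieve of Eratosthenes with the standard upper bound p_n < n(ln n + ln ln n) for n ≥ 6:

```python
import math

def first_n_primes(n):
    """Return the first n primes using a sieve of Eratosthenes.

    The sieve limit comes from the known bound p_n < n(ln n + ln ln n)
    for n >= 6; below that, 15 is a safe hard-coded limit.
    """
    if n < 6:
        limit = 15
    else:
        limit = int(n * (math.log(n) + math.log(math.log(n)))) + 1
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Knock out every multiple of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i in range(limit + 1) if sieve[i]][:n]

# Usage: first_n_primes(1_000_000)[-1] == 15_485_863 (the millionth prime).
```

Wrap the call in `time.perf_counter()` on your own machine if you want to reproduce the comparison; absolute numbers will vary with CPU, Python version, and the exact algorithm used.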