Mark Damon Hughes Topic: Software [Parental Advisory: Explicit Lyrics] [about]
Android's Lethal Problems
Sat, 2009Apr18 12:50:04 PDT
in Software by kamikaze

Android has 3 lethal problems:

  1. Poor usability; open-source projects can make nice screenshots, but never make usable software.

    "Design is not just what it looks like and feels like. Design is how it works."
    — Steve Jobs

    This is the classic Linux problem, and none of them understand this, or how to fix it. The newer versions of KDE look pretty, but are miserably painful to use, and other window managers and desktop managers for Linux are even harder to use (I was fond of WindowMaker, a clone of parts of NeXTstep, but it was very primitive and lacked even a real file manager).

  2. Multiple platforms and implementors and versions, so apps can't rely on consistent hardware or OS.

    This is what killed J2ME; you had to build and test on 20-30 combinations of phones and carriers to release any software. This is part of what keeps Nokia and Blackberry from having significant 3rd-party apps, each app only works on some subset of the phones. Classic Palm OS managed backward compatibility most of the time, but even so, the Palm V and Treo couldn't run some Palm III apps.

    Android is not what anyone would consider "release quality" software yet, it's still beta. The API and implementation are going to change, and change in incompatible ways, and the existing software will break.

  3. Conflict between business interests and the freetards will sour the freetards on it.

    (By "freetard", I mean specifically the FSF, the EFF who have been completely infiltrated by the FSF, and associated fanboys; not everyone who uses or even develops on Linux is a freetard, but almost all freetards use Linux.)

    This has already occurred, with tethering apps being removed from the G1's app store, and the freetards screaming "TRAITORS! UNCLEAN!" at them. It's only going to get worse, because business and real customers want a stable platform, while the freetards feel they have the right to do anything they want with no regard for cost to others.

I expect Android will be around for a couple years as a crappy second-rate system for non-Apple phones, then die.

Should a license be required to use IDEs?
Sat, 2009Feb14 20:50:05 PST
in Software by kamikaze

Not everyone loves Xcode with the burning, still-illegal-in-49-states passion that I do. Some people still like Eclipse. I know, I know. No, really, stop laughing, it's a serious mental illness. A mental illness that AlBlue has.

Almost everything he says is simply wrong, and shows that he hasn't even read the docs and learned to use Xcode.

  • The correct spelling is "Xcode". Not "XCode".
  • The editor window can be vertically split with the vertical split icon above the scrollbar, and can show different files.
  • To show multiple windows, double-click the file, or change back out of "All-in-One" display mode.
  • To flip from .h to .m, use Cmd-Opt-Up or the "Go To Counterpart" icon in the editor.
  • Autocompletion relies on having a correct type. NSArray *foo; and id foo; will give different autocompletes because Objective-C is a dynamic language.
  • Xcode has had refactoring since 3.0, and it works rather well, including safely correcting your NIB files.
  • You can Cmd-double-click a class to jump to its definition, or Opt-double-click to open its documentation.
  • The documentation can be shown in full either in the Help|Documentation window, or by turning on the Research Assistant, which gives a floating window with constantly-updated API information and sample code for whatever type is selected. Seems to work fine for me.
  • The doc set size is fairly large, but it's also significantly better documentation than the 56MB Java 6 docs. Look inside the JDK download: it's full of example code, too. You'd just whine like a high-pitched whiny thing if there were no examples.

If you want to use Eclipse, that's fine (though stupidly masochistic), but lying or failing to check your statements just makes you look like a fool.

Stupid Comments Be Gone!
Sat, 2009Feb07 16:10:00 PST
in Software by kamikaze

People who write comments on YouTube and Flickr are, with very few exceptions, idiots. It irritates me to have a page half-full of stupid comments when I just want to watch a video or look at a photo... Neither site has any option to remove the comments. So, how do I eliminate them?

The following works in Safari:

Visit the generator, hit submit, and you'll get a customized style sheet.

Save this file as "stupidcomments.css" in your Documents folder.

In Safari's Preferences | Advanced, change the style sheet to Other..., and pick "stupidcomments.css".

Now reload those YouTube or Flickr pages, and voilà! No more comments from idiots! If you ever do want to see them, you can open preferences and change the style sheet to "None Selected" briefly, then put it back.
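For the curious, the generated file boils down to a handful of `display: none` rules. The selectors below are illustrative stand-ins, not the generator's actual output (the real file targets each site's actual markup):

```css
/* Hide the comment containers wholesale. These selector names are
   hypothetical examples, not YouTube's or Flickr's real element IDs. */
#comments,
.comments-section,
#DiscussArea {
    display: none !important;
}
```

The `!important` is what lets a user stylesheet win against the site's own rules.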

[Updated 2009-07-02: Moved to a file, not just a blog post.]

[Updated 2010-08-07: Generator form lets you pick the parts you like.]

Python 3000
Fri, 2008Dec05 00:38:45 PST
in Software by kamikaze

Python 3000 came out, 992 years ahead of schedule!

# dowdy old Python 2.5:
def something(n):
    print "S %d\tD %d" % (n, n+1)


# fancy new Python 3.0
def something(n: int):
    print("♠{0}\t♢{1}".format(n, n+1))

Not a useful example, but it shows four of my favorite new features: annotated function parameter types, print as a function, format as a string method, and full Unicode strings. The string in the first example is ASCII-only; the one in the second is full Unicode, and all Python 3.0 scripts are assumed to be encoded as UTF-8.

It's still probably not something you'd want to use on existing production code, even with the 2to3 migration tool; it's not backwards-compatible, and it's not as efficient yet. Python 2.6 has most of the new features in a backwards-compatible form, and remains one of the fastest dynamic languages. For new code, though, Python 3.0 is looking pretty cool.
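Python 2.6's back-ported form is worth knowing about: you can opt in to the 3.0-style print function today, so the same code runs unchanged on both versions. A small sketch (ASCII suits, to stay 2.x-safe):

```python
# On Python 2.6+ this __future__ import turns print into a function;
# on 3.x it's a harmless no-op, so this file runs on either.
from __future__ import print_function


def something(n):
    # "{0}"-style positional format fields work on both 2.6 and 3.0
    print("S {0}\tD {1}".format(n, n + 1))


something(3)
```

This is the easy way to write code now that won't break when you finally migrate.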

Perl, however, is in trouble. Perl 5 came out on 1994-Oct-17. Perl 6 was announced on 2000-Jul-19, and still isn't out. The Parrot VM designed for it doesn't actually run Perl 6 code yet. The whole project has been one of unfocused nerds doing nothing productive for years.

The soi-disant "Ovid" writes, in Perl 5 Programmers Are Dying, about the increasing difficulty of hiring qualified Perl programmers. This should be obvious, but the reason he can't find qualified Perl programmers is that all the qualified programmers have moved to other languages, and new ones aren't bothering to learn such an old and awkward language. Redesigning and advertising the Perl site better isn't going to change the situation. Where did all those Ruby users come from? I'm no fan of Ruby, but it's a better Perl than Perl 5, and it's in active development.

Pownce Pwned
Mon, 2008Dec01 18:41:00 PST
in Software by kamikaze

Pownce is dead. It's probably fairer to say it never reached life, compared with Twitter or even Friendfeed.

Now, why did Twitter win and Pownce fail so badly? I think it boils down to three factors:

Twitter is simple

Twitter has (almost) the absolute bare minimum of features:

  • One-way links to other people you find interesting; they don't have to "approve" your friendship, and it doesn't have to be mutual.
  • Posting a comment is as simple as typing up to 140 chars and hitting Send. No subject, no categories, nothing else.
  • Features by convention, not user interface: @NAME to reply, #TAG to mark with a subject tag, d NAME to send a direct message, with no additional user interface needed.
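Those conventions are simple enough that a few lines of code can pick them out. A rough sketch; the regexes are my own approximation, not Twitter's actual parsing rules:

```python
import re


def classify(tweet):
    """Classify a message per the plain-text conventions:
    'd NAME ...' is a direct message; otherwise collect @mentions and #tags.
    (An approximation for illustration, not Twitter's real parser.)"""
    if tweet.startswith("d "):
        return ("direct", tweet.split(" ", 1)[1])
    mentions = re.findall(r"@(\w+)", tweet)
    tags = re.findall(r"#(\w+)", tweet)
    return ("public", mentions, tags)
```

That's the whole point: the "features" live in the text itself, so any client, SMS gateway, or script can implement them.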

SMS is useful, sometimes, but perhaps an unnecessary distraction from Twitter's real role. But using SMS forced the limit to 140 chars, which shaped the interaction into short, zero-effort bursts, rather than long blog posts like this.

Twitter has no built-in file-sharing or picture support or comment threading, or anything else. There's no gender, relationship status, horoscope sign, or message wall. Just a (mostly) one-way series of short messages. And that's all we needed. The rest can be built on the side, like TwitPic, with URLs we paste in.

Twitter was targeted at the right people

Twitter was initially aimed (or spread virally) very strongly at adults who were in technology or social media. It's not a tool for kids (MySpace) or college students (Facebook). That's smart, because we're the early adopters who drag our friends and family into these crazy things, and are actively looking for people to connect with, but maybe don't feel comfortable with the whole "we must both be friends to read each other's stuff" notion. Facebook requires too much intimacy.

Friendfeed chased the pro blogger market, like Dave Winer. A fair number of people use it as an aggregator for several social services. It has value for them, but it's too ugly and complicated to screw around with on a regular basis. Worse, it's a source of spam if you let it announce posts from one service (like your blog, or Friendfeed itself) to another (like Twitter). I immediately unfollow or even block anyone who spams Twitter from Friendfeed (yeah, you, Mr. Winer).

By comparison, Pownce had no visibly interesting demographic. There was nobody there I wanted to talk to.

Twitter fixed its reliability problems

A year ago, Twitter would show the Failwhale all the time. Update rate was choked to almost nothing. It was clearly dying.

And then... they stopped using Ruby on Rails. The failwhales have become almost extinct. Twitter is now faster and more reliable than any other social network; not the highest bar I'm setting there, but it's an accomplishment.

As Pownce started to deal with scaling, it just fell over. A lot. Even when it worked, it was slow.

Making a social network is hard. I used to be a big fan of an earlier one. My profile's still there: I joined on 09/28/03, last updated 12/10/04. I quit because they stagnated on adding new features, or even on fixing the existing ones when they broke. It was the easiest place in the world to set up an online group and send notices to the group, but if it didn't work, it was no good to anyone. It never reached a useful demographic; after a while, pornography and scammers and spammers were the dominant activity, and it was left to die.

At least Pownce is having the good grace to close the doors and turn out the lights first.

Ray Ozzie Lies
Sat, 2008Nov29 10:12:53 PST
in Software by kamikaze

I love Windows, because without it there would be no PC. There would be no PC developers. There might not even be a Web.

— Ray Ozzie, TechReady 2008

Every word of that except "I love Windows" is a lie.

Apple made the first real personal computer, the Apple I, in 1976, and IBM built the IBM-PC in 1981, 5 years later.

IBM wanted Digital Research to make the OS, but the deal fell through, and Microsoft (who until then had made only BASIC) bought QDOS from another company and sold it to IBM.

Windows was an inferior copy of the Lisa and original Mac OS. Almost nobody developed apps for Windows until 3.0, in 1990. PC developers used DOS for any serious apps.

The World Wide Web was invented on a NeXT (the predecessor to the current Mac OS X), and the early spread of the WWW was on Unix machines, NOT Windows, which could barely reach the Internet over dialup PPP.

Ray Ozzie lies worse than Sarah Palin. He lies blatantly, without a trace of remorse or awareness that anyone with a couple brain cells can figure out that he's lying.

JavaFX makes me sad
Mon, 2008Sep08 10:43:11 PDT
in Software by kamikaze

Reading about JavaFX just makes me sad. It's very nice, it's what Sun should've done in 1998 instead of Swing.

But today? It's pissing in the wind. Flash owns the "installed everywhere, brute-force rich client", and HTML 5 owns the future; those of us using WebKit in Safari, iPhone's Mobile Safari, Chrome, or several other browsers are already living in this future.

There are three kinds of applications: Local-only desktop apps, Internet desktop apps, and Web applets.

Local desktop apps (word processors, high-power games, etc.) indisputably work best as native applications; you need to be able to use them even if you can't reach the Internet, you need local file access, you need fast drawing speed, and you need native OS integration (drag-and-drop files, for instance). While people have tried to shoehorn these into other categories (Google Apps), it's never worked as a general replacement, and probably never will.

An Internet desktop app will be better and more pleasant to use if built with platform-native technologies. It might be kind of like a web browser in parts, like the iTunes Store, but you would have a very hard time building iTunes in Flash, HTML, etc.

Web applets don't require speed or flashy graphics, just some interactivity and a web server. These are easier to build (and especially, easier to find semi-competent developers for) and get people to use if they use Flash or HTML 5.

So there's just no place for JavaFX and Silverfish and Adobe AIR. They either compete with an installed base of Flash and Flash developers, or the future base of HTML 5 and web developers, or the existing base of native app developers, and in every case, they lose that fight.


In my Macs Make Programmers article, I listed a handful of introductory books for Python. Any one of those is a great way to get started. But what can you learn with beyond the starting level?

I learned the hard way: by writing challenging Python programs, in particular Umbra, but of course by then I was a decades-experienced programmer, and I still made a lot of mistakes based on false assumptions and invalid mental models of Python. If I'd learned more first, I might have done better.

Looking around at the field, I can't find too many advanced Python books.

Programming Python ***
Spends a lot of time revisiting the introduction to the language, and it stays at the junior tutorial level even when addressing more complex subjects. I didn't find it to provide enough detail.
Python Cookbook ****½
This can be extremely valuable: real-world, advanced Python examples to learn from. This goes the other way from PP; it doesn't have enough chatty tutorial material, too often it just presents the code and you either figure out what's going on or you don't. Overall, though, if you want to be a serious Python programmer, you should get and study this book.
Foundations of Python Network Programming ****
An excellent, detailed, in-depth study of one really specific area of Python coding: network protocols. Since everything uses the 'Net somehow these days, you really want this book, but it is relentlessly tightly focused, and the Python Cookbook and Programming Python cover some of the same material.
Data Structures and Algorithms with Object-Oriented Design Patterns in Python ***½
Bruno R. Preiss has a series of books on DSaAwOODP in (C++|Java|C#|Python|Ruby), intended for college students. While it's very dry and mathematical, it's a solid computer science book. As a Python book, it suffers from attempting to shoehorn generic CompSci solutions into the language, when idiomatic Pythonic solutions would be better.

If you want to do graphics, there are books on Tkinter, wxPython, PyQt, and so on, but in my opinion any of those are a mistake; all of the "cross-platform" GUI libraries just look bad and work poorly on every platform. Instead, writing portable model code and then writing platform-specific view/controller code (in Python/Cocoa, for instance) is far more effective.
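The model/view split is easy to sketch. Everything below is illustrative (a toy model, and a console stand-in for what would be the platform-native Cocoa layer):

```python
# The portable model: pure logic, no GUI imports, trivially testable
# on any platform.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value


# A platform-specific view/controller wraps the model. A console "view"
# stands in here for a native (e.g. Cocoa) layer, which would hold the
# widgets and call the same model methods.
def render(counter):
    return "Count: %d" % counter.value
```

Only the thin render/controller layer gets rewritten per platform; the model ships everywhere unchanged.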

When "Freedom" doesn't mean "freedom"
Fri, 2008Jul25 08:26:14 PDT
in Software by kamikaze

Yet again, the enemies of freedom are trying to redefine reality with their NewSpeak.

First, some context:

A couple weeks ago, the "Free" Software Foundation put out another of their communist rants against people making money from the products they produce (or just read it being angrily dismantled by the Angry Drunk). This isn't news of any kind; the FSF shouts out their schizophrenic drivel on a daily basis, and it deserves no attention.

[Update: The "OpenMoko" phone the FSF is pushing can be seen in these OpenMoko Train Wreck videos... I don't think iPhone has anything to fear here.]

John Gruber of Daring Fireball proves the screed to be false in even its least insane point, that you can't write GPL software for the iPhone, by pointing to GPL software for the iPhone.

FSF apologist Aristotle Pagaltzis then claims "John Gruber doesn’t understand freedom"...

Yeah, he does. Everyone understands freedom:

n 1: the condition of being free; the power to act or speak or
    think without externally imposed restraints
2: immunity from an obligation or duty [syn: exemption]

What WordPress and Gruber may or may not have misunderstood are the precise legal terms of the GPL, which is an extremely unclear and legally unsound document. But everyone understands the word "freedom"... Except the "Free" Software Foundation. The GPL is, by definition, a violation of real freedom: it imposes restraints on what you can do, such as distribute software to certain app stores. It's discriminatory against all commercial enterprises.

This has to stop. Children, crazy people, and communists like the FSF should not be permitted to redefine the language used by adults who work for a living. Freedom means freedom, it does not mean "stuff Richard Stallman would like".

As I've said before, if you use the GPL, you give crazy people power over what you can do with your own work.

The root problem is that the GPL has been marketed as a free software license, when in fact it is nothing of the sort, it is filled with restrictions and poison pills to make sure you cannot use it in any productive, commercial fashion. The solution is simple: stop using the GPL. If you want to give out source, use the BSD or MIT license. If you only want to give out source to people you like, say so up front and give them an individual license.

Yahoo Spam, I mean, Mail
Sat, 2008Jul05 10:49:21 PDT
in Software by kamikaze

I wanted to sign up for Uli Kusterer's mac-gui-dev mailing list.

I have an old Yahoo!® account, but I plan to get rid of it because it's a giant spam bucket. So I make a new one. Here begins my tale of woe.

I pick my standard trashcan account for the "alternate address", and go immediately to the marketing page and turn OFF all the dozen+ junk mail lists Yahoo!® signs you up for when you create an account.

I join the group; that works fine. Now I try to add a new list-collecting address at one of my sites as another alternate address. And the form just silently fails. Nothing happens.

I submit a problem report, and frankly at this point I don't expect anything to happen, I expect they'll just ignore it and continue fucking up like the bunch of yahoos that they are at Yahoo!®.

So for now, I need to use Yahoo!®'s mail reader. When I get into the web-email client, I don't see my inbox. First, I get assaulted with a bunch of modal dialogs pointing out the features I didn't ask for, didn't want, and have fuck-all to do with my email. Underneath them is a page full of ads and news links and every damn thing except email. There's no way to turn this off and go straight to the inbox.

Then when I manage to fight my way to the inbox, I get assaulted by a giant red blinking "YOU ARE ALREADY A WINNER" circa-1996 banner ad. Who the fuck is stupid enough to click on one of those? Are they just trawling for lonely, confused old pensioners and stealing their money? Yahoo!® helps con artists steal money from your gramma. This is a fact.

Every single step of this process is filled with hate. I'm not going to be favorably inclined to anyone stupid enough to advertise there, I'm going to despise them and buy from their competitor. Almost every part of the site is hideous and bad.

I'm clearly spoiled by Google's nearly perfect Zen design. It's mellow, simple. Plain text, no garish colors, no blinking, no image ads. They focus on the content you want, with some other stuff off to the side you can look at if you want. Even the customized, art-themed iGoogle page is an order of magnitude simpler than the Yahoo home page, and it's entirely optional.

Yahoo!® is no damn good at anything. They betray their customers to the Chinese dictatorship to be imprisoned and tortured, they spam and infect PCs with viruses, and they can't even get something as simple as webmail right. Yahoo!®, please die.

Shell + Whitespace = FAIL
Sat, 2008Apr26 15:33:45 PDT
in Software by kamikaze

The old Unix and MS-DOS filesystems expected that all filenames would consist only of letters (lowercase on Unix, uppercase on MS-DOS), underscores, dashes, periods, and digits (everything else was dangerous SOMEWHERE). So you could get away with tools requiring space-delimited filenames. This is still enshrined in the way we encode URLs: check this out!.txt becomes check%20this%20out%21.txt. Ugh.
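Today's Python shows the damage directly. A quick sketch with the stdlib's urllib.parse (the same functions live in plain urllib on 2.x):

```python
# Percent-encode a space-laden filename the way URLs require.
from urllib.parse import quote, unquote

name = "check this out!.txt"
encoded = quote(name)
print(encoded)  # check%20this%20out%21.txt

# The encoding round-trips, which is its one redeeming feature.
assert unquote(encoded) == name
```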

Nobody uses such a limited set of characters in their filenames anymore, except hardcore Linux nerds. So when they have to interact with real files, named with spaces and parens and all sorts of punctuation, it's a catastrophe. The Unix shells are, essentially, useless for dealing with this world.

If you want to rename all files in subdirs, you could try using find:

find . -name "*.foo" -exec mv {} `basename {} .foo`.bar \;

This fails when you hit spaces. Try interpolating quotes... and names containing quotes fail. Within the shell's whitespace-splitting model, it's an unsolvable problem. You can pipe it through 'xargs' and 'read' and try to put it together, but it'll never be more than a spit-and-baling-wire solution.

The sane solution is to use a real language, like Python, to process the files. This script is an example:

#!/usr/bin/env python
# Python 2; usage: <script> <directory> <from-ext> <to-ext>

import os, sys

where = sys.argv[1]
fromext = sys.argv[2]
if not fromext.startswith("."): fromext = "."+fromext
toext = sys.argv[3]
if not toext.startswith("."): toext = "."+toext

for (root, subdirs, names) in os.walk(where):
	for name in [x for x in names if x.endswith(fromext)]:
		newname = name[0:-len(fromext)] + toext
		print os.path.join(root, name), "->", newname
		os.rename(os.path.join(root, name), os.path.join(root, newname))

There's more boilerplate to set it up, but it's correct, and can be easily modified for a different task, or generalized to call any function with eval.

Don Knuth is Wrong, Alas
Sat, 2008Apr26 12:41:53 PDT
in Software by kamikaze

InformIT interview with Donald E. Knuth

I'm astounded, disappointed, and frankly repelled by much of what he says. This is almost tragic: I pretty much learned my serious computer science skills from Don Knuth's The Art of Computer Programming. They're extraordinarily difficult books to read and work through, but they're very rewarding in teaching algorithm design and optimization. The beta versions of the new editions have been interesting, and the MMIX virtual machine was much more relevant to modern hardware. So imagine my surprise here:


the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."

I'm not entirely a test-first, test-driven developer. Graphics and interaction make up most of my code, which isn't productive to unit test (QA testing, later, yes). But by and large, and especially when building any algorithm and back-end logic, the tests are the proof that I've actually done what I thought, that there are no typos, and that it runs in something less than geological time.

While some smaller segments of algorithms can be proven mathematically, most code cannot, and the larger interactions absolutely cannot. So you're left with testing the edge cases and most common cases. Unit testing is the only way to isolate those tests from the rest of your app; QA testing just shows that the app looks correct, unit tests show that each part is still working correctly.
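What "testing the edge cases" looks like in practice, as a minimal sketch. The function under test here is hypothetical, invented for illustration:

```python
import unittest


def clamp(x, lo, hi):
    """Hypothetical function under test: pin x into the range [lo, hi]."""
    return max(lo, min(x, hi))


class ClampTests(unittest.TestCase):
    # The boundaries and one common case, isolated from the rest of the app.
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_common_case(self):
        self.assertEqual(clamp(7, 0, 10), 7)

    def test_at_upper_boundary(self):
        self.assertEqual(clamp(10, 0, 10), 10)
```

Run it with `python -m unittest <file>`. The tests prove the part still works in isolation, no matter what the surrounding app looks like.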


Still, I hate to duck your questions even though I also hate to offend other people’s sensibilities—given that software methodology has always been akin to religion. With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.

Yow. Extreme Programming's main practices are just good software engineering practices (GSEP hereafter) pushed to the logical conclusion. Unit testing is GSEP, so all code where practical is written test-first, then implemented. Code review is GSEP, so all code is written in pairs, constantly reviewed. Regular check-in to source control is GSEP, so code is only written in short sessions and committed. If you can't check in, you throw it out and try again at a smaller task. Regularly integrating all code in the repository is GSEP, so you set up a build machine to immediately get all checked-in code, build it, and run the unit tests, with a prominent alert if the repository has broken code, and nobody does any more checkins until the build is fixed.

These are all provably 100% improvements in software engineering. The name "extreme programming" might be a bit silly, but it's actually the most serious set of practices I know of. For a single developer, some of them can slide without harm, and some teams might consider them to be more overhead than they're comfortable with, but they're as real as gravity, the heliocentric model of the solar system, and evolution. These are facts. It is nonsensical and religious to dispute them. I'm really appalled.

The notion that reusable code is a menace, that libraries of well-tested, carefully-designed tools do not lift you up and make you more powerful, is so alien and dysfunctional I don't know how to even communicate with that.

And now for the real horrorshow:


I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.

Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.[1]

How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I’m wrong.

Actually, most of the good programmers are pretty enthusiastic about multithreading. If you perform a long operation in response to a user event, a single-threaded application will block (and on the Mac, give you the spinning beach ball of death: @). A multithreaded application can respond to the event, fire off a task to work on it, and return to the user; this is a gigantic leap forward in usability.

The OS/2 user interface guidelines required that you react to an event within 0.1 seconds. Not surprisingly, OS/2 had fantastically good threading support for its time. Threading is hard with most older languages and APIs, but modern languages and frameworks make it approachable: Java has had quite good threading and tools for years, ever since Doug Lea's Concurrent Programming in Java, now the java.util.concurrent libraries. Objective-C/Cocoa has NSOperation. Functional languages are better at multithreading, like Haskell, Scala, and Dylan, and these languages are growing in popularity. One of the few major problems with Python is that the "Global Interpreter Lock" prevents true multithreading, which cripples its long-term performance. Python was never meant to be a fast language, but every generation of chips is going to make it fall exponentially further behind until the GIL is removed.
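The usability win is simple to sketch in Python: hand the long task to a background thread so the event handler can return immediately. (This is exactly the case the GIL doesn't ruin, since the win is responsiveness, not CPU parallelism.)

```python
import threading
import queue

# Completed results land here for the main loop to pick up later.
results = queue.Queue()


def long_task(n):
    # Stand-in for an expensive operation.
    results.put(sum(range(n)))


def on_user_event(n):
    """Fire off the work and return to the user immediately."""
    worker = threading.Thread(target=long_task, args=(n,))
    worker.start()
    return worker


w = on_user_event(1000)
# The "UI" is free here; join only to collect the result for the demo.
w.join()
print(results.get())  # 499500
```

No beach ball: the event handler returns in microseconds while the work grinds away behind it.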


I know that important applications for parallelism exist—rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years.

Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.)

The machine I use today has dual processors. I get to use them both only when I’m running two independent jobs at the same time; that’s nice, but it happens only a few minutes every week. If I had four processors, or eight, or more, I still wouldn’t be any better off, considering the kind of work I do—even though I’m using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t. Maybe the word I want is "bubble.")

This is entirely backwards. The kind of work Knuth is doing is reaching irrelevancy, because it depends on having a single super-fast monolithic computing core, like an old-fashioned mainframe. But we don't have those anymore. We have a network or cloud of computing systems, and we push work out to a bunch of them, collect results when they get done, and make them fault-tolerant. SETI@home is impossible on a monolithic computer, but a cloud of cheap, simple computers is chewing away on it all the time.

This isn't the future of computing, it's the present. Single-processor systems are archaic, and cannot scale much further. We can get almost infinite scaling by parallelism, following the model of the best computers around: the human brain. There is no core CPU in the brain, it's just a bunch of tiny, almost useless processor nodes chatting with their neighbors along weighted connections.


I currently use Ubuntu Linux, on a standalone laptop—it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux.


From the opposite point of view, I do grant that web browsing probably will get better with multicores. I’ve been talking about my technical work, however, not recreation.

This is perhaps the weirdest and most alien part. The Web is not just "recreation"; it can be, like any other medium, but it was designed for publishing scientific papers, and its primary uses are news and business; and, sure, communications and porn and games, so it really covers all of human life. Like most technical people, I now spend most of my day on the Web, or using Web-related services like Twitter.

I apologize in advance for the following unpleasant comparison with Professor Knuth (who, while obviously out of touch now, has produced good work in the past), but I must note that Filthy Communist Richard Stallman does not have Internet access or surf the Web. Is this just generational? I can't think of a lot of older computer scientists online; maybe our culture scares them and they're unable to filter the entertainment parts from the business parts? Vint Cerf is still adapting and surviving in the real world. Maybe it's just 50 years of insular University life that makes you fear change and reality.

Ted Neward Goes Back to Vietnam
Wed, 2008Apr16 10:55:11 PDT
in Software by kamikaze

Internet Blowhard Ted Neward has done yet another of his pompous "technology X is like Vietnam! Or Auschwitz! It's BAD, man!" rants. I wouldn't even waste my time, but... this is wrong far beyond his usual levels of narcissistic ignorance.

The short version, if you ignore his masturbation at the altar of Microsoft, and his characterization of software engineers as superstitious cargo-cultists who are incapable of rationally evaluating technologies (clearly, he looks in the mirror too much), is that Domain-Specific Languages and Functional Programming, because they're "popular" this year, are just an instinctive reaction, and there's probably nothing to them.

Domain-Specific Languages, aka Little Languages, have been around and heavily used for nearly 40 years on Unix, and to some extent before that. They're not some newfangled solution to a temporary pain, they're a long-term, well-proven strategy for making your expression be as close as possible to the domain space, rather than trying to force-fit a domain into your language-du-jour. And yes, I say that as someone who's written at least 20 DSLs in the last 20 years, and made customers very happy with them.

Functional programming is over 50 years old (Lisp is semantically an extremely powerful language, it's just syntactically unreadable; most other FPs are much more readable). For the tasks it's well-suited for, it's extremely useful; for some other tasks, especially interaction, it's not. Very few people would claim it's a silver bullet for all tasks, but only a fool rejects it entirely where it solves a problem better.

Ted Neward is that fool.

[I would have merely posted this as a comment on his blog, but his comment form wasn't working. I expect that Neward wrote it himself, since it's in ASPX.]

History Meme: the Shell is History
Wed, 2008Apr16 07:02:37 PDT
in Software by kamikaze

There's a history meme going around the blogotubes, where you run the Unix command:

history|awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head

This reports the top 10 most-used commands in your shell history. Which, for heavy shell users, is pretty revealing about their work habits.
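For the curious, the awk one-liner tallies field $2 of each history line (the command name, after the history line number) and prints the ten biggest counts. A rough Python equivalent, reading history lines from wherever you keep them:

```python
from collections import Counter

def top_commands(history_lines, n=10):
    """Tally the command word (second field, like $2 in awk) per line."""
    counts = Counter()
    for line in history_lines:
        fields = line.split()
        if len(fields) > 1:
            counts[fields[1]] += 1
    return counts.most_common(n)
```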

But... I barely use the shell at all anymore. I have always auto-wiped my history when I log out (You: "Mark, are you paranoid?" Me: "Why do you want to know? Who are you working for?"), so I can't show stats, but all I ever run from shell now is "open" (opens a file/folder in Finder, and mostly I use Spotlight instead now), sometimes "touch" or "mkdir -p", because creating empty files and chains of folders is easier that way; maybe I should write an AppleScript app to do those tasks.

From 1988 to 2003-ish, I was a Unix nerd; I lived almost entirely in the shell. Vim was the only editor I needed and ant my only build environment, and I would touch the mouse only if I was web-surfing or using a paint program (actually, I mostly used Opera with keyboard controls).

I now see that that was archaic and extremely limited. It's easier to use good graphical tools (which excludes anything on Linux or Windows), and BBEdit, Eclipse, Xcode, Preview, and the Finder (even as flawed as it is) blow away all the simplistic tools I was using. If I need to glue several things together, I have AppleScript; all Unix shells can do is combine in/out pipes of plain text from programs that have no user interaction, while AppleScript gives you real data structures and the ability to control a running application.

I don't need the primitive "stone knives and bearskins" environment of a shell that much anymore.

← Previous: Non-Blocking User Interface (Software) Next: Ted Neward Goes Back to Vietnam (Software) →
Non-Blocking User Interface
Sun, 2008Apr13 19:57:36 PDT
in Software by kamikaze

Spent the weekend working on my Secret Mac OS X Game (as opposed to my Secret iPhone Game, which is further along but I can't make more progress until Apple sends me a developer key), and I came to a little choice in user interface design that made me appreciate how the Mac is different from other platforms.

In this game, you edit a sprite by selecting it: a dialog comes up, you change the sprite's properties, and you put it away.

In Java or any other GUI I've used in the last 25+ years of writing graphical apps, I would make a modal dialog that blocked the main thread until it was disposed of, with a bunch of fields, and nothing would change until the user clicked OK. At that point, the object is changed, and you go back. This is easy to write in either AWT or Swing; it might be 10 lines of code, where the alternative would be many hundreds of lines.

At first I started to do that here. But it felt wrong. It didn't feel like how any other Mac app works.

So now, I save the old values before presenting the editor, and let you change values immediately, and there's a Cancel button to revert to the old values. If you just select something else or close the editor panel, your values are saved. It just removes that one step of hitting OK, but it makes an enormous difference to feeling like it's easy to edit a sprite. And it turns out in Cocoa that that's not noticeably harder to code than the obtrusive Java-like way.
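The pattern is worth naming: apply every change immediately, snapshot the old state when the editor opens, and make Cancel (not OK) the special case. A language-neutral sketch in Python (the class and field names are invented for illustration):

```python
class SpriteEditor:
    """Non-blocking editor: edits apply live; Cancel restores a snapshot."""

    def __init__(self, sprite):
        self.sprite = sprite
        self.snapshot = dict(sprite)  # taken when the editor is presented

    def set(self, key, value):
        self.sprite[key] = value      # takes effect immediately; no OK step

    def cancel(self):
        # the only extra work the pattern costs: revert to the snapshot
        self.sprite.clear()
        self.sprite.update(self.snapshot)
```

Closing the editor or selecting another sprite just keeps the live values, matching the Mac behavior described above.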

← Previous: Fitna (Atheism) Next: History Meme: the Shell is History (Software) →

There's no method on NSString for trimming whitespace... So I hacked up this, which appears to work for my use cases, but I'm not sure I'm doing sane things with UTF-8.

+ (NSString *)trim:(NSString *)s {
  NSInteger len = [s length];
  if (len == 0) {
    return s;
  }
  // note: len counts UTF-16 units while data is UTF-8 bytes, so the
  // indices only line up when the trimmed ends are plain ASCII
  const char *data = [s UTF8String];
  NSInteger start;
  for (start = 0; start < len && data[start] <= 32; ++start) {
    // just advance
  }
  NSInteger end;
  for (end = len - 1; end > start && data[end] <= 32; --end) {
    // just retreat
  }
  return [s substringWithRange:NSMakeRange(start, end - start + 1)];
}

[update 2008-06-17:]

It does exist, it's just... well, inexplicable:

[@" foo " stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]];

My trim: is shorter and faster, but at least there is a "standard" way to do it.

← Previous: On Being a Snob (Mac) Next: Twatting about on Twitter (Personal) →
Mon, 2008Mar03 23:18:37 PST
in Software by kamikaze

Apparently the angry mob of web developers armed with pitchforks and torches have convinced Microsoft to behave like adults for once in their sorry lives.

While it doesn't say anything good about the morons developing IE that they tried this in the first place, at least someone in MS recognized that they don't live in isolation from the real world anymore.

And someday, maybe they'll figure out how to support application/xhtml+xml and proper XHTML content, just like real browsers!

← Previous: Cocoa APIs That Suck Goat Ass (Software) Next: On Being a Snob (Mac) →
Cocoa APIs That Suck Goat Ass
Mon, 2008Feb25 19:08:45 PST
in Software by kamikaze

To make a list of integers in Java, you type:

List<Integer> data = Arrays.asList( new Integer[] { 1, 2, 3, 4, } );

Ugly syntax wrapped around it, but all the data is pretty clean. Easy enough.

In Python, you type:

data = [ 1, 2, 3, 4, ]

Suck on that, Java. Javascript, Groovy, and most newer languages are also easy like that.

In Objective-C, you write... Parents, please remove your children from the room, they shouldn't look upon this, lest their face melt off:

NSArray *data = [NSArray arrayWithObjects:[NSNumber numberWithInteger:1],
  [NSNumber numberWithInteger:2],
  [NSNumber numberWithInteger:3],
  [NSNumber numberWithInteger:4],
  nil];
Yeah, you wouldn't want INTEGERS to be usable as objects, would you? So I screwed around with varargs a bit, came up with this. Use it and be happy:

+ (NSMutableArray *)arrayWithSize:(NSInteger)count integers:(NSInteger)arg, ... {
  NSMutableArray *array = [NSMutableArray arrayWithCapacity:count];
  if (count > 0) {
    // first arg not part of varargs
    [array addObject:[NSNumber numberWithInteger:arg]];
    va_list arglist;
    va_start(arglist, arg);
    NSInteger item;
    NSInteger i;
    for (i = 1; i < count; ++i) {
      item = va_arg(arglist, NSInteger);
      [array addObject:[NSNumber numberWithInteger:item]];
    }
    va_end(arglist);
  }
  return array;
}

Then you can invoke it like:

[DataUtil arrayWithSize:4 integers:1, 2, 3, 4];

(Yes, I know you can add methods to classes with categories in Obj-C, so I could add that to NSArray instead of to my own utilities class. I find that hard to follow. Go ahead and make your code confusing if you like.)


NSPredicate regular expression support
This has actually been in Cocoa since at least 2005, according to the revision history, but the "common wisdom" imparted online and in books was that you need an external regexp library, because there wasn't one in Cocoa. That's just wrong. I'm not sure why people have a hard time finding it, when it shows up in a search of the documentation, and is in the obvious place for "queries to match data": NSPredicate.
NSAlert setAccessoryView:
You can add arbitrary Cocoa widgets to an NSAlert. Like, say, a text input field:
- (NSString *)input:(NSString *)prompt defaultValue:(NSString *)defaultValue {
  // button titles here are this example's choice; informative text left empty
  NSAlert *alert = [NSAlert alertWithMessageText:prompt
                                   defaultButton:@"OK"
                                 alternateButton:@"Cancel"
                                     otherButton:nil
                       informativeTextWithFormat:@""];
  NSTextField *input = [[NSTextField alloc] initWithFrame:NSMakeRect(0, 0, 200, 24)];
  [input setStringValue:defaultValue];
  [alert setAccessoryView:input];
  NSInteger button = [alert runModal];
  if (button == NSAlertDefaultReturn) {
    [input validateEditing];
    return [input stringValue];
  } else if (button == NSAlertAlternateReturn) {
    return nil;
  } else {
    NSAssert1(NO, @"Invalid input dialog button %ld", (long)button);
    return nil;
  }
}
← Previous: Night 2 of the MacBook Air Era (Mac) Next: Cocoa APIs That Suck Goat Ass (Software) →

Anyone who develops web pages woke up to some nasty news about Internet Explorer 8 today, according to a Microsoft developer's press release on A List Apart. The one-stop response is John Resig's X-IE-VERSION-FREEZE post, and see also Surfin' Safari's "we don't need it" response.

IE6 and IE7 don't render web pages correctly, so Microsoft won't have IE8 render them correctly, either. They want developers to insert a new tag in their page <head>:

<meta http-equiv="X-UA-Compatible" content="IE=8" />

This will make the page render using the marginally-less-incompetent IE8 rendering engine. If you leave it off, it renders like IE6 or IE7. If IE9 ever comes out, the page will still render like IE8, so you'd either have to change the content to "IE=9" on every page you ever wrote, or use the forward compatibility content, "IE=edge". Naturally, being Microsoft, they discourage the use of forward compatibility. If you were so stupid that you still worked at MS[0], you certainly wouldn't want the world to get any more useful and functional in the future, after all.

So Microsoft has opted out of following web standards in the future; they want to condemn the entire world to render like IE6 did, even when they "upgrade" their browser.

Well, enough of this. The answer is not to write version-specific code and leave the browser to stagnate; MS already tried that with IE6, and it's rotten. The answer is to write pages according to the HTML specification, test on real browsers like Safari and Firefox and Opera, and if Microsoft is so incompetent that they can't handle the standards, they should be ignored. Put in "IE=edge" if you feel generous. I suppose I will for work, but never for my own pages.
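For reference, the forward-compatibility opt-in is the same meta tag with "IE=edge" as the content:

```html
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
```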

The longer-term solution is to stop enabling Microsoft. Option 1 is to simply redirect IE users to a download page for a real browser. Option 2 is to write an ActiveX plugin for IE that will let us embed WebKit to render pages, much like the Tamarin-on-IE7 hack. Then we can shove a proper rendering engine at people who are still foolish enough to keep using IE.

[0] Everyone competent has already left for Google or Amazon or startups. I live on the Seattle Eastside and get to regularly see the kind of sub-room-temperature IQs who still work there. I have actually seen these people using shit-brown and puke-green Zunes. Yes, seriously!
← Previous: Cloverfield Should be Lost (Media) Next: Day 2 of the MacBook Air Era (Mac) →
Feedback  | Key: ] =local file, * =off-site link  | Copyright © 2003-2010 by Mark Damon Hughes | Subscribe with RSS 2.0