Questioning the W3C’s intentions, and coming up pleasantly surprised.
Come and theorize with me for a moment.
You’ve no doubt seen Owen Briggs’ Design Rant. It was written in 2001, but its concepts are more relevant today than ever.
The World Wide Web Consortium, he says, is taking the long view. Rather than repeat past experiences like NASA’s Viking mission, whose data is no longer readable by machines, the W3C is planning for the future now. Each new spec coming out is considered in a historical context: new features extend existing capabilities instead of replacing them. HTML is an evolving language, and each baby step along the way has to set the stage for the giant leaps to come in future revisions. So as not to break stuff.
So what’s up with XHTML 2.0? No more <img> tag? That little doozy alone wipes out almost every single page on the existing internet. XHTML–based browsers can “process new markup languages without being updated”, oh sure, but that doesn’t really do much for existing sites, now does it? It strikes me that in their quest for semantic purity, they’re casting off the primary goal of future compatibility.
But then… then the sound of a flick of a switch as the light turns on.
They’re doing this once so that it never has to be done again. The goal of XHTML is to transition people from HTML, a non–extensible (and if you’d like to argue this, I present the case of IE4 vs. NN4) application of SGML, to the prior–mentioned XML, which lets browsers “process new markup languages without being updated.”
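To see how small that first step really is, here’s roughly what a bare–bones page looks like once it’s XHTML, a genuine XML application. (The doctype and namespace are the real XHTML 1.0 Strict ones; the content is obviously just filler.)

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
  <head>
    <title>Still just a web page</title>
  </head>
  <body>
    <p>Same old tags, but every one of them lowercase, closed, and well–formed.</p>
  </body>
</html>

Nothing dramatic, and any old browser will still render it. The payoff is that it also parses with generic XML tools, which is what sets up the next step.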
HTML will die. Today’s internet is obsolete, and anyone still coding in HTML 4 is planning the obsolescence of their own code. The big picture says that if, and this is a big if, we can move to an XML–based internet, then revisions to markup languages, existing and new, won’t require browser updates. Once we have user agents that fully support the eXtensible Markup Language, and the style sheets used to format it, it doesn’t matter anymore if we lose the <cite> tag, or if <img> gets dropped. We create our own damn subsets that include them, and everyone else can use our subsets without downloading a new agent! Wouldn’t that have been convenient 5 years ago…
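To be clear about what I mean by rolling our own subsets, picture something like this. The vocabulary and namespace below are pure invention on my part, just to illustrate; only the xml-stylesheet instruction and the CSS are real, standard pieces.

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/css" href="review.css"?>
<review xmlns="http://example.com/ns/review">
  <title>Design Rant, revisited</title>
  <verdict>Still holds up.</verdict>
</review>

And in review.css, nothing more exotic than the CSS we already write:

review  { display: block; margin: 2em; font-family: georgia, serif; }
title   { display: block; font-size: 1.5em; font-weight: bold; }
verdict { display: block; font-style: italic; }

A user agent that understands XML and CSS doesn’t need to know ahead of time what a verdict element is; the style sheet tells it how to draw one. No browser update required, which is the whole point.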
I’m late to the game. Or early, depending on where you’re coming from. This is a big thing, and it’s taken me a while to see it for what it is. If you approach recent announcements from the Consortium keeping this in mind, it all begins to mesh.
However. It’s an ideal, and years and years down the road. Coding for today’s web means creating sites that will be obsolete when the dream finally takes off; there’s no way around it. Browser and developer support for even the next transitional technologies, like XHTML and the absolutely critical CSS, is nowhere near enough to start coding these future–friendly sites today.
As we’ve moved from presentational HTML to semantic XHTML separated from style, some have come along for the ride, but most haven’t. The next few phases are going to be even harder to reach; today’s transition is, at least, still very backwards–compatible. The big leap beyond that will be far tougher, since killing HTML for good simply cannot happen in today’s climate, or tomorrow’s, or any time in the next 5 or 10 years.
Will it ever happen? Hopefully. I’m excited about the prospect. I realize this is something to look forward to, not something to even think about using in the near future. But they’ve given us our training wheels in XHTML, and that’s a good start.