I wouldn't say it is XML's fault as such; rather, XML is being used for a purpose it is less than ideally suited for, a consequence of the early oversell (those who remember 2000 will know what I am talking about).
There is another lesson that is web-relevant. Big datasets like these shouldn't naively be tagged into an XML format and presented as-is on the Web. In a web setting the overhead for each element is rather large, because the DOM will be applied to it, allowing arbitrary dynamic mutations. It is easy to overwhelm even the most powerful processor this way and zap all available memory.
I have earlier proclaimed markup a [necessary] evil. A more constructive way of putting it is to say that markup should always be minimal. You should use as much markup as you need, and no more. Markup is something we add to aid machines. Too much or wrong markup can do more damage than too little or too vague markup.
This design principle determines how to standardize markup. Unless the author knows something the user doesn't, the markup should not be there.
This principle obviously caters to the author's laziness, the admirable human trait not to do more than necessary. It is less obvious, but no less important, that it also empowers the user. More minimal markup means more flexible and accessible markup, assuming that the user agents do their job and actually act on their users' behalf.
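As a sketch of the principle, the two fragments below carry the same content; the class names in the second are invented here for illustration:

```html
<!-- Minimal: the elements themselves say what the content is -->
<h2>Minimal markup</h2>
<p>Use as much markup as you need, and no more.</p>

<!-- Over-marked-up: wrappers (class names invented) that add
     weight without adding meaning -->
<div class="heading-wrapper"><span class="h2-text">Minimal markup</span></div>
<div class="para"><span class="text">Use as much markup as you need, and no more.</span></div>
```

A user agent can restyle, reflow, or read aloud the first fragment on the user's behalf; the second hides the structure it would need to do so.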
Where we are

Four years ago I wrote a small piece on conditional comments in IE7, and whether there should be an institutionalised Opera CSS hack, in the style of
@browser opera. While IE's standards support still isn't stellar, it is better than it was four years ago, yet the desire to make specific hacks for the shortcomings of IE, Opera, or any other browser hasn't gone away, and it is unlikely to go away in the next decade either. This entry was triggered by a comment this Friday asking for Opera conditional comments. For all the talk about the ills of browser sniffing, and about using capability detection instead, sniffing is not going to go away. In that case, wouldn't it be better to make browser sniffing less bad?
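For reference, IE's conditional comment is real syntax, ignored as an ordinary comment by every other browser; the @browser at-rule in the style block is the hypothetical Opera equivalent this entry discusses, implemented nowhere:

```html
<!-- Real: IE-only stylesheet via conditional comment -->
<!--[if lt IE 8]>
  <link rel="stylesheet" href="ie-fixes.css">
<![endif]-->

<style>
  /* Hypothetical syntax: an institutionalised Opera hack
     in the style discussed above; no browser implements this */
  @browser opera {
    .menu { overflow: visible; }
  }
</style>
```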
There is one basic document functionality that none of HTML, CSS, nor SVG can do. None can represent one box, another box, and a link between the two.
Unfortunately, as often as not, the confirmation request message meant to make sure that the user is not a spammer will itself end up in the user's spam folder, since the mail program or service can't know that the email isn't from a spammer.
HTML4 became a W3C standard 11 years ago. By now we should have plenty of implementation experience with the standard, from user agents, web developers, and authoring tools, and with what has actually made the Web more or less accessible. Ideally there should be an audit of all the HTML4 features for their impact on accessibility, whether they were designed for that purpose or not.
We also have extensive implementation experience. Accessibility was central to the design of Opera from the very beginning and part of the company culture, but that doesn't mean every initiative was a success. Other browser and tool makers should have learned something over the last decade as well. Accessibility enjoys considerable goodwill among developers, and most want the Web to be accessible, but to turn that goodwill into good products we first need to make the implicit knowledge explicit: what failed as much as what succeeded, and why it failed.
A link to the past
HTML is the Hypertext Markup Language. Hyperlinks are what made HTML special. When I came to the HTML Working Group, shortly after the browser war was over, the feud of the day was with XLink 1.0, which had quickly become a Recommendation through a flawed process. The HTML group wasn't happy about it, as they didn't think the specification fulfilled its design goals.
XLink had a complex history; originally it was meant to be an Extensible Linking Language to complement the Extensible Markup Language (XML). The specification ended up creating a number of attributes in the XLink namespace: 'xlink:type', 'xlink:href', 'xlink:role', 'xlink:arcrole', 'xlink:title', 'xlink:show', 'xlink:actuate', 'xlink:label', 'xlink:from', and 'xlink:to'. The idea was that any XML language needing hypermedia functionality would mix in the appropriate XLink attributes.
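A sketch of that mix-in idea, using the attribute names and the real XLink namespace; the host vocabulary (catalogue, item) is invented here:

```xml
<catalogue xmlns:xlink="http://www.w3.org/1999/xlink">
  <!-- A simple XLink: this element becomes a link to one remote resource -->
  <item xlink:type="simple"
        xlink:href="http://example.com/products/42"
        xlink:title="Product 42"
        xlink:show="replace"
        xlink:actuate="onRequest">Product 42</item>
</catalogue>
```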
When I left the HTML Working Group a few years later XLink was forgotten, but the HTML working group had made a very similar collection of floating attributes for XHTML2, 'xhtml:href', 'xhtml:role', 'xhtml:src', 'xhtml:about' and so on. The idea now was that any XML language needing hypermedia functionality would mix in the appropriate XHTML2 attributes.
The XML story

In the beginning was SGML. There is a lot to be said about SGML, so I won't. HTML was specified to be an application of SGML, but that never happened in practice. Among browsers, Opera kept up the pretence of supporting SGML for the longest time, causing us a lot of trouble because Opera behaved differently from every other browser. DocBook is another well-known SGML application, but in general SGML was not a success.
About a decade ago a small group of people started a reformulation of the old SGML standard, first outside of the W3C and later, when the success became apparent, within it. The story of this simplified SGML, now known as XML, may be best told via the Annotated XML by Tim Bray, one of the principal authors. Essentially XML is angle brackets and a number of production rules on top of Unicode (for a fuller description, see Comparison of SGML and XML).
Modern image formats are designed for lower bandwidth, with interlaced and progressive functionality showing the image in progressively improving resolution as the bits arrive. But there is no functionality to say that enough bits have been transferred already, and that any further bits won't add to the quality of the image.
The usual approach is to generate thumbnails from the full-size images, and let the thumbnails function as links to the full-size images. The problem with this approach is that the image and the thumbnail are unconnected. Given the image address you can't use the thumbnail to quickly display a rough outline of the image until it is downloaded; given the thumbnail you can't download the full image if the resolution is insufficient; and there is no way to cache an image's thumbnail.
Given multi-megapixel image sources there will be at least three levels of raster images: the original, which tends to be too large for any useful purpose; the edited and enhanced Web image; and the scaled-down, often cropped thumbnail. The same applies to the audio and video media types as well, only there the downsampled files can be even more dramatically smaller than the raw source files.
A thumbnail should link to the image it is derived from and describe the list of operations done to generate it. In formats that support it, like PNG, this could be stored in a chunk, but markup is a more generalised solution.
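There is no standard markup for this; the rel value, data-* attributes, operation notation, and filenames below are all invented to sketch what such a self-describing thumbnail link could look like:

```html
<!-- Hypothetical: a thumbnail that declares its source image
     and the operations used to derive it -->
<a href="holiday-full.jpg" rel="enlarged">
  <img src="holiday-thumb.jpg" alt="Beach at sunset"
       data-source="holiday-full.jpg"
       data-operations="crop(4:3) scale(160x120)">
</a>
```

With something like this, a user agent could render the cached thumbnail as a rough preview while the full image downloads, which is exactly the connection that is missing today.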
The keyboard-friendly design of Opera was one of the things that attracted me to the browser in the first place, and one whose slow progress I am disappointed with. Keyboard-wise Opera today isn't substantially better than Opera 3, or at any rate Opera 7. In some cases it is better (like spatial navigation, when it works); in other cases it is worse (I still haven't found out how to recover the Alt+Z history view, one of Opera's greatest inventions). I don't think Opera does any keyboard-only or keyboard-augmented usability testing.
Opera's lack of progress is one thing, but in the Web sphere things are actually getting worse. Early on you could do keyboard-only browsing most of the time. If a site used frames it was very awkward and it was better to use any mousing device available, and there was the occasional idiot who used 'onclick' functionality to recreate actual links, either because he didn't like the colour or underline of links or simply because he could.
Most of those idiots have discovered CSS by now (there should be a Hall of Shame site for those who haven't), but in these webapp Ajax widget 2.0 days recreating the user interface is the game of the day, and that usually means breaking the keyboard.
One of the tasks you can't reliably do with the keyboard today is drag and drop, and the same goes for navigating to dynamically generated submenus and choices.
In my view the raging accessibility question shouldn't be whether the 'alt' attribute is optional or mandatory, but how we can make the Facebooks of the future accessible.
This ties in with HTML5 and the obituaries over the 'longdesc' attribute.
An attribute like 'longdesc' can turn an inaccessible picture into a device-independent, accessible textual representation of that picture. All the designer has to do is spend a couple of hours per picture describing in detail what it depicts and what it is used for, in as context-independent a way as possible. For some reason most web designers opt not to do that.
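For reference, 'longdesc' (an HTML4 attribute on img) takes the address of a separate document holding the long description; the filenames here are made up:

```html
<img src="org-chart.png"
     alt="Company organisation chart"
     longdesc="org-chart-description.html">
```

The alt text is the short inline substitute; the longdesc document is where those couple of hours of detailed, context-independent description would go.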
Given the choice between having Facebook and similar sites mostly usable, or the WAI site and a couple of other special-interest sites perfectly accessibly marked up, Facebook is the winner. If the spec can give designers the features they crave, and then behind the scenes give the browser or the assistive tool the information it needs to cater to its users, we have a winner. The HTML5 work on drag and drop looks very promising in this regard.
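The HTML5 draft exposes drag and drop roughly as below; the markup and event names are from the draft, while the ids, payload, and the handleDrop function are invented here, and scripting is still needed to complete the drop:

```html
<ul>
  <li draggable="true"
      ondragstart="event.dataTransfer.setData('text/plain', 'item-1')">
    Item 1
  </li>
</ul>
<!-- A drop target must cancel dragover to accept the drop;
     handleDrop is a hypothetical script function -->
<div ondragover="event.preventDefault()"
     ondrop="handleDrop(event)">Drop here</div>
```

The promising part is that the declarative draggable attribute and the dataTransfer payload give the browser or assistive tool something to expose through the keyboard or other modalities, without the author having to design for that explicitly.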
What the HTML5 spec doesn't cover, and what Opera has struggled with in its extensive keyboard support, is how to handle the conflicting interests of the web application developer and the user agent, be it a browser and/or an assistive tool. As an example, Opera uses the A key to navigate to the next link; what if that key is used by the application? On a keyboard without an A key another key is used instead, or the user can configure his own keyboard mapping, so it wouldn't make sense to declare a collection of "reserved keys", A included, unavailable to the application. So who should win in a battle over A, the web application or the user agent? CSS has covered a similar conflict with the Cascade (the C in CSS): essentially default < author preferences < user-important preferences. For a user agent I believe the best option is to let the UI be overridden by the web application (otherwise the application won't work in that environment), but to have a mechanism to override the application when needed.
Relatedly, given the great variety of keyboards, some mechanism for the user to remap key bindings is necessary not only for the user agent, but for the web application as well.
Here is a demo of keyboard-accessible drag and drop in action.
The concept of human races that most of us have grown up with has been shown to be at best simplified or misleading, and at worst completely false. That hasn't made racism go away, and won't. Furthermore, the racial theory we have inherited is founded on Victorian science and its enlightened project to classify and make sense of the world as it was known then. That racial theory was far better founded than the theories preceding it, but that wasn't good enough.
Combined with the obvious question of the time, "Why are Europeans so apparently superior to other human beings?", this caused profound misery in the late 19th and the first half of the 20th century. The scientists did their best based on what they knew and moved on as new data came in. But societies didn't, as there will always be a generation gap between the two.
I think history might repeat itself. Phrenology and talk of the Mongoloid race are (mostly) things of our past, and the Victorian idea of race may slowly follow. The question of today seems to be "Why are East Asians so much better at making money than other human beings?" Based on the past we can expect racist attacks and theories both against and by ethnic East Asians.
Science has moved on; we don't talk of race any longer, we talk of populations. But when it comes to genetics we are no better informed than the Victorians were about anthropology. We are drawing our conclusions from very scant material. There will be something in it for everyone, and many of them will have an axe to grind. By carefully selecting data you can find material to support every wild idea on the Internet.
Notwithstanding that "It is important to emphasise that there are no genetic variations exclusive to any racial group. Some are more common in certain populations, but their distribution does not align with social categories of race.", the idea of race is too ingrained and too useful to go away.
And it isn't just about race. Say you have data showing that 30% of a population, such as a classroom or an office, are predisposed to obesity. How are the 30%, or the 70%, to react to this information? If you know you are predisposed, would you eat less, or would you eat more because you can't help it anyway? What if you are predisposed to being bad at math? Or to violent crime? What if it is your neighbour, or the neighbour's kids?
And what if, 15 years later, scientists discover that you or your neighbour weren't predisposed to obesity, or to being bad at math, or to violent crime after all? How would your life have changed from living 15 years under false assumptions?
My favourite headline has been El Reg's Opera to take web back to the old days.
For decades Web intellectuals have railed against the client-server model, arguing that it is too stale and authoritarian, has the server as a single point of failure, and can't scale with the exponentially growing Web. Power to the distributed systems! Then Google came along and showed that you can build a bigger server. The bigger the problem, the bigger the server park; problem, any problem, solved. But way back in the CERN pioneering days the client was the server, the consumer was the producer. This grass-roots idealism didn't survive the Web's brush with success in the mid-90s, and when the revolutionaries no longer had scale on their side the revolution faltered, ending up with SETI searchers and UFO fanatics.
Unite is old within Opera as well; I couldn't put a date on it, but as a concept it would be Opera 7 to Opera 8 territory. Having a server on a phone generated some buzz on Friday nights, or maybe it was the beer. I have some very simple ideas for what I want from computing; among them, I want to liberate data from the machines it is running on. Lars' concept, which Opera Unite is based upon, takes the opposite approach: it gives the machines a human face. To me this is a little weird, as Opera is after all in W3C parlance a user agent, not a machine agent, and a little wonderful as well. It hasn't fully dawned on people yet, but Opera might unite not just them, but their machinery as well.
Concept is one thing, doing the work is something else. There were some very tricky problems to be solved for a browser-server solution to work in practice, and it was integrated with the widget framework, which has a number of advantages, ease of authoring and a common security model among them.
This leads us to now. What if we have a revolution and nobody comes? Even if it is beneficial for the Web, there is no guarantee that Opera Software will benefit from the revolution they have started; past revolutions and Opera's past performance should give enough historical evidence of that. Worst case, this might end up as Opera's HyperCard: technologically interesting, even influential, but with little commercial benefit.
There are immediate advantages. Obviously, to run the servers Opera would have to be installed on the machines, and that can be an argument for switching browser. Longer term, Opera's benefit largely lies in letting us all eat from a bigger cake. As long as their slice of the cake is as small as it is today, others will profit from the sweat of Opera's labour, but browser vendors today are marked by coopetition, competitors with a strong incentive to cooperate, though some of the vendors have other agendas as well.
Seemingly, going from the client to the server is a step backwards; after all, some can recognise the Opera brand and many more the Firefox brand, but almost nobody can name the Apache brand, even though it is arguably the greatest success story of them all. But by turning the client into a client-server, and the server into one as well, you get peers. Opera Unite wouldn't be an efficient way to distribute content, or even to represent me or mine, but it can be a flexible way to communicate.