Web development links for April 2010

HTML5 Video

Shared by arty
IE9 will support playback of H.264 video only

There’s been a lot of posting about video and video formats on the web recently. This is a good opportunity to talk about Microsoft’s point of view.

The future of the web is HTML5. Microsoft is deeply engaged in the HTML5 process with the W3C. HTML5 will be very important in advancing rich, interactive web applications and site design. The HTML5 specification describes video support without specifying a particular video format. We think H.264 is an excellent format. In its HTML5 support, IE9 will support playback of H.264 video only.

H.264 is an industry standard, with broad and strong hardware support. Because of this standardization, you can easily take what you record on a typical consumer video camera, put it on the web, and have it play in a web browser on any operating system or device with H.264 support (e.g. a PC with Windows 7). Recently, we publicly showed IE9 playing H.264-encoded video from YouTube.  You can read about the benefits of hardware acceleration here, or see an example of the benefits at the 26:35 mark here. For all these reasons, we’re focusing our HTML5 video support on H.264.

Other codecs often come up in these discussions. The distinction between the availability of source code and the ownership of the intellectual property in that available source code is critical. Today, intellectual property rights for H.264 are broadly available through a well-defined program managed by MPEG LA.   The rights to other codecs are often less clear, as has been described in the press.  Of course, developers can rely on the H.264 codec and hardware acceleration support of the underlying operating system, like Windows 7, without paying any additional royalty.

Today, video on the web is predominantly Flash-based. While video may be available in other formats, the ease of accessing video using just a browser on a particular website without using Flash is a challenge for typical consumers. Flash does have some issues, particularly around reliability, security, and performance. We work closely with engineers at Adobe, sharing information about the issues we know of in ongoing technical discussions. Despite these issues, Flash remains an important part of delivering a good consumer experience on today’s web.

Dean Hachamovitch
General Manager, Internet Explorer
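The H.264-only support described above can be feature-detected from script. A hedged sketch (my example, not from the IEBlog post; the MIME/codec string is one common H.264 profile): canPlayType() returns '', 'maybe' or 'probably' rather than a boolean, so a small helper interprets the answer:

```javascript
// canPlayType() answers '', 'maybe' or 'probably' -- never a boolean.
function canPlay(typeResult) {
  return typeResult === 'probably' || typeResult === 'maybe';
}

// In a browser, query an H.264 (MP4/AVC) source before using it:
//   var video = document.createElement('video');
//   var h264 = video.canPlayType &&
//       video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
//   if (canPlay(h264)) { /* offer the H.264 <source> */ }
```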

Jeremy Zawodny: High Performance Web Sites :: Call to improve browser caching

Jeremy Zawodny
High Performance Web Sites :: Call to improve browser caching - http://www.stevesouders.com/blog...
arty, Tracy, Meryn Stol and Kevin Johnson liked this
a really damned good point - Jeremy Zawodny
I'd like to see more use of pre-emptive content fetching. - Meryn Stol

Andrey Smirnov: Facebook videos make switch to HTML5

Andrey Smirnov
Facebook videos make switch to HTML5 - http://www.ipodnn.com/article...
Iván Abrego, winckel, arty and 6 other people liked this
"Social networking website Facebook has begun switching some of its videos over to HTML5. ... Not all videos are available in the new format. Some clips, namely older ones, continue to be hosted in Flash, and will generate error messages on incompatible devices." - Andrey Smirnov

%) Facebook uses Decentralized Extensibility to centralize the Web around Facebook

Henri Sivonen on twitter: Facebook uses Decentralized Extensibility to centralize the Web around Facebook. (See also part one.)

mobilehtml5: Android outpaces iPhone in US web traffic, via...


Android outpaces iPhone in US web traffic, via IntoMobile

This is big.

Built-in or bolt-on accessibility in HTML5? How about a bit of both?

While following the development of HTML5 I’ve seen a fair bit of talk about “built-in” vs. “bolt-on” accessibility. Perhaps I’m missing something vital, but I don’t really see what the problem is or why it has to be one or the other.

Built-in accessibility by means of semantic elements and controls that browsers can do something meaningful with and expose to assistive technology is great. As long as web developers use the elements of HTML as they are intended, they get plenty of accessibility more or less for free.

On the other hand, we can’t have HTML elements and attributes that cover every imaginable use case. It also isn’t always possible, for various reasons, to change existing, non-semantic and inaccessible markup. That’s where “bolt-on” features like WAI-ARIA are also necessary – to make it possible to make UI elements created with non-ideal markup, CSS and scripting accessible.
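As an illustration of the "bolt-on" side, here is a sketch (my example, not the author's) of the WAI-ARIA attributes a scripted, non-semantic `<div>` needs before assistive technology will treat it as a button:

```javascript
// The ARIA attributes a <div> "button" needs so assistive technology
// can announce its role, focus it, and track its toggle state.
function ariaButtonAttributes(pressed) {
  return {
    role: 'button',                              // expose the widget role
    tabindex: '0',                               // make it keyboard-focusable
    'aria-pressed': pressed ? 'true' : 'false'   // current toggle state
  };
}

// In a browser, apply them to the element:
//   var div = document.getElementById('fakeButton');
//   var attrs = ariaButtonAttributes(false);
//   for (var name in attrs) div.setAttribute(name, attrs[name]);
```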

Read full post


Those User Interfaces / The Myth of the Required Field

The world of software product development is full of myths and misconceptions. To move forward instead of treading water, they absolutely must be demolished. Today, one of the most entrenched and also rather harmful misconceptions: the "myth of the required field".

This applies to practically any system that uses forms for data entry. A required field is a form field without which the system will not accept your input. The overwhelming majority of software developers believe that the required fields should be:
  1. All fields necessary from the domain's point of view (for example, a person's full name and date of birth, if we're talking about a passport office);
  2. All fields necessary for the system to function (those without which its algorithms won't work; for example, the date on which service delivery starts, so that charges can be calculated);
  3. Important fields: ones that aren't strictly necessary but are desirable to fill in (for example, the justification for a change being made), on the theory that it's better to make the user sweat when it isn't needed than to let them forget to enter a value when it is.
As you can see, this is a whole complex of myths, and dispelling them requires scrupulous, methodical work. So let's start with two other misconceptions.
Read more →

A pixel is not a pixel is not a pixel

Shared by arty
the triumph of legacy, now on mobile too :(

Yesterday John Gruber wrote about the upped pixel density in the upcoming iPhone (960x640 instead of 480x320), and why Apple did this. He also wondered what the consequences for web developers would be.

Now I happen to be deeply engaged in cross-browser research of widths and heights on mobile phones, and can state with reasonable certainty that in 99% of the cases these changes will not impact web developers at all.

The remaining 1% could be much more tricky, but I expect Apple to cater to this problem by inserting an intermediate layer of pixels. (Later John pointed out that such a layer already exists on Android.)

One caveat before we start: because they’re unimportant to web developers I have mostly ignored the formal screen sizes, and I’m not really into discussing the ins and outs of displays, pixel densities, and other complicated concepts. So I might use the wrong terminology here, for which I apologise in advance.

What web developers need

I do know what web developers are interested in, however. They need CSS pixels. That is, the “pixels” that are used in CSS declarations such as width: 300px or font-size: 14px.

These pixels have nothing to do with the actual pixel density of the device, or even with the rumoured upcoming intermediate layer. They’re essentially an abstract construct created specifically for us web developers.

It’s easiest to explain when we consider zooming. If the user zooms in, an element with width: 300px takes up more and more of the screen, and thus becomes wider and wider when measured in device (physical) pixels. In CSS pixels, however, the width remains 300px, and the zooming effect is created by expanding CSS pixels as much as is needed.

When the zooming factor is exactly 100%, one CSS pixel equals one device pixel (though the upcoming intermediate layer will take the place of device pixels here.) The image below depicts that. Not much to see here, since one CSS pixel exactly overlaps one device pixel.

(I should probably warn you that “zoom 100%” has little meaning in web development. Zooming level is unimportant to us; what we need to know is how many CSS pixels currently fit on the screen.)

The following two images illustrate what happens when the user zooms. The first shows device pixels (the dark blue background) and CSS pixels (the semi-transparent foreground) when the user has zoomed out. The CSS pixels have become smaller; one device pixel overlaps several CSS pixels. The second image shows device and CSS pixels when the user has zoomed in. One CSS pixel now overlaps several device pixels.

Thus our element with width: 300px is always exactly 300 CSS pixels wide, and how many device pixels that equals is up to the current zooming factor.

(You can calculate that factor by dividing screen.width by window.innerWidth — on the iPhone. Browser incompatibilities are rife here; expect a full report in the not-too-distant future. Besides, as a web developer you’re not interested in the zooming factor, but in how many pixels (device or CSS) fit on the device screen.)
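The calculation above can be sketched as a tiny helper; the numbers in the comments assume the pre-Retina iPhone discussed here:

```javascript
// Zoom factor, per the text: screen.width divided by window.innerWidth.
function zoomFactor(screenWidth, innerWidth) {
  return screenWidth / innerWidth;
}

// iPhone fully zoomed out on a 980px-wide layout:
//   zoomFactor(320, 980) -> roughly 0.33 (one third)
// No zoom at all:
//   zoomFactor(320, 320) -> 1
```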

This system will not change. If it did, all iPhone-optimised sites would become severely un-optimised in a hurry, and that’s something Apple wants to prevent at all cost.

Thus, a fully zoomed-out website would still display at 980 CSS pixels, and how many device pixels that equals is unimportant to us.

The tricky bits

However, there are two tricky bits: the device-width media query and the <meta name="viewport" content="width=device-width"> tag. Both work with device pixels, and not with CSS pixels, because they report on the context of the web page, and not on its inner CSS workings.

The media query

The device-width media query measures the width of the device in device pixels. The width media query measures the total width of the page in CSS pixels, which, for reasons I’ll explain later, is at least 980px on the iPhone.

The device-width media query works as follows:

div.sidebar {
	width: 300px;
}

@media all and (max-device-width: 320px) {
	/* styles assigned when device width is 320px or less */
	div.sidebar {
		width: 100px;
	}
}

Now the sidebar is 300 CSS pixels wide, except when the device width is 320 device pixels or less, in which case it becomes 100 CSS pixels wide. (You still follow? This is complicated.)

By the way, in theory you could use a media query that queries the device screen in centimeters or inches (@media all and (max-device-width: 9cm)). Unfortunately support ranges from poor to nonexistent, even on the iPhone. The problem is that physical units such as inches are usually translated to (CSS) pixels; thus width: 1in equals 96 pixels in all browsers I've tested so far (and that’s quite a few). So these media queries are unreliable.
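The 96-pixels-per-inch translation mentioned above is easy to express; a sketch of why a 9cm media query ends up meaning roughly 340 CSS pixels rather than a physical distance:

```javascript
// Browsers commonly hard-code 1in = 96 CSS pixels (and 1in = 2.54cm),
// so "physical" units never reflect the real screen size.
function inchesToCssPixels(inches) {
  return inches * 96;
}
function cmToCssPixels(cm) {
  return (cm / 2.54) * 96;
}

// width: 1in           -> inchesToCssPixels(1) = 96 pixels
// max-device-width: 9cm -> about 340 CSS pixels, whatever the display
```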

The <meta> tag

In general <meta name="viewport" content="width=device-width"> is even more useful. This tag, originally Apple-proprietary but meanwhile supported by many more mobile browsers, actually makes the layout viewport fit the device exactly.

Now what is the layout viewport? It’s the area (in CSS pixels) that the browser uses to calculate the dimensions of elements with percentage widths, such as div.sidebar {width: 20%}. It’s usually quite a bit larger than the device screen: 980px on the iPhone, 850px on Opera, 800px on Android, etc.

If you add <meta name="viewport" content="width=device-width">, the width of this layout viewport is constrained to the device width in device pixels; 320 of them in the iPhone’s case.

That matters if your pages are narrow enough to fit in the screen. Take this page without any CSS width statement and without the <meta> tag. It stretches over the full available width of the layout viewport.

This is probably not what you want. You want to fit the text nicely on the screen. That’s the task of <meta name="viewport" content="width=device-width">. When you add it, the layout viewport is contracted (to 320px in the case of the iPhone), and the text fits.
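The effect of the tag can be modelled with a small helper; 980 and 320 are the iPhone numbers used throughout this piece:

```javascript
// Which layout viewport width the browser uses, per the description above.
function layoutViewportWidth(defaultWidth, deviceWidth, hasViewportMeta) {
  return hasViewportMeta ? deviceWidth : defaultWidth;
}

// iPhone without the tag: layoutViewportWidth(980, 320, false) -> 980
// iPhone with the tag:    layoutViewportWidth(980, 320, true)  -> 320
// In a browser, the live value is document.documentElement.clientWidth.
```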

Apple’s changes

Now what impact will Apple’s resolution changes have on the device-width media query and the <meta> tag? Of course I cannot be certain, but I expect that nothing will change for web developers.

The <meta> tag

The <meta> tag is easiest to explain. Apple has deliberately invented it precisely in order to allow people to fit their content on an iPhone screen, and has pushed it with developers. That means that it can’t afford to change the device width as read out by the <meta> tag now.

In fact, the Nexus One has already solved this problem. Its official screen width (in portrait mode) is 480px, but when you apply the <meta> tag it acts as if the screen width is 320px, 2/3rds of the official width.

If I understand correctly, this is what John Gruber is saying when talking about the Nexus’s display, its missing sub-pixel, and thus a third fewer pixels. That fits the Nexus interpretation of the <meta> tag exactly.

So basically Google has already inserted a layer of what are apparently called dips; device-independent pixels. This layer comes between the official, reported screen size and the CSS pixels web developers work with.
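WebKit browsers already expose the device-pixels-per-dip ratio as window.devicePixelRatio; a sketch of the arithmetic (1.5 is the Nexus One's ratio):

```javascript
// Convert dips (device-independent pixels) to physical device pixels.
function dipsToDevicePixels(dips, ratio) {
  return dips * ratio;
}

// Nexus One: 320 dips at ratio 1.5 -> 480 device pixels.
// In a browser: var ratio = window.devicePixelRatio || 1;
```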

I expect the new iPhone to copy the Nexus trick and report the screen size as 320px (half of the formal resolution, in other words) when queried by the <meta> tag. It’s half and not two-thirds because the pixel density of the new iPhone is higher than the Nexus (or something).

The media query

That leaves the device-width media query as the sole problem area. On the Nexus it uses 480px as the screen width, despite the fact that here, too, 320px may be more appropriate. We’ll have to see what Apple does here.

The more fundamental question is whether the dips are also going to be used for media queries. On the whole I’d say we want that; formal device size is unimportant to web developers: we want to know how much content we can get on the screen, and it seems dips are most suited for that.

Unfortunately the Nexus does not do that right now; as far as media queries are concerned the device-width is still 480px, and not 320px. But maybe Apple can solve this problem for web developers.

So the situation is quite clear for normal websites and for those that use the <meta> tag; less clear when it comes to media queries.

Stay tuned.

Ruby-style Blocks in Python

Ruby-style Blocks in Python. Yes, yes, yes, yes. A proposal for multi-line lambda support in Python that doesn’t trip up on significant whitespace. If this gets in before the proposed feature freeze I’ll be a very happy Pythonista. UPDATE: This is a post from over a year ago, and it looks like the proposal has since stalled.

Opera Mobile 10 and the Opera Widgets Mobile Emulator on your desktop

Making sure that your site looks great and works exactly as it should in mobile browsers can often be a tedious process. With Opera Mobile 10 for Windows, Linux and Mac we offer a native application that can be run directly from your desktop machine. And with the integrated Opera Widgets Mobile Emulator, developing mobile optimized widgets has never been easier.

RAIC? What's that?

RAIC? What’s that? (via). “Redundant Array of Independent Cloud providers”. Solve the cloud lock-in problem by storing data with multiple different providers from the start.

JavaScript / seedJS: a CommonJS package manager



The SproutCore folks have introduced a package manager for the CommonJS standard (it currently supports node.js as the target system).
Read more →

Jeremy Zawodny: Massive CouchDB Brain Dump

Jeremy Zawodny
Massive CouchDB Brain Dump - http://blog.mattwoodward.com/massive...
arty liked this
wow, lots of good info! - Jeremy Zawodny

The Linux kernel is boring corporate software pretending to still be community developed

RT @fxn: "The Linux kernel is boring corporate software pretending to still be community developed." http://www.jfplayhouse.com/2010...

Stack Overflow Blog: OpenID, One Year Later

Stack Overflow Blog: OpenID, One Year Later. Google’s support is a huge deal—61% of Stack Overflow accounts use Google. Google’s implementation of directed identity has caused problems though, since Google provide a different OpenID for each domain making it hard for Stack Overflow, Server Fault and Super User to correlate accounts. Their solution is to require a (verified) e-mail address from Google OpenID users using sreg and use that as a key for the accounts.

Google / Google will open up VP8 for HTML5 Video in May

According to NewTeeVee, an official announcement that the VP8 video codec is going open is expected at the Google I/O conference coming up in May.
The VP8 codec came under Google's control after the acquisition of On2 in February.
So, alongside Ogg Theora and H.264, a new player from Google enters the video standards field. According to the codec's developers, VP8 surpasses the aforementioned codecs in quality across the board. (Back in 2008 an On2 developer was already claiming better compression quality than H.264.)
Of course, we shouldn't expect support for this codec to show up quickly in IE9 and Safari: we know how much Microsoft and Apple "love" open standards, and besides, they seem to have settled all their questions and standardized on H.264. Well, we'll see; after all, we still have Firefox, Chrome and Opera.
For me the main thing is that the video is high quality and loads fast; if that turns out to be VP8, fine by me.

Of Building Blocks, Rosetta Stones and Geographic Identifiers

Of Building Blocks, Rosetta Stones and Geographic Identifiers. Yahoo! GeoPlanet is now mapped to identifiers from other gazetteers such as GeoNames, FIPS and IATA—and those identifiers are available via the GeoPlanet API.

RFC5785: Defining Well-Known Uniform Resource Identifiers

RFC5785: Defining Well-Known Uniform Resource Identifiers (via). Sounds like a very good idea to me: defining a common prefix of /.well-known/ for well-known URLs (common metadata like robots.txt) and establishing a registry for all such files. OAuth, OpenID and other decentralised identity systems can all benefit from this.

Google is sponsoring mobile Theora

Douglas Crockford about kinds of security approaches

“Many people get discouraged when you talk about security because so many popular approaches to security fail. They just fail. For example, there’s security by inconvenience, which I’m sure you’ve seen practiced at the airport where they put us in corrals like cattle and run us around. It’s proven that that is not an effective security mechanism, but they do it anyway because we’ve got to do something; we have to at least put on a show that we’re making you safe. That makes people feel better, so it accomplishes that — maybe — but not much else. We’ve tried security by obscurity, where you try to make your system so complicated that the attackers can’t understand it. That doesn’t work. We’ve seen people try to inject speed bumps into the information super highway with the thinking that’s going to slow the attackers down — it doesn’t. We’ve seen confusion of security with identity, that if we know who wrote the code then that tells us something about how safe it is to use. That turns out to be completely useless. What we’re left with is security by vigilance, and that doesn’t really work either.”

- Douglas Crockford about security approaches

Where do users expect to find webpage objects?

Shared by arty
true, only online shops and news portals are covered
Patterns: Where do users expect to find webpage objects? (from Ivan Burmistrov's post, with links: http://b23.ru/eaf5) - http://interruptions.net/private...
Online shops, news portals, corporate sites, and consolidated "expectation models and stereotypes"; very cool. - ***

What’s wrong with extending the DOM

I was recently surprised to find out how little the topic of DOM extensions is covered on the web. What’s disturbing is that downsides of this seemingly useful practice don’t seem to be well known, except in certain secluded circles. The lack of information could well explain why there are scripts and libraries built today that still fall into this trap. I’d like to explain why extending DOM is generally a bad idea, by showing some of the problems associated with it. We’ll also look at possible alternatives to this harmful exercise.

But first of all, what exactly is DOM extension? And how does it all work?

How DOM extension works

DOM extension is simply the process of adding custom methods/properties to DOM objects. Custom properties are those that don’t exist in a particular implementation. And what are the DOM objects? These are host objects implementing Element, Event, Document, or any of dozens of other DOM interfaces. During extension, methods/properties can be added to objects directly, or to their prototypes (but only in environments that have proper support for it).

The most commonly extended objects are probably DOM elements (those that implement Element interface), popularized by Javascript libraries like Prototype and Mootools. Event objects (those that implement Event interface), and documents (Document interface) are often extended as well.

In an environment that exposes the prototype of Element objects, an example of DOM extension would look something like this:

  Element.prototype.hide = function() {
    this.style.display = 'none';
  };

  var element = document.createElement('p');

  element.style.display; // ''
  element.hide();
  element.style.display; // 'none'

As you can see, the “hide” function is first assigned to a hide property of Element.prototype. It is then invoked directly on an element, and the element’s “display” style is set to “none”.

The reason this “works” is that the object referred to by Element.prototype is actually one of the objects in the prototype chain of the P element. When the hide property is resolved on the element, it’s searched for throughout the prototype chain until found on this Element.prototype object.

In fact, if we were to examine the prototype chain of a P element in some of the modern browsers, it would usually look like this:

  // "^" denotes connection between objects in prototype chain

  document.createElement('p');
    ^
  HTMLParagraphElement.prototype
    ^
  HTMLElement.prototype
    ^
  Element.prototype
    ^
  Node.prototype
    ^
  Object.prototype
Note how the nearest ancestor in the prototype chain of the P element is the object referred to by HTMLParagraphElement.prototype. This is an object specific to the type of the element. For a P element, it’s HTMLParagraphElement.prototype; for a DIV element, it’s HTMLDivElement.prototype; for an A element, it’s HTMLAnchorElement.prototype, and so on.

But why such strange names, you might ask?

These names actually correspond to interfaces defined in DOM Level 2 HTML Specification. That same specification also defines inheritance between those interfaces. It says, for example, that “… HTMLParagraphElement interface have all properties and functions of the HTMLElement interface …” (source) and that “… HTMLElement interface have all properties and functions of the Element interface …” (source), and so on.

Quite obviously, if we were to create a property on the “prototype object” of a paragraph element, that property would not be available on, say, an anchor element:

  HTMLParagraphElement.prototype.hide = function() {
    this.style.display = 'none';
  };

  typeof document.createElement('a').hide; // "undefined"
  typeof document.createElement('p').hide; // "function"

This is because the anchor element’s prototype chain never includes the object referred to by HTMLParagraphElement.prototype, but instead the one referred to by HTMLAnchorElement.prototype. To “fix” this, we can assign to a property of an object positioned further up the prototype chain, such as the one referred to by HTMLElement.prototype, Element.prototype or Node.prototype.

Similarly, creating a property on Element.prototype would not make it available on all nodes, but only on nodes of element type. If we wanted to have property on all nodes (e.g. text nodes, comment nodes, etc.), we would need to assign to property of Node.prototype instead. And speaking of text and comment nodes, this is how interface inheritance usually looks for them:

  document.createTextNode('foo'); // < Text.prototype < CharacterData.prototype < Node.prototype
  document.createComment('bar'); // < Comment.prototype < CharacterData.prototype < Node.prototype
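A sketch of the difference (isTextNode is a made-up method name, not a DOM API): a method assigned to Node.prototype, where it's exposed, reaches text and comment nodes as well as elements:

```javascript
// nodeType 3 is a text node; elements are 1, comments are 8.
function isTextNodeType(nodeType) {
  return nodeType === 3;
}

// Browser-only sketch of extending Node.prototype (where exposed):
//   Node.prototype.isTextNode = function() {
//     return isTextNodeType(this.nodeType);
//   };
//   document.createTextNode('foo').isTextNode(); // true
//   document.createElement('p').isTextNode();    // false
```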

Now, it's important to understand that the exposure of these DOM object prototypes is not guaranteed. The DOM Level 2 specification merely defines interfaces, and the inheritance between those interfaces. It does not state that there should exist a global Element property referencing an object that's the prototype of all objects implementing the Element interface. Neither does it state that there should exist a global Node property referencing an object that's the prototype of all objects implementing the Node interface.

Internet Explorer 7 (and below) is an example of such environment; it does not expose global Node, Element, HTMLElement, HTMLParagraphElement, or other properties. Another such browser is Safari 2.x (and most likely Safari 1.x).

So what can we do in environments that don't expose these global "prototype" objects? A workaround is to extend DOM objects directly:

  var element = document.createElement('p');
  element.hide = function() {
    this.style.display = 'none';
  };

  element.style.display; // ''
  element.hide();
  element.style.display; // 'none'

What went wrong?

Being able to extend DOM elements through prototype objects sounds amazing. We are taking advantage of Javascript's prototypal nature, and scripting the DOM becomes very object-oriented. In fact, DOM extension seemed so temptingly useful that a few years ago the Prototype Javascript library made it an essential part of its architecture. But behind this seemingly innocuous practice hides a huge load of trouble. As we'll see in a moment, when it comes to cross-browser scripting, the downsides of this approach far outweigh any benefits. DOM extension is one of the biggest mistakes Prototype.js has ever made.

So what are these problems?

Lack of specification

As I have already mentioned, the exposure of "prototype objects" is not part of any specification. DOM Level 2 merely defines interfaces and their inheritance relations. For an implementation to conform to DOM Level 2 fully, there's no need to expose those global Node, Element, HTMLElement, etc. objects. Neither is there a requirement to expose them in any other way. Given that it's always possible to extend DOM objects manually, this doesn't seem like a big issue. But the truth is that manual extension is a rather slow and inconvenient process (as we will see shortly). And the fact that fast, "prototype object"-based extension is merely somewhat of a de-facto standard among a few browsers makes this practice unreliable when it comes to future adoption or portability across non-conventional platforms (e.g. mobile devices).
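A feature test along these lines (my sketch, not any library's API) is the usual way to find out whether prototype-based extension is even possible in the current environment:

```javascript
// True when the environment exposes a usable Element.prototype
// (IE 6/7 and Safari 2.x do not, as described above).
function supportsElementPrototype(global) {
  return typeof global.Element !== 'undefined' &&
         typeof global.Element.prototype === 'object' &&
         global.Element.prototype !== null;
}

// In a browser: supportsElementPrototype(window)
```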

Host objects have no rules

Next problem with DOM extension is that DOM objects are host objects, and host objects are the worst bunch. By specification (ECMA-262 3rd. ed), host objects are allowed to do things, no other objects can even dream of. To quote relevant section [8.6.2]:

Host objects may implement these internal methods with any implementation-dependent behaviour, or it may be that a host object implements only some internal methods and not others.

The internal methods the specification talks about are [[Get]], [[Put]], [[Delete]], etc. Note how it says that internal method behavior is implementation-dependent. What this means is that it's absolutely normal for a host object to throw an error on invocation of, say, the [[Get]] method. And unfortunately, this isn't just a theory. In Internet Explorer, we can easily observe exactly this—an example of a host object's [[Get]] throwing an error:

  document.createElement('p').offsetParent; // "Unspecified error."
  new ActiveXObject("MSXML2.XMLHTTP").send; // "Object doesn't support this property or method"

Extending DOM objects is kind of like walking in a minefield. By definition, you are working with something that's allowed to behave in an unpredictable and completely erratic way. And not only can things blow up; there's also a possibility of silent failures, which is an even worse scenario. An example of erratic behavior is the applet, object and embed elements, which in certain cases throw errors on assignment of properties. A similar disaster happens with XML nodes:

  var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
  xmlDoc.firstChild.foo = 'bar'; // "Object doesn't support this property or method"

There are other cases of failures in IE, such as document.styleSheets[99999] throwing "Invalid procedure call or argument" or document.createElement('p').filters throwing "Member not found." exceptions. But MSHTML DOM is not the only problem. Trying to overwrite the "target" property of an event object in Mozilla throws a TypeError, complaining that the property has only a getter (meaning that it's read-only and cannot be set). Doing the same thing in WebKit results in a silent failure, where "target" continues to refer to the original object after assignment.

When creating API for working with event objects, there's now a need to consider all of these readonly properties, instead of focusing on concise and descriptive names.

A good rule of thumb is to avoid touching host objects as much as possible. Trying to base architecture on something that—by definition—can behave so sporadically is hardly a good idea.

Chance of collisions

An API based on DOM element extensions is hard to scale. It's hard to scale for developers of the library (when adding new or changing core API methods) and for library users (when adding domain-specific extensions). The root of the issue is a likely chance of collisions. DOM implementations in popular browsers usually all have proprietary APIs. What's worse is that these APIs are not static, but constantly change as new browser versions come out. Some parts get deprecated; others are added or modified. As a result, the set of properties and methods present on DOM objects is somewhat of a moving target.

Given the huge number of environments in use today, it becomes impossible to tell whether a certain property is not already part of some DOM. And if it is, can it be overwritten? Or will it throw an error when you attempt to do so? Remember that it's a host object! And if we can quietly overwrite it, how would that affect other parts of the DOM? Would everything still work as expected? If everything is fine in one version of a browser, is there a guarantee that the next version doesn't introduce a same-named property? The list of questions goes on.

Some examples of proprietary extensions that broke Prototype are the wrap property on textareas in IE (colliding with the Element#wrap method), and the select method on form control elements in Opera (colliding with the Element#select method). Even though both of these cases are documented, having to remember these little exceptions is annoying.

Proprietary extensions are not the only problem. HTML5 brings new methods and properties to the table, and most of the popular browsers have already started implementing them. At some point, Web Forms defined a replace property on input elements, which Opera decided to add to their browser. And once again, it broke Prototype, due to a conflict with the Element#replace method.
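One defensive move, sketched here as a hypothetical safeDefine helper, is to refuse to shadow any name the object already has:

```javascript
// Add a method only if the name is free; a collision with an existing
// (possibly proprietary or future-standard) property aborts the attempt.
function safeDefine(obj, name, fn) {
  if (name in obj) {
    return false; // collision: leave the existing property alone
  }
  obj[name] = fn;
  return true;
}
```

This only dodges today's collisions, of course; a future browser version can still introduce the same name.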

But wait, there's more!

Due to a long-standing DOM Level 0 tradition, there's a "convenient" way to access form controls off of form elements, simply by their name. What this means is that instead of using the standard elements collection, you can access a form control like this:

  <form action="">
    <input name="foo">
  </form>

  <script type="text/javascript">
    document.forms[0].foo; // non-standard access
    // compare to
    document.forms[0].elements.foo; // standard access
  </script>

So, say you extend form elements with a login method, which, for example, checks validation and submits a login form. If you also happen to have a form control named "login" (which is pretty likely, if you ask me), what happens next is not pretty:

  <form action="">
    <input name="login">
  </form>
  <script type="text/javascript">
    HTMLFormElement.prototype.login = function() {
      return 'logging in';
    };
    $(myForm).login(); // boom!
    // $(myForm).login references the input element, not the `login` method
  </script>

Every named form control shadows properties inherited through the prototype chain, so the chance of collisions and unexpected errors on form elements is even higher.

The situation is similar with named form elements, which can be accessed directly off document by their names:

  <form name="foo"></form>
  <script type="text/javascript">
    document.foo; // [object HTMLFormElement]
  </script>

When extending document objects, there's now an additional risk of form names conflicting with extensions. And what if the script runs in a legacy application with tons of rusty HTML, where changing or removing such names is not a trivial task?

Employing some kind of prefixing strategy can alleviate the problem, but it will probably also add extra noise.
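A hedged sketch of such a prefixing strategy (names here are illustrative, not any library's real API): prefixed method names are far less likely to collide with current or future DOM properties, at the cost of noisier call sites.

```javascript
// Copy library methods onto an element under a prefix, leaving any
// same-named native or proprietary properties untouched.
function extendWithPrefix(element, methods, prefix) {
  for (var name in methods) {
    element[prefix + name] = methods[name];
  }
  return element;
}

var textarea = { wrap: 'soft' }; // stand-in for an IE textarea
extendWithPrefix(textarea, {
  wrap: function() { return 'wrapped'; }
}, '_');

textarea.wrap;    // still 'soft': the proprietary property survives
textarea._wrap(); // 'wrapped': the library method, under its prefix
```

The trade-off is visible immediately: every call in application code now carries the prefix.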

Not modifying objects you don't own is the ultimate recipe for avoiding collisions. Breaking this rule already got Prototype into trouble, when it overwrote document.getElementsByClassName with its own custom implementation. Following the rule also means playing nice with other scripts running in the same environment, whether or not they modify DOM objects.

Performance overhead

As we've seen before, browsers that don't support element extensions (IE 6 and 7, Safari 2.x, etc.) require manual object extension. The problem is that manual extension is slow, inconvenient, and doesn't scale. It's slow because an object needs to be extended with what's often a large number of methods and properties; ironically, these browsers are the slowest ones around. It's inconvenient because an object has to be extended before it can be operated on: instead of document.createElement('p').hide(), you need something like $(document.createElement('p')).hide(). This, by the way, is one of the most common stumbling blocks for Prototype beginners. Finally, manual extension doesn't scale because adding API methods affects performance pretty much linearly: if there are 100 methods on Element.prototype, 100 assignments have to be made to the element in question; if there are 200 methods, 200 assignments, and so on.
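The linear cost can be made concrete with a small sketch (a toy model, not Prototype's code): extending one element means one assignment per API method, so a 200-method API does twice the work of a 100-method one for every element touched.

```javascript
// Build a fake API with `size` methods.
function makeApi(size) {
  var api = {};
  for (var i = 0; i < size; i++) {
    api['method' + i] = function() {};
  }
  return api;
}

// Manually extend an element, counting the assignments performed.
function extend(element, api) {
  var assignments = 0;
  for (var name in api) {
    element[name] = api[name];
    assignments++;
  }
  return assignments;
}

extend({}, makeApi(100)); // 100 assignments for a single element
extend({}, makeApi(200)); // 200 assignments for the same element
```

Multiply that by every element a query touches and the overhead becomes hard to ignore.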

Another performance hit comes with event objects. Prototype follows a similar approach with events, extending them with a certain set of methods. Unfortunately, some events in browsers (mousemove, mouseover, mouseout, resize, to name a few) can fire literally dozens of times per second. Extending each one of them is an incredibly expensive process. And what for? Just to invoke what could be a single method on the event object?

Finally, once you start extending elements, the library API most likely needs to return extended elements everywhere. As a result, querying methods like $$ can end up extending every single element in a query. It's easy to imagine the performance overhead of such a process when we're talking about hundreds or thousands of elements.

IE DOM is a mess

As shown in the previous section, manual DOM extension is a mess. Manual DOM extension in IE is even worse, and here's why.

We all know that in IE, circular references between host and native objects leak memory and are best avoided. Adding methods to DOM elements is a first step toward creating such circular references. And since older versions of IE don't expose "object prototypes", there's not much to do but extend elements directly. Circular references and leaks are almost inevitable; in fact, Prototype suffered from them for most of its lifetime.

Another problem is the way the IE DOM maps properties and attributes to each other. The fact that attributes live in the same namespace as properties increases the chance of collisions and all kinds of unexpected inconsistencies. What happens if an element has a custom "show" attribute and is then extended by Prototype? You might be surprised, but the "show" attribute gets overwritten by Prototype's Element#show method: extendedElement.getAttribute('show') returns a reference to a function, not the value of the "show" attribute. Similarly, extendedElement.hasAttribute('hide') says "true", even if there was never a custom "hide" attribute on the element. Note that IE<8 lacks hasAttribute, but we can still see the attribute/property conflict there: typeof extendedElement.attributes['show'] != "undefined".

Finally, one of the lesser-known downsides is that adding properties to DOM elements causes a reflow in IE, so the mere extension of an element becomes quite an expensive operation. This actually makes sense, given the deficient mapping of attributes and properties in its DOM.

Bonus: browser bugs

If everything we've been over so far is not enough (in which case, you're probably a masochist), here are a couple more bugs to top it all off.

In some versions of Safari 3.x, there's a bug where navigating to a previous page via the back button wipes out all host object extensions. Unfortunately, the bug is undetectable, so to work around it, Prototype has to do something horrible: it sniffs for that version of WebKit and explicitly disables the bfcache by attaching an "unload" event listener to window. A disabled bfcache means the browser has to re-fetch the page when navigating via the back/forward buttons, instead of restoring it from the cached state.

Another bug involves HTMLObjectElement.prototype and HTMLAppletElement.prototype in IE8, and the way object and applet elements don't inherit from those prototype objects. You can assign to a property of HTMLObjectElement.prototype, but that property is never "resolved" on an object element. Ditto for applets. As a result, those elements always have to be extended manually, which is yet another overhead.

IE8 also exposes only a subset of prototype objects compared to other popular implementations. For example, there's HTMLParagraphElement.prototype (as well as other type-specific ones) and Element.prototype, but no HTMLElement (and so no HTMLElement.prototype) or Node (and so no Node.prototype). Element.prototype in IE8 also doesn't inherit from Object.prototype. These are not bugs per se, but something to keep in mind nevertheless: there's nothing good about trying to extend a non-existent Node, for example.
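A small, hedged detection sketch (illustrative, not any library's actual code): before touching a prototype object, verify that it actually exists in the current environment, since IE8 exposes Element.prototype but neither Node nor HTMLElement.

```javascript
// Returns true only when `name` resolves to a constructor with a
// prototype object on the given global, so extension is even possible.
function canExtendPrototype(global, name) {
  var ctor = global[name];
  return !!ctor && typeof ctor.prototype !== 'undefined';
}

// In IE8 one would expect:
//   canExtendPrototype(window, 'Element')     -> true
//   canExtendPrototype(window, 'Node')        -> false
//   canExtendPrototype(window, 'HTMLElement') -> false
```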

Wrappers to the rescue

One of the most common alternatives to this whole mess of DOM extension is object wrappers. This is the approach jQuery has taken from the start, and a few other libraries followed later on. The idea is simple: instead of extending elements or events directly, create a wrapper around them and delegate methods accordingly. No collisions, no need to deal with host object madness, easier management of leaks, easier operation in the dysfunctional MSHTML DOM, better performance, saner maintenance, and painless scaling.
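The wrapper idea can be sketched in a few lines. This is a minimal illustration in the spirit of jQuery (the names Wrapper and $w are made up for this example, not any library's real API); methods live on the wrapper's prototype, so host objects are never touched.

```javascript
// Wrap a DOM element; all library methods go on Wrapper.prototype.
function Wrapper(element) {
  this.element = element;
}

Wrapper.prototype.hide = function() {
  this.element.style.display = 'none';
  return this; // return the wrapper to allow chaining
};

Wrapper.prototype.show = function() {
  this.element.style.display = '';
  return this;
};

function $w(element) {
  return new Wrapper(element);
}

// usage (in a browser): $w(document.createElement('p')).hide();
```

No matter how many methods the library grows, creating a wrapper stays a single cheap object allocation, and the underlying element remains pristine.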

And you still avoid procedural approach.

Prototype 2.0

The good news is that Prototype's mistake is going away in the next major version of the library. As far as I can tell, all core developers understand the problems mentioned above and agree that the wrapper approach is the saner way forward. I'm not sure what the plans are for other DOM-extending libraries like MooTools. From what I can see, they already use wrappers for events but still extend elements. I'm certainly hoping they move away from this madness in the near future.

Controlled environments

So far, we've looked at DOM extension from the point of view of a cross-browser scripting library. In that context, it's clear how troublesome the idea really is. But what about controlled environments, where a script runs in only one or two environments, such as those based on Gecko, WebKit, or any other modern non-MSHTML DOM? Perhaps it's an intranet application accessed through certain browsers, or a desktop WebKit-based app.

In that case, the situation is definitely better. Let's revisit the points listed above.

The lack of specification becomes somewhat irrelevant, as there's no need to worry about compatibility with other platforms or future editions. Most non-MSHTML DOM environments have exposed DOM object prototypes for quite a while and are unlikely to drop them in the near future. There's still a possibility of change, however.

The point about host object unreliability also loses weight, since host objects in Gecko- or WebKit-based DOMs are much, much saner than those in the MSHTML DOM. But they are still host objects, and so should be treated with care. Besides, there are the read-only properties covered before, which can easily cripple the flexibility of an API.

The point about collisions still holds. These environments support non-standard form control access, have proprietary APIs, and are constantly implementing new HTML5 features. Modifying objects you don't own is still a wicked idea and can lead to hard-to-find bugs and inconsistencies.

The performance overhead is practically non-existent, as these DOMs support prototype-based extension. Performance can actually be even better than with, say, the wrapper approach, as there's no need to create any additional objects in order to invoke methods (or access properties) on DOM objects.

Extending the DOM in a controlled environment sure seems like a perfectly healthy thing to do. But even though the main remaining problem is collisions, I would still advise employing wrappers instead. It's a safer way forward, and it will save you maintenance overhead in the future.


Hopefully, you can now clearly see the truth behind what looks like an elegant approach. Next time you design a JavaScript framework, just say no to DOM extensions. Say no, and save yourself from all the trouble of maintaining a cumbersome API and suffering unnecessary performance overhead. If, on the other hand, you're considering employing a JavaScript library that extends the DOM, stop for a second and ask yourself whether you're willing to take the risk. Is the elusive convenience of DOM extension really worth all the trouble?

Using the lang attribute makes a difference

About a year ago I posted a Quick Tip titled Specify each HTML document’s main natural language. The reason is that software like screen readers can use this info to adjust the way they speak text.

But do they really do that? Well, it depends. You need to use a screen reader that supports language switching and can speak the natural languages of the document you’re viewing. One example of when it works as expected is VoiceOver for the iPhone and for the iPod touch.

Read full post


squadette: on naming variables

I've told this story to various people, and it seems it's finally time to publish it (the final push came from this post). Since I was merely "standing nearby", it may turn out that part of this story, or all of it, is purely a product of my imagination. Well, the direct participants will correct me in the comments, whatever.

In the early 2000s, I was starting my career as a web developer at a wonderful startup called E-Labs. Among other things, S., well known to many, was working on a project tentatively titled "the intergalactic bordello". It was a web interface for an escort agency. The ladies went for a grand an hour and up, were based in Slovenia (?), and would fly wherever you said.

The client was generous, so the site quickly grew features (for example, it had full localization into six European languages, rare for the time, including conversion of height and basic measurements between inches and centimeters depending on user settings). Toward the end of the project, the site gained a fully functional calendar where you could view a lady's availability and book her (or them) in advance, and, I believe, even make a prepayment somehow.

The client's business was growing too, and at some point the next assignment arrived: the agency now also provided escort young men.

And that is how the girls table got a "sex" column.

Prototype 1.7 RC1: Sizzle, layout/dimensions API, event delegation, and more

We've just tagged the first release candidate of Prototype 1.7: a major new version with some major new features.

Sizzle as the selector engine (or mix in your own)

With Prototype 1.7, we've finally realized our long-held goal of moving to Sizzle, the middleware selector engine used by jQuery and others. I wrote our previous selector engine, used since 1.5.1, but nevertheless I'm excited to switch to a more robust engine that's shared between frameworks.

So Sizzle is the new default. But there's more to it than that. In moving to Sizzle, we've modularized the selector engine entirely. If you want to use Diego Perini's NWMatcher library in place of Sizzle, you can. Just check out the source code and build like so:

rake dist SELECTOR_ENGINE=nwmatcher

If you're a sentimentalist, you can use the legacy Prototype selector engine by specifying SELECTOR_ENGINE=legacy_selector. Or add your own selector engine by creating a subdirectory in vendor/ and following some simple conventions.


Element#on

Element#on is a new way to access the Prototype event API. It provides first-class support for event delegation and simplifies event handler removal.

In its simplest form, Element#on works just like Element#observe:

$("messages").on("click", function(event) {
  // ...
});
An optional second argument lets you specify a CSS selector for event delegation. This encapsulates the pattern of using Event#findElement to retrieve the first ancestor element matching a specific selector. So this Prototype 1.6 code...

$("messages").observe("click", function(event) {
  var element = event.findElement("a.comment_link");
  if (element) {
    // ...
  }
});
...can be written more concisely with Element#on as:

$("messages").on("click", "a.comment_link", function(event, element) {
  // ...
});
Element#on differs from Element#observe in one other important way: its return value is an object with a #stop method. Calling this method will remove the event handler. (Technically, this is an instance of a new class called Event.Handler.) With this pattern, there's no need to retain a reference to the handler function just so you can pass it to Element#stopObserving later.

For example, in Prototype 1.6, where you'd need to write something like...

start: function() {
  this.clickHandler = function(event) {
    // ...
  };

  $("messages").observe("click", this.clickHandler);
},

stop: function() {
  $("messages").stopObserving("click", this.clickHandler);
}
...you can now write:

start: function() {
  this.clickHandler = $("messages").on("click", function(event) {
    // ...
  });
},

stop: function() {
  this.clickHandler.stop();
}
Also note that the Event.Handler class has a corresponding #start method that lets you re-attach an observer you've removed with #stop.

So, to review, Element#on is both a new approach to event observation and an implementation of event delegation. Feel free to eschew Element#observe and use Element#on exclusively; or use Element#on just for event delegation; or keep using Element#observe the way you always have.

Element.Layout: Your digital tape measure

The second major feature in 1.7 is Element.Layout, a class for pixel-perfect measurement of element dimensions and offsets.

Now you don't have to decide between properties like offsetWidth (which return numbers, but not the numbers you want) or retrieving computed styles (which have their own set of quirks and require a call to parseInt).

The simple case

If you want a one-off measurement of an element, use the new Element#measure:

$('troz').measure('width'); //-> 150
$('troz').measure('border-top'); //-> 5

// Offsets, too:
$('troz').measure('top'); //-> 226

The argument passed to measure is one of a handful of intuitive names, most of which are derived from their CSS equivalents. So width means the width of the content box, just like in CSS — but we throw in extra properties (e.g., padding-box-width, margin-box-height) for some common measurements. This approach gives you far more granularity than common DHTML properties like offsetWidth and clientHeight.

These measurements are guaranteed to be in pixels. Even in IE. (In fact, Prototype works around a handful of IE quirks that would ordinarily result in inaccurate measurements.) It can even measure elements that are hidden, as long as their parents are visible. (Like when you want to animate an element from a hidden state and need to know how tall it will be.)

The complex case

If you need to measure several things at once, though, Element#measure is not the most efficient way to do it. Often an element will need a bit of manipulation before it reports its dimensions accurately, which means measurements can be costly.

The Element.Layout class tries to minimize that cost. It's a read-only subclass of Hash that remembers values in order to avoid re-computing.

First, use Element#getLayout to obtain an instance of Element.Layout:

var layout = $('troz').getLayout();

Now use Element.Layout#get to retrieve values, using the same property names you used for Element#measure:

layout.get('width');  //-> 150
layout.get('height'); //-> 500

layout.get('padding-left');  //-> 10
layout.get('margin-left');   //-> 25
layout.get('border-top');    //-> 5
layout.get('border-bottom'); //-> 5

layout.get('padding-box-width'); //-> 170
layout.get('border-box-height'); //-> 510

layout.get('width');  //-> 150

Here's where the remembered values (or memoization, if you prefer) come in. When I ask for width, Prototype measures the element (which, as we discussed, is a costly operation) and returns a value. A few lines later, I ask for width again, and I get the same value. But this time it didn't do any measuring. It remembered the value from last time.

There's more. When I ask for border-box-height, Prototype knows that's just height plus border-top plus border-bottom. All three of those properties are already memoized, since I asked for them earlier, so it skips the measurement phase and just gives me the sum.

How does it know when an element's dimensions change? It doesn't. Don't hang onto an instance of Element.Layout for too long; it's meant for short-term efficiency, not long-term caching. You can grab a new instance by calling Element#getLayout again.
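The memoization described above can be sketched in a few lines (illustrative, not Prototype's actual source): measured values are cached on first access, and repeat reads skip the expensive measurement entirely.

```javascript
// A memoizing layout reader: `measure` is the expensive measurement
// function; results are cached per property name.
function Layout(measure) {
  this._measure = measure;
  this._cache = {};
  this.measurements = 0; // how many real measurements happened
}

Layout.prototype.get = function(property) {
  if (!(property in this._cache)) {
    this._cache[property] = this._measure(property);
    this.measurements++;
  }
  return this._cache[property];
};

// usage: two reads of 'width', but only one real measurement
var layout = new Layout(function(property) {
  return property === 'width' ? 150 : 0; // fake measurer for the sketch
});
layout.get('width'); // measured
layout.get('width'); // served from cache
```

This is also why a stale instance is dangerous: the cache has no idea the element changed underneath it.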

Believe it or not, this is the short version. Read the documentation to learn more.

JSON fixes, ES5 compliance

The JSON interface slated for ECMAScript 5 is already being implemented in major browsers. It uses many of the same method names as Prototype's existing JSON implementation, but with different behavior, so we rewrote ours to be ES5-compliant and to fall back to the native JSON support where possible. A few other methods, like Object.keys, received similar treatment.
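The native-with-fallback pattern described above can be sketched as follows (a hedged illustration, not Prototype's actual source; the function name stringifyJSON is made up for this example).

```javascript
// Prefer the browser's ES5-native JSON implementation when present;
// a pre-ES5 library would substitute its hand-rolled serializer in
// the fallback branch.
function stringifyJSON(value) {
  if (typeof JSON !== 'undefined' && typeof JSON.stringify === 'function') {
    return JSON.stringify(value);
  }
  // in pre-ES5 browsers, a hand-rolled serializer would run here
  throw new Error('no native JSON available');
}

stringifyJSON({ ok: true }); // '{"ok":true}' where native JSON exists
```

The same feature test extends naturally to JSON.parse and to methods like Object.keys that ES5 also standardizes.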

And, of course, bug fixes

Consult the CHANGELOG for further details.

Download, report bugs, and get help

As always: thanks to the many contributors who made this release possible!

Web development / A JavaScript OpenID authorization widget

Shared by arty
hooray, another reinvented wheel! https://rpxnow.com/
About half a year ago I got deeply into OpenID and everything connected with it. My main occupation during that time became the leisurely reading of specifications, forums, blogs, and Habr posts on OpenID topics.

All the knowledge I gained over that time I "materialized" in a project at the company where I work.

While studying the OpenID specification, its extensions (SREG, AX), and various add-ons, I got the idea of building a JavaScript widget with its own API layer, to help other developers who don't want to spend days and nights poring over the specs for different authorization methods and their extensions.

More on that below.
Read more →