Unicode is great. It defines a bunch of code points and a plethora of different encodings. One with security issues (UTF-7). One that is good (UTF-8). Two we could have avoided (UTF-16BE and UTF-16LE) but are now more or less stuck with, due to parts of JavaScript and the DOM being defined in 16-bit units. Got to love that. Two we are trying to kill (in e.g. HTML5) and for which we actually recently removed support in Opera (UTF-32BE and UTF-32LE). After all, just like fighting license proliferation, fighting character encoding proliferation is the good fight. In summary: Web software should use UTF-8, and Web browsers are probably stuck with UTF-16 internally, though using UTF-8 with 16-bit unit indexing internally might be better in theory.
I have been involved in two small battles between the CSS WG and the Internationalization Core WG (i18n WG). The first was about case-insensitive matching. You see, when the grass was green, the earth flat, and US-ASCII the only character encoding that really mattered, case-insensitive matching was a simple matter. A matches a and c matches C. HTTP is in fact still restricted to a very limited character set that can do only slightly more than US-ASCII: ISO-8859-1, also known as Latin-1 or l1, and actually treated by Web browsers as Windows-1252 due to our friends in Redmond. Unicode gave a different meaning to case-insensitive. I.e. it would make sense that e.g. ë case-insensitively matches Ë, right? Well yes, and this was the argument from the i18n WG. The thing is though, we were not dealing with a search engine of some sort, but rather with the design of a computer language. And although we keep getting more processing power and such, it is hardly useful to waste it on marginally useful complex features given that most of the language is US-ASCII compatible anyway. Worse is better.
The CSS WG ended up making user-defined constants (e.g. namespace prefixes) case-sensitive and language constants (e.g. property names) ASCII case-insensitive. Yay for sanity.
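To make the distinction concrete, here is a minimal sketch (mine, not the CSS WG's reference algorithm) of what ASCII case-insensitive matching amounts to in JavaScript:

    // Sketch: fold only the ASCII range A-Z onto a-z; everything else,
    // including non-ASCII letters like "ë", must match code unit for code unit.
    function asciiCaseInsensitiveEquals(a, b) {
      if (a.length !== b.length) return false;
      for (var i = 0; i < a.length; i++) {
        var x = a.charCodeAt(i);
        var y = b.charCodeAt(i);
        if (x >= 0x41 && x <= 0x5A) x += 0x20; // 'A'-'Z' -> 'a'-'z'
        if (y >= 0x41 && y <= 0x5A) y += 0x20;
        if (x !== y) return false;
      }
      return true;
    }

    asciiCaseInsensitiveEquals("COLOR", "color"); // true
    asciiCaseInsensitiveEquals("ë", "Ë");         // false: non-ASCII is not folded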
The second battle is going on now and it has been escalated to the near-useless and private Hypertext Coordination Group (Hypertext CG). It started with the i18n WG raising a seemingly innocent Last Call comment against the Selectors draft. It is about comparing strings again. Now some may think that comparing two strings is a simple matter. You ensure that both are in the same encoding (likely UTF-16 because, you know, legacy) and then put the == operator to use. Maybe you lowercase both strings first in case of a case-insensitive match. Well, as it turns out some people think this should be more complex, because otherwise the matching is biased towards the Western crowd, which is not affected by, drum drum drum, Unicode Normalization. As it turns out, character encoding nonsense is not all there is to Unicode. Also, beware of bridges.
The potential problem here is that two people work on something together and one of them generates NFC HTML content while the other generates NFD CSS content. This problem is highly theoretical, by the way: according to non-scientific studies by Google, NFC dominates Web markup by 99.9999% versus, well, nothing. (Maybe all those pesky non-NFC people tried to cross a bridge before publishing.)
Going further, XML does not normalize, HTML does not normalize, ECMAScript does not normalize, and CSS does not normalize. And nobody has complained so far. Nobody. Well, apart from the i18n WG. Making Web browsers more complex here seems like the wrong solution. What is next, treating U+FF41 (fullwidth "a") as identical to U+0061? Make validators flag non-NFC content, but please do not require huge comparison functions where a simple pointer comparison should do. It is just not worth it.
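For the curious, a tiny illustration of the comparison in question, using the later-standardized String.prototype.normalize() (an ES2015 method, used here purely to show what NFC/NFD normalization would change):

    var nfc = "\u00EB";   // "ë" as one precomposed code point (NFC form)
    var nfd = "e\u0308";  // "e" followed by a combining diaeresis (NFD form)

    nfc == nfd;                                    // false: plain code unit comparison
    nfc.normalize("NFC") == nfd.normalize("NFC");  // true: only after normalizing both sides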
Magic properties make Firefox synchronously load the Java plugin. Even defining a function called sun() (or several other symbols) will trigger the Java VM to be loaded, dramatically hurting the performance of your page.
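A contrived illustration of the trap; the set of symbol names that trigger the load is Firefox-specific, and the safer rename below is simply a suggestion:

    // In affected Firefox versions, merely defining a global named "sun"
    // (among other magic names) causes the Java plugin to load synchronously.
    function sun() {           // innocent-looking helper, expensive side effect
      return "good morning";
    }

    // Safer: avoid the magic global names altogether.
    function sunrise() {
      return "good morning";
    }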
Traditionally, file uploading in the browser has been awkward, slow and error-prone. Files can only be selected one at a time, and monitoring the progress of an upload is difficult. There are no simple callbacks for total bytes, progress, error handling and so on, which restricts the developer's ability to provide meaningful messaging in the UI.
Conveniently, existing browser plug-ins such as Flash can be used to provide or enhance certain functionality which browsers themselves do not support. The combination of Flash and JavaScript allows for batch file selection, progress and error reporting, and speedier uploading.
In a typical Flash-driven uploader, Flash provides the core service and calls back into JavaScript-land with status updates, messaging and so on. JavaScript then updates an HTML- and CSS-driven UI. Flash-JavaScript communication is made possible by Flash's ExternalInterface API, introduced with Flash 8. Several projects have implemented uploaders based on this approach, including the YUI Uploader control and SWFUpload, among others. While developing against ExternalInterface can get a bit quirky, an effective library can abstract away most of the quirks and provide a convenient API, allowing you to take advantage of Flash's improved file-handling abilities through JavaScript.
On Flickr, we implemented a simple large "Choose photos and videos" link which, when clicked, opens a multi-select-capable file-selection dialog driven by the YUI Uploader (which requires Flash 9). YUI Uploader provides file metadata via fileSelect event callbacks after files are selected, at which point the file list and UI can be updated. The user can add and remove files as they like according to business logic, configure upload options and so on.
Once the user has prepared their selection of files and clicked “Upload Photos and Videos”, the file queue is processed. YUI Uploader can upload files simultaneously or in sequence to a given URL (a signed API call in Flickr’s case) with callbacks for file progress, errors, file completion and upload completion. The idea is that the control’s Flash component simply sends files and reports errors and progress, leaving all of the event handling to JavaScript. Because of this separation, upload behaviours can easily be changed or updated without having to change the Flash component.
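A rough sketch of that wiring, based on my reading of the YUI 2 Uploader API (event payloads and method names are best checked against the YUI docs; renderQueueUI and the signed URL are placeholders):

    // Assumes the YUI 2 Uploader script is loaded and an element with the id
    // "uploaderOverlay" exists to host the (transparent) Flash movie.
    var uploader = new YAHOO.widget.Uploader("uploaderOverlay");
    var fileList = {};

    uploader.addListener("contentReady", function () {
      uploader.setAllowMultipleFiles(true);  // enable batch selection
    });

    uploader.addListener("fileSelect", function (event) {
      fileList = event.fileList;             // id -> { name, size, ... }
      renderQueueUI(fileList);               // placeholder: update the HTML/CSS file list
    });

    // Called when the user clicks "Upload Photos and Videos".
    function startUpload(signedApiUrl) {
      uploader.uploadAll(signedApiUrl);      // or upload(fileID, url) to send one at a time
    }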
During file upload, the uploadProgress event fires regularly, providing the file ID, bytes uploaded and total bytes for each file. This data can be reflected as a progress bar, a percentage value or raw bytes, depending on your UI.
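A progress handler along these lines, for instance (a sketch; the per-file progress bar element id is a placeholder):

    uploader.addListener("uploadProgress", function (event) {
      // event.id, event.bytesLoaded and event.bytesTotal, per the YUI 2 Uploader docs
      var percent = Math.round((event.bytesLoaded / event.bytesTotal) * 100);
      var bar = document.getElementById("progress-" + event.id); // placeholder element id
      if (bar) {
        bar.style.width = percent + "%";
      }
    });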
Flickr Uploadr screencast from designingwebinterfaces on Flickr.
If a file upload fails due to a connection or IO error from Flash, the uploadError event will fire so you can attempt to recover gracefully by retrying the upload of that file. Another safeguard is to implement a basic timeout such that if a file upload "hangs" for too long without a reported error (e.g., two minutes pass without an uploadProgress event), the file upload can be aborted.
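One way to sketch such a watchdog (the two-minute threshold and handleFailedFile are placeholders; cancel() is the YUI Uploader method for aborting a file):

    var STALL_TIMEOUT = 2 * 60 * 1000;   // abort after two minutes of silence
    var stallTimers = {};

    function resetStallTimer(fileId) {
      clearTimeout(stallTimers[fileId]);
      stallTimers[fileId] = setTimeout(function () {
        uploader.cancel(fileId);         // abort the stalled upload
        handleFailedFile(fileId);        // placeholder: retry or mark as failed
      }, STALL_TIMEOUT);
    }

    uploader.addListener("uploadProgress", function (event) {
      resetStallTimer(event.id);         // any progress counts as a sign of life
    });

    uploader.addListener("uploadError", function (event) {
      clearTimeout(stallTimers[event.id]);
      handleFailedFile(event.id);
    });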
When a file has been posted to the target URL, the server response is passed to a JavaScript callback via the uploadCompleteData event. Photos are processed asynchronously post-upload in Flickr's case, so a processing ticket ID is provided in the upload response. The ticket ID is then polled via API calls until a success/fail result is ultimately returned after server-side processing.
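A sketch of that handoff; the response parsing and the checkTicketStatus call are invented for illustration and merely stand in for the real signed API:

    uploader.addListener("uploadCompleteData", function (event) {
      // event.data is the raw server response for the file identified by event.id
      var ticketId = parseTicketId(event.data);  // placeholder: extract the ticket id
      pollTicket(ticketId);
    });

    function pollTicket(ticketId) {
      checkTicketStatus(ticketId, function (status) {  // placeholder API wrapper
        if (status === "pending") {
          setTimeout(function () { pollTicket(ticketId); }, 2000); // poll again shortly
        } else {
          markPhotoDone(ticketId, status);             // placeholder: update the UI
        }
      });
    }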
YUI Uploader handles the creation and writing out of the Flash object and its initialization process. Once the control has loaded, the contentReady event fires and the file-selection process can begin. It is worth displaying some sort of "loading" element in your UI, in case the user wants to "choose files" before the control has initialized. In Flickr's case, we show a small animation next to the "Choose photos and videos" link to indicate a loading state, as well as greying out the text itself.
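Continuing the earlier sketch, the loading state might be handled like this (element ids and class names are placeholders):

    // Show a "loading" treatment until the Flash control reports it is ready.
    document.getElementById("choose-link").className = "loading";
    document.getElementById("choose-spinner").style.display = "";

    uploader.addListener("contentReady", function () {
      document.getElementById("choose-link").className = "";            // re-enable the link
      document.getElementById("choose-spinner").style.display = "none"; // hide the animation
    });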
It is also helpful to have a fall-through error handler that redirects the user to an alternate upload method, such as a non-JavaScript, form-based file upload. The Flickr Uploadr detects Flash 9+ up front with JavaScript (e.g., via SWFObject), and also uses a try...catch block in the init method and around the file-selection bits. So if something goes wrong during initialization or when the user clicks the "Choose" link, exceptions trigger a fall-through to our basic uploader. This is also an appropriate fallback for users who don't have Flash or JavaScript to begin with.
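Roughly, and only as a sketch: the hasFlashVersion helper and the fallback URL below are placeholders, not the actual Flickr code:

    function initUploader() {
      try {
        if (!hasFlashVersion(9)) {            // placeholder check, e.g. via SWFObject
          throw new Error("Flash 9+ not available");
        }
        uploader = new YAHOO.widget.Uploader("uploaderOverlay");
        // ... wire up the listeners shown above ...
      } catch (e) {
        // Fall through to the plain, form-based uploader.
        window.location.href = "/upload/basic/"; // placeholder URL
      }
    }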
Due to a change in the security model beginning with Flash 10, file selection must now begin via the user clicking directly on the Flash movie. With previous versions, you could call [Flash movie].selectFiles() from JavaScript and the selection dialog would be shown without requiring user action.
To keep the user experience consistent on Flickr where an HTML link could trigger the selection dialog, we made the Flash movie transparent and overlaid it on top of the HTML link. Thus, the Flash movie captures the click and the selection dialog works as expected. One might call this a legitimate, local form of clickjacking.
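A minimal sketch of that overlay trick (in practice this is typically done in CSS; the function below just shows the idea of sizing and positioning the transparent movie over the link):

    function overlayFlashOnLink(flashEl, linkEl) {
      var rect = linkEl.getBoundingClientRect();
      // The movie itself is rendered with wmode="transparent", so the link shows through.
      flashEl.style.position = "absolute";
      flashEl.style.left = (rect.left + window.pageXOffset) + "px";
      flashEl.style.top = (rect.top + window.pageYOffset) + "px";
      flashEl.style.width = (rect.right - rect.left) + "px";
      flashEl.style.height = (rect.bottom - rect.top) + "px";
    }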
If repositioning the Flash movie is undesirable in your use case, another option is to render the link, text or button UI within the Flash movie itself and show the movie as a normal inline element.
While there are some notable technical considerations associated with developing a Flash-based uploader UI — such as initialization and error handling — as with most nifty/shiny web things, the technical complexity of the implementation rests solely with the developer. Once the application logic has been implemented and integrated with YUI Uploader, the end result is an upload experience that is consistently faster, more convenient, more efficient and more robust for the end user.
I have put together a small compilation of clarifications about various misunderstandings around OpenID, prompted by the comments on my previous article as well as my own earlier observations. I hope the result is a good, complete FAQ that will help those who have "heard of" OpenID and are considering adding OpenID sign-in to their site, but harbour a natural distrust of a new technology.
If I have missed anything, let me know; it can always be added.
That may have been true a couple of years ago. Now, when most of the big holders of user accounts (Yandex, Rambler, LiveJournal, Google, Blogger, LiveInternet and others) act as OpenID servers, you will have to look very hard to find a user who genuinely does not have an OpenID.
Users do not know about HTTP and SMTP either, but that does not stop them from using the web and email. The job of building convenient interfaces that use OpenID under the hood falls to us. I described one possible approach in my previous post (plus, an even better solution was mentioned in the comments).
In fact, OpenID is in many ways analogous to email. It just works over HTTP, while email works over SMTP. And it is precisely HTTP, unlike SMTP, that can serve as a universal point from which you can obtain any information about the user that they are willing to give. In particular, OpenID has an extension called SREG (Simple Registration), from which you can obtain an email address. The user has full control over this, from withholding it entirely to handing it out automatically to sites they know. Here, for example, is what this looks like for a Yandex user logging in to a site for the first time:
And that is not all! Beyond what SREG can provide (nickname, email, name, date of birth, gender), from the HTTP address you can pull an avatar via pavatar, a full-size photo, geographic location, address and occupation via hCard, and the user's entire social network via FOAF or XFN. In other words, OpenID is substantially more informative than email.
Yes, of course. And OpenID not only does not do away with registration, it actually helps organize it. Look, here is the registration on http://sudokular.com/:
First I am asked for my OpenID:
And then I am shown a registration form where I need to fill in a few remaining fields:
Without OpenID, I would have had to enter my email and name all over again.
Yes, that is true. However, this problem is not specific to OpenID. The same thing happens today with email: compromising a user's email account makes it possible, via the "forgot password" feature, to obtain most of their passwords.
Looking a bit further: despite all the recommendations, most people use the same password everywhere. So in practice an attacker only needs to obtain the user's password for any one of the dozens of sites they use. OpenID creates a more secure environment, because the user enters the password for their OpenID only on the provider's site. Moreover, the OpenID community very much hopes that other authentication mechanisms will develop (biometric or certificate-based, for example) and the user will stop entering a password altogether.
In exactly the same way, an email provider can do plenty of things on the user's behalf. Ultimately it is a question of the user's trust in the provider. That said, OpenID, being a decentralized system, offers the user at least one more option: to be their own provider. That certainly requires the relevant skills, or the willingness to pay for them, but it is nevertheless an option.
OpenID and email are completely analogous here. For anonymous one-off logins, a user can set up an anonymous throwaway OpenID. And just as with email, there are OpenID providers that do not even require registration as such: a person enters a certain well-known provider URL, and the provider simply answers "yes" to every authentication request.
Naturally, just as with email, such users do not get much trust, and sites that require serious registration are free to refuse such OpenIDs.
I have already shown above that OpenID has nothing to do with abolishing registration. Keep using registration and include, say, a CAPTCHA in it. In other words, the decision to trust new users does not depend on whether you identify them by a login-password pair or by some URL.
What does matter is that with OpenID you can build a distributed system for passing trust around, which is impossible with logins and passwords. For example, if someone registers on your site with the login "maniac", you know nothing about them. That login exists only in your system and has no connection to the "maniac"s on other sites. An OpenID URL, however, is universal across all sites. On that basis, different sites can exchange information about whether or not they trust a given OpenID. There are at least two concrete ideas about how to organize this technically, which I intend to write about in more detail.
It is interesting to compare OpenID to a passport. States need passports precisely to attach a certain history of behaviour to a specific person and then draw conclusions from it. Except that with OpenID everything is somewhat freer: you do not need any central organization in order to get yourself a passport, start using it and accumulate trust for it.
There are two ways to address this problem.
Right from the first version of the protocol, OpenID has provided a mechanism that frees a person from dependence on their provider: delegation. Instead of using the provider's URL directly, a person can use (register) any URL of their own and, with a couple of simple steps, declare "my URL http://vasya.pupkin.name/ is served over OpenID by such-and-such a provider". When they log in to sites with their URL http://vasya.pupkin.name, the sites will remember exactly that URL. As a result, Vasily can switch to another provider at any moment, and nobody will notice a thing.
The first option obviously does not suit the mass-market user, who either has no URL of their own or does not control it to that degree. In that case the solution is as old as the world: use several OpenIDs. Sites that accept OpenID are advised to let the user attach several OpenIDs to one account, so that if one of them suddenly stops working for some reason, the user can sign in with another.
In general, though, the problem of "losing one's identity" has no global solution yet. Ultimately it all comes down to how much the user trusts those who technically provide their presence on the network: the domain name registrar, the hosting company, the OpenID provider. That is how the world works :-)
The first sites to start letting OpenID users in established the practice of displaying the URL as the user's name. That was the simplest, crudest solution, but it is by no means the only one. You are free to get the user's nickname via SREG, or read it from an hCard, or simply ask the user what they want to be called. Just do not cling to bad solutions.
Of all the questions, this is probably the one I have no clear factual answer to, because it is more a matter of belief :-). If you simply do not trust the technology, well, take your time and see how it plays out.
What is important to understand is that OpenID really is a simple technology. It does not promise any new reliability, cryptographic strength or a solution to spam. As a technology, it boils down to two things:
The first gives users convenience; the second makes it possible to build all sorts of cross-site mechanisms. How this will be used and what will come of it, we shall see. So far nobody has repealed the rule that every successful complex project grew out of a simple one.
Shared by arty
Major Norwegian sites propose upgrade for IE6 users
Almost exactly eight years ago, Jeffrey Zeldman wrote To Hell With Bad Browsers, in which he implored web developers to start ignoring Netscape 4 because its standards support sucked majorly. Yesterday several large Norwegian sites placed a warning against IE6 on their pages.
Web developers from all over the world are following this initiative with interest. To Hell With Bad Browsers is obviously in for a remake.
Just now I added an IE6 warning to QuirksMode.org. I also wrote an upgrade page that attempts to explain the problem and its solution to end users.
I’d like to call upon all my readers to think about following the example set by the Norwegians.
Mark Shuttleworth has announced Ubuntu 9.10:
Ladies and gentlemen, allow me to introduce the Karmic Koala, the newest member of our alliterative menagerie.
When you are looking for inspiration beyond the looming Jaunty feature freeze, I hope you’ll think of the Koala, our official mascot for Ubuntu 9.10. And if you’ll bear with me for a minute I’ll set the scene for what we hope to achieve in that time.
Server
A good Koala knows how to see the wood for the trees, even when her head is in the clouds. Ubuntu aims to keep free software at the forefront of cloud computing by embracing the APIs of Amazon EC2, and making it easy for anybody to set up their own cloud using entirely open tools. We're currently in beta with official Ubuntu base AMIs for use on Amazon EC2. During the Karmic cycle we want to make it easy to deploy applications into the cloud, with ready-to-run appliances or by quickly assembling a custom image. Ubuntu-vmbuilder makes it easy to create a custom AMI today, but a portfolio of standard image profiles will allow easier collaboration between people doing similar things on EC2. Wouldn't it be apt for Ubuntu to make the Amazon jungle as easy to navigate as, say, APT?
What if you want to build an EC2-style cloud of your own? Of all the trees in the wood, a Koala’s favourite leaf is Eucalyptus. The Eucalyptus project, from UCSB, enables you to create an EC2-style cloud in your own data center, on your own hardware. It’s no coincidence that Eucalyptus has just been uploaded to universe and will be part of Jaunty - during the Karmic cycle we expect to make those clouds dance, with dynamically growing and shrinking resource allocations depending on your needs. A savvy Koala knows that the best way to conserve energy is to go to sleep, and these days even servers can suspend and resume, so imagine if we could make it possible to build a cloud computing facility that drops its energy use virtually to zero by napping in the midday heat, and waking up when there’s work to be done. No need to drink at the energy fountain when there’s nothing going on. If we get all of this right, our Koala will help take the edge off the bear market.
If that sounds rather open and nebulous, then we’ve hit the sweet spot for cloud computing futurology. Let me invite you to join the server team at UDS in Barcelona, when they’ll be defining the exact set of features to ship in October.
Desktop
First impressions count. We’re eagerly following the development of kernel mode setting, which promises a smooth and flicker-free startup. We’ll consider options like Red Hat’s Plymouth, for graphical boot on all the cards that support it. We made a splash years ago with Usplash, but it’s time to move to something newer and shinier. So the good news is, boot will be beautiful. The bad news is, you won’t have long to appreciate it! It only takes 35 days to make a whole Koala, so we think it should be possible to bring up a stylish desktop much faster. The goal for Jaunty on a netbook is 25 seconds, so let’s see how much faster we can get you all the way to a Koala desktop. We’re also hoping to deliver a new login experience that complements the graphical boot, and works well for small groups as well as very large installations.
For those of you who can relate to Mini Me, or already have a Dell Mini, the Ubuntu Netbook Edition will be updated to include all the latest technology from Moblin, and tuned to work even better on screens that are vertically challenged. With millions of Linux netbooks out there, we have been learning and adapting usability to make the Koala cuddlier than ever. We also want to ensure that the Netbook Remix installs easily and works brilliantly on all the latest netbook hardware, so consider this a call for testing Ubuntu 9.04 if you’re the proud owner of one of these dainty items.
The desktop will have a designer’s fingerprints all over it - we’re now beginning the serious push to a new look. Brown has served us well but the Koala is considering other options. Come to UDS for a preview of the whole new look.
UDS in Barcelona, 25-29 May
As always, the Ubuntu Developer Summit will be jam-packed with ideas, innovations, guests and gurus. It’s a wombat and dingbat-free zone, so if you’re looking for high-intensity developer discussions, beautiful Barcelona will be the place to rest your opposable thumbs in May. It’s where the Ubuntu community, Canonical engineers and partners come together to discuss, debate and design the Karmic Koala. The event is the social and strategic highlight of each release cycle. Jono Bacon, the Ubuntu Community Manager has more details at http://www.jonobacon.org/2009/02/19/announcing-the-karmic-koala-ubuntu-developer-summit/ including sponsorship for heavily-contributing community members.
More details of the Ubuntu Developer Summit can be found at http://wiki.ubuntu.com/UDS.
A newborn Koala spends about six months in the family before it heads off into the wild alone. Sounds about perfect for an Ubuntu release plan! I’m looking forward to seeing many of you in Barcelona, and before that, at a Jaunty release party. Till then, cheers.
Mark
Shared by arty
motivating :)
Web Hooks and the Programmable World of Tomorrow. Tour de force presentation on Web Hooks by Jeff Lindsay. Tons of really good ideas—provided your application isn’t Flickr sized, there’s a good chance you could implement web hooks pretty cheaply and unleash a huge flurry of creativity from your users. GitHub makes a great case study here.
CloudMade: A Summary of the Future of Mapping. CloudMade are now offering commercially supported APIs on top of OpenStreetMap, including geocoding, routing and tile access libraries in Python/Ruby/Java and a very neat theming tool that lets you design your own map styles. This is really going to kick innovation around OpenStreetMap up a notch.
Google App Engine 1.1.9 boosts capacity and compatibility. Niall summarises the recent changes to App Engine. urllib and urllib2 support plus massively increased upload limits and request duration quotas will make it a whole lot easier to deploy serious projects on the platform.
I wonder why the canonical extension to the rel attribute was not first proposed on an open forum. Someone might have bothered to point out that it is almost the same as (if not identical to) the self value Atom uses. Also, Google, Microsoft, et al.: there is a registry for extensions to the rel attribute. Take note.
The extension itself only seems marginally useful. In the extreme case you would have to use it for every page because someone could put a question mark at the end of the URL with a bunch of useless parameters that do not affect anything at all. In most of the other cases redirects would probably be better. The Wikipedia scenario is somewhat compelling though.
Specify your canonical. You can now use a link rel="canonical" to tell Google that a page has a canonical URL elsewhere. I've run into this problem a bunch of times—in some sites it really does make sense to have the same content shown in two different places—and this seems like a neat solution that could apply to much more than just metadata for external search engines.
Shared by arty
Practice confirms the supreme effectiveness of the "ideal openid" :)
The most-watched geek event of the day has to be the OpenID UX (User Experience) Summit, hosted at Facebook headquarters. The most discussed moment of the day will surely be the presentation by Comcast's Plaxo team.
Plaxo and Google have collaborated on an OpenID method that may represent the solution to OpenID's biggest problems: it's too unknown, it's too complicated and it's too arduous. Today at the User Experience Summit, Plaxo announced that early tests of its new OpenID login system had a 92% success rate - unheard of in the industry. OpenID's usability problems appear closer than ever to being solved for good.
This experimental method relies on big, well-known brands where users are already logged in; it requires zero typing - just two clicks - and it uses the OpenID authentication step to get quick permission to leverage the well-established OAuth data swap, enabling immediate personalization at the same time.
Plaxo, primarily known for the noxious flood of spam emails it delivered in its early days, is now an online user activity data stream aggregator owned by telecom giant Comcast. The Plaxo team has been at the forefront of the new Open Web paradigm best known for the OpenID protocol.
The method Plaxo has been testing is called an OpenID/OAuth combo, in collaboration with Google. What does that mean, in regular terms? It means that Plaxo told users they could log in with their Gmail accounts as OpenID by clicking a link to open a Gmail window, then Google asked for permission to hand over user contact data using the OAuth standard protocol. Once login was confirmed, whether contact data access was granted to Plaxo or not, the Gmail window closed and users were returned to Plaxo all logged in. No new accounts, no disclosure of Gmail passwords to Plaxo, no risky account scraping and no need to import or find friends on the new service before immediate personalization could be offered.
This is a very different flow than most OpenID "relying parties" have followed before - but it won't be for long.
Plaxo reported today that it has seen a staggering 92% of users who clicked on the "log-in with Gmail" button come back to Plaxo with permission to authenticate their identities via Gmail granted. Of those who returned, another 92% also granted permission for Plaxo to access their contacts list. Only 8% of the people who clicked to log in with a standards based 3rd party authentication ended up deciding to bail instead. That's the kind of ease-of-use that people presumed only Facebook Connect could provide.
When Plaxo engineers moved to turn off the short-term experiment, the business team said no way.
We expect to see this basic flow get iterated on even further. We hope it will ensure that every OpenID provider has some exposure and not just the big email providers, and we expect the pop-up action to be made increasingly unobtrusive.
This could be the day when OpenID became a far more realistic prospect than it has seemed before.
Many web APIs nowadays use JSON as their data format. The specification requires it to be served with the application/json media type. However, this is not always convenient, because many browsers by default offer to save the server's response to a file instead of showing it as plain text. Browsers are, after all, meant for viewing pages, not for poking around in the internals of web services.
This problem is very easy to solve by teaching the browser to handle application/json. Firefox does it best: just install the JSONovich add-on, and such URLs will not only open right in the browser without any prompts, the JSON will also be nicely formatted and highlighted. In Opera, the built-in tools currently only let you display JSON in the browser: Preferences → Advanced → Downloads → Add… → MIME type "application/json" → Open with Opera. One could also put together a user script in the spirit of the unforgettable XML Tree, but I am not sure I will get around to it.
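For what it's worth, a bare-bones sketch of such a user script, assuming the browser has rendered the JSON response as plain text and that native JSON.parse/JSON.stringify are available:

    // Pretty-print a JSON response that the browser shows as plain text.
    (function () {
      var raw = document.body.textContent.replace(/^\s+|\s+$/g, "");
      if (!raw || (raw.charAt(0) !== "{" && raw.charAt(0) !== "[")) return;
      try {
        var data = JSON.parse(raw);
        var pre = document.createElement("pre");
        pre.textContent = JSON.stringify(data, null, 2);  // two-space indentation
        document.body.innerHTML = "";
        document.body.appendChild(pre);
      } catch (e) {
        // Not valid JSON: leave the page alone.
      }
    })();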
Yahoo! Query Language thoughts. An engineer on Google’s App Engine provides an expert review of Yahoo!’s YQL. I found this more useful than the official documentation.
Open in Browser Firefox Add-on (via). Solves the “application/json wants to download” problem, among others.
A Unix Utility You Should Know About: Pipe Viewer. Useful command line utility that adds a progress bar to any unix pipeline.
YQL opens up 3rd-party web service table definitions to developers. This really is astonishingly clever: you can create an XML file telling Yahoo!’s YQL service how to map an arbitrary API to YQL tables, then make SQL-style queries against it (including joins against other APIs). Another neat trick: doing a SQL “in” query causes API requests to be run in parallel and recombined before being returned to you.
I gave a talk last week at Google (at the request of the excellent Steve Souders) all about the performance improvements, and new APIs, that are coming in browsers. I cover the new browsers, their JavaScript engines, their JavaScript performance, and then do a whirlwind tour of their new DOM methods and some of their new CSS APIs.