Firefox 3.5 for developers. It’s out today, and the feature list is huge. Highlights include HTML 5 drag ’n’ drop, audio and video elements, offline resources, downloadable fonts, text-shadow, CSS transforms with -moz-transform, localStorage, geolocation, web workers, trackpad swipe events, native JSON, cross-site HTTP requests, text API for canvas, defer attribute for the script element and TraceMonkey for better JS performance!
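As a tiny taste of two of those features working together, here's a minimal sketch (the key name and stored object are invented for illustration) that persists structured data using localStorage and native JSON:

```js
// Store a structured value: native JSON handles serialization,
// localStorage persists it across page loads and browser restarts.
var prefs = { theme: "dark", fontSize: 14 };
localStorage.setItem("prefs", JSON.stringify(prefs));

// Read it back later:
var restored = JSON.parse(localStorage.getItem("prefs"));
console.log(restored.theme); // "dark"
```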
Recently, a number of people have asked me what I think about Mozilla’s Content Security Policy draft spec. Back in January, I went on record as being someone who thinks that CSP is a good idea.
CSP is a mechanism for declarative security, whereby a site communicates its intent and leaves it up to the user-agent to determine how to enforce it.
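For a flavor of what that looks like on the wire, here's a hypothetical policy header, loosely following the draft's directive syntax (the header name, directives, and grammar may well change before the spec is final): it tells the browser to load scripts only from the page's own origin and one named host.

```
X-Content-Security-Policy: allow 'self'; img-src *; script-src 'self' static.example.com
```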
There are a number of benefits to declarative security mechanisms:
If something wasn’t designed to prevent a security vulnerability, odds are that it doesn’t do a very good job of preventing it.
Because declarative security features are designed solely to mitigate security threats, browsers may implement the restrictions however they want, and can patch any holes found in the restrictions without unexpectedly breaking unrelated functionality.
Internet Explorer has a rich history in this space: HTTPOnly cookies, SECURITY=RESTRICTED frames, X-Content-Type-Options, X-Download-Options, and X-Frame-Options are all declarative security mechanisms first implemented in IE and now supported by other browsers to varying degrees.
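To make those concrete, here is roughly what opting in to all of them looks like in an HTTP response (cookie name and value are illustrative):

```
Set-Cookie: SID=31d4d96e407aad42; HttpOnly
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: DENY
```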
The ideas behind the CSP draft are not new, and it is but one of many proposals for declarative security, from BEEP to HTML5 sandboxing. In some respects it overlaps with other mechanisms for restricting script, although if CSP is successful, new directives will likely be created to provide uniform specification of the available policies.
While valuable, declarative security mechanisms are not without their challenges.
No security technology is a panacea, and for comprehensive protection, I think browsers need to offer both: declarative mechanisms that sites can opt into, and on-by-default protections that require no changes from site owners.
To combat XSS attacks, IE8 introduced a number of attack-surface reductions, a few new APIs, and the declarative security mechanisms (X-* headers) mentioned above. But we knew that sites wouldn’t immediately adopt these APIs and declarative security features, so we built the XSS Filter: an on-by-default, no-questions-asked, no-code-changes-required mechanism which helps mitigate the most common types of XSS attacks in the wild today.
I’m eager to see the progress on CSP, which I believe is a promising approach to helping websites secure themselves against the growing alphabet soup of web threats. You can provide feedback on the CSP draft spec using Mozilla’s Talk page.
-Eric Lawrence
The Resource Expert Droid. Like the HTML Validator but for your server’s HTTP headers—extremely useful.
…, Flash, QuickTime, Media Player, and plain downloads. Without JavaScript, valid markup, and so on.
Shared by arty
a source within a large company told me they are seriously considering installing Opera as the real browser on their internal network, and use IE6 only for accessing their internal apps
Recently I gave a presentation at a local Microsoft conference in the Netherlands. Slides are here. Fanatical followers will recognise most of the topics I discussed from earlier slide shows, but the last one, about the changes to the market share of IE6, 7, and 8, is new.
Basically, IE6 will continue to exist when IE7 has all but disappeared, and, contrary to what you might expect, this situation will create exciting opportunities for Microsoft’s competitors.
Besides, last week the news broke that Microsoft is going to voluntarily de-bundle IE from all Windows 7 machines sold in Europe, and I continue to have my doubts about that affair.
So it’s time for a special State of the Browsers IE edition.
C64 Twitter client. Awesome.
Shared by arty
Today’s launch of Facebook Usernames provides an obvious and exciting opportunity for Facebook to become an OpenID provider. Facebook have clearly demonstrated their interest in becoming the key online identity for their users, and the new usernames feature is their acknowledgement that URL-based identities are an important component of that, no doubt driven in part by Twitter making usernames trendy again.
It’s interesting to consider Facebook’s history with regards to OpenID and single sign on in general. When I started publicly advocating for OpenID back in 2007, my primary worry was that someone would solve the SSO problem in a proprietary way, irreparably damaging the decentralised nature of the Web—just as Microsoft had attempted a few years earlier with Passport.
When Facebook Connect was announced a year ago it seemed like my worst fears had been realised. Facebook Connect’s user experience was a huge improvement over OpenID—with only one provider, the sign-in UI could be reduced to a single button. Their use of a popup window for the sign-in flow was inspired—various usability studies have since shown that users are much more likely to complete an SSO flow if they can see the site they are signing in to in a background window.
Thankfully, Facebook seem to understand that the industry isn’t willing to accept a single SSO provider, no matter how smooth their implementation. Mark Zuckerberg made reassuring noises about OpenID support at both FOWA 2008 and SxSW 2009, but things really stepped up earlier this year when Facebook joined the OpenID Foundation Board (accompanied by a substantial financial donation). Facebook’s board representative, Luke Shepherd, is an excellent addition and brings a refreshingly user-centric approach to OpenID. Luke was previously responsible for much of the work on Facebook Connect and has been advocating OpenID inside Facebook for a long time.
Facebook may not have committed to becoming a provider yet (at least not in public), but their decision to become a consumer first is another interesting data point. They may be trying to avoid the common criticism thrown at companies who provide but don’t consume—if they’re not willing to eat their own dog food, why should anyone else?
At any rate, their consumer implementation is fascinating. It’s live right now, even though there’s no OpenID login box anywhere to be seen on the site. Instead, Facebook take advantage of the little-known checkid_immediate mode. Once you’ve associated your OpenID with your Facebook account (using the “Linked Accounts” section of the settings pane) Facebook sets a cookie remembering your OpenID provider, which persists even after you log out of Facebook. When you later visit the Facebook homepage, a checkid_immediate request is silently sent to your provider, logging you in automatically if you are already authenticated there.
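For the curious, a checkid_immediate request is an ordinary OpenID 2.0 authentication request with the mode switched, which tells the provider it must answer without any user interaction. A sketch of the round trip (the endpoint, identifier, and return URL are all made up for illustration):

```
GET https://openid.example.com/auth
    ?openid.ns=http://specs.openid.net/auth/2.0
    &openid.mode=checkid_immediate
    &openid.claimed_id=https://alice.example.com/
    &openid.identity=https://alice.example.com/
    &openid.return_to=https://www.facebook.com/openid/return
    &openid.realm=https://www.facebook.com/

-- if the user already has a session at the provider, it answers:
openid.mode=id_res           (plus the signed assertion fields)
-- otherwise no prompt is shown, and the reply is simply:
openid.mode=setup_needed
```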
While it’s great to see innovation with OpenID at such a large scale, I’m not at all convinced that they’ve got this right. The feature is virtually invisible to users (it took me a bunch of research to figure out how to use it) and not at all intuitive—if I’ve logged out of Facebook, how come visiting the home page logs me straight back in again? I guess this is why Luke is keen on exploring single sign out with OpenID. It sounds like the current OpenID consumer support is principally intended as a developer preview, and I’m looking forward to seeing how they change it based on ongoing user research.
An OpenID provider implementation is an obvious next step that can’t be that far off—I wouldn’t be surprised to hear an announcement within a month or two.
As an aside, I decided to check that Facebook were using the correct 3xx HTTP status code to redirect from my old profile page to my new one. I was horrified to discover that they are using a 200 code, followed by a chunk of JavaScript to implement the redirect! The situation for logged out users is better but still fundamentally flawed: if you enable your public search listing (using an option tucked away on www.facebook.com/privacy/?view=search) and curl -i your old profile URL you get a 302 Found, when the correct status code is clearly a 301 Moved Permanently.
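For reference, the response a permanently moved profile URL should send is simply this (Location value illustrative):

```
HTTP/1.1 301 Moved Permanently
Location: http://www.facebook.com/your.new.username
```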
One final note: it almost goes without saying, but one of the best things about OpenID is that you can register a real domain name that you can own, instead of just having another URL on Facebook.
And that is why, in 2009, when developing in Microsoft .NET 3.5 for ASP.NET MVC 1.0 on a Windows 7 system, you cannot include /com\d(\..*)?, /lpt\d(\..*)?, /con(\..*)?, /aux(\..*)?, /prn(\..*)?, or /nul(\..*)? in any of your routes.
Shared by arty
A very cool thing :)
Google has released a Firefox add-on called Page Speed. It integrates with another add-on, Firebug, and is aimed at web developers trying to make their pages faster. “Page Speed is a tool we’ve been using internally to improve the performance of our web pages,” Google writes.
To use this, once you’ve installed Page Speed and restarted Firefox, expand Firebug and switch to the Page Speed tab. Navigate to your page, then click the Analyze Performance button. After a short loading time you’ll be presented with a handy list of things you did right, and things Google thinks you did wrong. The latter can be expanded so you can read the help provided on issues like “Leverage browser caching”, “Remove unused CSS”, “Combine external JavaScript” and so on (you can also click on an entry to be taken to a longer explanation). Neat!
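As an example of what acting on one of those suggestions involves, “Leverage browser caching” generally means serving static assets with far-future freshness headers, something along these lines (values illustrative):

```
Cache-Control: public, max-age=31536000
Expires: Fri, 30 Jul 2010 12:00:00 GMT
```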
Also see YSlow, another Firebug add-on (this time by Yahoo) that “analyzes web pages and suggests ways to improve their performance”.
[By Philipp Lenssen | Origin: Google's Page Speed Optimization Add-on]
One of the first YUI Theater videos years ago was published after Joe Hewitt came to Yahoo! to talk about the 1.0 release of Firebug, and back in those days Firebug was a paradigm shift — it had a convenient interface that combined DOM inspection and debugging, and it allowed developers to finally put the venerable Venkman on the shelf.
The Firebug model has since penetrated other browser families: today IE8 and Safari both have capable developer tools, while Firebug soldiers on in service to Firefox. None of the tools is perfect, but Firebug has served as a good proof of concept for what a multipurpose, extensible inspector/debugger toolkit can do.
Opera is now innovating on this front as well, and Charles McCathieNevile, Chief Standards Officer, stopped by Yahoo! on May 26 to tell us about their latest effort: Opera Dragonfly. Dragonfly works with Opera 9.5 and later, and it’s a novel approach — implemented as a widget using JavaScript and CSS and proposing a new “Scope API” that (if agreed upon by browser makers) could allow for a common debugging platform. Dragonfly is fully open source.
Slides from Charles’s talk are available as zipped HTML files.
The embed from Yahoo Video follows; a higher-resolution version, along with a transcript, is available from the YUI Theater site.
Some other recent videos from the YUI Theater series:
A comparison of HTML and PHP has been rattling around in my head for a long time now. The two technologies are strikingly similar in how tolerant they are of the errors of the amateur author. You can write almost any kind of nonsense, and the browser or the interpreter will somehow make sense of it. This property played enormously to the advantage of both technologies: each became staggeringly popular. The conclusion one can draw is that for a system to achieve mass adoption it must be made as foolproof as possible, which is achievable only if you don’t think too hard about the “correctness” of the solutions you adopt.
However, in gaining such popularity a technology digs itself a deep hole. Millions of amateurs use the most unfortunate features of the early versions and rely on the strangest quirks of the foolproofing. And in order not to lose its main asset, the user base, the technology has to invest very heavily in backwards compatibility. It therefore asymptotically approaches a state in which it consists of slightly less than entirely legacy.
Here I wanted to link to Stanis’s good post “Priests of Programming” («Жрецы программирования»), but his blog has apparently died, so I will have to convey the idea in my own words. There is a difference between “mages” and “priests”. Priests have a highly developed memory, so they remember the canon verbatim, along with the heap of volumes of commentary on it. Mages have a weaker memory but a well-developed logical apparatus, so they find the relationships within the data and memorize only those, while retaining the ability to reconstruct the original data. Applied to computer systems, the idea goes like this: in a high-legacy system the author has to remember an enormous number of logically unconnected peculiarities (and not even attempt to generalize over them), whereas in a low-legacy system it is enough to know a small set of rules covering a multitude of cases. Less obvious is the thought that a good programmer specializes above all in spotting regularities and generalizing, so working with legacy directly harms him through its ban on generalization.
Returning to the HTML/PHP theme, I want to give a glaring example of this kind of delayed shot in one’s own foot, the very one that prompted this new post. Behold: “the day supporting document.onload became a bug”. Welcome to a world where the major browsers do not fire the load event on document! That is, document.onload never fires. Write that down, children, for there is no understanding it ©
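To see the quirk concretely, here is a minimal sketch you can paste into any page:

```js
// Fires: the load event is dispatched at window once the page
// (including images, stylesheets, etc.) has finished loading.
window.onload = function () {
  console.log("window load fired");
};

// Never fires in the major browsers: no load event is ever
// dispatched at the document object itself.
document.onload = function () {
  console.log("document load fired"); // you will not see this
};

// What people usually want instead: DOM-ready, in browsers
// that support DOMContentLoaded.
document.addEventListener("DOMContentLoaded", function () {
  console.log("DOM is ready");
}, false);
```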
To be fair, it should be noted that in the case of PHP and HTML the legacy problem arises from their strong centralization. PHP scripts are typically piled up by the thousand at a single host running a single version of the language. HTML pages, for their part, are viewed by a measly ten (all right, two dozen) browsers. jQuery, also aimed at the amateur, doesn’t have this problem, because each amateur chooses for himself which version of the library his scripts will run against, and that won’t change even in ten or fifteen years.
Incidentally, I recommend the whole of hallvors’s blog, a sort of “notes of a web pathologist”; it makes for curious reading.