Today we tagged the first public release candidate of Prototype 1.6.1. (What happened to RC1? Long story.) While there are more minor fixes we’d like to get into this release, we decided an interim release was necessary because of the final release of Internet Explorer 8 last week.
This is the first public release of Prototype that is fully compatible with — and fully optimized for — Internet Explorer 8’s “super-standards” mode. In particular, Prototype now takes advantage of IE8’s support for the Selectors API and its ability to extend the prototypes of DOM elements.
Other highlights of this release:

- Support for mouseenter and mouseleave events — simulating the IE-proprietary events that tend to be far more useful than mouseover and mouseout.
- An Element#clone method for cloning DOM nodes in a way that lets you perform “cleanup” on the new copies.
- Performance improvements in Function#bind, Element#down, and a number of other often-used methods.

Consult the CHANGELOG for more details.
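As an aside, the standard trick for simulating mouseenter is to listen for mouseover and ignore transitions between an element and its own descendants. Here is a minimal sketch of that general technique (the element id is hypothetical, and this is not Prototype’s actual implementation):

var element = document.getElementById('menu'); // hypothetical element

function isInside(node, container) {
  // Walk up from node to see whether it sits inside container.
  while (node) {
    if (node === container) return true;
    node = node.parentNode;
  }
  return false;
}

element.addEventListener('mouseover', function (event) {
  // mouseover also fires when the pointer moves between the element's
  // children; it only counts as "mouseenter" if it came from outside.
  if (!isInside(event.relatedTarget, element)) {
    // handle the simulated mouseenter here
  }
}, false);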
In addition to the code itself, the 1.6.1 release features Prototype’s embrace of two other excellent projects we’ve been working on: Sprockets (JavaScript concatenation) and PDoc (inline documentation). Sprockets is now used to “build” Prototype into a single file for distribution. PDoc will be the way we document the framework from now on. The official API docs aren’t quite ready yet, but they’ll be ready for the final release of 1.6.1.
Thanks to the many contributors who made this release possible!
In Greasemonkey, user scripts run only after the page has loaded completely, with all its images, Flash, and the rest. Opera runs scripts almost before it starts parsing the page. This makes it possible, for example, to attach event handlers to the document right away, so that the new functionality is already usable while a long page is still loading. And if you need to work with the DOM, document.addEventListener('DOMContentLoaded', …) helps a lot: it fires as soon as the DOM itself is ready, without waiting for the sluggish images.
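Translated into code, such an Opera user script might look like this minimal sketch (the handler bodies are placeholders):

// Event handlers can be attached right away, long before onload:
document.addEventListener('click', function (event) {
  // ... handle clicks even while the page is still loading ...
}, false);

// DOM work should wait for DOMContentLoaded, which fires when the DOM
// itself is ready, without waiting for images, Flash, and the rest:
document.addEventListener('DOMContentLoaded', function () {
  var links = document.getElementsByTagName('a');
  // ... safe to traverse and decorate the document here ...
}, false);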
It’s important to note a small trap lurking here: Opera treats script files with the .user.js extension as Greasemonkey scripts and imitates the familiar environment for them, so they run only after the page has loaded. But simply change the extension to .js, and everything starts working as it should.
That said, Greasemonkey does, of course, have its own advantages, such as a proper interface for managing scripts and the ability to make cross-domain Ajax requests.
I don’t know about others, but I haven’t been paying all that much attention to what’s going on with HTML 5 lately. Once I realised that it would not be about fixing the Web I pretty much lost interest.
Despite not being involved anymore I sometimes stumble across information related to HTML 5. One source of information, from a more humorous point of view, is Last Week in HTML5. Beware of sarcasm before entering.
Another, more serious, piece of info related to HTML 5 is HTML Evolution (see HTML5 Evolution for a related blog post). To provide background for a W3C meeting, Sam Ruby (co-chair of the HTML Working Group) has put together a whole lot of select quotes from different parts of the history of HTML, from the very beginning to the present day.
The background is that the current state of HTML 5 development is not what it should be, and that it should be possible to improve matters. Two of Sam’s ideas:
We should also endeavor to get the XHTML2 and HTML Working Groups brought together, or at least have the overlaps removed.
Interesting idea. Not sure if it’s good or bad, but it would certainly be interesting to combine the best parts of XHTML 2 and HTML 5.
Within the working group there certainly is more than adequate representation for the perspective of web crawlers and browser implementors. It is less obvious that we have adequate representation from content creators. Perhaps some sort of outreach by the W3C is appropriate here?
I believe having more content creators and “authors”, i.e. web designers and web developers, in the HTML Working Group would be good. Unfortunately I think it’s hard to find web professionals who can spare the time unless they get paid to participate. I know I can’t.
We are delighted to release the first build of Opera with geolocation support. The Geolocation Working Group of the W3C has recently released the first Working Draft of the Geolocation API specification, and we are now releasing the first Labs build with support for the API.
The API is used in a web page's JavaScript code to retrieve the browser's current latitude and longitude. The following snippet shows how a web page would request the browser's location:
// One-shot position request:
navigator.geolocation.getCurrentPosition(showMapCallback);

function showMapCallback(position) {
  // Show a map centered at (position.coords.latitude, position.coords.longitude).
}
As you can see, the API is very simple, and it doesn't get much more complicated for more advanced functionality (see more examples in the specification).
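For instance, the same Working Draft also defines watchPosition() for continuous updates; a sketch (the option values here are illustrative):

// Continuous position updates with an error callback and options:
var watchId = navigator.geolocation.watchPosition(
  function (position) {
    // Re-center the map whenever a new position arrives.
  },
  function (error) {
    // error.code distinguishes permission denied, position
    // unavailable, and timeout.
  },
  { enableHighAccuracy: false, maximumAge: 30000, timeout: 10000 }
);

// Stop receiving updates later:
navigator.geolocation.clearWatch(watchId);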
Geolocation on the web is not new. Many sites already use the browser's IP address to serve targeted content, mostly ads (you will have seen the 'Find a Friend in [your city]' banners). However, that method is notoriously inaccurate and cannot be reliably used for more advanced geolocation services. The device the browser is running on, on the other hand, is likely to have a more accurate idea of its location if it has a GPS unit, can triangulate wireless access points or cell towers, or can look up its IP address. Even if the device doesn't have the right hardware, a location provider web service can be used.

This build uses the Skyhook service, so you will need to register your site on loki.com for your geo-enabled web application to be allowed to request users' locations. Additionally, if you're running Windows XP you will also need to run svcsetup.exe, which ensures that wifi scanning is not affected by the various "wifi managers" shipped with many laptops. None of this will be necessary in future releases, but for now, if you experience crashes, it is likely because you need to run svcsetup.exe first.
More importantly, leaving it to the browser to transmit its location means that the user remains in control of their privacy. This build will ask the user whether they agree to send their location every time a site requests it. While the UI in this build is experimental, it provides one possible way of protecting the user's privacy.
The W3C Geolocation API is likely to become a widely adopted standard, and Opera is providing this early implementation of the API to let developers and users start experimenting with it. We would be very grateful for feedback from both developers and users, on the API itself and on what functionality and level of privacy control you would like to see exposed in the user interface.
Once you have installed this build, you can go and test it out on our geolocation demo page, which will show where you are on a map and display events scheduled near you.
Dean Edwards explains how the standard “callback” pattern in JavaScript is too brittle for something like a “DOM ready” event. Prototype appears to be the one major library that handles this “correctly.” WIN!
Google is running an experiment in their search results, apparently shown to a portion of their users. What happens is that on the search results, say for the query comic books, a link in the top blue bar will read “Show options...”. Click it, and a side bar full of options expands to the left.
The options include some known experiments, plus things I hadn’t seen before. There are restriction options to show only recent results, only videos, only forum entries, or only reviews. You can sort by relevance or by date, and you can show only results from time ranges like the past 24 hours or the past week. You can opt to receive longer snippet text, and images. There’s also a timeline feature and search suggestions.
Here are some screenshots of the process (I’ve added a circle in the first screen showing the link that gets you started):
One of the most interesting experimental features is the “wonder wheel.” This shows a Flash-based interactive mini app which starts with your keyword in the center and related terms around it. Clicking on a related term creates a new, connected circle with more related terms. And whenever you click on a term, the web results to the very right change to reflect your current topic of focus. The wonder wheel worked quite smoothly, except when I tried using the back button after going to a page from the results.
If you want to try out this experiment yourself, that’s possible. All you need to do is go to google.com, paste the following into the address bar, and hit return – that will set a cookie telling Google you’re taking part in the prototype:
javascript:void(document.cookie="PREF=ID=4a609673baf685b5:TB=2:LD=en:CR=2:TM=1227543998:LM=1233568652:DV=AA:GM=1:IG=3:S=yFGqYec2D7L0wgxW;path=/; domain=.google.com");

Below is my screengrab (available on YouTube and as WMV):
This blog entry has an [inline SVG] image with a text alternative. Who does it benefit?
Short answer: no one, but you have to do it anyway.
Long answer: As far as I know, none of the commercially available screenreaders support SVG in any way, much less reading the title of an SVG image included inline in an XHTML page (as opposed to, say, linked from the src attribute of an <img> element, or embedded in an <object> element). Nonetheless, you have provided a text alternative for the image, and theoretically, that could be presented to a user in place of (or in addition to) the image. You have therefore fulfilled your moral duty, even though no one actually benefits from it. Welcome to the wacky world of access enablement.
The concept of access enablement is not complicated. In the physical world, it works like this: I build the ramp, you bring the wheelchair. I don’t have to provide you with a wheelchair; it’s up to you to procure one. Nor do I have to teach you how to use your wheelchair to get up my ramp. Nor do I have to push you up the ramp when you arrive. If your wheelchair happens to break at the bottom of my ramp, you can’t sue me for being inaccessible. I did my part: I built the ramp; everything else is Somebody Else’s Problem.
For better or for worse, this concept got translated directly into the virtual world of software. Just as there are standards that define the minimum width and maximum slope of wheelchair-accessible ramps, so too there are standards for building accessible software and authoring accessible content. In the desktop software world, priority #1 is to keep track of the focus. In the web authoring world, it’s to provide text alternatives for any non-text content. The exact techniques vary by medium. For the HTML <img> element, the guidelines say you must provide an alt attribute and (potentially) a longdesc attribute. For SVG, they mandate a <title> child element and (potentially) a <desc> element.
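Concretely, an accessible inline SVG image might carry its text alternative like this (the chart and its description are invented for illustration):

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 300">
  <title>Monthly site visits, 2008</title>
  <desc>A line chart showing visits rising steadily from about
  10,000 in January to about 45,000 in December.</desc>
  <!-- drawing elements would go here -->
</svg>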
The interesting part is not what the guidelines say, but what they do not say.
So here’s the crux of the problem: nowhere in the process of defining an accessibility feature is there any consideration for how often it would be used, how often it would be used correctly, what would happen if it were used incorrectly, how much it would cost to implement it, or how users would learn about the feature. In short, there is no cost-benefit analysis.
Now, some features are simple and easy and popular, so these questions never come up. If enough authors use them and tool vendors implement them and end users learn about them, then everything works. But not every feature is simple or easy or popular; a lot of them are waaaaay down the “long tail” of usage + implementation + education. So far down that, in any other field, you would start talking about the law of diminishing returns. But in accessibility, there is no such limit.
Some concrete examples: most browsers don’t expose information about the access keys available on a page, most authors don’t define access keys in their pages, and those that do often conflict with other browser, AT, or OS-level shortcuts. Most images aren’t complex enough to warrant a long description, and most authors who try to offer a long description get it wrong. But it is simply assumed that users who would benefit from these features will somehow learn of their existence and be motivated to find software that supports them (assuming they can ever find a page that uses them).
The accessibility orthodoxy does not permit people to question the value of features that are rarely useful and rarely used.
When this orthodoxy collides with reality, the results are both humorous and sad. When I was an accessibility architect at IBM, I assisted in the final stages of ensuring that Eclipse’s Graphical Editing Framework was fully accessible to blind people. This involved ensuring that all possible objects were focusable, all possible actions were keyboard-accessible (including drag-and-drop), and all possible information about nodes and connectors was exposed to third-party assistive technologies via MSAA. It was mind-numbing work, full of strange edge cases and bizarre hypothetical situations, not unlike the one Sam is struggling to understand. During one particularly difficult teleconference, an Eclipse developer muttered something like, “You realize no one is ever actually going to do any of this, right?” There was an awkward silence as the people who had spent their lives in the trenches of access enablement contemplated the very real possibility that no one would ever benefit from their work.
Back to Sam’s question. Few authors publish in true XHTML mode, fewer still include inline SVG images in their XHTML, and fewer still include titles or descriptions in those images. But in theory, you can imagine a situation where a web author publishes in true XHTML mode, and the author includes an inline SVG image within an XHTML page, and an end user is using a browser that supports true XHTML, and that user is using a hypothetical screenreader-of-the-future that implements support for the <title> and <desc> elements within inline SVG images within XHTML pages, and that user stumbles across that page. It’s theoretically possible, therefore you have to do it. Period. End of discussion.
Now go retrofit text alternatives into every SVG image you’ve ever published, or an accessibility advocacy group who has never visited your site will sue you on behalf of all the users you’ve been disenfranchising. All zero of them.
This past weekend I had the pleasure of attending and presenting at the annual SXSW conference, down in Austin, TX. I participated in a panel discussion called 'More Secrets of JavaScript Libraries' (a follow-up panel to last year's talk). The synopsis was as follows:
In a reprise of last year's popular panel, the JavaScript library authors are getting together again to impart what they've learned from their experience in developing solid, world-class JavaScript libraries. Covering everything from advanced aspects of the JavaScript language, to handling cross-browser issues, all the way up to packaging and distribution. A complete set of knowledge for a JavaScript developer.
The talks went really well - we each gave a quick 10 minute presentation on a topic that interested us and finished up with some Q&A. The individual talks were as follows:
The full presentation and audio can be found online.
More information about my talk can be found in my follow-up post: JavaScript Testing Does Not Scale.
(This is a follow-up on my portion of the More Secrets of JavaScript Libraries panel at SXSW.)
It's become increasingly obvious to me that cross-browser JavaScript development and testing, as we know it, does not scale.
jQuery's Test Suites
Take the case of the jQuery core testing environment. Our default test suite is an XHTML page (served with the HTML mimetype) with the correct doctype. It includes a number of tests that cover all aspects of the library (core functionality, DOM traversal, selectors, Ajax, etc.). We have a separate suite that tests offset positioning (integrating this into the main suite would be difficult, at best, since positioning is highly dependent upon the surrounding content). This means that we have, at minimum, two test suites straight out of the gate.
Next, we have a test suite that serves the regular XHTML test suite with the correct mimetype (application/xhtml+xml). We aren't 100% passing this one yet, but we'd like to be sometime before jQuery 1.4 is ready. Additionally, we're working on another version that serves the regular test suite with its doctype stripped (throwing it into quirks mode). This is another one that we'd like to be passing completely in time for 1.4.
Both of those tweaks (one with correct mimetype and one with no doctype) would also need to be done for the offset test suite. We're now up to 6 test suites.
We have another version of the default jQuery test suite that runs with a copy of Prototype and Scriptaculous injected (to make sure that the external library doesn't affect internal jQuery code). And another that does the same with Mootools. And another that does the same for old versions of jQuery. That's three more test suites (up to 9).
Finally, we're working on another version of the suite that manipulates Object.prototype before running the tests. This will help us to, eventually, be able to work in that hostile environment. It's another one that we'd like to have done in time for jQuery 1.4 - and it brings our test suite total up to 10.
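To see why a modified Object.prototype is such a hostile environment, consider what it does to every for-in loop. A small sketch:

// Extending Object.prototype makes the new property show up in
// every for-in loop over every object:
Object.prototype.inject = function () {};

var settings = { speed: 'fast' };
for (var key in settings) {
  console.log(key); // logs "speed", then the inherited "inject"
}

// Library code has to guard every loop to survive:
for (var key in settings) {
  if (settings.hasOwnProperty(key)) {
    console.log(key); // logs only "speed"
  }
}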
We're in the initial planning stages of developing a pure-XUL test environment (to make sure jQuery works well in Firefox extensions). Eventually we'd like to look at other environments as well (such as in Rhino + Env.js, Rhino + HTMLUnit, and Adobe AIR). I won't count these non-browser/HTML environments, for now.
At minimum that's 10 separate test suites that we need to run for jQuery. Ideally, we should be running every one of them just prior to committing a change, just after committing a change, for every patch that's waiting to be committed, and before a release goes out...
in every browser that we support.
The Browser Problem
And this is where cross-browser JavaScript unit testing goes to crazy town. In the jQuery project we try to support the current version of all major browsers, the last released version, and the upcoming nightlies/betas (we balance this a little bit with how rapidly users upgrade browsers - Safari and Opera users upgrade very quickly).
At the time of this post that includes 12 browsers.
Of course, that's just on Windows and doesn't include OS X or Linux. For the sake of sanity in the jQuery project we generally only test on one platform - but ideally we should be testing Firefox, Safari, and Opera (the only multi-platform browsers) on all platforms.
The end result is that we need to run 10 separate test suites in 12 separate browsers before and after every single commit to jQuery core. Cross-Browser JavaScript testing does not scale.
Of course, this is just desktop cross-browser JavaScript testing - we should be testing on some of the popular mobile devices, as well. (MobileSafari, Opera Mobile, and possibly NetFront and Blackberry.)
Manual Testing
All of the above test suites are purely automated. You open them up in a browser, wait for them to finish, and look at the results - they require no human intervention whatsoever (save for the initial loading of the URL). This works for a lot of JavaScript tests (and for all the tests in jQuery core) but it's unable to cover interactive testing.
Some test suites (such as Yahoo UI's, jQuery UI's, and Selenium's) have ways of automating pieces of user interaction (you can write tests like 'click this button, then click this other thing'). For most cases this works pretty well. However, all of this is just an approximation of the actual interaction that a user may employ. Nothing beats having real people manually run through some easily-reproducible (and verifiable) tests by hand.
This is the biggest scaling problem of all. Take the previous problem of scaling automated test suites and multiply it by the number of manual tests that you want to run. 100 tests in 12 browsers, run on every commit by a human, is just insane. There has to be a better way, since it's obvious that Cross-Browser JavaScript testing does not scale.
What currently exists?
The only way to tackle the above problem of scale is to have a massive number of machines dedicated to testing and to somehow automate the process of sending those machines test suites and retrieving their results.
There currently exists an Open Source tool related to this problem space: Selenium Grid. It's able to send out tests to a number of machines and automatically retrieve the results - but there are a couple of problems:
Naturally, this solution doesn't tackle the problem of manual testing, either.
A solution: TestSwarm
All of this leads up to a new project that I'm working on: TestSwarm. It's still a work in progress, but I hope to open up an alpha test by the end of this month - feel free to sign up on the site if you're interested in participating.
Its construction is very simple. It's a dumb JavaScript client that continually pings a central server looking for more tests to run. The server collects test suites and sends them out to the respective clients.
Test suites are collected into batches: for example, one "commit" can have 10 test suites associated with it (and be distributed to a selection of browsers).
The nice thing about this construction is that it's able to work in a fault-tolerant manner. Clients can come-and-go. At any given time there might be no Firefox 2s connected, at another time there could be thirty. The jobs are queued and divvied out as the load requires it. Additionally, the client is simple enough to be able to run on mobile devices (while being completely test framework agnostic).
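As a rough sketch of the idea (the endpoint, host, and parameter names here are invented; the real client may differ):

// A deliberately dumb client: ask the server for work, run it, repeat.
var clientId = 'fx3-on-xp-001'; // hypothetical id handed out by the server

function pollForWork() {
  // Load the server's reply as a script; it either starts a test suite
  // (typically in an iframe that reports results back) or does nothing.
  var script = document.createElement('script');
  script.src = 'http://swarm.example.com/get-work?client=' + clientId;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// Continually ping the central server for more tests to run.
setInterval(pollForWork, 30000);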
Here's how I envision TestSwarm working out: Open Source JavaScript libraries submit their test suite batches to the central server and users join up to help out. Library users can feel like they're participating and helping the project (which they are!) simply by keeping a couple extra browser windows open during their normal day-to-day activity.
The libraries can also push manual tests out to the users. A user will be notified when new manual tests arrive (maybe via an audible cue?) which they can then quickly run through.
All of this help from the users wouldn't be for nothing, though: There'd be high score boards keeping track of the users who participate the most and libraries could award the top participants with prizes (t-shirts, mugs, books, etc.).
The framework developers get the benefit of near-instantaneous test feedback from a seemingly-unlimited number of machines and the users get prizes, recognition, and a sense of accomplishment.
If this interests you then please sign up for the alpha.
There's already been a lot of interest in a "corporate" version of TestSwarm. While I'm not planning on an immediate solution (other than releasing the software completely Open Source) I would like to have some room in place for future expansion (perhaps users could get paid to run through manual tests - sort of a Mechanical Turk for JavaScript testing - I dunno, but there's a lot of fodder here for growth).
I'm really excited - I think we're finally getting close to a solution for JavaScript testing's scalability problem.
Today we’re excited to release the final build of Internet Explorer 8 in 25 languages. IE8 makes what real people do on the web every day faster, easier, and safer. Anyone running Windows Vista, Windows XP, or Windows Server can get 32- and 64-bit versions now from http://www.microsoft.com/ie8. (Windows 7 users will receive an updated IE8 as part of the next Windows 7 milestone.)
We’ve blogged a lot here about what’s in IE8. Stepping back from individual features, Internet Explorer is focused on how real people use the Web. We designed the product experience based on real-world data from tens of millions of user sessions. We worked closely with developers and standards groups to deliver a far better platform for the people who build the web. We cooperated closely with the security community to address the real threats that users face on the web, and keep users in control of their browsing and information. The resulting product takes a “batteries included,” just works out of the box approach to delivering the next browser for how hundreds of millions of people really use the web. We think it will surprise people who haven’t looked closely at IE in a while. Perhaps it’s time to re-think conventional wisdom about IE.
Today at the MIX conference, we showed IE8’s technology and design in the context of what real people do all the time on the web.
What’s Next
First, as a team we want to thank everyone who used our pre-release software and provided feedback. You helped us deliver IE8.
Our next steps start with listening. We’re going to listen for customer and security issues and respond appropriately. We’re going to engage with web sites and developers on compatibility. We’re going to finish Windows 7. We’re going to work with standards bodies to finish CSS 2.1 and bring other standards to a customer-ready state faster. We’re going to stand behind this product and service and secure it for many years. We’re going to listen to your feedback while we start work on the next version of IE.
The more important part happens outside of the IE team as people start using IE8. We’re excited to see how developers take advantage of it, from slices, accelerators, and visual search results that people can extend IE with to richer, safer websites that they’ll use every day.
Thanks –
Dean Hachamovitch
General Manager
P.S. The following table summarizes much of what we’ve blogged about here; please see http://www.microsoft.com/ie8 for a more complete list that includes our work on accessibility, manageability and deployment, and more:
Faster and easier for how people really browse the web every day:
- Address bar. Searches across your history and favorites.
- Search box. Visual Search suggestions, Quick Pick, and search results from your browsing history.
- Accelerators. Immediate in-page access to the services of your choice.
- Web slices. One-click access from the Favorites bar to the services you choose, with live previews and automatic updates.
- Tab grouping and coloring. Automatic tab organization; easier to multitask.
- “New Tab” experience. Easy access to your last browsing session or closed tabs.
- Favorites bar. One click to add a favorite to the bar, and one-click access to favorites, web slices, and feeds.
- Real-world performance. Top sites load fast in IE8.
- Find bar. Easily find and highlight text on the current web page.
- Suggested Sites. Discover more sites on the web similar to the sites you already enjoy.
- Toolbar close box. Easy to enable or disable toolbars.
- Add-on load time. See and control which add-ons affect IE performance.
Safer, protecting real people from the real threats on the web:
- Malware protection. Prevents installation of malicious software.
- Cross-site scripting filter. Protection from web site attacks.
- Tab isolation and Automatic Crash Recovery. Keep browsing even if a site or control crashes.
- Domain highlighting in the address bar. Easy to see what site you’re really on.
- InPrivate Browsing. Never saves your browsing history.
- InPrivate Filtering. Avoid third-party web tracking.
- Delete items from the address bar. More over-the-shoulder privacy.
- Search settings protection. Your search provider is always your choice.
- Clickjacking protection. Protection from a class of exploits involving mouse click redirection tricks.
- Per-user/site ActiveX. Additional protection from repurposed ActiveX controls.
- DEP/NX. Protection from a class of memory exploits.
- More-secure mashups for developers, with new functions and support for new standards-based mechanisms (toStaticHTML(), XDomainRequest, native JSON support, postMessage()).
Opportunities and interoperability for the people who build the web:
- Standards mode by default. Easier to build sites that work across browsers. (A Compatibility View list for end users while developers adjust to a more interoperable IE.)
- Most CSS 2.1 compliant, with a 7,000+ test case CSS 2.1 test suite (incorporating community feedback) contributed to the W3C.
- Web slices, Accelerators, and Visual Search extensibility. Easy to integrate a site with the browser experience. These formats are released under open licenses.
- Beginning of HTML5 support (XDR, local storage, navigation); Acid2.
Building Fast Client-side Searches. Flickr now lazily loads your entire contact list into memory for auto-completion. Extensive benchmarking found that a control-character-delimited string was the fastest option for shipping thousands of contacts around as quickly as possible.
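The gist of the approach, sketched here with made-up separators and field layout (Flickr's actual format may differ):

// Contacts arrive as one big string, with control characters (which
// cannot appear in user data) separating records and fields; splitting
// it is much cheaper than eval'ing JSON for thousands of records.
var RECORD_SEP = '\u0001';
var FIELD_SEP = '\u0002';

function parseContacts(payload) {
  var contacts = [];
  var records = payload.split(RECORD_SEP);
  for (var i = 0; i < records.length; i++) {
    var fields = records[i].split(FIELD_SEP);
    contacts.push({ name: fields[0], alias: fields[1] });
  }
  return contacts;
}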
Understanding Bidirectional (BIDI) Text in Unicode. It turns out you need to sanitise user input to ensure there are no Unicode characters that switch your site’s regular text to RTL.
Parallel merge sort in Erlang. Thoughts on an Erlang-y way of implementing a combined activity stream (e.g. Facebook and Twitter). Activity streams are a Really Hard Problem—as far as I know there’s no best practice for implementing them yet.
…changing stuff just because you can is popular in the mobile space because it expresses your power over other parts of the mobile value chain…
Since my previous post about mobile browser testing I’ve had four days in Düsseldorf to play with mobile phones, and I’ve once again unearthed quite a few problems that mobile browser testers will encounter. So this post is mostly about how the situation is even more complicated than we thought.
You can look over my shoulder while I’m testing, as far as I’m concerned, as long as you remember that every bit of data is provisional and may change radically without warning.
If you’re interested in real-time raw test results, follow me on Twitter. I regularly post my findings there, and it’s already delivered me some excellent feedback.
In this entry we’ll look at first-line and second-line browsers, mobile support for basic CSS, Opera’s two modes, the failure of @media handheld, Vodafone “content adaptation,” and the Nokia keyCode problem, and we’ll close off with a few fun browser facts.
The crucial question of the moment is: who asserts supreme control over the way a website looks on a mobile phone? Currently I’m arguing the author should, but Opera and Vodafone assert vendor control, with Opera also giving the user a modicum of control.
redis (via). An in-memory scalable key/value store but with an important difference: this one lets you perform list and set operations against keys, opening up a whole new set of possibilities for application development. It’s very young but already supports persistence to disk and master-slave replication.
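For example, the list and set operations look like this in a redis client session (the key names are invented for illustration). LRANGE returns a slice of a list, and SINTER intersects sets:

redis> LPUSH recent.posts "redis-intro"
redis> LPUSH recent.posts "erlang-merge-sort"
redis> LRANGE recent.posts 0 -1
redis> SADD tags.databases "redis-intro"
redis> SADD tags.performance "redis-intro"
redis> SINTER tags.databases tags.performance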
Opera automated site testing engine
To make sure new versions of our browser core are of sufficient quality before making their way into any of our products, we run more than 100,000 automated tests on a number of different reference configurations every time we have a new build.
We run automated visual tests, JavaScript tests, selftests, performance tests, stability tests, memory tests and a lot more. One thing we have been missing, however, is automated tests for the things that require some sort of user interaction: clicking links, filling out forms, interacting with complex Web applications.
That is ... until now.
We are working on adding support for driving the browser through our scope protocol, which is the same protocol we use for the Opera Dragonfly debugger. Through a simple script, we can instruct the browser to automatically search Google, log into Hotmail and send a message, buy books at Amazon, or find plane tickets at Expedia.
Here's an example of what such a script can look like:
require "operawatir"
browser = OperaWatir::Opera.new
browser.goto("http://www.google.com")
browser.text_field(:name, "q").value = "Wikipedia"
browser.button(:name, "btnG").click
browser.link(:text, "Wikipedia").click
puts "PASS" if browser.text.include? "Wikipedia"
The syntax above is that of the Watir API, a Ruby test tool originally developed for Internet Explorer that is now being ported to Opera and other browsers.
Below is a video of the script running in the desktop version of our browser. We've had to slow it down significantly for you to be able to see what's going on - the test normally takes a few hundred milliseconds.
Yesterday Facebook announced a number of upcoming changes to its home page (see the mockup), profile pages, and activity streams. Taken together, these innovations (Zuckerberg calls them "philosophical") clearly suggest that the world's largest social network is taking seriously the growing popularity of Twitter as a system for broadcasting messages beyond one's circle of friends.
Public Facebook profiles. Hello, new world :)
400 000 installations in one month
The Ubuntu console has several handy keyboard shortcuts, and my favorite of them is Ctrl+W. When I make a typo in a word, I usually fix it by retyping the word from scratch, and this hotkey deletes exactly the word before the cursor: very convenient. So convenient that in other applications I quite often press the same two keys reflexively. Unfortunately, the application very often interprets this as a command to close the tab.
I really don't want to give up closing tabs with Ctrl+W, because I use that hotkey a lot too. The ideal behavior would be deleting a word while editing text and closing the tab in all other situations. As usual, user JavaScript comes to the rescue: Smart Ctrl+W: delete word or close tab. The logic inside is fairly simple: if the Ctrl+W keyboard event comes from a text field, cancel the event and delete the word before the cursor in that field.
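The core of such a script might look roughly like this (a sketch of the idea, not the published script itself):

// Intercept Ctrl+W inside text fields and delete the previous word
// instead of letting the browser close the tab.
document.addEventListener('keydown', function (event) {
  var target = event.target;
  var isTextField = target.tagName === 'TEXTAREA' ||
    (target.tagName === 'INPUT' && target.type === 'text');
  if (event.ctrlKey && event.keyCode === 87 && isTextField) { // 87 = W
    event.preventDefault();
    var pos = target.selectionStart;
    var before = target.value.substring(0, pos);
    var after = target.value.substring(pos);
    // Drop the last word (and surrounding spaces) before the caret.
    before = before.replace(/\s*\S+\s*$/, '');
    target.value = before + after;
    target.selectionStart = target.selectionEnd = before.length;
  }
}, true); // capture phase, to get at the event before the page does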
As usual, I realized after the fact that in Opera the same thing can be achieved even more simply: in the keyboard shortcut settings (Preferences → Advanced → Shortcuts → Keyboard setup → Edit…), change the action for "w ctrl" from "Close page, 1" to "Backspace word | Close page, 1".
Disclaimers: this hotkey exists not only in Ubuntu, not only in the console, and not in every console; and the script doesn't work in Opera, because in Opera the settings approach above works instead.