cool900: Comparing Freedom on Maemo and Android - http://cool900.blogspot.com/2009...
memcache-top. Useful self-contained perl script for interactively monitoring a group of memcached servers.
Did you fall in love with Scala at first sight? Do you dream of actors, DSLs, implicits and pattern matching? We have a full-time position available for a Scala Hacker (Server-side Engineer) at Remember The Milk. If this sounds like you, and you'd like to join us in our quest to make the world more organized and productive, head on over to the jobs page and get in touch!
When using the HTML table element to mark up tabular data, remember to use th elements for cells that provide header information for rows or columns.

In addition to using th elements for header cells, you should also use the scope or headers attributes to tell user agents, primarily screen readers and other assistive technology, which header cells provide header information for any given data cell.
Explicitly associating header cells with data cells isn’t strictly necessary for very simple tables that only have one row or column of headers, but it doesn’t hurt to get in the habit of always doing so.
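For a very simple table, the habit looks like this (a sketch; the headers and data are made up):

```html
<table>
  <tr>
    <th scope="col">Month</th>
    <th scope="col">Visitors</th>
  </tr>
  <tr>
    <th scope="row">January</th>
    <td>4120</td>
  </tr>
  <tr>
    <th scope="row">February</th>
    <td>3890</td>
  </tr>
</table>
```

Here scope="col" marks headers that apply to everything below them, and scope="row" marks headers that apply to the cells to their right.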
Posted in (X)HTML, Accessibility, Quick Tips.
Alexander Kiel » Status of TLS/SNI in 04/2008 - http://www.alexanderkiel.net/2008...
SimpleDB gotcha - http://dailyawswtf.com/post...
Mail.ru preparing for an IPO, Rambler's reinvented-wheel mail system, Zavalishin's invisible Phantom OS, and other stories from the highload developers' conference.
It is amazing how easy it is to sail through a Computer Science degree from a top university without ever learning the basic tools of software developers, without ever working on a team, and without ever taking a course for which you don’t get an automatic F for collaborating. Many CS departments are trapped in the 1980s, teaching the same old curriculum that has by now become completely divorced from the reality of modern software development.
Where are students supposed to learn about version control, bug tracking, working on teams, scheduling, estimating, debugging, usability testing, and documentation? Where do they learn to write a program longer than 20 lines?
Many universities have managed to convince themselves that the more irrelevant the curriculum is to the real world, the more elite they are. It’s the liberal arts way. Leave it to the technical vocational institutes, the red-brick universities, and the lesser schools endowed with many compass points (“University of Northern Southwest Florida”) to actually produce programmers. The Ivy Leagues of the world want to teach linear algebra and theories of computation and Haskell programming, and all the striver CS departments trying to raise their standards are doing so by eliminating anything practical from the curriculum in favor of more theory.
Now, don’t get me wrong, this isn’t necessarily a bad thing. At least they’re replacing Java with Scheme, if only because “that’s what MIT does.” (Too late!) And they are teaching students to think a certain way. And given how much the average CS professor knows about real-world software engineering, I think I’d rather have kids learn that stuff at an internship at Fog Creek.
Greg Wilson, an assistant professor at the University of Toronto, gave a talk at the StackOverflow DevDay conference in Toronto, which was entertaining, informative, and generally just a huge hit. We got to talking, and he told me about his latest brainchild, UCOSP, which stands for “All The Good Names Are Taken.”
It’s a consortium of 15 universities, mostly in Canada, which are organizing joint senior-year capstone projects. They’re setting up teams of a half-dozen undergraduates from assorted universities to collaborate on contributing to an open source project, for credit and for a grade. As soon as I heard about the program I volunteered to sponsor a team to make a contribution to Mercurial. Sponsoring a team consists of offering to pay for a trip to Toronto for all the undergrads to get organized, and providing a programmer to mentor the team.
Browsing around the UCOSP blog, I was reminded of why student projects, while laudable, frequently fail to deliver anything useful. “One of the points of this course is to give you a chance to find out what it’s like to set and then meet your own goals,” Greg wrote. “The net result is pretty clear at this point: in many cases, students are doing less per week on this course than they would on a more structured course that had exactly the same content.”
College students in their final year have about 16 years of experience doing short projects and leaving everything until the last minute. Until you’re a senior in college, you’re very unlikely to have ever encountered an assignment that can’t be done by staying up all night.
The typical CS assignment expects students to write the “interesting” part of the code (in the academic sense of the word). The other 90% of the work that it takes to bring code up to the level of “useful, real-world code” is never expected from undergrads, because it’s not “interesting” to fix bugs and deal with real-world conditions, and because most CS faculty have never worked in the real world and have almost no idea what it takes to create software that can survive an encounter with users.
Time management is usually to blame. In a group of four students, even if one or two of the students are enterprising enough to try to start early in the term, the other students are likely to drag their heels, because they have more urgent projects from other classes that are due tomorrow. The enterprising student(s) will then have to choose between starting first and doing more than their fair share of the work, or waiting with everyone else until the night before, and guess which wins.
Students have exactly zero experience with long term, team-based schedules. Therefore, they almost always do crappy work when given a term-length project and told to manage their time themselves.
If anything productive is to come out of these kinds of projects, you have to have weekly deadlines, and you have to recognize that ALL the work for the project will be done the night before the weekly deadline. It appears to be a permanent part of the human condition that long term deadlines without short term milestones are rarely met.
This might be a neat opportunity to use Scrum. Once a week, the team gets together, in person or virtually, and reviews the previous week’s work. Then they decide which features and tasks to do over the next week. FogBugz would work great for tracking this: if you’re doing a capstone project and need access to FogBugz, please let us know and we’ll be happy to set you up for free. We can also set you up with beta access to Kiln, our hosted version of Mercurial, which includes a code review feature.
I’ve been blaming students, here, for lacking the discipline to do term-length projects throughout the term, instead of procrastinating, but of course, the problem is endemic among non-students as well. It’s taken me a while, but I finally learned that long-term deadlines (or no deadlines at all) just don’t work with professional programmers, either: you need a schedule of regular, frequent deliverables to be productive over the long term. The only reason the real world gets this right where all-student college teams fail is because in the real world there are managers, who can set deadlines, which a team of students who are all peers can’t pull off.
Need to hire a really great programmer? Want a job that doesn't drive you crazy? Visit the Joel on Software Job Board: Great software jobs, great people.
Play framework for Java. I’m genuinely impressed by this—it’s a full stack web framework for Java that actually does feel a lot like Django or Rails. Best feature: code changes are automatically detected and reloaded by the development web server, giving you the same save-and-refresh workflow you get in Django (no need to compile and redeploy to try out your latest changes).
What every programmer should know about memory, Part 1 - http://lwn.net/Article...
Lately I've grown so tired of embedding custom fonts into pages that I decided to write a post about it: accumulated experience plus the latest news.

The future has practically arrived: in all current browsers you can now put a non-standard font on a page using nothing but CSS (Firefox 3.5+, Chrome 3.0+, Safari 3.1+, Opera 10+, IE 5+).

Over the years of web development a multitude of methods have been invented: sIFR (fonts via Flash), which will soon turn five, Cufon (fonts via <canvas>), and various other variations like Typeface (where, by the way, you can already select and copy text!).

But ever since the last millennium, the coolest method has of course been to simply attach the needed font to the page and set your text in it. Netscape did it first, then IE 4.0. And now it's in the upcoming CSS3. According to the specification, it's done like this:
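A minimal sketch of the rule the spec describes (the family name and file path here are placeholders):

```css
@font-face {
  font-family: "My Font";        /* the name you will refer to later */
  src: url("fonts/myfont.ttf");  /* placeholder path to a .ttf or .otf file */
}
```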
And then you use it:
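For example, a family declared with @font-face is then referenced like any other font (the name here is a placeholder):

```css
h1 {
  font-family: "My Font", sans-serif;  /* fall back to a system font */
}
```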
That exact approach currently works in Firefox 3.5+, Chrome 3.0+, Safari 3.1+ and Opera 10+, that is, in practically all the latest browser versions. They support .ttf and .otf files.

In IE, custom fonts have worked since version 4.0. But, apparently having stumbled over licensing back then (or for irrational reasons of its own), Microsoft decided to invent its own format for embeddable font files, Embedded OpenType (.eot), and that format is the epicenter of all the pain in this whole font question.

In any case, embedding works roughly the same way there:
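A sketch of the IE variant, assuming the font has already been converted to .eot (the name and path are placeholders):

```css
@font-face {
  font-family: "My Font";
  src: url("fonts/myfont.eot");  /* Embedded OpenType, for IE */
}
```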
At this point it's obvious that if we can produce this mysterious .eot file, everything will be splendid and practically all modern browsers will display our font.

Microsoft's special WEFT program has been known about for a long time. But for some reason I could never manage to use it: the fonts it produced never worked in IE. I put it down to licensing or some other hang-ups, but judging by recent posts, almost nobody has managed to make it work. The program is quite a unique experience in general; just skim the simple screencast for converting OTF to EOT.

But in just the last month, two more options have appeared, and they are the ones that will save us.

An excellent online conversion service, ttf2eot, has appeared. But if you simply feed it the first TTF file you come across, the resulting EOT won't work in IE. To make it work, you first need to clean up the font's attributes. For that you'll need FontForge.
If you open the font in FontForge, you'll get a window roughly like this:

FontForge

Then you need to open the «Font Info→TTF Names» window and delete all the entries there; the red ones won't delete, and that's fine:

This window lives in the Element→Font Info menu

You can see all of this chewed over in the screencast.

After that, simply save the font (File→Generate Fonts) and feed it to the ttf2eot service. The resulting file will work in IE.

The screencast, by the way, says to delete all the attributes and then add a few of your own. I simply deleted everything, and everything works for me, so don't bother with that.
However, having seen all the hassle of the previous method, people automated it.

It's simply an online service. You upload a font, and it generates an .svg version of the font, an .eot version, and even a WOFF version (the format for the new Firefox). On top of that, a CSS file that correctly wires up all these woff-eot-svg-otf variants (what a nightmare) in @font-face.

In short, at the moment it's simply a lifesaver of a service; use it. Its developer actively follows all the news and immediately pushes updates, so in a sense you can treat it as a reference point.
At the moment, the following code is considered optimal:
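The pattern widely recommended at the time looked roughly like this (a sketch with placeholder names and paths): IE picks up the first src line, while the second, with its local() and format() hints, is read by everyone else.

```css
@font-face {
  font-family: "My Font";
  src: url("myfont.eot");                       /* IE reads this line */
  src: local("My Font"),                        /* use an installed copy if present */
       url("myfont.woff") format("woff"),       /* Firefox 3.6+ */
       url("myfont.ttf") format("truetype"),    /* Firefox 3.5, Chrome, Safari, Opera */
       url("myfont.svg#MyFont") format("svg");  /* SVG fonts, e.g. for the iPhone */
}
```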
You can read in more detail about why exactly this form here. local means the browser should check whether the font already exists on the system before downloading it. Should, but it still doesn't work everywhere. =(

You can also embed fonts in .svg format; then they will show up even on the iPhone.

And just the other day WOFF made real headway: it's a "compressed" TrueType that for now will work only in Firefox 3.6. Just so web developers don't get bored with OTF, TTF, SVG and EOT.

First of all, font files can simply be gzipped. Experience shows they can be squeezed to half the size, which is not so little at all when a font weighs on the order of 100 kilobytes.
Besides that, if you open a font file in that same FontForge, you can see a heap of glyphs that you are unlikely to ever use:

All this junk, which usually nobody needs, takes up as many bytes as the letters everyone actually uses

Right there in FontForge the unneeded glyphs can be deleted:

Select them with Shift held down and choose Clear

Thanks to deleting only such glyphs, my file for the Myriad font shrank from 96 kilobytes to 40. The .eot file shrinks accordingly.
Not every font may be used in @font-face, simply because it then becomes trivially easy to download, while the font itself may cost money. And although in practice hardly anyone will come after you over this, popular sites are better off not taking the risk. There are already directory sites of fonts cleared for use with @font-face: there's such a list on Webfonts.info, for example, and here is another excellent resource. (Thanks, Ilya!) And they appear on various other sites from time to time.

There are of course few Cyrillic fonts among them, but since nobody has really sorted this question out yet, I suppose you can use whatever fonts you find, or simply free ones. Hardly anyone is going to get sued for embedding Myriad on their blog ;)

WOFF is expected to solve the licensing problem, so that we will be able to use any font we legally own on our pages.

It looks like we will soon forget about sIFR and Cufon! Hooray!
Rails-like Quickly tools brings rapid development to Ubuntu - Ars Technica - http://arstechnica.com/open-so...
I've already written that in Opera, user javascript code can be executed at different times depending on the file's extension. Many scripts, though, need to run only once the whole DOM has loaded. Binding to the load and DOMContentLoaded events didn't always go smoothly for me (as it turned out, because of silly typos), so now I wanted to figure out thoroughly which binding variant is best to use.

So, I quickly threw together a simple html page and two almost identical scripts, test.js and test.user.js, differing only in extension and an identifying line inside. I put the scripts into the userjs folder of a clean Opera profile and opened the document. Here's what then appeared in the console:
test.js: start
html: end of head
html: end of body
test.js: document.DOMContentLoaded (capture)
test.js: document.DOMContentLoaded
test.js: window.DOMContentLoaded (capture)
test.js: window.DOMContentLoaded
test.user.js: start
test.js: document.load (capture)
test.js: document.load
html: document.onload
test.user.js: document.load (capture)
test.user.js: document.load
test.js: window.load (capture)
test.js: window.load
html: body.onload
test.user.js: window.load (capture)
test.user.js: window.load
what conclusions can be drawn from this?

- window.opera.addEventListener is no good for these events: not a single variant using it fired.
- DOMContentLoaded fires before .user.js scripts even start, so only plain .js scripts can catch it. In those, the best call to use is document.addEventListener('DOMContentLoaded', handler, true), not forgetting the last parameter, as I all too often did : )
- user scripts do start before load fires on the document, so they can use document.addEventListener('load', …), though you can almost certainly get by without it.
- I suspect the order of load events registered in different ways is of interest only to browser developers forced to toil over compatibility. Some of it is logical, some of it surprising (for example, body.onload comes after document.load and even window.load). In theory this little study might still help some unfortunate soul maintaining crooked code, but I'd like to believe nobody will run into such problems : )
Opera will support border-radius as early as a 10.x version, quite probably somewhere at the start of next year (…) On this topic, by the way, I have already given a quote:

I have long been testing a build that handles border-radius, border-image, box-shadow, transform… and another dozen things from the CSS 3 draft; interestingly, without prefixes. All of this will be in the Presto 2.4 engine, which is currently built into internal betas with a 10.x version number.

I can't outright promise anything, but I did state a certain probability above, meaning the final stable versions.
YouTube may pay less to be online than you do, a new report on internet connectivity suggests, calling into question a recent analysis arguing that Google’s popular video service is bleeding money, and demonstrating how the internet has continued to morph to fit users’ behavior.
In fact, with YouTube’s help, Google is now responsible for at least 6 percent of the internet’s traffic, and likely more — and may not be paying an ISP at all to serve up all that content and attached ads.
Credit Suisse made headlines this summer when it estimated that YouTube was binging on bandwidth, losing Google a half a billion dollars in 2009 as it streams 75 billion videos. But a new report from Arbor Networks suggests that Google’s traffic is approaching 10 percent of the net’s traffic, and that it’s got so much fiber optic cable, it is simply trading traffic, with no payment involved, with the net’s largest ISPs.
“I think Google’s transit costs are close to zero,” said Craig Labovitz, the chief scientist for Arbor Networks and a longtime internet researcher. Arbor Networks, which sells network monitoring equipment used by about 70 percent of the net’s ISPs, likely knows more about the net’s ebbs and flows than anyone outside of the National Security Agency.
And the extraordinary fact that a website serving nearly 100 billion videos a year has no bandwidth bill means the net isn’t the network it used to be.
But the lack of a monthly bill in the mailbox doesn’t mean Google’s internet connection is free — it’s just that it has purchased unused fiber optic cable known as “dark fiber” — and uses it to carry its traffic to other networks where it “peers” or trades traffic with other ISPs. Its costs for bandwidth are then amortized across the life of its fiber and routers.
YouTube has been mum on its actual costs, for competitive reasons, but did say in a blog post in July that it has homegrown infrastructure and that traditional pricing models don’t apply.
There’s been a lot of speculation lately about how much it costs to run YouTube…. The truth is that all our infrastructure is built from scratch, which means models that use standard industry pricing are too high when it comes to bandwidth and similar costs. We are at a point where growth is definitely good for our bottom line, not bad.
In fact, YouTube’s low or nonexistent bandwidth bill points to a very important shift in the structure of the internet, which is rapidly becoming much more complicated.
Traditionally the net has been shaped like a pyramid with small ISPs at the bottom, connecting up to regional carriers, that connect to backbone and transcontinental carriers. It’s much more complicated now with the top 30 websites serving up 30 percent of net traffic, either from their own sets of pipes or from data centers around the world that connect much closer to your computer — and for much cheaper — than ever before.
It’s just one of many changes in how the net is structured, a change that started in 2007, according to the report.
In 2007, the majority of the internet’s traffic was distributed across 30,000 blocks of servers around the net (technically, Autonomous System Numbers).
In 2009, 150 blocks served up half of the net’s traffic.
“What we mean by the internet is changing and it’s happening really quickly,” Labovitz said. “I was blown away to find out that one-tenth of the internet is going [to] or coming from Google.”
Those blocks include Google and increasingly popular and cheap content-delivery networks, such as Akamai and Limelight, which serve content from websites such as Wired.com from server farms around the net — often at rates far cheaper than self-hosting.
Which is to say that the real money is in the ads and services in the packets, not in moving the bits from computer to computer. The cost of bandwidth has fallen and so too have the profit margins for moving bits, even as traffic grows at an estimated 40 percent a year.
With the growth of Google’s network and Content Delivery Networks, the economics of who pays whom to connect grows more complicated than the early days of the net when money flowed upwards — little ISPs paid regional ISPs who paid major ISPs who paid backbone operators.
Now if you are Google, you might even begin asking Comcast to pay up to connect its Google Tubes straight to their local cable ISP networks. That way, YouTube videos and Google search results would show up faster, letting the ISP brag that YouTube doesn’t stutter on their network, a potential commercial advantage over its DSL competitors.
“Who pays whom is changing,” Labovitz said. “All sorts of negotiations are happening behind closed doors.”
Unfortunately, few will know the outcomes of those talks, since most of the net’s architecture, let alone the financial machinations behind it, remains a secret cloaked in nondisclosure agreements.
But Labovitz says the changes will have a big upside for typical net users, who are already seeing faster downloads. For instance, many videos on YouTube now come in HD, an option that would have been unthinkable in the days when its video always seemed to be stuttering and buffering.
Labovitz also expects ISPs to react to falling margins for moving internet traffic by continuing to offer more and better services, such as backup services, smartphone apps to control their in-the-cloud cable DVRs or online video services like the controversial ESPN 360. That’s all part of their attempts to become something other than just dumb pipes ferrying YouTube videos — and Google’s ads — to your computer.
A full report, co-written with select academics, will be presented at the end of the month at the NANOG47 meeting, a gathering of net traffic engineers from North America. However, the Arbor Networks data is not available to other researchers due to confidentiality agreements, according to Labovitz.
See Also:
prototype.js uses a not entirely obvious approach to exceptions that occur while processing Ajax requests: by default they are suppressed and never reach the error console at all. To see them, you need to add to the request options:
onException: function logException(request, exception) {
// handle exception
}
or register a global handler:
Ajax.Responders.register({ onException: logException });
However, this can collide with another feature of the framework: automatic execution of JavaScript arriving from the server. By default evalJS: true is set, and if the headers contain something like Content-Type: text/javascript, Prototype will try to execute the response body as code, catch an exception, and pass it to onException.

You can fight this in two ways: either add evalJS: false to the options of every Ajax request, or serve JSON with Content-Type: application/json. The first way is simply impractical: it's ugly, and somewhere it is bound to be forgotten. The second is sometimes impossible…

I don't know a good solution to this problem yet : (
Posted in Accessibility, CSS, Usability.
Brad Neuberg of the Google Developer programs stopped by Yahoo! last week to talk about HTML5. Brad has been hard at work on SVG Web lately, but he covered a lot of ground in this talk, including SVG/Canvas rendering, CSS transforms, app-cache, local databases, web workers, and much more. Brad does a fantastic job identifying the scope and practical implications of the changes that are coming along with HTML5 support in modern browsers. And he pulls no punches about which browsers fall into the “modern” category at this point.
Brad was on campus as part of the BayJax Meetup; thanks to BayJax’s organizers, and particularly Yahoo! engineer Gonzalo Cordero, for making the arrangements for the event and providing food and beverages. This was our third Yahoo!-hosted BayJax, and once again it was a stimulating evening with excellent speakers.
If the video embed below doesn’t show up correctly in your RSS reader of choice, be sure to click through to watch the high-rez version of the video on YUI Theater; the downloadable version is much smaller, optimized as it is for iPods, iPhones, and other handheld devices.
The cornerstone of all testing done on the core of the Opera browser is our automated regression testing system, named SPARTAN. The system consists of a central server and about 50 test machines running our 120 000 automated tests on all core reference builds. The purpose of this system is to help us discover any new bugs we introduce as early as possible, so that we can fix them before they cause any trouble for our users.
Before SPARTAN can test anything, it will require a build to test. Our build system automatically creates builds every night and pings SPARTAN when they are ready. Developers and testers can also request their own builds from the build system, using any build tag they want, to test stuff from their own experimental branches before this is eventually merged into the stable mainline we base our products on.
Unlike other browser vendors, we ship our browser on a variety of different platforms, so our core build packages contain not just one binary but several: one for each general product category. Each of these profiles has the same feature set and memory constraints as the platform it corresponds to. The whole set of tests is run on each of these profiles.
When the SPARTAN server is informed about the existence of a new build it will add this build to its testing queue and distribute a few hundred tests to each of the test machines the next time they ask for more work. Each test machine works independently with its assigned tests. It will download the Opera binaries it has been told to use, and run its assigned tests on it. Once it has finished its batch of tests, it will pass the test results back to the SPARTAN server, and again ask what to do next.
If it ever runs out of new builds to test, for example during the weekend, it will look back at older builds and run any newly added tests on them too. This ensures that we have a full history for each test, and that at any time we can determine when a fix or regression was first introduced without having to retest things manually.
We have several different types of tests:
All in all, we currently run about 120 000 tests on each configuration in each build, but this number changes daily. We continuously write new test cases for bugs or test suites for new or old features, and we also copy any publicly available test suites we find useful. Right now we are also working on automating many of our previously manual tests, including memory tests.
Once the machines are done with their part of the job on any particular build, they will send an email to a human who will continue the work. SPARTAN will generate a report of changes between this build and the previous build. In most builds there are some tests that go from FAIL to PASS because we have fixed something. But there are also often regressions—tests that go from PASS to FAIL—because we accidentally broke something while fixing something else. This is expected, and is the reason why we do regression testing. We know there will always be regressions, and we need to find them as quickly as possible in order to fix them before they can cause any trouble for users or customers.
The human tester will analyze each regressed test. If a hundred different tests started failing at the same time, they could all have broken because of one regression, or there could be several different ones. For each unique regression identified, the human tester will report a new bug and assign it to the developer responsible for the code that broke. Once a fix is ready, we will run all our tests again.
Opera Software makes one of the most intriguing browsers on the market. Opera 10 was released this summer to excellent reviews, and it’s undoubtedly the best Opera yet. Opera continues to enjoy deep regional pockets of strength, especially in Eastern Europe. (In Russia, Opera was the top browser for much of 2009.) Meanwhile, Opera Mini continues to make headway in the low-powered device market.
On the other hand, by most global measures Opera remains a niche browser. For example, it has recently been eclipsed by the rapidly ascending Google Chrome browser in terms of traffic on the Yahoo! network; StatCounter shows Chrome roughly doubling Opera’s share over the past month on a global basis.
In this talk, Andreas Bovens and David Storey, both of Opera, make the case that we as a web development community can and should continue to support Opera as one of the top-tier, modern, standards-compatible browsers. The talk covers not just why you should support Opera, but also specifically how to go about it.
If the video embed below doesn’t show up correctly in your RSS reader of choice, be sure to click through to watch the high-resolution version of the video on YUI Theater; the downloadable version is much smaller, optimized as it is for iPods, iPhones, and other handheld devices.
I recently discovered a rather interesting superstructure over CSS that adds a lot of tasty things to the syntax. As the title says, it's called SASS, or syntactically awesome stylesheets, and it is a small Ruby program.

For examples of the sugar you'd best go to the site, but I'll list them briefly:

since CSS is a declarative language, tools like this suit it very well
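A small taste of that sugar in the indented SASS syntax of the time (a sketch; the selector names and values are made up): constants start with !, script values are assigned with =, and nested rules inherit their parent selector.

```sass
!brand_color = #336699

#header
  :color = !brand_color
  a
    :text-decoration none
    &:hover
      :text-decoration underline
```

The Ruby tool compiles this to plain CSS, expanding #header a and #header a:hover automatically.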
The Patent Advisory Group concluded that the inventive step claimed by US Patent Nr. 5,764,992 lies in the fact that the software program can update itself absolutely independent of functions performed by any resource external to the current software program. As the Widgets 1.0: Updates Draft uses an update-manager throughout the Specification, such self-updating does not occur.
/*
 * The code for each of the seven primitive types is largely identical.
 * C'est la vie.
 */
private static void sort1(long x[], int off, int len) { ...
private static void sort1(int x[], int off, int len) { ...
private static void sort1(short x[], int off, int len) { ...
private static void sort1(char x[], int off, int len) { ...
private static void sort1(byte x[], int off, int len) { ...
private static void sort1(double x[], int off, int len) { ...
private static void sort1(float x[], int off, int len) { ...
Users who already have the OS will be offered the choice through the Windows Update system.
Last week I spent a lot of time on WebKit in order to produce a comprehensive comparison of all WebKits. My purpose was to prove there is no “WebKit on Mobile,” and to gain some more insight into the complicated relations between the various WebKits.
Therefore I now present the Great WebKit Comparison Table. In it I compare 19 different WebKits on 27 tests.
I have long held a strong conviction that whichever browser you do your development in, the one where you refresh the page every two minutes, is exactly the one that comes to seem the most bug-free and correct to you. Although I get the impression this thought has occurred to no one but me, judging by how often old-school fans of Firefox and Firebug have a kind word for, say, Opera.

You may object that even when developing in IE it still seems like a horrible pile of glitches; but remember the quiet armada of devotees of Microsoft products.
One of the most interesting but under-appreciated processes in building a web site is the amount of testing that goes on to figure out exactly what should go where. Many startups rely on A/B testing as they roll out new features, and the big guys — namely very popular sites like Google and Facebook — conduct extensive usability studies that can involve interviews, eye monitoring, and more. Today YouTube has revealed some of the action that goes on behind the scenes as it continues to tweak its all-important ‘Watch’ page — the site you see when you’re actually viewing a video on YouTube.
To help gauge the Watch page’s ideal layout, YouTube invited in a number of users and gave them magnets that represented different elements from YouTube and other popular video sites. The results were not surprising, but they present an interesting challenge to YouTube: the vast majority of users chose to streamline their page as much as possible, featuring a large video player, a search box, and a strip of related videos. But the site’s heavy uploaders, who are obviously key to YouTube’s success, tended to favor a more complex site with a greater emphasis on analytics, sharing, and social interaction.
YouTube’s task is to figure out a way to appeal to both sets of users. And to do that, it sounds like there’s going to be a new set of customization options coming our way, which would allow users to tweak their watch pages with the features they want. YouTube wouldn’t confirm that this feature is definitely coming (the company is still doing extensive testing so it may not be sure itself), but don’t be surprised if you get the option to build your perfect ‘Watch’ page six months down the line.
Last month YouTube gave us a peek at another one of its recent research revelations: its five star rating system doesn’t work, because people tend to either rate videos as 5’s or 1’s.
History of Django’s popularity. “What sequence of events made Django the most popular Python web framework?”—insightful answers from Alex Martelli and James Bennett.
YUI().use("node", function(Y) { Y.one("#message").setContent("Hello, World!"); });
We’re pleased to announce today the general-availability release of YUI 3.0.0. YUI 3’s core infrastructure (YUI, Node and Event) and its utility suite (including Animation, IO, Drag & Drop and more) are all considered production-ready with today’s release.
YUI 3 is the first ground-up redesign of YUI since 2005, and it brings with it a host of modernizations. Among them: modules are sandboxed to the YUI instance in which they're employed and bound statically when you use() them; this protects you against changes that might happen later in the page's lifecycle. (In other words, if someone blows away a module you're using after you've created your YUI instance, your code won't be affected.)
The code we're shipping today in 3.0.0 is the same code that drives the new Yahoo! Home Page, and it goes out with confidence that it has been exercised vigorously and at scale. The team is thrilled to be sharing it with you today for the first time in a production-ready release.
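The sandboxing idea can be modeled in a few lines of plain JavaScript. This is a rough sketch of the pattern, not YUI's actual internals: each instance binds its own references to the available modules at creation time, so later changes to the shared registry don't affect it.

```javascript
// Plain-JS model of the sandboxing pattern: an instance binds its own
// copies of modules when it is created, so mutating the shared registry
// afterward does not affect instances that already exist.
var registry = {
    greet: function () { return 'hello'; }
};

function createInstance() {
    var bound = {};
    for (var name in registry) {
        bound[name] = registry[name]; // static binding at creation time
    }
    return bound;
}

var instance = createInstance();

// Someone "blows away" the module later in the page's lifecycle...
registry.greet = function () { return 'clobbered'; };

// ...but the existing instance is unaffected.
console.log(instance.greet()); // "hello"
```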
One of the goals of the YUI 3 redesign was to make it easy to use without sacrificing power, performance and configurability. You can have your first YUI 3 app running in less than a minute following three simple steps.
Step 1: Put the YUI seed file on the page, pulling down a slim 6.2KB script file off of the Yahoo CDN:
<script type="text/javascript" src="http://yui.yahooapis.com/3.0.0/build/yui/yui-min.js"></script>
Step 2: Make use of any YUI module or submodule. The seed file will take care of calculating your dependencies and loading any additional scripts you need in (usually) a single combo-handled, non-blocking HTTP request. So, you can use the Drag & Drop plugin to make an element draggable like this:
<div id="demo">I'm draggable.</div>
<script type="text/javascript" src="http://yui.yahooapis.com/3.0.0/build/yui/yui-min.js"></script>
<script>
YUI().use('dd-plugin', function(Y) {
    Y.one('#demo').plug(Y.Plugin.Drag);
});
</script>
Step 3: There is no step 3. Relax, grab a soda. Work on your short game. Life is good.
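To picture the "single combo-handled request" from Step 2: the loader batches the build paths of every needed module into one URL against the CDN's combo handler. The following is a hypothetical sketch of that URL construction, not the loader's actual implementation (the module paths shown are illustrative):

```javascript
// Hypothetical sketch: a combo handler serves many build files from one
// HTTP request by joining their paths onto a single combo URL.
function comboUrl(base, paths) {
    return base + '?' + paths.join('&');
}

var url = comboUrl('http://yui.yahooapis.com/combo', [
    '3.0.0/build/dd/dd-min.js',
    '3.0.0/build/dd-plugin/dd-plugin-min.js'
]);
// url now references both scripts in one combo-handled, non-blocking request
```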
use() Anything, But Not Everything
YUI 3's simplicity of use (particularly in its ability to use() any module with intrinsic, efficient loading) is paired with new levels of power and control.
For example, one of the characteristics you’ll find throughout the YUI 3 project is an emphasis on granularity. We’ve worked hard to take structures that were monolithic in YUI 2 and break them down into smaller packages in YUI 3. As a result, you’ll find that many modules — component-level packages like IO or Animation — are comprised of various submodules. Usually, all you’ll need is the base submodule.
Charting the evolution of components from YUI 2 to YUI 3 tends to yield visualizations like this one for DataSource (comparing gzipped filesizes):
Because any given DataSource implementation is likely to need only one slender submodule from the DataSource family, the savings in terms of performance and K-weight — especially for complex implementations — are often substantial.
Take the time to explore the Dependency Configurator as you're setting up your YUI().use() statements. Instead of picking top-level modules, explore the submodule structures and see if the featureset you need is encompassed in a submodule. You may find yourself using modules like io-base instead of io and anim-base instead of anim, saving yourself a lot of K-weight in the process.
Along with the promotion of YUI 3 to general availability with today’s release, we’ve updated the YUI website to better support the growing communities using both YUI 2 and YUI 3. Today, when you visit YUI on the Yahoo! Developer Network you’ll find a meta-page with project-wide links along with direct links into the YUI 2 and YUI 3 areas of the site.
Meanwhile, we continue to build out our project-tracking and forums platform on YUILibrary.com and host the YUI project source code for forking and contributions on GitHub. You can also find a lot of YUI folks hanging out in #YUI on Freenode; feel free to drop in and join the conversation as you explore YUI 3.0.0.