Wednesday, 11 December 2013

Measuring network performance with Resource Timing API

By Ilya Grigorik, Developer Advocate and Web Performance Engineer

Network performance is a critical factor in delivering a fast and responsive experience to the user. In fact, our goal is to make all pages load in under one second, and to get there we need to carefully measure and optimize each and every part of our application: how long the page took to load, how long each resource took to load, where the time was spent, and so on.

The good news is that the W3C Navigation Timing API gives us the tools to measure all of the critical milestones for the main HTML document: DNS, TCP, request and response, and even DOM-level timing metrics. However, what about all the other resources on the page: CSS, JavaScript, images, as well as dozens of third party components? Well, that’s where the new Resource Timing API can help!

resource timing api diagram

Resource Timing allows us to retrieve and analyze a detailed profile of all the critical network timing information for each resource on the page - each label in the diagram above corresponds to a high resolution timestamp provided by the Resource Timing API. Armed with this information, we can then track the performance of each resource and determine what we should optimize next. But enough hand-waving, let’s see it in action:
// getEntriesByName() returns an array of entries; take the first match.
var img = window.performance.getEntriesByName("http://mysite.com/mylogo.webp")[0];

var dns = parseInt(img.domainLookupEnd - img.domainLookupStart),
    tcp = parseInt(img.connectEnd - img.connectStart),
    ttfb = parseInt(img.responseStart - img.startTime),
    transfer = parseInt(img.responseEnd - img.responseStart),
    total = parseInt(img.responseEnd - img.startTime);

// logPerformanceData is your own reporting function.
logPerformanceData("mylogo", dns, tcp, ttfb, transfer, total);

Replace the URL in the example above with any asset hosted on your own site, and you can now get detailed DNS, TCP, and other network timing data from browsers that support it - Chrome, Opera, and Internet Explorer 10+. Now we’re getting somewhere!

Measuring network performance of third party assets

Many applications rely on a wide variety of external assets such as social widgets, JavaScript libraries, CSS frameworks, web fonts, and so on. These assets are loaded from a third party server and as a result, their performance is outside of our direct control. That said, that doesn’t mean we can’t or shouldn’t measure their performance.

Resources fetched from a third-party origin must provide an additional opt-in HTTP header (Timing-Allow-Origin) before detailed network timing data is exposed to the page. If the header is absent, then the only available data is the total duration of the request. On that note, some great news: we have been working with multiple teams, including teams at Facebook and Disqus, to do exactly that, and you can now use the Resource Timing API to track the performance of a growing list of popular third-party assets.

Curious to know how long your favorite web font, or jQuery library hosted on the Google CDN is taking to load, and where the time is spent? Easy, Resource Timing API to the rescue! For bonus points, you can then also beacon this data to your analytics server (e.g. using GA’s User Timing API) to get detailed performance reports, set up an SLA, and more, for each and every asset on your page.
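
For illustration, here is one way such a beacon might look: a minimal sketch assuming analytics.js is already loaded on the page, with a placeholder third-party URL and timing labels.

// A minimal sketch of beaconing resource timing to Google Analytics via the
// analytics.js user timings API. The resource URL and labels are placeholders.
var entry = window.performance.getEntriesByName(
    "http://third-party.example.com/widget.js")[0];

if (entry) {
  // Without a Timing-Allow-Origin header on the response, the detailed
  // milestones are zeroed out and only the total duration is exposed.
  var hasDetail = entry.requestStart > 0;
  ga('send', 'timing', 'third-party', 'widget.js (total)',
     Math.round(entry.duration));
  if (hasDetail) {
    ga('send', 'timing', 'third-party', 'widget.js (ttfb)',
       Math.round(entry.responseStart - entry.startTime));
  }
}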

Third party performance is a critical component of the final experience delivered to the user and Resource Timing is a much needed and a very welcome addition to the web platform. What we can measure, we can optimize!

(Note: due to long cache lifetime of some of the assets that are now Resource Timing enabled, some users may not be able to get immediate access to timing data as they may be using a cached copy. This will resolve itself as more users update their resources).


Ilya Grigorik is Developer Advocate at Google, where he spends his days and nights on making the web fast and driving adoption of performance best practices.

Posted by Scott Knaster, Editor

Google+ Sign-In improvements

By Yaniv Yaakubovich, Product Manager, Google+

Cross-posted from the Google+ Developers Blog

Today we’re launching three updates to Google+ Sign-In, making it easier and more effective to include Google authentication in your app:

1. Support for all Google account types 
Google+ Sign-In now supports all Google account types, including Google Apps users, and users without a Google+ profile.

2. Easy migration from other auth methods 
If you’re using OpenID v2 or OAuth 2.0 Login for authentication and want to upgrade to Google+ Sign-In, we’ve made it easy to do so; it’s entirely your choice. Google+ Sign-In can grow your audience in multiple ways — including over-the-air installs, interactive posts, and cross-device sign-in — and now it’s fully compatible with the OpenID Connect standard. For more details, see our sign-in migration guide.

3. Incremental auth
Incremental auth is a new way to ask users for the right permission scopes at the right time, versus all permissions at once.

For example:
  • If your app allows users to save music playlists to Google Drive, you can ask for basic profile info at startup, and only ask for Google Drive permissions when they’re ready to save their first mix. 
  • Likewise, you can ask for Google Calendar permissions only when users RSVP to an event, and so on.
Now that incremental auth is available for Google+ Sign-In, we recommend asking for the minimum set of permissions up front, then asking for further permissions only when they’re required. This approach not only helps users understand how their information will be used in your app, it can also reduce friction and increase app engagement.
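
As a rough illustration of the underlying flow (a sketch at the OAuth 2.0 level, not necessarily how your client library packages it), the later, incremental request asks for just the one new scope and sets include_granted_scopes=true so the resulting grant is merged with what the user already approved. The client ID, redirect URI, and scope below are placeholders:

// A sketch of an incremental authorization request. Only the newly needed
// Drive scope is requested; include_granted_scopes=true tells Google to merge
// it with scopes the user has already granted. All identifiers are placeholders.
var params = {
  response_type: 'code',
  client_id: 'YOUR_CLIENT_ID.apps.googleusercontent.com',
  redirect_uri: 'https://example.com/oauth2callback',
  scope: 'https://www.googleapis.com/auth/drive.file',
  include_granted_scopes: 'true'
};

var query = Object.keys(params).map(function(k) {
  return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
}).join('&');

// Send the user here when they tap "Save to Drive" for the first time.
var authUrl = 'https://accounts.google.com/o/oauth2/auth?' + query;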

8Tracks only asks for the necessary permissions to get users started in their app.


Once in the app, 8Tracks prompts users to connect their YouTube account to get mix recommendations.


When users click ‘Connect Your YouTube account’, 8Tracks asks users for the additional YouTube permission.

If you have any questions, join our Developing with Google+ community, or tag your Stack Overflow posts with ‘google-plus’.


+Yaniv Yaakubovich is a Product Manager on the Google+ Platform team, working on Google+ Sign-in. When he is not working he enjoys reading and exploring California with his wife and son.

Posted by +Scott Knaster, Editor

Tuesday, 10 December 2013

Our checklist for improving mobile websites

By Maile Ohye, Developer Programs Tech Lead

To help you capitalize on the huge opportunity to improve your mobile websites, we published a checklist for prioritizing development efforts. Several topics in the checklist reference relevant studies or business cases. Others contain videos and slides explaining how to use Google Analytics and Webmaster Tools to understand mobile visitors' experiences and intent. Copied below is an abridged version of the full checklist. And speaking of improvements… we'd love your feedback on how to enhance our checklist as well!

Checklist for mobile website improvements

Step 1: Stop frustrating your customers
  • Remove cumbersome extra windows from all mobile user-agents | Google recommendation, Article
    • JavaScript pop-ups that can be difficult to close
    • Overlays, especially to download apps (instead consider a banner such as iOS 6+ Smart App Banners or equivalent, side navigation, email marketing, etc.)
    • Survey requests prior to task completion
  • Provide device-appropriate functionality
    • Remove features that require plugins or videos not available on a user’s device (e.g., Adobe Flash isn’t playable on an iPhone or on Android versions 4.1 and higher) | Business case
    • Serve tablet users the desktop version (or if available, the tablet version) | Study
    • Check that the full desktop experience is accessible on mobile phones, and if selected, remains in the full desktop version for the duration of the session (i.e., the user isn’t required to select “desktop version” after every page load) | Study
  • Correct high-traffic, poor user-experience mobile pages


How to improve high-traffic, poor user-experience mobile pages with data from Google Analytics bounce rate and events (slides)

For all topics in the category “Stop frustrating your customers”, please see the full Checklist for mobile website improvement.

Step 2: Facilitate task completion
  • Optimize search engine processing and the searcher experience | Business case
    • Unblock resources (CSS, JavaScript) that are robots.txt disallowed
    • For RWD: Be sure to include a CSS @media query
    • For separate m. site: add rel=alternate media and rel=canonical annotations, as well as the Vary: User-Agent HTTP header, which helps Google implement Skip Redirect
    • For Dynamic serving: send the Vary: User-Agent HTTP header (see the sketch after this list)
  • Optimize popular mobile persona workflows for your site
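
For the dynamic-serving item above, the mechanics are simply to vary the HTML by user agent and say so in the response headers. A minimal sketch, assuming a Node.js/Express server and deliberately naive device detection (not part of the checklist itself):

var express = require('express');
var app = express();

app.get('/', function(req, res) {
  // Tell caches and crawlers that this URL serves different HTML by user agent.
  res.set('Vary', 'User-Agent');

  var ua = req.get('User-Agent') || '';
  var isMobile = /Mobi|Android/i.test(ua);  // naive detection for illustration

  res.send(isMobile ? '<!-- mobile-optimized HTML -->'
                    : '<!-- desktop HTML -->');
});

app.listen(8080);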

How to use Google Webmaster Tools and Google Analytics to optimize the top mobile tasks on your website (slides)

For all topics in the category “Facilitate task completion”, please see the full Checklist for mobile website improvement.

Step 3: Turn customers into fans!
  • Consider search integration points with mobile apps | Background, Information
  • Investigate and/or attempt to track cross-device workflow | Business case
    • Logged in behavior on different devices
    • “Add to cart” or “add to wish list” re-visits
  • Brainstorm new ways to provide value
    • Build for mobile behavior, such as the in-store shopper | Business case
    • Leverage smartphone GPS, camera, accelerometer
    • Improve sharing or social behavior | Business case
    • Consider intuitive/fun tactile functionality with swiping, shaking, tapping


Maile Ohye is a Developer Advocate on Google's Webmaster Central Team. She very much enjoys chatting with friends and helping companies build a strategic online presence.

Posted by Scott Knaster, Editor

Thursday, 5 December 2013

Changes to “all-following” behavior in Google Calendar

By Gregory Yakushev, Software Engineer

Today we are introducing new behavior for “all-following” changes to recurring events. Previously, we cut a recurring event at the point an “all-following” change was made and created a new recurring event starting at that point. Now, in most cases we keep the recurring event intact, while still applying the relevant changes to all following instances.

This means that users can now perform operations on the entire recurring series even after an “all-following” change has been made. They can modify, reply to, delete, or apply additional “all-following” changes. Also, in many cases, changes to specific instances of a recurring event will still be preserved after an “all-following” change.

To preserve backward compatibility, API clients will still see a separate recurring event after each “all-following” change. A separate post will announce API support for making these “all-following” changes and accessing whole recurring events with multiple “all-following” changes in them.

For example, suppose I have a recurring event “Daily Meeting” for my team. Paul knows that he will be on vacation, so he declined a few instances next month. I know that we will get a new intern in a month, so I invite him to “all-following” instances starting next month. I also want to move the meeting to a different room starting next week, so I change the location and apply it to “all-following” instances.

After all these operations, Paul's responses are still preserved: I see that he will not attend a few meetings next month. I also see that on instances two months ahead, both of my “all-following” changes are reflected correctly: the room is changed and the intern is invited. And all attendees still see all “Daily Meeting” instances as one recurrence: they can accept, decline, or remove all of them with one click.


Grisha Yakushev ensures Calendar servers keep your data consistent and safe. He enjoys travelling the world, preferably by hitchhiking.

Posted by Scott Knaster, Editor

Monday, 2 December 2013

Google Compute Engine is now Generally Available with expanded OS support, transparent maintenance, and lower prices

By Ari Balogh, Vice President, Cloud Platform

Cross-posted from the Google Cloud Platform Blog

Google Cloud Platform gives developers the flexibility to architect applications with both managed and unmanaged services that run on Google’s infrastructure. We’ve been working to improve the developer experience across our services to meet the standards our own engineers would expect here at Google.

Today, Google Compute Engine is Generally Available (GA), offering virtual machines that are performant, scalable, reliable, and offer industry-leading security features like encryption of data at rest. Compute Engine is available with 24/7 support and a 99.95% monthly SLA for your mission-critical workloads. We are also introducing several new features and lower prices for persistent disks and popular compute instances.

Expanded operating system support
During Preview, Compute Engine supported two of the most popular Linux distributions, Debian and CentOS, customized with a Google-built kernel. This gave developers a familiar environment to build on, but some software that required specific kernels or loadable modules (e.g. some file systems) was not supported. Now you can run any out-of-the-box Linux distribution (including SELinux and CoreOS) as well as any kernel or software you like, including Docker, FOG, xfs and aufs. We’re also announcing support for SUSE and Red Hat Enterprise Linux (in Limited Preview) and FreeBSD.

Transparent maintenance with live migration and automatic restart
At Google, we have found that regular maintenance of hardware and software infrastructure is critical to operating with a high level of reliability, security and performance. We’re introducing transparent maintenance that combines software and data center innovations with live migration technology to perform proactive maintenance while your virtual machines keep running. You now get all the benefits of regular updates and proactive maintenance without the downtime and reboots typically required. Furthermore, in the event of a failure, we automatically restart your VMs and get them back online in minutes. We’ve already rolled out this feature to our US zones, with others to follow in the coming months.

New 16-core instances
Developers have asked for instances with even greater computational power and memory for applications that range from silicon simulation to running high-scale NoSQL databases. To serve their needs, we’re launching three new instance types in Limited Preview with up to 16 cores and 104 gigabytes of RAM. They are available in the familiar standard, high-memory and high-CPU shapes.

Faster, cheaper Persistent Disks
Building highly scalable and reliable applications starts with using the right storage. Our Persistent Disk service offers you strong, consistent performance along with much higher durability than local disks. Today we’re lowering the price of Persistent Disk by 60% per Gigabyte and dropping I/O charges so that you get a predictable, low price for your block storage device. I/O available to a volume scales linearly with size, and the largest Persistent Disk volumes have up to 700% higher peak I/O capability. You can read more about the improvements to Persistent Disk in our previous blog post.

10% Lower Prices for Standard Instances
We’re also lowering prices on our most popular standard Compute Engine instances by 10% in all regions.

Customers and partners using Compute Engine
In the past few months, customers like Snapchat, Cooladata, Mendelics, Evite and Wix have built complex systems on Compute Engine and partners like SaltStack, Wowza, Rightscale, Qubole, Red Hat, SUSE, and Scalr have joined our Cloud Platform Partner Program, with new integrations with Compute Engine.
“We find that Compute Engine scales quickly, allowing us to easily meet the flow of new sequencing requests… Compute Engine has helped us scale with our demands and has been a key component to helping our physicians diagnose and cure genetic diseases in Brazil and around the world.”
- David Schlesinger, CEO of Mendelics
"Google Cloud Platform provides the most consistent performance we’ve ever seen. Every VM, every disk, performs exactly as we expect it to and gave us the ability to build fast, low-latency applications."
- Sebastian Stadil, CEO of Scalr
We’re looking forward to this next step for Google Cloud Platform as we continue to help developers and businesses everywhere benefit from Google’s technical and operational expertise.


Ari Balogh is the Vice President, Cloud Platform at Google and manages the teams responsible for building Google Cloud Platform and other parts of Google’s internal infrastructure.

Posted by Scott Knaster, Editor

Monday, 25 November 2013

GDL Weekly: Chrome Dev Summit, Google Glass Development, account security

By Louis Gray, Program Manager, Google Developers Live

Cross-posted from +Google Developers


It was an exceptional week for Google Developers last week. On +GDL, we streamed two high-quality days of the Chrome Dev Summit, showed off the +Google Glass Development Kit, and hosted +Tim Bray and +Breno de Medeiros talking about passwords and account security. That’s more than enough, but we weren't finished. Catch our 3-minute summary video, Google Developers Live Weekly, for the details.



Go directly to the videos and posts mentioned:

To make sure you don't miss a single event, subscribe to Google Developers on YouTube or just click the red YouTube button on the right nav (over there --> see it?), and check us out at http://developers.google.com/live.

+Louis Gray is a Program Manager on Google's Developer Relations Team, running Google Developers Live. He believes life is but a (live) stream.

Posted by Scott Knaster, Editor

Wednesday, 20 November 2013

“A Journey through Middle-earth”: A Chrome Experiment for the multi-device web

By Max Heinritz, Product Manager and (Tolkien) Troll Evader

Cross-posted from the Chromium Blog

For the past few years, building multimedia web experiences for mobile devices has been difficult. Phones and tablets are less powerful than their desktop counterparts, and mobile browsers have traditionally had limited API support. Despite these challenges, the mobile web is evolving rapidly. In the last few months alone, Chrome for Android gained support for WebGL, WebRTC, and Web Audio.

“A Journey through Middle-earth”, our latest Chrome Experiment, demonstrates what’s now possible on the mobile web. Developed by North Kingdom in collaboration with Warner Bros. Pictures, New Line Cinema and Metro-Goldwyn-Mayer Pictures, the experiment uses the latest web technologies to deliver a multimedia experience designed specifically for tablets, phones, and touch-enabled desktops.


The experiment starts with an interactive map of Middle-earth. It may not feel like it, but this cinematic part of the experience was built with just HTML, CSS, and JavaScript. North Kingdom used the Touch Events API to support multi-touch pinch-to-zoom and the Full Screen API to allow users to hide the URL address bar. It looks natural on any screen size thanks to media queries and feels low-latency because of hardware-accelerated CSS Transitions.
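
As a rough sketch of the techniques mentioned (not North Kingdom's actual code), entering full screen and tracking a two-finger pinch takes only a few lines; the element ID and the prefixed fullscreen fallback are assumptions reflecting browser support at the time:

var stage = document.getElementById('stage');  // hypothetical map container

stage.addEventListener('click', function() {
  // The Fullscreen API requires a user gesture; fall back to the prefixed
  // method used by Chrome at the time.
  if (stage.requestFullscreen) {
    stage.requestFullscreen();
  } else if (stage.webkitRequestFullscreen) {
    stage.webkitRequestFullscreen();
  }
});

stage.addEventListener('touchstart', function(e) {
  if (e.touches.length === 2) {
    // Two fingers down: record the starting distance for pinch-to-zoom,
    // then compare against the distance in later touchmove events.
    var dx = e.touches[0].clientX - e.touches[1].clientX;
    var dy = e.touches[0].clientY - e.touches[1].clientY;
    var startDistance = Math.sqrt(dx * dx + dy * dy);
  }
});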

Clicking or tapping a location in the map reveals a second layer with horizontal parallax scrolling, again built just with HTML, CSS, and JavaScript. Eventually users reach an immersive WebGL-based 3D environment optimized for Android phones and tablets with high-end GPUs. These environments also use the Web Audio API for interactive audio.

The multi-device web is evolving rapidly, and we’re looking forward to more sites like “A Journey Through Middle-earth” that show the potential of the platform’s latest mobile features. For a deeper technical case study, check out North Kingdom’s new HTML5 Rocks article about using WebGL in Chrome for Android*. We’re also planning to host a Google Developers Live session with the team in early December; circle +Google Chrome Developers for details.

*Update: you can now read North Kingdom's second HTML5 Rocks case study on building the rest of the HTML5 front-end for "A Journey through Middle-earth".

Max Heinritz is an Associate Product Manager on the Chrome Web Platform team. He's helping the web reach its potential to become the universal application platform. On the weekends you can find him exploring the Northern California wilderness.

Posted by Scott Knaster, Editor

Tuesday, 19 November 2013

Civic Information API: now connecting US users with their representatives

By Jonathan Tomer, Software Engineer

Cross-posted from the Google Politics & Elections Blog

Many applications track and map governmental data, but few help their users identify the relevant local public officials. Too often local problems are divorced from the government institutions designed to help. Today, we're launching new functionality in the Google Civic Information API that lets developers connect constituents to their federal, state, county and municipal elected officials—right down to the city council district.

The Civic Information API has already helped developers create apps for US elections that incorporate polling place and ballot information, from helping those affected by Superstorm Sandy find updated polling locations over SMS to learning more about local races through social networks. We want to support these developers in their work beyond elections, including everyday civic engagement.

In addition to elected representatives, the API also returns your political jurisdictions using Open Civic Data Identifiers. We worked with the Sunlight Foundation and other civic technology groups to create this new open standard to make it easier for developers to combine the Civic Information API with their datasets. For example, once you look up districts and representatives in the Civic Information API, you can match the districts up to historical election results published by Open Elections.
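
To give a feel for the API, here is a hedged sketch of looking up officials and divisions for a street address; the endpoint path, parameters, and response fields follow the current documentation and may differ by API version, and the address and key are placeholders:

// A sketch of querying the Civic Information API for elected officials.
// The endpoint version, API key, and address are placeholders.
var address = '1600 Pennsylvania Ave NW, Washington, DC';
var url = 'https://www.googleapis.com/civicinfo/v2/representatives' +
          '?key=YOUR_API_KEY&address=' + encodeURIComponent(address);

var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function() {
  var data = JSON.parse(xhr.responseText);
  // Divisions are keyed by Open Civic Data identifiers, e.g.
  // "ocd-division/country:us/state:dc".
  Object.keys(data.divisions || {}).forEach(function(ocdId) {
    console.log(ocdId, '->', data.divisions[ocdId].name);
  });
  (data.officials || []).forEach(function(official) {
    console.log(official.name);
  });
};
xhr.send();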

Developers can head over to the documentation to get started; be sure to check out the "Map Your Reps" sample application from Bow & Arrow to get a sense of what the API can do. You can also see the API in action today through new features from some of our partners, for example:

  • Change.org has implemented a new Decision Makers feature which allows users to direct a petition to their elected representative and lists that petition publicly on the representative's profile page. As a result, the leader has better insight into the issues being discussed in their district, and a new channel to respond to constituents.
  • PopVox helps users share their opinions on bills with their Congressional Representatives in a meaningful format. PopVox uses the API to connect the user to the correct Congressional District. Because PopVox verifies that users are real constituents, the opinions shared with elected officials have more impact on the political process.
Over time, we will expand beyond US elected representatives and elections to other data types and places. We can’t grow without your help. As you use the API, please visit our Developer Forum to share your experiences and tell us how we can help you build the next generation of civic apps and services.

This release is an investment in making the world’s civic data universally accessible and useful. We’ll continue to work with civic developers who are tackling real-world challenges. Together, we can build new tools to improve civic life for everyone.


Jonathan Klabunde Tomer is a software engineer in Google's Washington, DC office. He enjoys bicycling, good food, good wine, and open data.

Posted by Scott Knaster, Editor

Monday, 18 November 2013

GDL Weekly: Dart 1.0, Portable Native Client, Hackademy

By Louis Gray, Program Manager, Google Developers Live

Cross-posted from +Google Developers


Last week was a busy one for Google Developers. We showed off Portable Native Client, talked about Google Developers Hackademy, and launched Dart 1.0. Find out about these and more on Google Developers Live Weekly. To make sure you don't miss a single event, subscribe to Google Developers on YouTube and check us out at https://developers.google.com/live.



Go directly to the videos and posts mentioned:

To make sure you don't miss a single event, subscribe to Google Developers on YouTube or just click the red YouTube button on the right nav, and check us out at http://developers.google.com/live.

+Louis Gray is a Program Manager on Google's Developer Relations Team, running Google Developers Live. He believes life is but a (live) stream.

Posted by Scott Knaster, Editor

From your CS class to the real world: a deep dive into open source

By Stephanie Taylor, Open Source Programs

Cross-posted from the Official Google Blog

Today marks the start of Google Code-in, a global online contest for pre-university students (13-17 years old) interested in learning more about open source software. Participating students have an opportunity to work on real world software projects and earn cool prizes for their effort.

For the next seven weeks students from around the world will be able to choose from an extensive list of tasks created by 10 open source projects. Some tasks require coding in a variety of programming languages, creating documentation, doing marketing outreach or working on user interfaces.

Participants earn points for each task they successfully complete to win T-shirts and certificates. At the end of the contest, 20 students will be selected as grand prize winners and flown to Google’s Mountain View, California headquarters. Winners will receive a trip to San Francisco, a tour of the Googleplex and a chance to meet with Google engineers.
Google Code-in 2012 grand prize winners at the Googleplex with a self-driving car

More than 1,200 students from 71 countries and 730 schools have participated in Google Code-in over the past three years. Last year, our 20 grand prize winners came from 12 countries on five continents!

We hope this year’s participants will enjoy learning about open source development while building their technical skills and making an impact on these organizations. Please review our program site for contest rules, frequently asked questions and to get started!


Written by Stephanie Taylor, Open Source Programs

Posted by Scott Knaster, Editor

Thursday, 14 November 2013

Offline disk import and the OmNomNom machine

By Benjamin Bechtolsheim, Product Marketing Manager

Cross-posted from the Google Cloud Platform Blog

Yesterday, we announced that we are expanding our offline disk import service to better serve users globally. With new disk upload centers in Switzerland, Japan and India, as well as our US center, it’s easier for people around the world to import large data sets by mailing hard drives to us rather than sending hundreds of terabytes over their slow, expensive or unreliable Internet connection.

But importing large amounts of data at scale isn’t simple. Our engineers have been working on the challenge for years. Originally, offline disk import was handled at our data centers as a way to efficiently import large amounts of data from the hard drives in our Street View cars - vehicles that capture terabytes of photographs and information about the landscape as they build a navigable, visual database of the world.

And although it’s technically challenging, the system we built for rapidly ingesting and processing these large data sources has a playful name. We call it OmNomNom. Here is one of the test OmNomNom machines that the disk import team has at its offices in Mountain View:

But rapidly ingesting and processing large data sets and making them usefully available is a bit more complex than gobbling down a cookie. As we’ve improved our ability to quickly import these drives, we dramatically reduced the time between capturing these images and making them available to users around the world.

Now, we are helping people across the world take advantage of the speed, scale and global availability of Google Cloud Storage as well as this rapid disk-upload technology. Even though it might sound like something out of Sesame Street, this is another example of how Google Cloud Platform is making the advantages of Google-sized scalable infrastructure available to you. All you need to do is send us your EncFS encrypted hard drives, and we will let you know once your encrypted bytes are imported to your designated GCS bucket. Once uploaded, we can mail your drives back to you, or if you prefer, safely and securely handle disk destruction free of charge. Check out our website for how you can be part of the Limited Preview of International Offline Disk Import.

(Oh, and for those of you who want to use Google Cloud Storage but don’t need offline disk import, you can always access Google Cloud Storage quickly from the command line using gsutil).


Benjamin Bechtolsheim is a Product Marketing Manager for Google Cloud Platform. When not getting developers to code on Cloud Platform, he's probably riding his bike, trying to keep his container garden alive, or playing guitar, piano or the shaky egg.

Posted by Scott Knaster, Editor

Dart 1.0: A stable SDK for structured web apps

By Lars Bak, Software Engineer and Chief Dartisan


Today we’re releasing the Dart SDK 1.0, a cross-browser, open source toolkit for structured web applications. In the two years since we first announced Dart, we’ve been working closely with early adopters to mature the project and grow the community. This release marks Dart's transition to a production-ready option for web developers.

The Dart SDK 1.0 includes everything you need to write structured web applications: a simple yet powerful programming language, robust tools, and comprehensive core libraries. Together, these pieces can help make your development workflow simpler, faster, and more scalable as your projects grow from a few scripts to full-fledged web applications.

On the tools side, the SDK includes Dart Editor, a lightweight but powerful Dart development environment. We wanted to give developers the tools to manage a growing code base, so we added code completion, refactoring, jump to definition, a debugger, hints and warnings, and lots more. Dart also offers an instant edit/refresh cycle with Dartium, a custom version of Chromium with the native Dart VM. Outside the browser, the Dart VM can also be used for asynchronous server side computation.

For deployment, dart2js is a translator that allows your Dart code to run in modern browsers. The performance of generated JavaScript has improved dramatically since our initial release and is in many cases getting close to that of idiomatic JavaScript. In fact, the dart2js output of the DeltaBlue benchmark now runs even faster than idiomatic JavaScript. Similarly, dart2js output code size has been reduced substantially. The generated JavaScript for the game Pop, Pop, Win! is now 40% smaller than it was a year ago. Performance of the VM continues to improve as well; it’s now between 42% and 130% faster than idiomatic JavaScript running in V8, depending on the benchmark.

DeltaBlue benchmark results
The Dart SDK also features the Pub package manager, with more than 500 packages from the community. Fan favorites include AngularDart and polymer.dart, which provide higher-level frameworks for building web apps. Dart developers can continue using their favorite JavaScript libraries with Dart-JavaScript interop.

Going forward, the Dart team will focus on improving Dartium, increasing Dart performance, and ensuring the platform remains rock solid. In particular, changes to core technologies will be backward-compatible for the foreseeable future.

Today’s release marks the first time Dart is officially production-ready, and we’re seeing teams like Blossom, Montage, Soundtrap, Mandrill, Google's internal CRM app, and Google Elections already successfully using Dart in production. In addition, companies like Adobe, drone.io, and JetBrains have started to add Dart support to their products.

To get started, head over to dartlang.org and join the conversation at our Dartisans community on Google+. We’re excited to see what you will build with the new stable Dart SDK 1.0.


Lars Bak is a veteran virtual machinist, leaving marks on several software systems: Beta, Self, Strongtalk, HotSpot, CLDC HI, OOVM Smalltalk, and V8.

Posted by Scott Knaster, Editor

Tuesday, 12 November 2013

Open attachments with your web app directly from Gmail

By Nicolas Garnier, Developer Relations

The Google Drive SDK lets you build apps that deeply integrate with Google Drive and today that integration is getting even better: users can now easily discover, connect, and use your Drive-enabled app right within Gmail.

When viewing attachments in Gmail, users will be able to open files with a connected app, just like they can in Google Drive. And for certain file types, they’ll also see suggestions for relevant Drive apps that let them do more than just view their email attachments.

Opening an image in Gmail with a connected app. (Credit: Krzysztof P. Jasiutowicz)

If users don’t see the app they want, the “Connect more apps” option makes it easy for them to discover and connect with any compatible app, all without ever leaving their inbox.

Browse and connect Drive and Gmail-enabled web apps

If your Google Drive-enabled app is already listed in the Chrome Web Store's Drive collection, you don't have to do a thing. Existing users of your app will see it appear as one of their connected apps in Gmail, and new users will be able to search for it in the store.

If your web app isn’t yet Drive-enabled, check out our getting started guide to ensure your app is ready for use by any Gmail and Drive user!


Nicolas Garnier joined Google’s Developer Relations in 2008 and lives in Zurich. He is a Developer Advocate for Google Drive and Google Apps. Nicolas is also the lead engineer for the OAuth 2.0 Playground.

Posted by Scott Knaster, Editor

Monday, 11 November 2013

GDL Weekly: Make your pages faster, startup founding teams, Google Cloud Tour

By Louis Gray, Program Manager, Google Developers Live

Cross-posted from +Google Developers


Each Monday I spend a few minutes reviewing what happened last week on Google Developers Live. Please take a look.



Go directly to the videos and posts mentioned:
To make sure you don't miss a single event, subscribe to Google Developers on YouTube or just click the red YouTube button on the right nav, and check us out at http://developers.google.com/live.


+Louis Gray is a Program Manager on Google's Developer Relations Team, running Google Developers Live. He believes life is but a (live) stream.

Posted by Scott Knaster, Editor

Friday, 8 November 2013

Fridaygram: Connected Classrooms, migrating pronghorns, new Helpouts

By +Scott Knaster, Google Developers Blog Editor

When you’re a kid in school, there’s nothing like a field trip to get you out into the world to learn something new. Now, thanks to the web, there is something like a field trip: Connected Classrooms on Google+. This new program lets kids from everywhere make virtual trips to museums, zoos, factories, and other cool places.



Connected Classrooms features great tour guide partners like Seattle Aquarium, EarthEcho Expeditions, and SLAC National Accelerator Laboratory, and we expect the list to grow. If you’re a teacher and you want to find out more, you can join the Connected Classrooms community. And if you have a destination to offer and you want to be considered as a tour guide, you can fill out this form.

Speaking of getting out into the world, each year the pronghorn population of Wyoming migrates 150 km between their summer and winter homes. Along this route the pronghorns face their greatest foe: traffic on U.S. Highway 191, which they must cross. To aid the pronghorn migration, last year the Wyoming Department of Transportation built a system of fences, overpasses, and underpasses for safe crossing. At first the pronghorns weren’t sure what to make of the crossings, but after several hours they began to cross. This time, in the second year of the crossings, the pronghorns knew just what to do and crossed without hesitation. The animals (including the human ones) seem to have adapted nicely.

Finally, take some time this weekend to check out Helpouts by Google, a new way to get help on all sorts of issues via Google+ Hangouts. There are Helpouts for cooking, fitness and nutrition, home and garden, and a bunch of other topics. You can get help solving a problem, or learn a new skill. Maybe you can even offer other developers some coding help in the Computers and Electronics category.


Kids gotta field trip, pronghorns gotta migrate, Fridaygram gotta publish. Each week we take a break from our usual developer fare to bring you fun and nerdy stuff to kick off your weekend. Thanks for being here, and please be careful crossing the highway.

Thursday, 7 November 2013

Speeding up mobile pages with mod_pagespeed and ngx_pagespeed

By Jan-Willem Maessen, Software Engineer

Recent betas (1.5 and later) of mod_pagespeed and ngx_pagespeed introduce new optimizations that render pages up to 2x faster, particularly on mobile devices. This WebPagetest video shows the results:



This speedup comes thanks to two new PageSpeed optimizations:
  • prioritize_critical_css finds the CSS rules that are used to initially render your page.
  • The critical image beacon identifies the images that appear on screen when your page is first rendered and uses this to guide lazyload_images and inline_preview_images.
These combine with two existing PageSpeed optimizations:
  • defer_javascript prevents scripts from running until the page has loaded.
  • convert_jpeg_to_webp reduces the size of images that are downloaded by webp-capable browsers.

What happens when you turn on these optimizations

Let’s compare the behavior of the page before and after optimization:
Before optimization, the first render doesn’t occur until the CSS has arrived (vertical gray bar at 2.3s), and images (purple) don’t finish downloading until substantially later. The large above-the-fold image dominates above-the-fold rendering time (vertical blue bar at 3.4s).
During optimization, the prioritize_critical_css filter inlines the CSS required to render the page directly into the HTML. The inline_preview_images filter creates a low-quality version of the large above-the-fold image and inlines that in the page. The core image optimization filter also inlines some of the smaller above-the-fold images. The above-the-fold content is fully rendered after only 2.0s (vertical orange line).

Only after the page is rendered is the JavaScript code run, thanks to the defer_javascript filter. At the same time the full-resolution version of the above-the-fold image is loaded; since convert_jpeg_to_webp is enabled, it is served in WebP format and is smaller than the original even though the quality is the same. The full-resolution version of the page is available after 3.1s (vertical gray line – still faster than the original page). The lazyload_images filter means the remaining images won’t be loaded until the page is scrolled (or, in the most recent version of PageSpeed, after all the other page content has been loaded).

Enabling these optimizations

To enable these optimizations in mod_pagespeed, download and install the latest beta. To do a test like the one you see here, simply add these lines to your pagespeed.conf file:
ModPagespeedEnableFilters prioritize_critical_css,defer_javascript
ModPagespeedEnableFilters convert_jpeg_to_webp
ModPagespeedEnableFilters lazyload_images,inline_preview_images

If you use ngx_pagespeed, install from the latest source and enable the filters in your configuration:
pagespeed EnableFilters prioritize_critical_css,defer_javascript;
pagespeed EnableFilters convert_jpeg_to_webp;
pagespeed EnableFilters lazyload_images,inline_preview_images;

Compare your results to pages loaded with ?ModPagespeed=off or with ?ModPagespeedFilters=core. If you see breakage on your site with the new filters, try omitting defer_javascript and/or lazyload_images from the list of enabled filters.

Conclusion

We hope you’ll try out the latest optimizations and let us know how they work for you. Meanwhile we’ll continue our quest for better optimizations and faster page performance. If you’re interested in joining the conversation, check out https://code.google.com/p/modpagespeed/ and subscribe to our discussion list, mod-pagespeed-discuss@googlegroups.com.


Jan-Willem Maessen is a Software Engineer on the PageSpeed team, and helped build many of the optimizations including the ones you see here. He's been known to hit otherwise reasonable consenting software engineers about the head with a steel longsword.

Posted by Scott Knaster, Editor

Monday, 4 November 2013

GDL Weekly: KitKat, Apps Script, startup revenue

By Louis Gray, Program Manager, Google Developers Live

Cross-posted from +Google Developers


Take a few minutes and catch up with everything new on Google Developers Live.



You can go directly to the videos and posts mentioned:

To make sure you don't miss a single event, subscribe to Google Developers on YouTube or just click the red YouTube button on the right nav, and check us out at http://developers.google.com/live.


+Louis Gray is a Program Manager on Google's Developer Relations Team, running Google Developers Live. He believes life is but a (live) stream.

Posted by Scott Knaster, Editor

Thursday, 31 October 2013

Google Play Services 4.0

Cross-posted from the Android Developers Blog

Today we're launching a new release of Google Play services. Version 4.0 includes the Google Mobile Ads SDK, and offers improvements to geofencing, Google+, and Google Wallet Instant Buy APIs.

With over 97% of devices now running Android 2.3 (Gingerbread) or newer platform versions, we’re dropping support for Froyo from this release of the Google Play services SDK in order to make it possible to offer more powerful APIs in the future. That means you will not be able to utilize these new APIs on devices running Android 2.2 (Froyo).

We’re still in the process of rolling out to Android devices across the world, but you can already download the latest Google Play services SDK and start developing against the new APIs using the new Android 4.4 (KitKat) emulator.

Google Mobile Ads

If you’re using AdMob to monetize your apps, the new Google Mobile Ads SDK in Google Play services helps provide seamless improvements to your users. For example, bug fixes get pushed automatically to users without you having to do anything. Check out the post on the Google Ads Developer Blog for more details.

Maps and Location Based Services

The Maps and Geofencing APIs that launched in Google Play services 3.1 have been updated to improve overall battery efficiency and responsiveness.

You can save power by requesting larger latency values for notifications alerting your app to users entering or exiting geofences, or by requesting that entry alerts be sent only after a user stays within a geofence for a specified period of time. Setting generous dwell times helps to eliminate unwanted notifications when a user passes near a geofence or their location is seen to move across a boundary.

The Maps API enhances map customization features, letting you specify marker opacity, fade-in effects, and visibility of 3D buildings. It’s also now possible to change ground overlay images.

Google+ and Google Wallet Instant Buy

Apps that are enabled with Google+ Sign-In will be updated with a simplified sign-in consent dialog. Google Wallet Instant Buy APIs are now available to everyone to try out within a sandbox, with a simplified API that streamlines the buy-flow and reduces integration time.

Google Wallet Instant Buy also includes new Wallet Objects, which means you can award loyalty points to a user's saved rewards program ID for each applicable Google Wallet Instant Buy purchase.

New user control over advertising identifier

To give users better controls and to provide you with a simple, standard system to continue to monetize your apps, this update contains a new, anonymous identifier for advertising purposes (to be used in place of Android ID). Google Settings now includes user controls that enable users to reset this identifier, or opt out of interest-based ads for Google Play apps.

More About Google Play Services

To learn more about Google Play services and the APIs available to you through it, visit the Google Services area of the Android Developers site.



Posted by Scott Knaster, Editor

Wednesday, 30 October 2013

Google Cloud SQL is now accessible from just about any application, anywhere

By Joe Faith, Product Manager

Cross-posted from the Google Cloud Platform Blog

Google Cloud SQL is a fully managed MySQL service hosted on Google Cloud Platform. Today, we are embracing open standards and expanding customers’ choice of tools, technologies and architectures by adding support for native MySQL connections.

MySQL Wire Protocol is the standard connection protocol for MySQL databases. It lets you access your replicated, managed, Cloud SQL database from just about any application, running anywhere. Here are some of the top features enabled by the MySQL Wire Protocol:

Native connectivity also gives you great flexibility and control over managing and deploying your cloud databases. For example, you can use DBMoto from HiTSW to replicate data between Cloud SQL and on-premise databases -- including Oracle, SQL Server, and DB2. And you can use DBShards from CodeFutures to manage sharding across Cloud SQL instances, and migrate on- and off-cloud with no downtime.
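
To make that concrete, here is a minimal sketch of connecting over the native MySQL wire protocol from Node.js using the community mysql package; the host IP, credentials, and database name are placeholders, and the instance needs an assigned IP address with your client's network on its authorized list:

// A sketch of a native MySQL connection to Cloud SQL from Node.js.
// Host, user, password, and database below are placeholders.
var mysql = require('mysql');

var connection = mysql.createConnection({
  host:     '203.0.113.10',   // your Cloud SQL instance's assigned IP address
  user:     'dbuser',
  password: 'dbpassword',
  database: 'guestbook'
});

connection.connect();

connection.query('SELECT NOW() AS now', function(err, rows) {
  if (err) throw err;
  console.log('Connected to Cloud SQL, server time is', rows[0].now);
  connection.end();
});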

Genoo, a SaaS provider of online marketing tools, has already put wire protocol support to use. They were outgrowing their existing cloud services provider, but were worried about migrating a live application to another environment. So Kim Albee, Genoo’s founder and President, turned to DBShards who used native connectivity to migrate Genoo’s database without any service disruption. She said, "I've been amazed by what Cloud SQL's support for native connections can do. Before this feature, migrating between cloud providers would have been too costly."

You can read more about how they did it in this case study, or learn more about Cloud SQL.


Joe Faith is a Product Manager on the Google Cloud Team. In a previous life he was a researcher in machine learning, bioinformatics, and information visualization, and was founder of charity fundraising site Fundraising Skills.

Posted by Scott Knaster, Editor

Tuesday, 29 October 2013

A new look for managing your APIs

By Akshay Kannan, Product Manager, Google Cloud Platform

Back in 2010, we launched the Google APIs Console, enabling you to manage multiple Google APIs from a single, centralized console.

Today, we are introducing the Google Cloud Console, our next evolution of the APIs Console. The new Google Cloud Console makes it easier than ever to manage the more than 60 Google APIs it houses. It brings an entirely new visual design and integrates tightly with our Cloud Platform services, enabling you to manage an end-to-end application deployment. For the past few weeks, we've given you the ability to opt in to the new experience, and starting soon we'll be making it the default (with the ability to go back to the old experience if you prefer).

cloud console screenshot

You'll notice an entirely new visual design, a hierarchical navigation, and even a friendly new URL structure.

cloud console screenshot

We’ve also simplified the process of getting API credentials. Now, you can register an app on the platform you are building on, then see all the possible credential types for your application, making it easier to quickly grab the credentials you need.

cloud console screenshot

If you haven't already, give the new Cloud Console a shot. We'd love to hear your thoughts and feedback in the comments section below.

cloud console screenshot


Akshay Kannan is a Product Manager on the Google Cloud Console team. His focus is on providing an integrated, beautiful developer experience for all Google Developers.

Posted by Scott Knaster, Editor

Monday, 28 October 2013

New AdSense data in the Analytics Core Reporting API

By Nick Mihailovski, Product Manager, Google Analytics API Team

Cross-posted from the Google Analytics Blog

Google AdSense is a free, simple way for website publishers to earn money by displaying targeted Google ads on their websites. Today, we’ve added the ability to access AdSense data from the Google Analytics Core Reporting API. The AdSense and Analytics integration allows publishers to gain richer data and insights, leading to better optimized ad space and a higher return on investment.

Access to AdSense data through the Analytics Core Reporting API has long been a top feature request. We’ve now added 8 new AdSense metrics to the API, enabling publishers to streamline their analysis.

Answering Business Questions
You can now answer the following business questions using these API queries:

Which pages on your site contribute most to your AdSense revenue?


dimensions=ga:pagePath
&metrics=ga:adsenseCTR,ga:adsenseRevenue,ga:adsenseECPM
&sort=-ga:adsenseRevenue

Which pages generate a high number of pageviews but aren't monetizing as well as other pages?
dimensions=ga:pagePath
&metrics=ga:pageviews,ga:adsenseCTR
&sort=-ga:pageviews



Which traffic sources contribute to your revenue?
dimensions=ga:sourceMedium
&metrics=ga:adsenseCTR,ga:adsenseRevenue,ga:adsenseECPM
&sort=-ga:adsenseRevenue

Reporting Automation
By accessing this data through the API, you can now automate reporting and spend more time doing analysis. You can also use the API to integrate data from multiple sites into a single dashboard, build corporate dashboards to share across the team, or feed data into CRM tools that display AdSense ads.
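
For instance, the "revenue by page" query above can be fetched straight from the Core Reporting API's REST endpoint and dropped into whatever dashboard or CRM integration you maintain. In this sketch the view (profile) ID, date range, and OAuth access token are placeholders:

// A sketch of pulling the AdSense "revenue by page" report from the
// Core Reporting API REST endpoint. View ID, dates, and token are placeholders.
var params = {
  'ids': 'ga:12345678',            // your Analytics view (profile) ID
  'start-date': '2013-10-01',
  'end-date': '2013-10-31',
  'dimensions': 'ga:pagePath',
  'metrics': 'ga:adsenseCTR,ga:adsenseRevenue,ga:adsenseECPM',
  'sort': '-ga:adsenseRevenue',
  'access_token': 'YOUR_OAUTH_ACCESS_TOKEN'
};

var query = Object.keys(params).map(function(k) {
  return k + '=' + encodeURIComponent(params[k]);
}).join('&');

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://www.googleapis.com/analytics/v3/data/ga?' + query);
xhr.onload = function() {
  var report = JSON.parse(xhr.responseText);
  // Each row is [pagePath, adsenseCTR, adsenseRevenue, adsenseECPM].
  (report.rows || []).forEach(function(row) {
    console.log(row[0], 'revenue:', row[2]);
  });
};
xhr.send();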

Getting Started
To learn more about the new AdSense data, take a look at our Google Analytics Dimensions and Metrics Explorer. You can also test the API with your data by building queries in the Google Analytics Query Explorer.

Busy? In that case, now’s a great time to try these Analytics API productivity tools:
  • Magic Script: A Google Spreadsheets script to automate importing Analytics data into Spreadsheets, allowing for easy data manipulation. No coding required!
  • Google Analytics superProxy: An App Engine application that handles all the complexity of authorization for you.

We hope this new data will be useful, and we're looking forward to seeing what new reports developers build.


Nick Mihailovski oversees the Google Analytics APIs. In his spare time, he likes to travel around the world.

Posted by Scott Knaster, Editor