Inside Search Press Site

On June 14, 2011, Amit Singhal, Scott Huffman, Mike Cohen and Johanna Wright hosted a search event at the San Francisco Yerba Buena Center for the Arts to share our vision for removing barriers between you and the answers you’re looking for. We announced that we’re bringing our speech recognition and computer vision technology from mobile to the desktop with Voice Search and Search by Image, and we’re taking the next step for Google Instant with Instant Pages.

Feel free to explore the resources on this site to learn more about the announcements from the Inside Search event. If you have additional questions or would like to set up an interview, please email press@google.com.

Official Google Blog Post

Knocking down barriers to knowledge

As much as technology has advanced, there are still many barriers between you and the answers you’re looking for—whether you’re juggling a clunky mobile keyboard or waiting for a website to load. Today we held a media event in San Francisco where we talked about some of the latest things we’re doing to tackle these barriers on mobile, announced that we’re bringing our speech recognition and computer vision technology to the desktop, and took the next step for Google Instant—Instant Pages.

The thirst for knowledge doesn’t stop when you step away from your computer; it continues on your mobile device. In the past two years, mobile search traffic has grown five-fold. Mobile search today is growing at a pace comparable to Google’s in its early years.

Here you can see that mobile search traffic growth over the past three years (the red line) is comparable to overall Google search traffic growth over the same duration (the blue line) but earlier in our history.

One of the technologies driving this growth is speech recognition. With Google Voice Search, you don’t have to type on a tiny touchscreen. You can just speak your query and the answer is on the way. We’ve invested tremendous energy into improving the quality of our recognition technology—for example, today we teach our English Voice Search system with 230 billion words from real queries so that we can accurately recognize the phrases people are likely to say. As the quality has increased, so has usage: in the past year alone, Voice Search traffic has grown six-fold, and every single day people speak more than two years’ worth of voice to our system.

We first offered speech recognition on mobile search, but you should have that power no matter where you are. You should never have to stop and ask yourself, “Can I search by voice here?”—it should be ubiquitous and intuitive. So we’ve added speech recognition to search on desktop for Chrome users. If you’re using Chrome, you’ll start to see a little microphone in every Google search box. Simply click the microphone, and you can speak your search. This can be particularly useful for hard-to-spell searches like [bolognese sauce] or complex searches like [translate to spanish where can I buy a hamburger]. Voice Search on desktop is rolling out now on google.com in English, but in the meantime you can check it out in our video:
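
The post doesn’t say exactly how the google.com search box is wired to the recognizer, but Chrome at the time exposed speech input to any web page through the non-standard x-webkit-speech attribute on text inputs. A minimal, hypothetical sketch of a page opting in to that capability (the field name "q" is only illustrative):

```typescript
// Rough sketch, not Google's implementation: a plain page opting in to
// Chrome's speech input. The non-standard x-webkit-speech attribute adds a
// small microphone to the input; clicking it lets you dictate the value.
const searchBox: HTMLInputElement = document.createElement("input");
searchBox.type = "text";
searchBox.name = "q";                          // illustrative query field name
searchBox.setAttribute("x-webkit-speech", ""); // enable Chrome's speech input
document.body.appendChild(searchBox);
```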

Searching with speech recognition started first on mobile, and so did searching with computer vision. Google Goggles has enabled you to search by snapping a photo on your mobile phone since 2009, and today we’re introducing Search by Image on desktop. Next to the microphone on images.google.com, you’ll also see a little camera for the new Search by Image feature. If you click the camera, you can upload any picture or plug in an image URL from the web and ask Google to figure out what it is. Try it out when digging through old vacation photos and trying to identify landmarks—the search [mountain path] probably isn’t going to tell you where you were, but computer vision may just do the trick. Search by Image is rolling out now globally in 40 languages. We’re also releasing Chrome and Firefox extensions that enable you to search any image on the web by right-clicking.

Whether you type, speak or upload a photo, once you’ve indicated what you’re looking for the next step in your search is to sift through the results and pick one. To make this faster, last year we introduced Google Instant, which gives you search results while you type. We estimated Google Instant saves you between two and five seconds on typical searches. But once you’ve picked a result, you click, and then wait again for the page to load—for an average of about five seconds.

We want to help you save some of that time as well, so today we took the next step for Google Instant: Instant Pages. Instant Pages can get the top search result ready in the background while you’re choosing which link to click, saving you yet another two to five seconds on typical searches. Let’s say you’re searching for information about the Smithsonian Folklife Festival, so you search for [dc folklife festival]. As you scan the results deciding which one to choose, Google is already prerendering the top search result for you. That way when you click, the page loads instantly.

Instant Pages will prerender results when we’re confident you’re going to click them. The good news is that we’ve been working for years to develop our relevance technology, and we can fairly accurately predict when to prerender. To use Instant Pages, you’ll want to get our next beta release of Chrome, which includes prerendering (for the adventurous, you can try Instant Pages today with the developer version). It’s one more step towards an even faster web.

To learn more about today’s news, visit our new Inside Search website at http://www.google.com/insidesearch. There you’ll find a recording of the event (when it’s ready), answers to common questions and links to other blog posts about today’s news on the Mobile blog, Inside Search blog and the Chrome blog. The Inside Search website is our new one-stop shop for Google search tips, games, features and an under-the-hood look at our technology, so there’s plenty for you to explore.

We’re far from the dream of truly instantaneous access to knowledge, but we’re on our way to helping you realize that dream.

Posted by Amit Singhal, Google Fellow

Product Screenshots

Mobile

Voice Search

Search by Image

Google Images with Instant

Instant Pages

Event Photos

Click below to find the Picasa album with photos from the June 14, 2011 Inside Search event in San Francisco.

Inside Search Event Photos

Videos

To embed a video, click the YouTube button in the bottom right of the player, then click the ‘Share’ link under the video player on YouTube. Click the ‘Embed’ button on the lower left side of the screen to reveal code that will allow you to embed the video directly in your website.
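
The code YouTube reveals is an iframe pointing at the video’s /embed/ URL. As a small illustrative sketch (not part of this press kit), here is how a page could insert such an embed programmatically, using the Voice Search video ID listed below; the 560x315 size is a common default, not a value specified here:

```typescript
// Illustrative sketch: build a standard YouTube iframe embed for a video ID.
// The 560x315 dimensions are common defaults, not values from this press kit.
function embedYouTubeVideo(videoId: string, container: HTMLElement): void {
  const frame = document.createElement("iframe");
  frame.width = "560";
  frame.height = "315";
  frame.src = "http://www.youtube.com/embed/" + videoId; // the /embed/ URL form
  frame.allowFullscreen = true;
  container.appendChild(frame);
}

// Example: the Voice Search video listed on this page.
embedYouTubeVideo("MQnZe_Iggx0", document.body);
```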

Event Video

Mobile

Voice Search

http://www.youtube.com//embed/MQnZe_Iggx0

Search by Image

Instant Pages

Spokespeople

Amit Singhal, Google Fellow

Amit Singhal is a Google Fellow who is responsible for the development of Google Search. Amit has worked in the field of search for over twenty years, first as an academic researcher and now as a Google engineer. His research interests include information retrieval, its application to web search, web graph analysis, and user interfaces for search. At Google, Amit oversees the search quality team, the team responsible for Google’s search algorithms.

The team tests thousands of changes to search in a given year and typically launches about 500. Prior to joining Google in 2000, Amit was a senior member of technical staff at AT&T Labs. Amit holds an undergraduate degree from IIT Roorkee in India, an MS from the University of Minnesota and a Ph.D. from Cornell University, all in Computer Science. At Cornell, he studied Information Retrieval with the late Gerard Salton, one of the founders of the field.

Amit has co-authored more than thirty scientific papers and numerous patents.

Scott Huffman, Engineering Director

Scott Huffman is an Engineering Director at Google, where he leads the search quality evaluation and mobile search teams. He has been at Google for five and a half years, and has been working on search for around 15 years. Prior to joining Google, he was VP of Engineering at Knova, an enterprise search and knowledge management company in Silicon Valley. Scott has a PhD in computer science from the University of Michigan and did his undergraduate work at Carnegie Mellon University. He has authored dozens of academic papers in information retrieval, machine learning and information extraction, and is the inventor or co-inventor of several patents.

Mike Cohen, Manager, Speech Technology

Mike Cohen is a Research Scientist at Google, where he created and leads the company’s speech technology efforts. Prior to joining Google in 2004, Mike spent ten years at Nuance Communications, a company he co-founded to develop over-the-telephone spoken language applications. While at Nuance he coauthored the book “Voice User Interface Design” (Addison-Wesley, 2004). Earlier, Mike spent more than ten years at SRI, where he was principal investigator on a series of DARPA projects that included research in acoustic modeling, pronunciation modeling, and the development of spoken language understanding systems. Mike has a PhD in computer science from UC Berkeley. He received a lifetime achievement award at the 2004 SpeechTek conference.

Johanna Wright, Director of Product Management, Web Search

Johanna leads the team developing Google’s search user interface and features, including Google Instant, Mobile Search, Universal Search, and Autocomplete, as well as hundreds of improvements made to Google search each year. Prior to joining Google, Johanna worked at a number of software start-ups in New York City.

Johanna holds an MBA from UCLA and a bachelor’s degree in mathematics from Barnard College.

Gabriel Stricker, Director of Communications, Search

Gabriel is Director of Global Communications & Public Affairs at Google, where he heads Search communications — addressing everything from web search and other search properties to issues pertaining to partnerships, content, and the use of intellectual property. Gabriel received his undergraduate degree from the University of California at Berkeley and his master’s degree in International Affairs from Columbia University. He is the author of Mao in the Boardroom, a bestselling book on guerrilla marketing published by St. Martin’s Press. In his spare time, Gabriel works as a hospice volunteer at the Zen Hospice Project of San Francisco.

FAQ

  1. Q: What did you announce today?

    A: Today we announced a number of new search features designed to help bring down the barriers between you and the answers you’re looking for. On mobile, we introduced several enhancements, including a new mobile search homepage, local search features, and a new version of Autocomplete. We’re also making speech recognition and computer vision technologies more ubiquitous, bringing Voice Search and Search by Image to the desktop. Finally, we announced the next step for Google Instant — Instant Pages, which prerenders search results, saving you between two and five seconds on typical searches.

  2. Q: Why is this significant?

    A: As much as technology has advanced, there are still many barriers between you and the answers you’re looking for—whether you’re sifting through irrelevant search results, juggling a clunky mobile keyboard or waiting for a website to load. Our new features help to tackle these barriers, saving you time and bringing you one step closer to the answers you’re looking for.

Mobile

  1. Q: What details can you share about mobile growth?

    A: The momentum we’re seeing in mobile search is incredible. In the past two years, mobile search traffic has grown 5X, which is similar to the growth we saw in the early days of Google. Voice Search is growing even faster, and it now enables more than two-thirds of the world’s population to search by speaking. In just one year’s time, mobile voice traffic has increased 6X.

  2. Q: Can you tell me more about the new mobile features you announced today?

    A: We introduced a number of mobile search improvements designed to help you perform a search and more easily understand the results. Specifically, we introduced mobile local search to google.com, an improved local result experience, autocomplete improvements, and Russian language OCR in Google Goggles.

  3. Q: Why are these changes important?

    A: We are continually striving to remove barriers to knowledge and to improve the way you enter your searches so that you find the right results. We’ve noticed that changes like the ones we discussed at the Inside Search media event make a meaningful impact on ease of use and time to result in the daily search experience.

Voice Search on Desktop

  1. Q: Can you tell me more about Voice Search on desktop?

    A: Voice Search on desktop is a new way to search on Google. Instead of typing, you can speak your search and find whatever you’re looking for. Go to google.com using your Chrome browser and you will see a microphone icon on the right side of the search box. Click the microphone and say, for example, “recipe for cannelloni with bolognese sauce,” and you will instantly see the Google search results just as if you had typed the query. Your query appears along with a short list of alternate predictions.

  2. Q: Why is Google launching Voice Search on desktop?

    A: Our goal is to make search by voice available wherever you search. Whether you’re on mobile or desktop, you can now speak your query and get your search results immediately.

  3. Q: How is this different from Voice Search on your phone?

    A: The technology behind Voice Search on your computer and Voice Search on your phone is almost the same. The main difference is that we’ve created acoustic models suited to laptop and desktop microphones. There is also tight integration with Google Instant, which makes Voice Search even faster.

  4. Q: How does the technology work?

    A: Speech recognition is based on statistical modeling. To recognize spoken words, we compare the input speech to a statistical model of the language and try to find the closest match – the system’s best guess at what you said. The statistical model is huge – it must cover all of the fundamental sounds of the language (phonemes), all of the words, and all of the different ways that the words can be strung together in the spoken language. Furthermore, it must capture all of the variations among users that happen when a language is spoken, for example all of the different dialects and accents and individual differences in the sound of the voice (e.g., male vs. female, young vs. old). A simplified sketch of the ‘closest match’ step appears at the end of this section.

  5. Q: When will Voice Search on desktop be available?

    A: We are rolling out Voice Search on desktop beginning June 14, and it should reach everyone in about a week.

  6. Q: Where will this work?

    A: Voice Search is available only with Google Search and works on any Windows, Mac, Linux, or Chrome OS computer using the Google Chrome browser. All you need is Google Chrome 11 or higher and a built-in or attached microphone. The speech models have been optimized only for US English at this time, so only users of google.com will see this feature.
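
To make the ‘closest match’ idea from question 4 concrete, here is a deliberately tiny, hypothetical illustration (not Google’s system): each candidate transcription gets an acoustic score (how well it matches the audio) and a language-model score (how plausible the word sequence is), and the recognizer returns the candidate with the highest combined score.

```typescript
// Toy illustration only: real recognizers search over enormous statistical
// models of phonemes, words, and word sequences; this just shows the
// "pick the closest match" idea with made-up log-probability scores.
interface Candidate {
  transcript: string;
  acousticScore: number;   // log P(audio | words): how well it fits the sound
  languageScore: number;   // log P(words): how likely people are to say it
}

function bestGuess(candidates: Candidate[]): string {
  let best = candidates[0];
  for (const c of candidates) {
    if (c.acousticScore + c.languageScore > best.acousticScore + best.languageScore) {
      best = c;
    }
  }
  return best.transcript;
}

// Hypothetical candidates for an utterance that sounds like "bolognese sauce".
const guesses: Candidate[] = [
  { transcript: "bolognese sauce", acousticScore: -12.1, languageScore: -4.2 },
  { transcript: "bologna sauce",   acousticScore: -11.8, languageScore: -6.9 },
  { transcript: "baloney say us",  acousticScore: -11.5, languageScore: -13.0 },
];

console.log(bestGuess(guesses)); // "bolognese sauce"
```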

Search by Image on Desktop

  1. Q: Can you tell me more about Search by Image on desktop?

    A: Search by Image is a new way to search Google using an image, rather than text, as your query. Search by Image works with images on the web or your own photos. There are three ways to search by image: copying and pasting an image URL, uploading an image, or dragging and dropping an image into the images.google.com search box or the image search results page. We are also launching Chrome and Firefox extensions that enable you to search any image on the web by right-clicking on it. When you use an image as your query, you’ll see a new results page with information related to the image.

  2. Q: Why is Google launching Search by Image on desktop?

    A: Search is not just about typing text into a box and landing on a webpage — often you’re looking for answers about visual content too. Search by Image enables you to dive into images you find on the web and uncover the wealth of information, opinions, and facts waiting to be discovered.

  3. Q: How is the Search by Image technology new?

    A: We don’t just return copies of the image — we give you relevant information about the image and direct you to web results. This lets you tap the most comprehensive index of images in the world. We also use a number of algorithms to match what’s actually inside the pixels of the image.

  4. Q: How does the technology work?

    A: Google uses computer vision techniques to match your image to images in the Google Images index and other image collections. (It doesn’t have to be an exact match; for example, you could use old vacation photos you took of the Eiffel Tower to find other images of the Eiffel Tower on the web.) It analyzes the image to find its most distinctive points, lines and textures and creates a mathematical model. We match this model against billions of images in our index, and page analysis helps us derive a best-guess text description of your image. A toy sketch of this matching step appears at the end of this section.

  5. Q: Where is this available?

    A: Search by Image is rolling out on images.google.com and will be available in most countries over the next couple of days. The Chrome and Firefox extensions are available for download now, but please note that they won’t be functional in areas where Search by Image is not available.
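
As a rough, hypothetical illustration of the matching step described in question 4 (not Google’s actual pipeline, which extracts many local descriptors and searches billions of images): reduce each image to a numeric descriptor of its distinctive features, then return the indexed image whose descriptor is closest to the query’s.

```typescript
// Toy sketch only: single made-up feature vectors compared with Euclidean
// distance, to show the "closest model wins" idea behind image matching.
type Descriptor = number[];

function distance(a: Descriptor, b: Descriptor): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

function closestMatch(
  query: Descriptor,
  index: { label: string; descriptor: Descriptor }[]
): string {
  let best = index[0];
  for (const entry of index) {
    if (distance(query, entry.descriptor) < distance(query, best.descriptor)) {
      best = entry;
    }
  }
  return best.label;
}

// Hypothetical index entries with made-up descriptors.
const imageIndex = [
  { label: "Eiffel Tower",  descriptor: [0.91, 0.12, 0.33] },
  { label: "mountain path", descriptor: [0.10, 0.80, 0.55] },
];

const vacationPhoto: Descriptor = [0.88, 0.15, 0.30]; // your old photo's features
console.log(closestMatch(vacationPhoto, imageIndex)); // "Eiffel Tower"
```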

Instant Pages

  1. Q: Can you tell me more about Instant Pages?

    A: Instant Pages is a feature of Google Web Search that takes advantage of new functionality in upcoming versions of Chrome. For some web searches, Chrome will start the work necessary to fetch and display a result page before you even click on it in the results. The page loads much faster, and in many cases appears instantly.

  2. Q: Why is this important?

    A: Prerendering saves you time by doing the work of fetching and rendering a page before you request it. You don’t have to do anything special — to you, the page simply loads much faster than before.

  3. Q: How does the technology work?

    A: This feature comes in two parts: a component in Chrome and a component in Google Search. Often webpages will know with high confidence which link you are most likely to click next. In those cases, webpages can instruct the browser to start fetching the necessary page so it can be ready for you when you do request it. Chrome can now take these types of hints from webpages; the webpage merely needs to insert a special tag into its HTML. We refer to this feature as “prerendering”, and it’s a technically challenging feature to implement correctly. Chrome will accept these hints from any website that provides them; it’s not just limited to Google pages. A minimal sketch of such a hint appears at the end of this section.

    Google Search uses this capability when we are highly confident about which search result you will click. The end result is that when you click on the link, the page can load much faster.

  4. Q: Where is this available?

    A: Instant Pages will activate when you’re using a recent version of Chrome and we’re very confident that we know what result you’ll want to click on. The feature is enabled for all languages and domains.

    The feature is currently enabled on the Google Chrome Dev channel and coming soon to the Beta channel. It will be enabled for stable channel users in an upcoming version.
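
As a rough sketch of the hint described in question 3: the tag Chrome documented for prerendering is a link element with rel="prerender" in the page’s head. The URL and the confidence check below are placeholders, not Google’s actual result-ranking logic.

```typescript
// Sketch, assuming the "special tag" is Chrome's <link rel="prerender"> hint.
// The URL and confidence threshold are placeholders, not Google's logic.
function hintPrerender(likelyNextUrl: string): void {
  const hint = document.createElement("link");
  hint.rel = "prerender";    // non-binding hint: the browser may ignore it
  hint.href = likelyNextUrl; // the result we expect the user to click
  document.head.appendChild(hint);
}

// Hypothetical usage: only hint when we're very confident about the top result.
const topResultUrl = "http://www.example.com/folklife-festival"; // placeholder
const confidenceTopResultClicked = 0.92;                         // placeholder
if (confidenceTopResultClicked > 0.9) {
  hintPrerender(topResultUrl);
}
```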
