Google Keynote (Google I/O’19)

[Music] [Applause] Good morning! Wonderful to be back here at Shoreline with all of you. It's been a really busy few months for us at Google. We just wrapped up Cloud Next in San Francisco with over 30,000 attendees, as well as YouTube Brandcast last week in New York. Of course, today is about you, all our developer community, and thank you all for joining us in person, and to the millions around the world watching on livestream. I would love to say welcome in all the languages our viewers speak, but we are going to keep the keynote under two hours, especially since Barcelona kicks off against Liverpool later; it should be an amazing game.

Every year at I/O we learn and try to make things a little bit better. That's why we have lots of sunscreen (hope the sun comes out), plenty of water, and shade. But this year we want to make it easier for you to get around, so we are using AR to help. To get started, open your I/O app and choose Explore I/O, and then you can just point your phone where you want to go. We really hope this helps you get around and answers the number one question people have: where the sessions are. Actually, it's not that; they want to know where the food is, and we have plenty of it around. We also have a couple of Easter eggs, and we hope you enjoy them as well.

This is a pretty compelling use case, and we actually want to generalize this approach so that you can explore and navigate the whole world that way. There's a lot of hard work ahead, and it's a hard computer science problem, but it's the type of challenge we love. Tackling these kinds of problems is what has kept us going for the past 21 years, and it all begins with our mission: to organize the world's information and make it universally accessible and useful. Today our mission feels as relevant as ever, but the way we approach it is constantly evolving. We are moving from a company that helps you find answers to a company that helps you get things done.

This morning we'll introduce you to many products built on a foundation of user trust and privacy, and I'll talk more about that later. We want our products to work harder for you, in the context of your job, your home, and your life. And they all share a single goal: to be helpful, so we can be there for you in moments big and small over the course of your day. For example: helping you write your emails faster with automatic suggestions from Smart Reply, and giving you the chance to take them back if you didn't get it right the first time; helping you find the fastest route home at the end of a long day, and when you get there, removing distractions so that you can spend time with the people most important to you; and when you capture those perfect moments, backing them up automatically so you never lose them.

Simply put, our goal is to build a more helpful Google for everyone. And when we say helpful, we mean giving you the tools to increase your knowledge, success, health, and happiness. We feel so privileged to be developing products for billions of users, and with that scale comes a deep sense of responsibility to create things that improve people's lives. By focusing on these fundamental attributes, we can empower individuals and benefit society as a whole.

Of course, building a more helpful Google always starts with Search and the billions of questions users trust Google with every day. But there is so much more we can do to help our users. Last year we launched a new feature in Google News called Full Coverage, and we have gotten great feedback on it from our users. We'll be bringing Full Coverage directly to Search to better organize results for news-related topics. Let's take an example: if you search for "black hole," we'll surface the relevant top news; it was in the news recently. We use machine learning to identify different types of stories and give you a complete picture of how a story is being reported from a wide variety of sources. You can click into Full Coverage; it surfaces a breadth of content but allows you to drill
down into what interests you. You can check out different aspects of the story, like how the black hole got its name. You can even now see a timeline of events, and we'll be bringing this to Search later this year.

Podcasts are another important source of information, and we'll be bringing them directly to Search as well. By indexing podcasts we can surface relevant episodes based on their content, not just a title, and you can tap to listen right in search results, or you can save an episode for listening later on your commute or your Google Home. These are all examples of how we are making Search even more helpful for our users, surfacing the right information in the right context. And sometimes what's most helpful in understanding the world is being able to see it visually. To show you how we are bringing you visual information directly in Search, here's Aparna. [Music] [Applause]

Whether you're learning about the solar system or trying to choose a color scheme for your home, seeing is often understanding. With computer vision and augmented reality, the camera in our hands is turning into a powerful visual tool to help you understand the world around you. So today we are excited to bring the camera to Google Search, adding a new dimension to your search results; actually, three dimensions, to be precise. So let's take a look. Say you're a student studying human anatomy. Now when you search for something like muscle flexion, you can view a 3D model built by Visible Body right from the search results. Not only that, you can also place it in your own space. Look, it's one thing to read about flexion or extension, but seeing it in action right in front of you while you're studying the concept? Very handy.

Okay, let's take another example. Say instead of studying you're shopping for a new pair of shoes; these happen to be New Balance. You can look at the shoes up close from different angles, again directly from Search. That way you get a much better sense for things like what the grip looks like on the sole, or how they match with the rest of your clothes.

Okay, this last example is a really fun one. You may have all seen a great white shark in the movies. Jaws, anyone? But what does it actually look like up close? Let's find out, shall we? Okay, I have Archana here with me to help with the demo, so let's go ahead and search for "great white shark" on Google. As you scroll through, you get information and facts in the knowledge panel, but you also see the shark in 3D, directly from the knowledge panel. Why don't we go one step further? Why don't we invite the shark to the stage? You know, it's one thing to read a fact like "a great white can be anywhere between 17 and 21 feet long," but to see it in front of you at scale, filling up the Shoreline stage like a rock star, that is truly understanding its scale. Okay, let's take a closer look. It's an AR shark; it won't bite. Oh, look at those layers of teeth. I don't know about you all, but I'd much rather see these teeth up close in AR than in real life. Thank you, Archana. Really excited about bringing the camera and AR capabilities to Google Search.

Now sometimes, though, the things that you're interested in are difficult to describe in a search box. That's why we created Google Lens: to help you search and do more with what you see, by simply pointing your camera. We've built Lens as a capability across products, so you can access it directly from the Google Assistant, but we've also built it into Google Photos and the camera app on many Android devices. People have already used Lens more than a billion times so far, and they've used it to ask questions about what they see, like what kind of flower that is, or where to get a lamp like that, or just who the artist is. One way we've been thinking about it is that with Lens we're indexing the physical world, billions of places and products and so on, much like Search indexes the billions of pages on the web. Okay, today let me show you some new ways that we're making Lens more helpful to you.
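The "indexing the physical world" analogy above maps directly onto how a text index works. As a toy sketch only (the catalog items and ids below are invented for illustration, and this is nothing like Lens's actual pipeline), a minimal inverted index maps each term to the items that mention it, which is essentially what lets a query like "lamp" surface matching products:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of item ids whose description contains it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

# Toy "physical world" catalog: ids -> descriptions (hypothetical data).
catalog = {
    "item1": "brass table lamp",
    "item2": "white orchid flower",
    "item3": "orchid print lamp shade",
}

index = build_inverted_index(catalog)
print(sorted(index["lamp"]))    # items mentioning "lamp"
print(sorted(index["orchid"]))  # items mentioning "orchid"
```

Real systems layer ranking, synonyms, and vision-derived labels on top of this core structure, but the lookup shape is the same.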
Say you're at a restaurant trying to figure out what to order. Instead of going from the menu to different apps on the phone and back to the menu and so on, you can simply point your camera. Lens automatically highlights the popular dishes at this restaurant, right on the menu. And of course, if you want to know more, you can tap on any dish on the menu and see what it looks like, again at the restaurant, and check out what other people are saying about it on Google Maps. By the way, when you're done eating, Lens can help pay for your meal. Not so fast; it's not picking up your tab, but it can calculate the tip and even split the total, again just by pointing your camera at the receipt. And voilà.

So you saw how we connected the menu with information from Google Maps, but we're starting to think of other ways that we can connect helpful digital information with the things in the physical world. I'm going to give you just one example. Say you're flipping through a Bon Appétit magazine and you see a recipe you like. Soon you can point your camera at the recipe and see the page come alive, showing you how to make the dish. We're starting to work with more partners, like museums, magazine publishers, and retailers, to bring unique visual experiences like this.

There's one final area where we think that the camera can be particularly helpful to people around the world. There are more than 800 million adults who struggle to read the words that they come across in their daily lives: bus schedules, bank forms, et cetera. And many of them are coming online for the first time with a smartphone. So to help with that, we've integrated a new camera capability into Google Go, our search app for entry-level devices. Take this sign in English next to an ATM. For someone who does not understand the language and cannot read the words, this is important information that they're not getting access to, and we think that the camera can help here. So let me show you how. Directly from the Google search bar, you can use Lens: open it and point it at the sign to hear the text read out loud to you. "Information for cardholders: all customers using old proprietary magnetic stripe cards should be advised..." What is nice here is that it is highlighting the words as they're spoken. That way, even if you can't read the language well, you can follow along and understand the full context of what you see. You can also translate it into your own language, like this. Notice that the translated text is overlaid right on top of the original sign; it almost feels like the sign was written in your own language to start with. And again, you can hit listen and hear the words read out loud, this time in your own language: "Información para los titulares de la tarjeta..." What you're seeing here is text-to-speech, computer vision, the power of Translate, and twenty years of language understanding from Search, all coming together.

Now, our teams in India have been working with some early testers and getting a lot of feedback to make the product better, and I want to show you how one of them is using it in her daily life. Take a look. [Video: an early tester in India demonstrates the feature in her daily life; the auto-generated captions for this Hindi segment are unintelligible and are omitted.] [Music] Thank you, Ramallah, for testing it and giving us a lot of feedback for the team to make the product better. The power to read
is the power to buy a train ticket, to shop in a store, to follow the news; it's the power to get things done. So we want to make this feature accessible to as many people as possible. It already works in more than a dozen languages, and the team worked incredibly hard to compress all of this tech to just over a hundred kilobytes. That way it can work on phones that cost as little as $35. So we're super excited about this and all the other features across Search and Lens to help you throughout the day. You'll start to see these updates roll out later this month. Thank you. [Applause] [Music]

Thanks, Aparna. Helpfulness is also about saving time and making your day a little bit easier. That's why last year at I/O we gave you a first look at our Duplex technology. Duplex enables Google Assistant to make restaurant reservations on your behalf by actually placing a call. It's now available in 44 states across the US, and we've gotten great feedback, not only from our users but from businesses as well. For us, Duplex is the approach by which we train AI on simple but familiar tasks to accomplish them and save you time. Duplex launched with restaurant reservations on the phone, but now we are moving beyond voice and extending Duplex to tasks on the web. We again want to focus on narrow use cases to start, so we are looking at rental car bookings as well as movie ticketing.

Today, when you make a new reservation online, you have to navigate a number of pages and steps, filling out information and making selections along the way. I'm sure you're all familiar with this experience. It's time-consuming, and if users leave during the workflow, businesses lose out as well. We want to make this experience better for both users and businesses, so let me show you how the Assistant can do it better. Say you get a calendar reminder about an upcoming trip and you want to book a rental car. You can just ask: "Hey Google, book a National car rental for my next trip." The Assistant opens the National website and automatically starts filling out your information on your behalf, including the dates of the trip. You can confirm the details with just a tap, and then the Assistant continues to navigate the site. It even selects which car you like. It's acting on your behalf and helping you save time, but you're always in control of the flow. Let's go ahead and add a car seat, and once all the details are in, you can check everything one last time and just tap to finalize the reservation. You'll immediately get a booking confirmation.

It's amazing to see the Assistant complete a task online on your behalf in a personalized way. It understands the dates of your trip and your car preferences based on trip confirmations in Gmail. I also want to point out that this was not a custom integration; this required no action on the part of the business to implement. What you just saw is an early preview of what we are calling Duplex on the web. We're going to be thoughtful and get feedback from both users and businesses to improve the experience, and we'll have more details to share later this year.

The Google Assistant helps people around the world with all kinds of tasks, whether they are at home or on the go, but we want to build an even more helpful Assistant. To process speech today, we rely on complex algorithms that include multiple machine learning models: one model maps incoming sound bites into phonetic units, another assembles these phonetic units into words, and a third model predicts the likelihood of these words in a sequence. They are so complex that they require a hundred gigabytes of storage and a network connection. Bringing these models to your phone (think of it as putting the power of a Google data center in your pocket) is an incredibly challenging computer science problem. I'm excited to share that we reached a significant milestone: further advances in deep learning have allowed us to combine and shrink the hundred-gigabyte models down to half a gigabyte, small enough to bring onto mobile devices. This eliminates network latency and makes the Assistant so much faster, so fast that tapping to use your phone would seem slow. I think this is going to transform the future of the Assistant, and I'm thrilled to bring out Scott to tell you more about our next-generation Assistant. [Applause] [Music]

Thanks, Sundar. What if we could bring the AI that powers the Assistant right onto your phone? What if the Assistant was so fast at processing your voice that tapping to operate your phone would almost seem slow? It opens up many new use cases, and we want to show you how fast it is. Internally, we've been calling this the next-generation Assistant. Running on device, it can process and understand requests in real time and deliver the answers up to 10 times faster. Maggie's here, and she's gonna help us test it out, starting with some back-to-back commands to demonstrate its speed. Now, this demo is hot off the press, so please send your positive energy over in Maggie's direction. "Hey Google, open Calendar. Open Calculator. Open Photos. Set a timer for 10 minutes. What's the weather today? What about tomorrow? Show me John Legend on Twitter. Get a Lyft ride to my hotel. Turn the flashlight on. Turn it off. Take a selfie." All right, as you could see, Maggie was able to open and navigate apps instantly. You might have also noticed that with Continued Conversation, she was able to make several requests in a row without having to say "Hey Google" each time.

Now, beyond an effortless way to operate your phone, you can start to imagine how the Assistant, fused into the device, could orchestrate tasks across apps. Let's look at another demo where Maggie is chatting with a friend who's gonna ask her about a recent trip. Notice how easy it is for her to respond with her voice and even share a photo. "Reply: had a great time with my family and it was so beautiful. Show me my photos from Yellowstone. The ones with animals. Send it to Justin." Now, another example is when a friend asks you a question and you need to look up the information to respond. Justin wanted to know when Maggie's flight arrives. "When's my flight?" [Music] "When's my flight? Reply: I should get in around 1 p.m."

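The back-to-back commands from the demo above ("open calendar," "set a timer for 10 minutes," "turn the flashlight on") are a classic intent-routing problem. A minimal sketch, assuming invented regex patterns and intent names (a toy illustration only, not the Assistant's actual on-device understanding):

```python
import re

# Hypothetical intent patterns, loosely modeled on the demo's commands.
INTENTS = [
    (re.compile(r"open (\w+)"), "open_app"),
    (re.compile(r"set a timer for (\d+) minutes?"), "set_timer"),
    (re.compile(r"turn the (\w+) (on|off)"), "toggle_device"),
]

def route(utterance):
    """Return (intent, captured args) for the first matching pattern."""
    text = utterance.lower().strip()
    for pattern, intent in INTENTS:
        match = pattern.fullmatch(text)
        if match:
            return intent, match.groups()
    return "fallback", ()

print(route("open calendar"))               # ('open_app', ('calendar',))
print(route("set a timer for 10 minutes"))  # ('set_timer', ('10',))
```

Production assistants use learned models rather than hand-written patterns, but the routing shape (utterance in, intent plus arguments out) is the same.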
Alright, so notice how it helped Maggie multitask more easily across different apps, saving her a lot of back-and-forth. You can even imagine this next-generation Assistant handling more complex speech scenarios, like composing and sending an email. "Hey Google, send an email to Jessica. Hi Jessica, I just got back from Yellowstone and completely fell in love with it. Set subject to Yellowstone adventures. Let me know if next weekend works for dinner so I can tell you all about it. Send it." As you can see, this required the Assistant to understand when Maggie was dictating part of the message versus when she was asking it to complete an action. Thanks, Maggie.

Thanks, Scott. By moving these powerful AI models right onto your phone, we're envisioning a paradigm shift. This next-generation Assistant will let you instantly operate your phone with your voice, multitask across apps, and complete complex actions, all with nearly zero latency. And actions like turning on the flashlight, opening Gmail, or checking your calendar will even work offline. It's a very hard problem we've been solving, and I'm really excited to share that the realization of this vision is not far off. In fact, this next-generation Assistant is coming to the new Pixel phones later this year. [Applause]

Now, our mission is to make the Assistant the best way to get things done. You just saw how we're making it much faster, but it also has to be personal enough to really help you. Personalized help is especially important in areas where people's preferences completely differ, like choosing what to listen to, what to do on the weekend, or even what to eat. So let's look at a recipe example. "Hey Google, what should I cook for dinner?" "Here are some recipe picks for you." As you can see, the Assistant picked recipes tailored to me; for example, it suggested a bourbon chicken recipe because it's helped me with barbecue recipes in the past. What I really love is that different people get completely different results. We call this feature Picks for You, and it will be launching on Smart Displays later this summer, starting with recipes, podcasts, and events.

Now, beyond your preferences, becoming more personal means the Assistant will better understand the people, places, and events that are important to you. One important person in my life is my mom, who I'm gonna visit right after I/O. So let's say I ask my Assistant, "How's the traffic to mom's house?" We all understand what I mean by "mom's house," right? Well, if I'm in Toledo, "mom's house" might have meant this place, a non-profit childcare center. In other cities, "mom's house" can be a restaurant or a grocery store. In fact, there are lots of things in the world called "mom's house." In linguistics, the process of figuring out which thing a phrase refers to is called reference resolution, and it's fundamental to understanding human language. At Google we approach this problem using our Knowledge Graph of things in the world and their relationships; it's what allows us to understand something like "the Starbucks near the Golden Gate Bridge." Today we're expanding the Assistant's ability to understand you better by applying those same techniques to the things in your world. We call it Personal References. So if I've shared my mom's contact info with the Assistant, I can ask: "Hey Google, what's the weather like at mom's house this weekend?" "Friday and Saturday, Carmichael will be partly cloudy." "How long will it take to get there?" "With light traffic it will take you 2 hours and 14 minutes to get to 123 Main Street by car." "Remind me to order flowers a week before mom's birthday." "Alright, I'll remind you on July 3rd." And it goes beyond mom: if you've shared important people, places, and events with the Assistant, you'll be able to ask for things more naturally, like "show me photos of my son," or "directions to the restaurant reservation," or "remind me to pick up chocolates on my anniversary." And rest assured, you're always in control. You can edit or delete this information at any time in the updated You tab
in Assistant settings.

Now, one place where the Assistant can be especially helpful is in the car, offering a safer, hands-free way to get everything you need while you're on the road. We've been focused on the main things that we all want when we're driving: to get where we're going safely, to catch up with friends, and to listen to something interesting along the way. Last year we brought the Assistant to Android Auto, and earlier this year we added it to navigation in Google Maps. I'm happy to share that the Assistant is also coming to Waze in the next few weeks. Now I'd like to show you the future of how we're improving your mobile driving experience even more: introducing the Assistant's new driving mode. Just put your phone in the car and say, "Hey Google, let's drive." Driving mode has a thoughtfully designed dashboard that brings your most relevant activities front and center while you're driving, and includes suggestions personalized for you. For example, if you have a dinner reservation on your calendar, you'll see a convenient shortcut to navigate to the restaurant. Or if you started a podcast at home in the morning, once you get in your car it'll display a shortcut to resume the episode right where you left off. It also highlights top contacts, making it easy to call them or message them, and recommendations for other things to listen to. Once you're navigating, phone calls and music appear in a low-profile way, so you can get things done without leaving your navigation screen. "Hey Google, play some jazz." "Sure, check out this jazz music station on YouTube Music." Everything is voice-enabled, so if a call comes in, the Assistant will tell you who's calling and ask if you want to answer, without you having to take your eyes off the road. "Do you want to pick it up?" No thanks, but thanks for your help with the demo, mom. So best of all, with the Assistant already on your phone, there's no need to download an app; just start driving. Driving mode will be available this summer on any Android phone with the Assistant.

Today the Google Assistant is available on over 1 billion devices, in over 30 languages, across 80 countries. And with Duplex on the web, the next-generation Assistant, personalized help, and the Assistant in the car, we're continuing to build on our mission to be the fastest, most personal way to help you get things done. Now, before I go, I want to share a little something that a lot of you have been asking for. Check this out: "Stop." Now you can stop your timers and alarms just by saying "stop," no "Hey Google" needed, and it's rolling out on Smart Displays and Google Homes in English-speaking locales starting today. Thanks very much. [Applause] [Music]

[Video montage of Assistant commands: "Hey Google, open the pod bay doors." "I found a few restaurants near you." "Ordering you a grande vanilla latte from Starbucks." "This is a cat." "The forecast is 72 and sunny." "How do I slice a mango?" "Turn the lights to Christmas spirit." "Begin Operation Kevin." "Operation Kevin underway." "Show me how to make an octopus costume on YouTube." [Music] [Applause]]

Thanks, Scott. It's great to see the momentum of Google Assistant and how it's able to help users get things done. So far we've talked about building a more helpful Google; it's equally important to us that we do this for everyone. "For everyone" is a core philosophy for us at Google. That's why, from the earliest days, Search works the same whether you're a professor at Stanford or a student in rural Indonesia. It's why we build affordable laptops for classrooms everywhere, and it's why we care about the experience on low-cost phones, in countries where users are just starting to come online, with the same passion as we do with premium phones. And it goes beyond our products and services: it's why we offer free training and tools through Grow with Google, helping people grow their skills, find jobs, and build their businesses. And it's how we develop our technology, ensuring the responsible development of AI, privacy and security that works for everyone, and products that are accessible at their
core.

Let's start with building AI for everyone. Bias has been a concern in science long before machine learning came along, but the stakes are clearly higher with AI. It's not enough to know if your model works; we need to know how it works. We want to ensure that our AI models don't reinforce bias that exists in the real world. It's a hard problem, which is why we are doing fundamental computer science research to improve the transparency of machine learning models and reduce bias. Let me show you what I mean. When computer scientists deploy machine learning models, it can sometimes be difficult to understand why they make a certain prediction. That's because most machine learning models appear to operate on lower-level features: edges and lines in a picture, the color of a single pixel. That's very different from the higher-level concepts more familiar to humans, like stripes on a zebra. To tackle this problem, Google AI researchers are working on a new methodology called TCAV, or Testing with Concept Activation Vectors. Let me give you an example. If there's a machine learning model trained to detect zebras, you would want to know which variables were being used to decide if the image contained a zebra or not. TCAV can help you understand if the concept of stripes was important to the model's prediction. In this particular case it makes sense: stripes are an important predictor for the model. Now suppose a classifier was trained on pictures of doctors. If the training data was mostly males wearing coats and stethoscopes, then the model could inaccurately assume that being male was an important prediction factor. There are other important examples as well. Imagine an AI system that could help with detecting skin cancer: to be effective, it would need to recognize a wide variety of skin tones, representative of the entire population. There's a lot more to do, but we are committed to building AI in a way that's fair and works for everyone, including identifying and addressing bias in our own ML models, and sharing tools and open datasets to help you as well.

Another way we build for everyone is by ensuring that our products are safe and private, and that people have clear, meaningful choices around their data. We strongly believe that privacy and security are for everyone, not just a few. This is why powerful privacy features and controls have always been built into Google services. We launched Incognito mode in Chrome over a decade ago. We pioneered Google Takeout, which gives you easy controls to export your data, from email, contacts, photos, and more, anytime you choose to. But we know our work on privacy and security is never done, and we want to do more to stay ahead of constantly evolving user expectations. We've been working on a significant set of enhancements, and I want to talk you through a few.

Today you can already find all your privacy and security settings in one place, in your Google account. To make sure your Google account is always at your fingertips, we are making it easily accessible from your profile photo. If you're in Search, you can tap on your photo and quickly access the most relevant privacy controls for Search, and your data in Search: here you can view and manage your recent activity, and you can easily change your privacy settings. Last week we announced auto-delete controls, which you'll also be able to access right from there. Data helps make Search work better for you, and with auto-delete you can choose how long you want it to be saved, for example 3 or 18 months, after which any old data will be automatically and continuously deleted from your account. This is launching today for Web and App Activity, we'll be rolling it out to Location History in the coming weeks, and we'll continue to bring features such as this to more controls over time. In addition, one-tap access to your Google account will be coming to our major products, including Chrome, Search, Assistant, YouTube, Google News, and Maps. And speaking of Maps, if you tap on your profile
photo in addition defining easy access to your privacy controls you’ll find a new feature incognito mode incognito mode as a pop has been a popular feature in chrome since it launched and we are bringing this to maps while in incognito in maps your activity like the places you search and navigate to won’t be linked to your account we want to make it easy to enter in and out of incognito and maps will soon join Chrome and YouTube with support for incognito and we’ll be bringing it to search as well this year another way we ensure your privacy is by working hard to keep your data secure from Safe Browsing which now protects over four billion devices every day to using tensorflow to significantly reduce phishing attacks in Gmail we also encourage users to use 2-step verification because an additional layer of protection is always helpful today we are making 2-step verification even more convenient for everyone by bringing the protection of security keys directly into your Android phone so now you can confirm a sign-in with just a tap and today it’ll be available to over 1 billion compatible devices [Applause] we always want to do more for users but do it with less data over time so we are applying the same cutting-edge VI research that makes our products better and applying it to your hands user privacy federated learning this is a new approach to machine learning developed by Google is one example it allows Google’s AI products to work better for you and work better for everyone without collecting raw data from your device’s instead of selling data to the cloud we flip the model we ship machine learning models directly to your device each phone computes an update to the global model and only those updates not the data is securely uploaded and aggregated in large batches to improve the global model and then the updated global model is sent back to everyone’s device let me explain it with a concrete example take G board Google’s keyboard using on device learning alone 
when new words become popular, Gboard would not be able to suggest them until you had typed them many times. Federated learning, however, allows Gboard to learn new words like BTS or YOLO after thousands of people start using them, without Google ever seeing anything you type. Actually, with BTS, it's probably millions of people. This is not just research; in fact, Gboard is already using federated learning to improve next-word prediction, as well as emoji prediction, across tens of millions of devices. It's still very early, but we are excited about the progress and the potential of federated learning across many more of our products. Privacy and security are the foundation for all the work we do, and we'll continue to push the boundaries of technology to make it even better for our users. Building for everyone also means ensuring that everyone can access our products. The World Health Organization estimates that 15 percent of the world's population, over one billion people, has a disability. We believe technology can help us be more inclusive, and AI is providing us with new tools to dramatically improve the experience for people with disabilities. For example, there are almost 500 million people in the world who are deaf or hard of hearing. Think of how many conversations are challenging, from in-person discussions and phone calls to even experiencing videos online. A few months ago we launched Live Transcribe, powered by Google's Cloud Speech API, to caption conversations in real time. You can leave your phone open with the app, and when someone speaks to you it transcribes their speech into text. Those who cannot, or prefer not to, speak can also respond by typing. I was really inspired by how the product came about. Two of our Google researchers, Dimitri and Chet, saw an opportunity to help people and collaborated to develop the app. Together with a small team of engineers and people who volunteered their 20% time, they built Live Transcribe, and it is now available in over 70 languages and dialects on
Android devices. [Applause] Today we are going further and extending this technology. We are announcing a new feature called Live Caption. Live Caption makes all content, no matter its origin, more accessible to everyone. The incredible thing is that it works completely on device, so there's no delay. With one click, you can turn on captions for a web video, a podcast, or even a moment you capture at home. This is only possible due to our recent breakthroughs in speech recognition technology. We recently tested Live Caption with some users; let's take a look. "This volume button here, and then we turn on that button. Amazing." "It feels like, wow, it's such a simple feature, but it has such an impact on me. It's gonna make our lives so much easier." "I wake up at two o'clock in the morning to watch, so I wake up my mom or dad." [Applause] You can imagine all the use cases for the broader community, too: for example, the ability to watch any video if you're in a meeting or on the subway without disturbing the people around you. The Android team is going to talk a little bit later today about what made Live Caption possible. We are also exploring how this technology can caption phone calls, but we want to go one step further and actually allow more people to respond and accomplish tasks over their phones. As you'll see in this example, Nicole, who is deaf and prefers not to speak, can receive a call from her hairstylist. With Smart Compose and Smart Reply, she can answer the call and interact. Let's take a look. "Hi, this is Nicole's assistive chat. She'll see what you say, and her responses will be read back to you, starting now." "Hi Nicole, it's Jaime. How are you?" "Hey Jaime, I'm good, and you?" "Great. Are we still on for your 1:00 p.m. haircut tomorrow?" "Sorry, can you do 3 p.m.?" "Uh, yes, I can do 3 p.m. We have a lot to catch up on; I want to hear all about your trip." "Perfect. Thumbs up." "Great, see you tomorrow. Bye." Thumbs up indeed. [Applause] [Music] We call this new technology Live Relay. While there's still more work to do, we are excited to see how it can help people like Nicole get things done more easily. Just like with Live Caption, this runs completely on device, and these conversations remain private to you. We also want to help those with speech disorders, or people whose speech has been affected by a stroke or ALS. Researchers from Google AI are exploring the idea of personalized communication models that can better understand different types of speech, as well as how AI can help even those who cannot speak to communicate. We call this research Project Euphonia; let's take a look. "No one's ever collected large datasets of people whose speech is hard for others to understand, so they're not used in training the speech recognition models. The game is to record things and then have it recognize things that you say that aren't in the training set." Dimitri recorded 15,000 phrases. "It wasn't obvious that this was going to work. He just sat there; he kept recording." "We want all interactive devices to be able to understand any person who speaks to them. If you can see that it's possible to make a speech recognizer work for Dimitri, it should be possible to make it work for many people, even people who can't speak because they've lost the ability to speak." "The work that's been done on voice utterances shows that from sounds alone you can communicate, but there might be other ways of communicating. Most people with ALS end up using an on-screen keyboard and having to type each individual letter with their eyes." "For me, communicating is, like, Steve might crack a joke, and it's related to something that happened, you know, a few minutes ago." The idea is
to create a tool so that Steve can train machine learning models himself to understand his facial expressions. [Applause] "To be able to laugh, to be able to cheer, to be able to boo, things that seem maybe so small but actually are so core to being human." "I still think this is only the tip of the iceberg. We're not even scratching the surface yet of what is possible. If we can get speech recognizers to work with small amounts of data from individual people, we can then combine that to build something that really works for everyone." We are working hard to provide these voice recognition models through the Google Assistant in the future, but as you saw in Dimitri's case, this will only be possible with many more speech samples to train our models on. If you or someone you know has slurred or hard-to-understand speech, we'd like to invite you to submit voice samples to help accelerate this effort. Fundamentally, AI research which enables new products for people with disabilities is an important way we drive our mission forward. Live Transcribe, Live Caption, Live Relay, and Project Euphonia will ultimately result in products that work better for all of us. It's a perfect example of what we mean by building a more helpful Google for everyone. One of the most powerful ways we deliver help to our users is through our open source platforms like Android. To tell you more, I'd like to invite Stephanie to the stage. [Music] It's amazing we're here to talk about Android Q, and we get to celebrate a milestone together: today there are over 2.5 billion active Android devices. Today we want to walk you through what's coming next in Android Q: innovation, security and privacy (the central theme of the Q release), and digital wellbeing. A lot has changed since 1.0. Smartphones have evolved from an early vision to this integral tool in our lives, and they are incredibly helpful. Looking ahead, we see another big wave of innovation coming to make
them even more helpful. Q shows Android shaping the leading edge of mobile innovation, with over 180 device makers around the world. Driven by this powerful ecosystem, many innovations have been first on Android, from large screens to the first OLED display, and this year display technology will take an even bigger leap, with foldables coming from multiple Android OEMs. These devices open up a completely new category which, though early, just might change the future of mobile computing. Foldables take advantage of a completely new display technology: they literally bend and fold from phone to tablet-sized screen, and Q maximizes what's possible on these screens. For instance, foldables are great for multitasking, so I can watch some funny videos my sister sent me while we chat about what we're gonna do for my mom on Mother's Day. But the feature I'm most excited about is screen continuity. So let's say we finished chatting, it's time to head out, and I'm standing around waiting for my ride, so I start playing a game on the folded, smaller screen. When I sit down and unfold, the game seamlessly transfers to the larger screen. It is so cool, and I can pick up exactly where I was playing. Multiple OEMs will launch foldables this year, all running Android. Another exciting innovation is 5G. 5G networks mean consistently faster speeds with lower latency, so apps, and especially games, can target rich, immersive experiences to these 5G-connected phones, and Android Q supports 5G natively. This year more than 20 carriers will launch networks, and our OEMs have over a dozen 5G-ready phones all launching this year, and they'll all be running Android. Now, in addition to hardware innovation, we're also seeing huge firsts in software, driven by advances in on-device machine learning. Sundar showed Live Caption; now I would really like you to see it in action and then take you under the hood. Please welcome Tristan. [Applause] Like many people, I watch videos without sound when I'm on the go. With
captions, I can still keep up even if I'm in a crowded space or sitting in a meeting, so for me they're super helpful. But for the almost 500 million people who are deaf or hard of hearing, captions are critical. Today loads of mobile content embeds audio, from video to voice messages and everything in between, and without captions this content is nowhere near as accessible. Live Caption in Q takes audio and instantly turns it into text. Let's take a look at this video my friend Heather sent me yesterday. To turn it on, I open the volume rocker and tap the Live Caption button. So as you can see, these captions appear in real time over a video that would normally never have captions. You can expand them, contract them, move them up and down; it's a lot of fun. But what makes this feature so incredible is that it's entirely done on device. In fact, it doesn't need to be connected to the Internet at all; if we take a look, this entire demo I've done in airplane mode. Thank you, thank you. Okay, so how is this possible? It's because of a huge breakthrough in speech recognition that we made earlier this year. This once required streaming audio to the cloud to run a two-gigabyte model for processing. Now we can do that same processing on device, using a recurrent neural network, in just 80 megabytes. The live speech model is running on the phone, and no audio stream ever leaves it, all of which protects user privacy. And this is OS-wide, which means you get those captions in all your apps, and in web content too. Now, the same on-device machine learning powers another useful Q feature, which is Smart Reply. With Smart Reply, the OS helpfully suggests what you'll type next; it'll predict the text you'll type, even emoji, and it's a huge time saver. What's really cool is this now works for all messaging apps on Android, like in Signal; you can see the OS providing these helpful suggestions. And Smart Reply can now even predict the actions that you'll take. So say a friend sends you an address, and normally you copy
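One generic ingredient in shrinking a cloud-scale model toward something phone-sized is weight quantization. The sketch below illustrates post-training float32-to-int8 quantization, a 4x size reduction; it is a general illustration only, not the actual method behind the 80 MB caption model, which also relied on architectural changes to the network itself.

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus one float scale per tensor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(1000, 256)).astype(np.float32)
q, scale = quantize_int8(w)
ratio = w.nbytes / q.nbytes                      # int8 is 1/4 of float32
err = np.max(np.abs(dequantize(q, scale) - w))   # bounded by ~scale / 2
```

The trade-off is a small, bounded rounding error per weight (at most about half a quantization step) in exchange for a model that is a quarter of the size in memory.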
and paste that into Maps, and that's kind of a hassle. With Smart Reply you just tap, and it will open for you, and all this is saving you time. On-device machine learning powers everything from incredible breakthroughs like Live Caption to helpful everyday features like Smart Reply, and it does this with no user input ever leaving the phone, all of which protects user privacy. Now, there's one more addition to Android Q that's small but that you've been asking us about for a while, and that is Dark Theme. We're launching it in Q. You can activate it by using the quick tile or by turning on Battery Saver, and in fact it will help you save battery: your OLED display is one of the most power-hungry components in your phone, so by lighting up fewer pixels we'll save you battery. So that's innovation, but we feel all innovation must happen within a frame of security and privacy. People now carry phones constantly, and we trust them with a lot of personal information. You should always be in control of what you share and who you share it with, and that's why the second area we'll cover, and the central focus of the release, is security and privacy. Now, over the years Android has built out a huge set of protections already: file-based encryption, SSL by default, SELinux, and work profiles, and many of these were first on Android. And Android has the most widely deployed security and anti-malware service of any OS with Google Play Protect: it runs on every device, and it scans over 50 billion apps a day. In fact, in Gartner's 2019 security report, which was published this week, Android scored the highest possible rating in 26 out of 30 categories. It's ahead on multiple points, from authentication to network security to malware protection and more. At the same time, we wanted to go much further, and that's why Android Q includes almost 50 features focused on security and privacy, all providing more protection, transparency, and control. So first, in Q we brought Privacy to the top level in
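The action prediction mentioned a moment ago, noticing that a message contains an address and offering to open it in Maps, can be caricatured in a few lines. The regular expression and action name below are hypothetical stand-ins for illustration; Android's actual implementation uses on-device machine learning, not hand-written patterns.

```python
import re

# Hypothetical, pattern-based stand-in for on-device action suggestion.
ADDRESS_RE = re.compile(r"\b\d+(?:\s+\w+)+\s+(?:St|Ave|Rd|Blvd)\b")

def suggest_action(message):
    """Return a suggested action chip for a message, or None."""
    if ADDRESS_RE.search(message):
        return "OPEN_IN_MAPS"   # one tap opens the address in a maps app
    return None
```

For example, a message like "Meet me at 1600 Amphitheatre Ave" would get an open-in-maps chip, while ordinary chat text would get none.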
Settings, and there you'll find a number of important controls all in one place: activity data, location history, ad settings, and you decide what's on or off. Now, location is another place we've created tools for more transparency and control. Location can be really helpful, especially when you're lost in a new place, but it's also some of your most personal information, and you should, again, always be in control of who you share it with and how they can use it. So first, if you're wondering which apps can be accessing your location, we make it easy for you to know: with Q, your device will give you helpful reminders whenever an app accesses location when you're not actively using that app, so you can review and decide whether you want to continue sharing or not. Second, Q will give you more control over how you share location data with apps. For example, say you want to get pizza delivered: you can choose to share your location only while the app is in use, and as soon as you close it, you'll stop sharing location. Finally, what if you're wondering what kind of location access all your apps have? In Q we've brought location controls to the forefront in Settings, so you can quickly review every app and change location access with simple controls. Now, there are many, many more enhancements to security and privacy throughout the OS, like TLS 1.3, encryption for low-end devices, randomizing your MAC address by default, and many more, and you can read about all of these in our blog post this week. But there's one more really big thing for security. Your Android device gets regular security updates already, but you still have to wait for the release, and you have to reboot when they come. We want you to get these even faster, and that's why in Q we're making a set of OS modules updatable directly over the air. So now these can be updated individually, as soon as they are available, and without a reboot of the device. Now, this was a huge technical challenge, but updating these in the background, the same
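The three location choices just described reduce to a small decision rule. The toy model below is for illustration only; the constant names echo, but do not exactly mirror, Android's real permission API.

```python
# Toy model of Q-style location choices: allow always, allow only while
# the app is in use, or deny. (Names are illustrative, not Android's API.)
ALLOW_ALWAYS, ALLOW_WHILE_IN_USE, DENY = "always", "while_in_use", "deny"

def can_access_location(permission, app_in_foreground):
    """Decide whether an app may read location right now."""
    if permission == ALLOW_ALWAYS:
        return True
    if permission == ALLOW_WHILE_IN_USE:
        return app_in_foreground      # e.g. the pizza app open on screen
    return False                      # DENY, or anything unrecognized
```

So with the while-in-use choice, the pizza app gets your location while you're ordering, and nothing once you close it.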
way we're updating Google apps. It's easier for our partners, with whom we're working closely, but more importantly, it's much better for you. You can learn more about this at the session "What's New in Android." Now, there's one more thing that's changed since the early days of Android. People carry smartphones everywhere because they're really helpful, but we're also spending a lot of time on phones, and people tell us sometimes they wish they'd spent more time on other things. We want to help people find balance and digital wellbeing, and yes, sometimes this means making it easier to put your device away entirely and focus on the times that really matter. That's why last year we launched Digital Wellbeing tools, with dashboards, app timers, Flip to Shhh, and Wind Down, to help you set the phone down and get to sleep at night. And these tools are really helping: app timers helped users stick to their goals over 90% of the time, and users of Wind Down had a 27 percent drop in nighttime usage. If you're not using these already, I would really recommend them. But this year we want to help even more with distraction. A lot of times I just want to sit down and focus to get something done, and when I'm trying to do this, working, or maybe studying for you, I don't want email or anything else to distract me. That's why we've created a new mode for Android called Focus Mode. When I enter Focus Mode, I can select the apps that I find distracting; for me that's email and the news, so now they're turned off and I can really get to work. Those apps that distract me are disabled, but I can still keep texts, because it's important to me that my family can always get hold of me, until I come out of Focus Mode, and then everything is back. Focus Mode is coming to devices on P and Q this fall. [Applause] Now, finally, I want to talk about families. For 84% of US parents, technology used by their kids is a top concern. In the US, the average age of kids getting phones is now eight. In Q, Family Link
parental controls will be built right into the settings of the device. So when you set up a device for someone in your family, Family Link will help connect it to a parent, and you can review any apps that your child wants to install. After that, you can set daily screen time limits, you can check the apps where your kids are spending time, and you can set a device bedtime so your kids can disconnect and get to sleep. And now, in Android Q, you can set time limits on specific apps, and when your child hits their device bedtime, if you want to give them just five more minutes, we now have Bonus Time. Now, there's a ton more in Q that we don't have time to cover, everything from streaming media to hearing aids to better connectivity to the new gesture UI and more. So today I'm excited to announce that Q Beta 3 is available on 21 devices, that is, 12 OEMs plus all Pixels, and that is more than double last year. We hope you head over to the link to get it on your phone, because we would love to have you try it out. And now I will hand it over to Rick. Thank you very much. Thanks, Steph. Well, we've heard about some terrific innovations today in Android, AI, and the Assistant, and real breakthroughs in how we're able to help our users. I'd like to spend a few minutes and talk about how some of those come to life in our Made by Google products. Now, we continue to believe that the biggest breakthroughs are happening at the intersection of AI, software, and hardware, whether that's a tensor processing unit, an entire data center, the phone in your hand, or a helpful smart display in your home. Let's start there. The smart home of today is fragmented and frustrating. To deliver real help in the home, you can't start with technology; you have to start with people, and we've always worked to put people first and build technology around their needs. There's no more important place to get this right than in the home. Let's take a look. [Music]
[Music] [Applause] Your home is the most special place in your life, so we need to be thoughtful about the technology we create for it. By putting people first, we're going beyond the idea of a smart home to create a truly helpful home. Over the past year we've brought the Nest and Google teams together to deliver on our vision of the helpful home, and today we're further simplifying things, bringing all of these products together under the Nest name, as a single team and a single product family. We're following a set of guiding principles that reflect our commitment to putting people first. To start, we believe technology should be easy for everyone in the home to use, whether they're five or ninety-five. The helpful home should also be personal for everyone; with Google Assistant at the core, we can provide a personalized experience for the entire household, even in communal spaces. And the tech in your home should work together for a single, seamless experience across rooms and devices. Most importantly, the helpful home needs to respect your privacy, and today we're publishing privacy commitments for our home products that clearly explain how they work, the data we're storing, and how it's used. [Applause] Our vision for the helpful home is anchored in the Assistant, and as you heard from Scott, it's continuing to get more helpful over time. We want to make sure that you can get the help you need where you need it. Google Home Hub, which we're renaming Nest Hub, was designed specifically to bring the helpfulness of the Assistant to any room in your house. We've also been working on a new display that builds on the things that people love about Hub but is designed for communal spaces in the home where the family gathers. Introducing Nest Hub Max: it's a new product that has a camera and a larger 10-inch display, which is perfect for the center of your helpful home. Hub Max pulls your connected devices together into a Home View dashboard where you can see your Nest
cams, you can switch on lights, control your music, and adjust your thermostat. Hub Max also supports Thread, so just like Nest Connect, it communicates directly with Thread-supported devices that need a low-power connection, like door locks or motion sensors. And we've designed Hub Max with an incredibly helpful camera. If you want to know what's going on in your home, you can choose to use it like a Nest Cam: you can turn it on when you're away from home, and you can check on things right from the Nest app on your phone. And just like a Nest Cam, it's easy to see your event history and enable Home and Away Assist, and you also get a notification if the camera detects any motion or sees someone it doesn't recognize in your home. Now, video calling is easy too, with Google Duo. The camera has a wide-angle lens, and it automatically adjusts to keep you centered in the frame. You can chat with any iOS or Android device, or a PC with a Chrome browser. You can also use Duo to leave video messages for members of your household. Hub Max is designed to give you full control over the camera: nothing is streamed or recorded unless you intentionally enable it, and you'll always know when the camera is on with a green indicator light. You have multiple controls to disable camera features, and a physical switch on the back electrically disconnects the camera and the microphones, and you can see all these controls clearly on the display. [Applause] Hub Max is designed to be used by multiple people in your home and provide everyone with the help they need in a personalized way. To help with that, we've offered users the choice to enable Voice Match, so the Assistant can recognize their voice and respond directly to them. But today we're also extending the options to personalize using the camera, with a feature we call Face Match. For each person in your family that chooses to turn it on, the Assistant guides you through a process of creating a face model, which is then encrypted and stored on the device. Then,
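Conceptually, Face Match compares an embedding of the face in front of the camera against each enrolled household member's stored face model, entirely on the device. The cosine-similarity matcher below is a generic sketch of that idea, with made-up embeddings, names, and threshold; it is not Google's actual algorithm.

```python
import numpy as np

def identify(embedding, enrolled, threshold=0.8):
    """Return the best-matching enrolled user, or None if no stored
    model clears the threshold. All comparisons stay on the device."""
    best_name, best_sim = None, threshold
    for name, ref in enrolled.items():
        sim = float(embedding @ ref /
                    (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Made-up 4-d "face models" for two hypothetical household members
enrolled = {"rishi": np.array([1.0, 0.2, 0.0, 0.5]),
            "anna":  np.array([0.1, 0.9, 0.8, 0.0])}
who = identify(np.array([0.9, 0.25, 0.05, 0.45]), enrolled)
```

A visitor whose embedding matches no stored model falls below the threshold and simply gets no personalized results, which is why an unrecognized face shows nothing private.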
whenever you walk in front of the camera, Hub Max recognizes you and shows just your information, and not anyone else's. Face Match's facial recognition technology is processed locally on the device using on-device machine learning, so the camera data never leaves the device. In the morning, I can walk into the kitchen and the Assistant knows to greet me with my calendar, my commuting details, the weather, and any other information I need to start my day. And when I get home, Hub Max welcomes me with any reminders that might be waiting for me, and the Assistant offers personalized recommendations for music and TV shows, and I can even see if anyone's left me a video message. One of my favorite things about Hub Max is that it's a great digital photo frame. No matter what kind of day I'm having, nothing makes me feel better than seeing some of my favorite memories on this beautiful screen, and the Google Photos integration makes the whole process really simple: I can select my family and friends, and Hub Max displays the best photos of them, from years ago or from earlier today. And now, with a simple voice command, sharing my favorite shots is easier than ever. The big screen also makes Hub Max the kitchen TV you've always wanted. Tell it what you want to watch, or if you need help deciding, just ask the Assistant to pull up our new on-screen guide. Hub Max can stream your favorite live shows and sports on YouTube TV, but unlike your kitchen TV, it can also teach you how to cook, see who's at the front door, and play your music. You're also getting full stereo sound with a powerful rear-facing woofer, and now when the volume's up, instead of yelling at the Assistant to turn it down or pause the game, with the camera it's as simple as a gesture: you just raise your hand, and Hub Max uses on-device machine learning to instantly identify your gesture and pause your media. Hub Max is a Google Assistant smart display. It's also a smart home controller, a TV for your kitchen, a great digital photo frame,
an indoor camera, and it's perfect for video calling. All this will be available on Nest Hub Max later this summer for just $229, and today we're lowering the price of the original Nest Hub from $149 to $129, and we're expanding its availability to 12 new markets and supporting nine new languages. So whether you prefer a hub with a camera or without one, we have a device that'll help you in your home. As I said earlier, there's a fundamental difference between a smart home and a helpful home, and we're excited to unify all our products under the Nest brand to make the helpful home more real for more people. All right, next I want to talk about Pixel. Yeah, thank you, I love talking about Pixel. I want to talk about our work to bring a more helpful smartphone experience to more people. A core element of Google's mission is to make technology more available and accessible for everyone, and as Sundar said earlier, we need to ensure that technology benefits the many, not just the few. But there's been a really troubling trend in the smartphone industry: to support the latest technologies, everyone's high-end phones are getting more and more expensive. So we challenged ourselves to see if we could optimize our software and AI to work great on more affordable hardware, so we can deliver these high-end experiences at a more accessible price point. I want to introduce you to the newest members of the Pixel family: Google Pixel 3a and 3a XL, designed to deliver premium features at a price people will love. We didn't compromise on the capabilities and performance you'd expect from a premium device, which is why we branded them Pixel, and they start at just $399, about half the price of typical flagship phones. I want to introduce Sabrina to tell you more about how we did it. [Applause] [Music] Thanks, Rick. Delivering premium features with high performance on a phone at this price point has been a huge engineering challenge, and I'm really proud of what our team has been
able to accomplish with Pixel 3a. So let's start with the design. Pixel 3a follows the design language of the Pixel family: the familiar two-tone look, smooth finish, and ergonomic unibody design. It feels good in your hand, and it looks beautiful. Pixel 3a comes in three colors: Just Black, Clearly White, and a new color, Purple-ish. Everything looks amazing on the vibrant OLED display, and your music and your podcasts sound great in premium stereo sound. Pixel 3a supports Bluetooth 5.0 and USB-C digital audio, and we've also included a 3.5-millimeter audio jack, because we've heard some people want more headphone options. But what Pixel is really known for is its incredible camera, and with software optimizations we found a way to bring our exclusive camera features and our industry-leading image quality into Pixel 3a, so photos look stunning in any light. What other smartphone cameras try to do with expensive hardware, we can deliver with software and AI, including high-end computational photography. So here's what that means. Pixel 3a can take amazing photos in low light with Night Sight, one of Pixel's most popular features. We've also enabled Pixel's portrait mode on both the front and rear cameras, and our Super Res Zoom applies computational photography so you can get closer to your subject while still maintaining a high degree of resolution. And all of your beautiful photos are backed up for free in high quality with Google Photos. Pixel 3a also has the helpful features you'd expect in a Pixel: just squeeze the sides of your phone to bring up the Google Assistant. We're using the AI in Pixel 3a to help manage your phone calls too. I'm pretty sure we all hate getting robocalls, and Call Screen uses Google speech recognition and natural language processing to help you filter out those unwanted calls; it's already screening millions of them. Now, you might remember last year we shared our vision for using AR in Google Maps. Starting today on Pixel phones, when you use walking directions,
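One building block behind computational photography features like Night Sight is merging a burst of frames: averaging N aligned, noisy shots of the same scene cuts noise by roughly the square root of N. The sketch below shows only that averaging step, on synthetic data; real pipelines also align, weight, and tone-map the frames, none of which is modeled here.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames of the same scene."""
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))          # the "true" image
burst = [scene + rng.normal(0.0, 0.1, scene.shape)    # 16 noisy shots
         for _ in range(16)]
merged = merge_burst(burst)

single_err = np.std(burst[0] - scene)   # noise level of one frame (~0.10)
merged_err = np.std(merged - scene)     # roughly 4x lower after merging 16
```

This is why a burst of short, noisy exposures can substitute for the long exposure or large sensor that low-light photos would otherwise require.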
instead of staring at that blue dot on your phone, you're going to see arrows in the real world to tell you where to turn next. We're just beginning our journey with AR in Maps, and we're really excited for Pixel users to experience this early preview. Battery life is one of the most important features on a smartphone. It makes sense: people need to know that their phone won't quit on them before the end of their day. Pixel 3a has Adaptive Battery, which uses machine learning to optimize based on how you use your phone, so you can get up to 30 hours on a single charge, and with the included 18-watt charger, you'll get up to 7 hours of battery life with just 15 minutes of charging. Pixel 3a doesn't compromise on security either. It's got the same comprehensive approach as Pixel 3. On the hardware side, our Titan M security chip protects your sensitive data on the device, like login credentials, disk encryption, app data, and OS integrity. On the software side, you get the latest Google security patches and updates for three years, including Android Q this summer. So instead of getting slower and less secure over time, your Pixel gets better with every update. We think this hybrid approach provides the strongest data protection, and in a recent Gartner report, Pixel scored the highest for built-in security among smartphones. Pixel 3a offers the complete Pixel experience, and we're proud to make it available and affordable to more people around the world. Verizon has been a great partner over the past two and a half years in the US, and we're excited to be partnering with them again for the launch of Pixel 3a. And for the first time, we're expanding our US carrier partnerships, so the entire Pixel family is now available for sale at T-Mobile, Sprint, and US Cellular. You can also get Pixel 3a from the Google Store and use it on any US carrier, including Google Fi and AT&T. Pixel 3a and 3a XL are available in 13 markets starting today; you can find more details online at the Google Store. We're really excited to
have you try it out. Next, Jeff will tell you about our efforts in Google AI, but first, here's a quick look at our new Pixel. [Music] [Applause] Hi, everyone. Everything from building a low-cost premium device like the one you just saw, without compromising on capabilities, to developing a truly helpful Assistant is built on a tremendous amount of research and innovation under the covers, and they're examples of what we do at Google AI. Google AI is a collection of teams focused on making progress in artificial intelligence research across a wide range of different domains. We focus on solving fundamental computer science challenges in order to solve problems for people. That includes things like improving speech recognition models to answer questions faster or let you interact with your device quickly, or pushing the boundaries of computer vision to help people interact with their world in new ways, as we've seen today. We publish papers, release open source software, and apply our research to Google products; the goal is really to solve problems every day that touch billions of people. One of the things I'm most excited about is progress in language understanding. As Scott mentioned earlier, so much of our daily life depends on actually understanding language: reading traffic signs and shopping lists, writing emails, communicating with the people around us. We'd really want computers to have the same fluency with language that we do, to not just understand the surface forms of the words but actually understand what sentences mean. Unlocking that would get us closer to our mission of organizing the world's information and making it universally accessible and useful. In the past few years we've made major strides. Take teaching a machine to answer questions, like this one about Carlsbad Caverns, a national park in New Mexico. Until recently, the state-of-the-art architecture for language understanding was something
called a recurrent neural network, or RNN. RNNs process words sequentially, one after another. They work well for modeling short sequences like sentences, but they struggle to make abstract associations, like knowing that stalactites and stalagmites are natural formations and that cement pathways, for example, are not. In 2017 we made a leap forward with our research on Transformers, models that process words in parallel. One year later we used it as a foundation for a technique we called Bidirectional Encoder Representations from Transformers. It's a bit of a mouthful, so we just call it BERT.

BERT models can consider the full context of a word by looking at the words that come before and after it. They're pre-trained using plain text from the web and other textual sources. To do that, we use a training process that's a little like the word game Mad Libs: we hide about 20% of the input words, and we train the model to guess those missing words. You can actually try this at home with a bit of text you have; hide a few words and see if you can guess what they are. That's effectively what we're doing, and this approach is much more effective for understanding language. When we published the research, BERT obtained state-of-the-art results on eleven different language processing tasks.

Fast-forward to today, and we're excited to see how BERT can help us answer more complex questions that are relevant to you, whether that's getting the flight time from Indiana to Honolulu, learning a new weightlifting exercise, or translating between languages. Research like this gets us closer to technology that can truly understand language. We're now working with product teams all across Google to see how we can use BERT to solve more problems in more places, and we're excited to bring those to people around the world to help them get the information they need every day.

All this machine learning momentum, though, wouldn't be possible without platform innovation. TensorFlow is the software
infrastructure that underlies our work in machine learning and artificial intelligence. When we developed TensorFlow, we wanted everyone to be able to use machine learning, so we made it an open-source platform. And while it's been essential to our work, we've been amazed to see what people outside of Google have used it for, all kinds of different things. We've seen engineers at Roma Tre University in Italy parsing handwritten medieval manuscripts. We've seen coders in France colorizing black-and-white photography. We've even seen companies developing fitness sensors for cows. The work that people are doing is really inspiring to us. It pushes us to keep asking ourselves how machine learning can crack open previously unsolvable problems in order to help more people.

One example is our work in the field of healthcare. We're really optimistic that our research can create real-world impact in medicine by improving solutions and establishing new diagnostic procedures. To share more, here's Dr.

Lily Peng from the Google AI healthcare team. [Music]

Thanks, Jeff. As a doctor, what I care about most is improving patients' lives, and that means good care and accurate diagnosis. That's why I was so excited two years ago at I/O when we shared our work on diabetic retinopathy. This is a complication of diabetes that puts over 400 million people around the world at risk for vision loss. Since then we've been piloting this work with patients in clinical settings. Our partners at Verily recently received European regulatory approval for the machine learning model, and we have clinical deployments in Thailand and in India that are already screening thousands of patients.

In addition to diabetes, one of the other areas where we think AI can help doctors is oncology. Today we'd like to share our work on another project in cancer screening, where AI can help catch lung cancer earlier. Lung cancer causes more deaths than any other cancer; it's the most common cause of cancer mortality globally, accounting for 3 percent of annual deaths. We know that when cases are diagnosed early, patients have a higher chance of survival, but unfortunately over 80% of lung cancers are not caught early. Randomized controlled trials have shown that screening with low-dose CTs can help reduce mortality, but there's opportunity to make them more accurate.

So in a paper we are about to publish in Nature Medicine, we describe a deep learning model that can analyze CT scans and predict lung malignancies. To do it, we trained a neural network with de-identified lung cancer scans from our partners at the NCI, the National Cancer Institute, and Northwestern University. By looking at many examples, the model learns to detect malignancy with performance that meets or exceeds that of trained radiologists. So how, concretely, can this help? Very early-stage cancers are minuscule and can be hard to see, even for seasoned radiologists, which means that many patients with late-stage lung cancer have subtle signs on earlier
scans. So take this case, where an asymptomatic patient with no history of cancer had a CT scan for screening. This scan was interpreted as normal. One year later, that same patient had another scan that picked up a late-stage cancer, one that's much harder to treat. So we used our AI system to review that initial scan. Let's be clear: this is a tough case. We showed this initial scan to other radiologists, and five out of six missed this cancer. But our model was able to detect these early signs one year before the patient was actually diagnosed. One year. And that year could translate to an increased survival rate of 40% for patients like this. [Music]

So clearly this is a promising but early result, and we're very much looking forward to partnering with the medical community to use technology like this to help improve outcomes for patients. Now I'll hand it back to Jeff.

Thanks, Lily. The same technologies you just saw driving healthcare innovation have applications across almost every field imaginable. Our AI for Social Good program brings together our efforts to use AI to explore and address some of the world's most challenging problems. Last year we announced the program and its two pillars: research and engineering, and building the external ecosystem. Let's talk first about research and engineering.

One project we're working on that's already creating impact is flood forecasting. Floods are among the most common and deadliest natural disasters on the planet. Every year they affect up to 230 million people across the world, more than storms and earthquakes combined, and 20% of flood fatalities happen in India alone. This is a problem we're even seeing this week with the impact of Cyclone Fani. Floods prevent kids from being able to play in their neighborhoods, and parents from protecting and providing for their families, often because they don't have enough advance warning. And without consistent, accurate warning systems, people are prone to ignore warnings and be
unprepared. That's especially detrimental in areas hit with annual monsoons. That's why last fall we shared our work on flood forecasting models that can more accurately predict flood timing, location, and severity. Through a partnership with India's Central Water Commission, we began sending early flood warnings to the phones of users who might be affected. Today we're thrilled to announce the expansion of our detection and alerting system for the upcoming monsoon season. The expanded area will cover millions of people living along the Ganges and Brahmaputra rivers. Not only are we increasing the area of coverage, but we're also better forecasting where the floods will hit hardest. Through a new version of our public alerts, people can better understand whether they'll be affected, so they can protect themselves and their families.

Our model simulates water behavior across the floodplain, showing the exact areas that will be affected. We combine thousands of satellite images to create high-resolution elevation maps, using a process similar to stereographic imaging to figure out the height of the ground. We then use neural networks to correct the terrain so it's even more accurate, and then we use physics to simulate how flooding will happen. We also collaborate with the government to receive up-to-date stream gauge measurements and send forecasts in real time. We're excited to continue working with partners to increase the accuracy and precision of these models, which we hope will make people safer from flooding all around the world.

Research like this is critical, but we also know that AI will have the biggest impact when people from many different backgrounds come together to develop new solutions to the problems they see. That's why the second pillar of our AI for Social Good program is building the external ecosystem. We want to empower everyone to use AI to solve problems they see in their communities. Last year we partnered with Google.org to launch the Google AI Impact Challenge. It was a call
for nonprofits, social enterprises, and universities to share their ideas for using AI to address societal challenges. We received applications from 119 countries across six continents, representing organizations of all sizes and types. Today we're really excited to announce the 20 grantees. Let's give them a warm welcome; there's the list of organizations.

These organizations are working on some of the world's most meaningful issues. The Fondation Médecins Sans Frontières is using image recognition to help medical staff analyze antimicrobial images in order to prescribe the right antibiotics for bacterial infections. New York University, in partnership with the Fire Department of New York City, is building a model to help speed up emergency response times; this could really improve public health and safety. And Makerere University in Uganda will use AI to create a high-resolution monitoring network to shape public policies for improving air quality. We'll be supporting our 20 grantees in bringing these ideas to life: we're providing 25 million dollars in funding from Google.org, as well as coaching and resources from teams all across Google. Congratulations to all our grantees.

As we head into the next decade, I'm really excited about what's to come. There are so many promising avenues for fundamental research. For instance, machine learning models today can typically be made good at solving individual tasks, but what if they could generalize across thousands of tasks, solving new problems faster and with just a few examples to learn from? The keys to progress on these kinds of research problems are those most human characteristics: perseverance and ingenuity.

As you heard Sundar mention at the start of the day, we're moving from a company that helps you find answers to a company that also helps you get things done. All the products we showed you today share a single goal: to be helpful. At the same time, we want to ensure that the benefits of technology are
felt everywhere, continue to uphold our foundation of user trust, and build a more helpful Google for everyone. To everyone joining us on the livestream, thank you for tuning in, and to everyone here with us in the audience today: welcome to Google I/O 2019. Thank you, and enjoy the rest of I/O. [Music]
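The Mad Libs-style pre-training Jeff described, hiding roughly 20% of the input words and asking the model to recover them, can be sketched in a few lines of Python. This is an illustrative toy only, not the actual BERT pipeline: real BERT works on subword tokens and sometimes substitutes random words instead of a literal mask symbol, and the `mask_tokens` helper here is a hypothetical name invented for the example.

```python
import random

def mask_tokens(tokens, mask_rate=0.2, mask_token="[MASK]", seed=None):
    """Hide a fraction of tokens at random, BERT masked-LM style.

    Returns the masked sequence plus the answer key (position -> original
    token) that a model would be trained to predict.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, int(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n_to_mask)
    masked = list(tokens)
    answers = {}
    for pos in positions:
        answers[pos] = masked[pos]   # remember the hidden word
        masked[pos] = mask_token     # replace it with the mask symbol
    return masked, answers

# Try the "game at home" version on a sentence from the talk.
sentence = "stalactites and stalagmites are natural formations".split()
masked, answers = mask_tokens(sentence, mask_rate=0.2, seed=0)
print(" ".join(masked))
```

Running it prints the sentence with a word or two blanked out; predicting those blanks from the surrounding context is exactly the training signal the masked-language-model objective provides.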
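The flood-forecasting pipeline described earlier ends with a physics step: simulating how water spreads across a corrected elevation map. As rough intuition for that final step only (the production system uses real hydraulic modeling and stream gauge data, not this toy, and `flooded_cells` is an invented name), a breadth-first flood fill over an elevation grid marks the cells a rising river could reach:

```python
from collections import deque

def flooded_cells(elevation, source, water_level):
    """Toy inundation sketch: return the set of grid cells reachable from a
    river source whose terrain height is below the forecast water level."""
    rows, cols = len(elevation), len(elevation[0])
    flooded = set()
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        if (r, c) in flooded:
            continue                      # already visited
        if not (0 <= r < rows and 0 <= c < cols):
            continue                      # off the map
        if elevation[r][c] >= water_level:
            continue                      # terrain stays dry
        flooded.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            queue.append((r + dr, c + dc))
    return flooded
```

The model's role in the real pipeline is to make the `elevation` input accurate (satellite stereo imaging plus a neural-network terrain correction); once the terrain is right, even simple water physics says which neighborhoods to alert.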
