The Google I/O developer conference was a roller-coaster ride of fascinating new features that we all can’t wait to try, and AI was one of the defining themes of this year’s event. Jeff Dean, the head of Google AI, went big on the use of AI across different fields, and Google Assistant, a major part of previous Google I/O conferences, saw a variety of AI-driven upgrades. This is no surprise, as Google’s CEO believes AI is more profound for humankind than electricity or fire.
Here are the noteworthy AI announcements from Google I/O 2019:
Nest Hub Max
Earlier called the Google Home Hub, the Nest Hub Max comes with a Nest camera. Using facial recognition, it personalises your daily feed and shows only your data on the screen: it stores your facial profile along with those of other family members, and when you step in front of it, it automatically recognises you. It also lets you video-call your family whenever you are away from home, automatically adjusting the camera to keep the subjects centred and in focus. The device has a physical switch to cut off camera access, along with disconnect options inside the smart display. Add a speaker that pauses loud music with a single hand gesture, and the future looks promising for this device, which leans heavily on on-device machine learning and AI.
Pick your dish with ease
Google Lens has been going big on augmented reality for some time now, and features that were mere fantasies last year are now available to Android users. Point your camera at a menu and voila! You get a highlighted list of the restaurant’s hot and popular dishes to choose from. Not only that, Google Lens will calculate the tip for you and split the bill among your friends; I definitely expect further advances and updates here. The feature works by having neural networks scrape data from online reviews and cluster it into meaningful information that Lens can use. Lens will also be able to read text aloud in many different languages, so it may work as an interpreter while you are on a foreign trip.
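The tip-and-split feature above is, at heart, simple arithmetic. Here is a minimal sketch of that calculation in plain Python (the function name and rounding choice are my own illustration, not Google’s implementation):

```python
def split_bill(subtotal: float, tip_percent: float, people: int) -> float:
    """Return each person's share of the bill, tip included.

    Hypothetical helper illustrating the arithmetic behind
    Lens's tip/bill-split feature.
    """
    if people < 1:
        raise ValueError("need at least one person")
    # Add the tip to the subtotal, then divide evenly
    total = subtotal * (1 + tip_percent / 100)
    return round(total / people, 2)

# Example: a $60 bill with an 18% tip, split among 4 friends
print(split_bill(60.0, 18, 4))  # → 17.7
```

In practice the hard part Lens solves is not this arithmetic but reading the printed total off the receipt in the first place.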
Google Assistant gets 10x faster
Thanks to newly developed neural networks and on-device machine learning, Google has cut Google Assistant’s response time by a factor of ten. The announcement was followed by an on-stage demo, and the speed with which Assistant handled a rapid string of commands was impressive. Compared with other assistants like Alexa and Siri, I think Google may now be ahead. Pixel phones will receive the update later this summer.
Helping with online bookings
After the fiasco of Google Assistant creeping people out by sounding too human on the phone, Google has introduced Duplex for the web, which, with the help of Assistant, will automatically fill in your details for tasks like renting a car. This is another example of on-device machine learning, and it shows how heavily Google is focusing on the AI/ML side of the tech industry. The feature will come to Android phones at the end of 2019, though many Pixel users are already enjoying it.
Driving mode with new features
Last year Google introduced driving mode; this year it gains the ability to resume your podcast from where you left off while you drive. Another feature: it announces the name of whoever is calling, so you don’t have to look at the screen, and it asks whether you want to pick up.
All these features and more were discussed at this year’s Google I/O, each using AI and ML to make smart devices more comfortable and easier to use.
We also saw Jeff Dean talk about the use of AI in the medical field. He announced Google’s $25 million global AI impact grants program and revealed three ongoing accessibility projects enabled by AI technologies. Google is also working to solve complex real-life problems with the help of AI/ML. There was talk, too, of the self-driving car built by Google’s parent Alphabet and how AI has enabled it to complete trips for at least 1,000 paying customers.
Imitation is a powerful way for neural networks to learn, and Google’s AI team has been using it to teach robots new skills. The approach involves a self-supervised imitation technique whose input is mostly unlabelled data with a small amount of labelled data. This training technique enabled a robot, after 15 trials and 15 minutes of training, to pour soda with the skill of an average 8-year-old child.
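The idea of learning mostly from unlabelled data with a little labelled data can be illustrated with self-training (pseudo-labelling), a standard semi-supervised technique. The toy sketch below is my own illustration in plain Python, using a 1-D nearest-neighbour "model"; it is not Google’s actual robotics method, and all names are hypothetical:

```python
def nearest_label(x, labeled):
    """Predict by copying the label of the closest labeled point (1-NN)."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def self_train(labeled, unlabeled, rounds=1):
    """Toy self-training loop: pseudo-label the unlabeled points with the
    current 1-NN model and fold them into the labeled set."""
    labeled = list(labeled)
    for _ in range(rounds):
        pseudo = [(x, nearest_label(x, labeled)) for x in unlabeled]
        labeled.extend(pseudo)
    return labeled

# Two labeled points (feature, class) plus a pool of unlabeled features
labeled = [(0.0, "low"), (10.0, "high")]
unlabeled = [1.0, 2.0, 8.5, 9.0]
model = self_train(labeled, unlabeled)

# The pseudo-labeled points now inform predictions near them
print(nearest_label(2.1, model))  # → low
```

Real systems add confidence thresholds before accepting a pseudo-label; here every unlabelled point is accepted to keep the sketch short.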
Google has also claimed that an AI system can read from a retinal scan much of what you would otherwise learn from a blood report, including the person’s sex, haemoglobin levels, and age, the last predicted to within about 3 years of the actual value.
These announcements make it clear that AI and ML are the muse for developers right now, and they will remain so for a long time, as there is still a great deal left to discover and develop.