Folks at Google are real visionaries. What I love about them is that they're truly driven by making lives better with the technology they invent. And I'm not talking only about high-end, luxury technological goods that only a handful of people can afford. Their innovations are already changing the way we work, learn, govern our communities, and live our daily lives. And without exaggeration, some of the technologies announced at this year's Google I/O conference might even change the course of human history. But I'll get to that in a minute.
Google I/O is an annual developer conference held in Mountain View, California. "I/O" stands for input/output, as well as the slogan "Innovation in the Open". After listening to this year’s keynote, I couldn't shake off the feeling that the singularity is truly closer than we think.
Some may fear dystopian scenarios in which intelligent machines destroy humanity. I believe in the complete opposite: technology merging with nature and becoming just another milestone in the evolution of humankind for the better. It seems we are one step closer to this becoming a reality, and I really can't wait to see it happen.
There were many exciting announcements, so you'll have to forgive me for picking just a few from my personal top list. If you want to hear them all, you can still watch the entire keynote here.
Being a language geek, I got super-excited about the progress Google has made in AI and in computers' natural language processing capabilities. Language, with all its nuances and complexities, is one of humanity's greatest tools and one of computer science's most difficult puzzles.
The latest breakthrough innovation, called LaMDA, brings scientists and engineers a bit closer to putting this puzzle together by solving a crucial section of it: conversation. LaMDA, short for "Language Model for Dialogue Applications", is an AI that can engage in a free-flowing conversation about a seemingly endless number of topics. Google's CEO, Sundar Pichai, demonstrated its power by playing a conversation his team had with the program about the planet Pluto and a paper plane. LaMDA already knows a lot about space, planets and millions of other topics, and in the demo it carried on the conversation as Pluto and the paper plane themselves, playing both the subject and the object of the conversation. Have a listen here.
What's truly remarkable is the level of understanding not just of the words, but of the emotional context of the dialogue. The company is looking not only at how specific and sensible the AI's answers are, but also at how interesting they are: whether the responses are insightful, unexpected, or witty.
This ability could unlock more natural ways of interacting with technology, and entirely new categories of helpful applications in the future. Pichai explained that the company is exploring how LaMDA could be integrated into Google's search engine, voice assistant, or Google Workspace, and how it could provide new capabilities to developers and enterprise customers.
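Now, LaMDA itself isn't something you or I can download. But the core idea, a language model that generates each reply by conditioning on the whole conversation so far, can be played with today using open models. Here's a minimal sketch with Hugging Face's transformers library and the small open DialoGPT model (my choice of stand-in, not anything Google has released):

```python
# A toy dialogue loop: LaMDA is not public, so this uses the open
# DialoGPT model to illustrate conversation as next-token prediction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = None  # token ids of the whole conversation so far
for turn in ["Tell me about Pluto.", "Would it be cold to visit?"]:
    new_ids = tokenizer.encode(turn + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    # The model conditions on the full history; that's what keeps replies on-topic
    history = model.generate(input_ids, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[0, input_ids.shape[-1]:],
                             skip_special_tokens=True)
    print(f"You: {turn}\nBot: {reply}\n")
```

The quality gap between this toy and LaMDA is enormous, of course, but the underlying mechanism of generating each turn from the accumulated dialogue is the same.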
Language can be a significant barrier to accessing information, but people don't communicate through language alone. Information is also conveyed in images, audio and video. Google announced the progress they have made in how their search engine understands complex queries on the web. They are calling it MUM, short for Multitask Unified Model.
MUM has the potential to transfer knowledge across languages. It can learn from sources that aren't written in the language you wrote your search in, and help bring that information to you. MUM is multimodal, which means it understands information across text and images. In the future, it could expand to more modalities like video and audio. Soon, your Google searches could look something like this:
Let's say you wanted to know whether a particular pair of boots would work for climbing a mountain. You'd take a photo of your boots and ask your Google Assistant if they are suitable for the trip you are about to take. MUM understands the image, connects it with your question, and lets you know your boots would work just fine. It then points you to a blog with a list of recommended gear. It could also tell you that the weather on the mountain at this time of year is usually rainy, and suggest you pack a waterproof jacket.
Or maybe, while you're on a road trip, you could ask about routes with beautiful mountain views. The AI, with its deep understanding of complex concepts, would not only suggest a route with landmarks worth visiting but also give you links to resources on the web about other sites worth exploring in the area.
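MUM isn't exposed to developers yet, so any code here is necessarily a stand-in. Still, the core multimodal trick, embedding a photo and text into one shared space so they can be compared, already exists in open models like OpenAI's CLIP. A rough sketch (the file name and candidate descriptions are made up):

```python
# Joint image-text understanding in the spirit of multimodal search:
# CLIP embeds the photo and the text queries into one space and scores them.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("my_boots.jpg")  # hypothetical photo of your boots
candidates = [
    "sturdy waterproof hiking boots",
    "lightweight running shoes",
    "formal leather dress shoes",
]
inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
for label, p in zip(candidates, probs.tolist()):
    print(f"{p:.2f}  {label}")
```

A real system like MUM goes far beyond matching labels to a photo, but this is the kernel of "understands information across text and images".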
Beyond search, Google announced new ways that AI is being used to make Google Maps more sophisticated. The company said it's planning more than 100 AI-driven improvements to the detail of routes available in Maps in 2021. Maps will now incorporate sidewalks, crosswalks, and pedestrian islands; for a wheelchair user who needs to plan a trip carefully, for example, this could be a game-changer. Maps will also help you move around inside buildings that are usually hard to navigate, like train stations or airports. Depending on what you are interested in, Google Maps could show you how busy certain areas, shops or services are at certain times. And when deciding which routes to recommend, Google Maps will now also factor in their safety. At the conference, Google announced that these details will be added to more than 50 cities this year.
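Google hasn't said exactly how Maps trades speed against safety, but the general shape of multi-factor routing is easy to sketch. Here's a toy example with the networkx library, where every place name and number is invented:

```python
# Toy multi-factor routing: the edge cost blends travel time with a
# hazard score, so a slightly slower but safer route can win.
import networkx as nx

G = nx.Graph()
G.add_edge("home", "junction", minutes=5, hazard=0.8)  # fast, but a braking hotspot
G.add_edge("home", "park", minutes=7, hazard=0.1)
G.add_edge("park", "junction", minutes=3, hazard=0.1)
G.add_edge("junction", "station", minutes=4, hazard=0.2)

SAFETY_WEIGHT = 10  # minutes of delay we'd trade to avoid a fully hazardous edge

def cost(u, v, data):
    return data["minutes"] + SAFETY_WEIGHT * data["hazard"]

print(nx.shortest_path(G, "home", "station", weight=cost))
# ['home', 'park', 'junction', 'station'] - the detour beats the risky shortcut
```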
Another helpful way of utilising AI is in healthcare. The company previewed a tool that will help people identify skin, hair, or nail conditions.
You will be able to use your phone’s camera to take pictures of the problem area — for example, a rash on your arm. You’ll then answer a series of questions about your skin type and other symptoms. The tool then gives you a list of possible conditions from a set of 288 that it’s trained to recognize.
When the tool was tested on around 1,000 images of skin problems from patients, it placed the correct condition among its top three suggestions 84 per cent of the time, and included it somewhere in the list of possible issues 97 per cent of the time. Google is working with a Stanford University research team to test how well the tool works in a health care setting.
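For the statistically curious: both of those figures are the same metric, top-k accuracy, evaluated at different cut-offs. Here's what the metric looks like in code, using random stand-in numbers rather than any real dermatology data:

```python
# Top-k accuracy: how often the true condition appears among the model's
# k highest-ranked suggestions (the 84% figure is this metric with k=3).
import numpy as np

def top_k_accuracy(probs, labels, k=3):
    top_k = np.argsort(probs, axis=1)[:, -k:]  # indices of the k best guesses per case
    return float(np.mean([lab in row for row, lab in zip(top_k, labels)]))

rng = np.random.default_rng(0)
probs = rng.random((100, 288))              # fake scores over 288 conditions
probs /= probs.sum(axis=1, keepdims=True)
labels = rng.integers(0, 288, size=100)     # fake ground-truth conditions
print(top_k_accuracy(probs, labels, k=3))   # ~0.01 for random guessing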
The company obtained a Class I medical device mark for the tool in the European Union, designating it as a low-risk medical device.
Google's computing infrastructure is what drives and sustains these kinds of advances, and Tensor Processing Units (TPUs) are a big part of it. Google's CEO announced the next-generation Tensor Processing Unit, TPUv4, which is twice as fast as the previous generation, TPUv3. TPUs are connected together into supercomputers called pods, and a single TPUv4 pod delivers more than 1 exaflop of computing power (1 exaflop = 10^18 floating-point operations per second). What does this mean in layman's terms? To give you a rough idea, 1 exaflop is roughly the estimated processing power of the human brain at the neural level.
This truly is a historic moment; no single machine has ever reached this kind of computing power before. Previously, to reach 1 exaflop, you'd have had to build a custom supercomputer. Google Cloud users will be able to use TPUv4 pods later this year, and most of the pods will operate at or near 90 per cent carbon-free energy.
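If you like your scale comparisons in code, here's some back-of-the-envelope arithmetic. The workload size and the laptop figure are invented purely to illustrate magnitude:

```python
# Back-of-the-envelope scale of 1 exaflop (10**18 operations per second).
EXAFLOP = 10**18
LAPTOP = 10**11            # ~100 GFLOP/s, a rough figure for a consumer laptop

ops = 10**21               # an invented workload: a sextillion operations
print(ops / EXAFLOP, "seconds on a 1-exaflop pod")          # 1000 seconds
print(ops / LAPTOP / 86_400 / 365, "years on the laptop")   # ~317 years
```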
I was saving the best for last. During this year's keynote, we got a look inside Google's Quantum AI campus in Santa Barbara, California. Quantum computing, even though it is still in its early stages, is a promising field that could greatly improve our ability to deal with complex problems in the future.
“To build better batteries (to lighten the load on the power grid), or to create a fertilizer to feed the world without creating 2% of global carbon emissions (as nitrogen fixation does today), or to create more targeted medicines (to stop the next pandemic before it starts), we need to understand and design molecules better. That means simulating nature accurately. But you can’t simulate molecules very well using classical computers. As you get to even modestly sized molecules, you quickly run out of computing resources. Nature is quantum mechanical: The bonds and interactions among atoms behave probabilistically, with richer dynamics that exhaust the simple classical computing logic.” - says Erik Lucero, lead engineer at the campus, in his blog post.
Quantum computers will operate completely differently from the computers we know today. The result of a computation is not the direct result of mathematical operations on inputs encoded as bits set to 0 or 1. Instead, a quantum computer works by transforming a state composed of particles that can be seen as 0 and 1 at the same time, a physical phenomenon known in quantum mechanics as superposition. The transformations applied to that state, and the interference between its components, then produce results that classical computers simply cannot reach efficiently.
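You can actually get a feel for superposition with nothing more than numpy, since a single qubit is just a two-component vector of amplitudes. A minimal simulation sketch:

```python
# One qubit as a state vector: amplitudes for |0> and |1>.
import numpy as np

zero = np.array([1.0, 0.0])          # the qubit starts firmly in state 0

# The Hadamard gate rotates it into an equal superposition of 0 and 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ zero

probs = np.abs(state) ** 2           # measurement probabilities: squared amplitudes
print(probs)                         # [0.5 0.5] - "0 and 1 at the same time"

# Measuring collapses the state; 1000 simulated shots split roughly evenly
rng = np.random.default_rng(42)
print(np.bincount(rng.choice(2, size=1000, p=probs)))
```

Real quantum hardware earns its keep where this approach breaks down: simulating n qubits classically takes a vector of 2^n amplitudes, which is exactly why molecules overwhelm classical machines.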
Within the decade, Google aims to build a useful, error-corrected quantum computer. This would allow scientists to mirror the way molecules behave in nature and thus help them design new chemical processes and new materials before investing in costly real-life prototypes. It could significantly accelerate the search for solutions to some of the world's most pressing problems, like sustainable energy, reduced emissions, feeding the world's growing population, and unlocking new scientific discoveries.
To put this in simpler words, these quantum computers are to the ones we know today what the Shinkansen or self-driving cars are to the first wheel ever invented. The successful development of a quantum computer would mark a historic milestone of similar significance to the discovery of the use of fire. We will have machines capable of simulating nature itself. My imagination runs wild thinking about what sort of implications this will have for our everyday lives.
“Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly, it's a wonderful problem, because it doesn't look so easy.” - Richard Feynman