Google has unveiled six new search features and developments to help users find and explore content in novel ways.
Mr. Taiwo Kola-Ogunlade, Google's Head of Communications for West Africa and Sub-Saharan Africa, said this in a statement on Thursday in Lagos.
According to Kola-Ogunlade, the new capabilities, which use machine learning, will enable people to access information in new ways.
The six features he highlighted were multisearch, multisearch near me, translation in the blink of an eye, updates to the Google app for iOS, faster ways to find what you are looking for, and new ways to explore information.
According to Kola-Ogunlade, Lens is used to answer more than eight billion queries each month.
“With multisearch, you can take a picture or use a screenshot and then add text to it — similar to the way you might naturally point at something and ask a question about it.
“Multisearch is available in English globally, and will now be rolling out in 70 languages in the next few months.
“Multisearch near me allows you to take a screenshot or a photo of an item, and then find it nearby.
“So, if you have a hankering for your favorite local dish, all you need to do is screenshot it, and Google will connect you with nearby restaurants serving it,” he said.
He said one of the most powerful aspects of visual understanding is its ability to break down language barriers through split-second translation.
“Google has gone beyond translating text to translating pictures – with Google translating text in images over one billion times monthly in more than 100 languages,” Kola-Ogunlade said.
He said that, thanks to major advances in machine learning, Google can now blend translated text into complex images.
“Google has also optimized their machine learning models to do all this in just 100 milliseconds — shorter than the blink of an eye,” Kola-Ogunlade added.
He said this uses the same generative adversarial network (GAN) models that power Magic Eraser on Pixel.
“Google is working to make it possible to ask questions with fewer words, or even none at all, and still help you find what you are looking for.
“For those who do not know exactly what they are looking for until they see it, Google will help you to specify your question.
“Google is reinventing the way it displays search results to better reflect the ways people explore topics to see the most relevant content,” Kola-Ogunlade said.