Google’s search engine hasn’t always been very smart. In its early years, Google simply found the pages that matched the words in your query and ranked them. It’s hard to answer a question without understanding it, but that’s exactly what Google did.
Google constantly improved its algorithms, added personalization options, and started to match synonyms and expand abbreviations, but Knowledge Graph and Hummingbird were the greatest leaps: they put machine learning to work and made Google smarter. Google started to understand the meaning behind a question, to disambiguate words, and to find answers, not just pages that include the words from the query.
Bloomberg reports that Google uses even more artificial intelligence to answer questions and rank results. RankBrain is a new AI system that has been used for the past few months to improve search results. “If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.”
15% of the queries Google gets every day are new, and RankBrain helps Google understand them. Here’s an example of a complicated query: “What’s the title of the consumer at the highest level of a food chain?” RankBrain finds words and phrases that have a similar meaning and highlights them (for example: predators). “In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query.”
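The intuition behind guessing what an unfamiliar word might mean is often implemented with word vectors: represent words as points in space and treat nearby points as similar in meaning. Here’s a toy sketch of that idea; the vocabulary, vectors, and function names are all made up for illustration, since RankBrain’s actual internals aren’t public.

```python
import math

# Hypothetical 3-dimensional "embeddings" for a tiny vocabulary.
# Real systems learn these vectors from huge amounts of text.
EMBEDDINGS = {
    "predator": [0.9, 0.8, 0.1],
    "consumer": [0.8, 0.7, 0.2],
    "carnivore": [0.85, 0.75, 0.15],
    "recipe": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """How close two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(word, k=2):
    """Return the k vocabulary words closest in meaning to `word`."""
    query = EMBEDDINGS[word]
    scores = [
        (other, cosine_similarity(query, vec))
        for other, vec in EMBEDDINGS.items()
        if other != word
    ]
    return [w for w, _ in sorted(scores, key=lambda s: -s[1])[:k]]

print(most_similar("consumer"))  # the food-chain words, not "recipe"
```

In this toy space, “consumer” lands next to “carnivore” and “predator” rather than “recipe”, which is the kind of substitution that lets a never-before-seen query be matched against familiar ones.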
Google’s CEO, Sundar Pichai, says that “machine learning is a core transformative way by which we are rethinking everything we are doing”. Machine learning has already helped Google improve image search, automatic translation, and speech recognition, while deep learning is showing some promising results of its own, such as smarter photo search with object recognition.
“In tandem with other researchers at Google, Andrew Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain. This movement seeks to meld computer science with neuroscience — something that never quite happened in the world of artificial intelligence,” reports Wired. “Deep Learning is a first step in this new direction. Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like.”
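The “multi-layered computer networks” Wired describes can be sketched in a few lines: each layer combines its inputs through weights and a nonlinear activation, and layers are stacked so later ones react to what earlier ones computed. This is a minimal, made-up forward pass to show the layered structure, not anything resembling Google Brain’s actual systems.

```python
import math

def sigmoid(x):
    """Squash a value into (0, 1), a common neuron activation."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of the inputs,
    adds its bias, and passes the result through the activation."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

def forward(inputs, network):
    """Feed the inputs through each layer in turn."""
    activations = inputs
    for weights, biases in network:
        activations = layer(activations, weights, biases)
    return activations

# A toy 2-input network: one hidden layer of 2 neurons, one output neuron.
# The weights here are arbitrary; real networks learn them from data.
network = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]),  # hidden layer
    ([[1.2, -0.7]], [0.05]),                   # output layer
]
print(forward([1.0, 0.0], network))
```

Training, i.e. adjusting those weights so the outputs match examples, is where the “building up an understanding” happens; the sketch above only shows how information flows through the stacked layers.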