Microsoft Enhances Neural Network Model to Improve Bing Search
In a recent announcement, Microsoft detailed a large neural network model designed to improve the relevance of Bing search results. The company said the model, a "sparse" neural network, will complement existing large Transformer-based networks such as OpenAI's GPT-3. Transformer-based models have attracted a great deal of attention in the machine learning world: they excel at understanding semantic relationships, and for that reason they have been used to enhance Bing search.
Microsoft's new Make Every feature Binary (MEB) model is used to improve Bing search. The sparse model has 135 billion parameters and space for over 200 billion binary features. Microsoft stated that MEB can map single facts to individual features, which allows the model to gain a more nuanced understanding of each fact.
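To make the idea of binary features concrete, here is a minimal sketch of how a sparse model with binary features could score a query/document pair. This is an illustration only, not Microsoft's implementation: the feature scheme (hashing query-term/document-term pairs), the feature-space size, and the toy weights are all assumptions. Each feature is either present (1) or absent (0), so scoring reduces to summing the learned weights of the features that fire.

```python
# Illustrative sketch of sparse binary features (NOT the actual MEB model).
import hashlib

# Tiny feature space for demonstration; MEB reportedly has room for
# more than 200 billion binary features.
FEATURE_SPACE = 2 ** 16

def feature_id(name: str) -> int:
    """Hash a feature name into a fixed-size feature space."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return int(digest, 16) % FEATURE_SPACE

def binary_features(query: str, doc: str) -> set[int]:
    """Fire one binary feature per (query term, document term) pair.

    Only the IDs of features that are present are stored -- this is
    what makes the representation sparse.
    """
    return {
        feature_id(f"q:{q}|d:{d}")
        for q in query.lower().split()
        for d in doc.lower().split()
    }

def score(features: set[int], weights: dict[int, float]) -> float:
    """Sparse dot product: sum the weights of the features that fired."""
    return sum(weights.get(f, 0.0) for f in features)

# Toy usage: in a real system the weights would be learned from clicks.
feats = binary_features("best pizza", "pizza restaurants near me")
weights = {f: 0.1 for f in feats}
relevance = score(feats, weights)
```

Because every feature is binary, the model never multiplies feature values; it only looks up and adds weights, which is what makes very large feature spaces and fast lookups feasible.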
The company added that MEB, trained on more than 500 billion queries drawn from over three years of Bing searches, runs in production for 100 percent of Bing searches across all regions and languages. It is the largest universal language model the company is serving to date. The model occupies 720GB when loaded into memory and sustains 35 million feature lookups during peak traffic.