Introducing Bing Search Quality Insights

Dr. Harry Shum, Corporate Vice President, Bing R&D

I’ve been working in search at Microsoft for the last seven years, and there has never been a more fascinating or challenging time to be in the search space. Today, search spans the entire spectrum of computer science, from distributed systems and machine learning to natural language understanding and user experience design. The graph theory we learned in graduate school 20 years ago, on graphs with a few dozen nodes and edges, must now be extended to cope with a web graph of billions of documents. More web data is created in a single day than was created in the entirety of 1999. Moreover, the types of data crawled by search engines are evolving in astonishing ways. What started as indexing simple web documents has blossomed into a dizzying array of data types – from rich multimedia content to real-time streams to social conversations, just to name a few.

In June 2009, we launched Bing (www.bing.com) with the aspiration of building a search engine that better understands what people have in mind while searching, so that we can help them accomplish their tasks faster and more easily. The core of a great search engine has been, and will always remain, the same: delivering comprehensive, relevant and unbiased results that people can trust. We use thousands of signals, from queries to documents to user feedback, to determine the best search results, and in turn make hundreds of improvements to our features every year, from small tweaks to core algorithm updates. As the web grows, we continue to take advantage of the collective wisdom in social, real-time, geospatial and contextual signals to further improve the search quality of Bing.
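To make the idea of combining many signals concrete, here is a purely illustrative Python sketch of a ranker scoring documents with a weighted combination of per-document signals. The signal names and weights are hypothetical examples for exposition only, not Bing’s actual features or ranking model; in practice such weights (and far richer models) are learned from judged data.

```python
# Toy illustration of combining ranking signals into a single score.
# All signal names and weights are hypothetical, not Bing's actual features.
WEIGHTS = {
    "text_match": 0.5,      # how well the document text matches the query
    "link_authority": 0.3,  # link-graph importance of the document
    "click_feedback": 0.2,  # aggregated user-engagement signal
}

def score(doc: dict) -> float:
    """Linear combination of signals; missing signals default to 0."""
    return sum(w * doc.get(name, 0.0) for name, w in WEIGHTS.items())

docs = [
    {"id": "a", "text_match": 0.9, "link_authority": 0.2, "click_feedback": 0.5},
    {"id": "b", "text_match": 0.6, "link_authority": 0.8, "click_feedback": 0.7},
]

# Rank documents by descending score.
for doc in sorted(docs, key=score, reverse=True):
    print(doc["id"], round(score(doc), 3))
```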

Over the last two and a half years we have made tremendous progress on Bing search quality, measured scientifically through offline human judges and online user engagement. We employ human judges to rate and compare search results from different algorithms, and we also flight new approaches to a select set of users to gather online feedback before rolling out improvements. Search quality goes beyond scientific metrics such as DCG, nDCG, side-by-side comparisons and pSkip, however. Users often perceive search quality by how quickly the results appear, how much help they get formulating the query (e.g., auto-suggest or type-ahead), and how easy it is to decide where to click (e.g., better snippets for web documents). There is always more that users expect from search quality than what we currently understand and present.
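For readers unfamiliar with these offline metrics, here is a minimal Python sketch of DCG and nDCG computed from human relevance judgments, using one standard formulation (exponential gain with a log2 rank discount). The example ratings are made up, and this is not Bing’s internal implementation.

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: graded relevance discounted by log2 of rank.
    Uses the common (2^rel - 1) gain formulation."""
    return sum(
        (2 ** rel - 1) / math.log2(rank + 2)  # rank is 0-based, so discount is log2(rank + 2)
        for rank, rel in enumerate(relevances)
    )

def ndcg(relevances):
    """Normalized DCG: DCG divided by the DCG of the ideal (sorted) ordering,
    yielding a score in [0, 1]."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example: human judges rate the top five results on a 0-4 relevance scale.
judged = [3, 2, 3, 0, 1]
print(f"DCG@5  = {dcg(judged):.3f}")
print(f"nDCG@5 = {ndcg(judged):.3f}")
```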

With that in mind, today we are launching a new blog series we’re calling “Bing Search Quality Insights,” aimed at giving you deeper insight into the algorithms, trends and people behind Bing. This post is the first in a series that will take you behind the search box for an up-close view into the core of the Bing search engine. Quality improvements in Bing are often subtle, but those little changes are frequently the result of years of research. In the coming weeks and months, you will hear from members of my team on a range of topics, from the complexities of social search and disambiguating spelling errors to whole-page relevance and making search more personal. We will also highlight the ideas and projects on which we have collaborated with colleagues from Microsoft Research and academia to advance the state of the art for our industry. We hope these posts will not only be useful to our readers but will also spark conversations that help us all move the search industry forward.

Today, my colleague Jan Pedersen, Chief Scientist for Core Search at Bing, kicks off the discussion with an overview of how we’re tackling whole-page relevance at Bing. Jan delves into how we go beyond the traditional concept of page rank to deliver rich “answers” – like videos, images and maps – that are relevant and help you get more done.

We would love to hear from you about what you would like to see covered, so please join the conversation.

On behalf of the Bing team,

Harry Shum