Introducing WebQA: A Multi-hop, Multi-modal & Open Domain Reasoning Challenge & Benchmark

We are proud to introduce WebQA, a multi-hop, multi-modal, open-domain question answering challenge and dataset, to be hosted at the NeurIPS 2021 Competition Track. Designed to simulate the heterogeneous information landscape one might expect when performing a web search, WebQA contains 46K knowledge-seeking queries whose answers are found in either images or text snippets, and a system must first determine the relevant sources before reasoning to...
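For readers who want a concrete picture of the task, here is a minimal Python sketch of the two-stage structure the challenge implies: first score candidate sources (text snippets or images) for relevance, then reason over the selected sources to produce an answer. Everything below is an illustrative assumption, not the WebQA API; the overlap-based scorer is a toy stand-in for a real retrieval model.

```python
# Toy sketch of retrieve-then-reason over a mixed pool of sources.
# Hypothetical names throughout; not from the WebQA release.
from dataclasses import dataclass

@dataclass
class Source:
    kind: str      # "text" or "image"
    content: str   # snippet text, or an image caption/identifier

def score_source(query: str, source: Source) -> float:
    """Toy relevance score: word overlap between query and source content."""
    q = set(query.lower().split())
    s = set(source.content.lower().split())
    return len(q & s) / max(len(q), 1)

def select_sources(query: str, pool: list[Source], top_k: int = 2) -> list[Source]:
    """Stage 1: pick the most relevant sources. Stage 2 (multi-hop
    reasoning over the selected sources) would consume this output."""
    ranked = sorted(pool, key=lambda s: score_source(query, s), reverse=True)
    return ranked[:top_k]

pool = [
    Source("text", "The Eiffel Tower is 330 metres tall."),
    Source("image", "Photo of the Space Needle in Seattle."),
]
print(select_sources("How tall is the Eiffel Tower?", pool, top_k=1))
```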

Deep multimodality models in image search ranking stack

Rank Multimodal (RankMM)

The RankMM model combines signals from the text query, the page context, and the image itself to aid image and video retrieval. RankMM models are Visual Language (VL) models that take page context into account to improve image and video retrieval performance in a web-scale search engine.
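As a rough illustration of that idea (not the production RankMM architecture, whose details are in the full post), the sketch below embeds the query, the page context, and the image separately, fuses the three embeddings, and emits a single relevance score per candidate. All dimensions, layer choices, and names here are assumptions made for the example.

```python
import torch
import torch.nn as nn

class MultimodalRanker(nn.Module):
    """Toy fusion ranker: query text + page context + image -> one score."""
    def __init__(self, text_dim=128, image_dim=256, hidden=64):
        super().__init__()
        self.query_proj = nn.Linear(text_dim, hidden)    # query-text features
        self.context_proj = nn.Linear(text_dim, hidden)  # page-context features
        self.image_proj = nn.Linear(image_dim, hidden)   # visual features
        self.scorer = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, query_emb, context_emb, image_emb):
        fused = torch.cat([
            self.query_proj(query_emb),
            self.context_proj(context_emb),
            self.image_proj(image_emb),
        ], dim=-1)
        return self.scorer(fused).squeeze(-1)  # one score per candidate

# Rank three candidates for one query, using random stand-in features:
ranker = MultimodalRanker()
scores = ranker(torch.randn(3, 128), torch.randn(3, 128), torch.randn(3, 256))
print(scores.argsort(descending=True))  # candidate indices, best first
```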

Introducing @MSBing_Dev: Your new way to learn all things Bing

Want to learn more about what’s new with Bing? Now you can get official news and updates from our recently launched @MSBing_Dev Twitter handle. This account is our way to share more of the technology we build and talk all things Bing, directly from our engineers to the people who use it.