Query Weighting for Ranking Model Adaptation

Peng Cai^1, Wei Gao^2, Aoying Zhou^1, Kam-Fai Wong^3
^1 East China Normal University
^2 The Chinese University of Hong Kong
^3 The Chinese University of Hong Kong; MoE Key Laboratory on High Reliability Software Technology (CUHK Sub-Lab)


Abstract

We propose to directly measure the importance of queries in the source domain to the target domain, where no rank labels of documents are available, a problem referred to as query weighting. Query weighting is a key step in ranking model adaptation. Because the learning objects of ranking algorithms are grouped by query, we argue that it is more reasonable to conduct importance weighting at the query level than at the document level. We present two query weighting schemes. The first compresses each query into a query feature vector, which aggregates all document instances in the same query, and then weights queries based on these vectors. This method estimates query importance efficiently by compressing the query data, but the compression risks losing information. The second measures the similarity between a source query and each individual target query, and then combines these fine-grained similarity values to estimate the source query's importance. Adaptation experiments on the LETOR 3.0 dataset demonstrate that query weighting significantly outperforms document instance weighting methods.
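To make the two schemes concrete, below is a minimal sketch of how such query weights could be computed. It is not the paper's implementation: the aggregation statistics (per-feature mean and variance), the Gaussian-kernel similarity, the bandwidth parameter sigma, and all function names are illustrative assumptions introduced here for exposition.

import numpy as np

def query_feature_vector(doc_features):
    # Compress one query's document instances (n_docs x n_features)
    # into a single vector by aggregating each feature dimension.
    # Mean and variance are one plausible choice of aggregators.
    X = np.asarray(doc_features, dtype=float)
    return np.concatenate([X.mean(axis=0), X.var(axis=0)])

def query_weight_by_compression(source_queries, target_queries, sigma=1.0):
    # Scheme 1: weight each source query by its similarity (here, a
    # Gaussian kernel) to the centroid of the target query vectors.
    src = np.stack([query_feature_vector(q) for q in source_queries])
    tgt = np.stack([query_feature_vector(q) for q in target_queries])
    center = tgt.mean(axis=0)
    d2 = ((src - center) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def query_weight_fine_grained(source_queries, target_queries, sigma=1.0):
    # Scheme 2: compare each source query against every target query
    # and combine the pairwise similarities (here, by averaging).
    src = np.stack([query_feature_vector(q) for q in source_queries])
    tgt = np.stack([query_feature_vector(q) for q in target_queries])
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2)
    sims = np.exp(-d2 / (2.0 * sigma ** 2))
    return sims.mean(axis=1)

# Example: two source and two target queries, 3 ranking features per document.
src = [np.random.rand(5, 3), np.random.rand(4, 3)]
tgt = [np.random.rand(6, 3), np.random.rand(3, 3)]
print(query_weight_by_compression(src, tgt))
print(query_weight_fine_grained(src, tgt))

The resulting per-query weights would then be used to re-weight source-domain queries when training the ranking model for the target domain.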

Full paper: http://www.aclweb.org/anthology/P/P11/P11-1012.pdf