{{Orphan|date=February 2009}}
{{Unreferenced|date=March 2008}}
== IR Evaluation ==

The evaluation of an information retrieval system is the process of assessing how well the system meets the information needs of its users. Traditional evaluation metrics, designed for [[Standard Boolean model|Boolean retrieval]] or top-k retrieval, include [[precision and recall]].

*'''Precision''' is the fraction of retrieved documents that are [[Relevance (information retrieval)|relevant]] to the query:

:<math> \mbox{precision}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{retrieved documents}\}|} </math>

*'''Recall''' is the fraction of the documents relevant to the query that are successfully retrieved:

:<math> \mbox{recall}=\frac{|\{\mbox{relevant documents}\}\cap\{\mbox{retrieved documents}\}|}{|\{\mbox{relevant documents}\}|} </math>
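The two measures follow directly from the set definitions above. A minimal illustrative sketch in Python (not a standard library API; the document identifiers are made up for the example), treating the relevant and retrieved documents as sets:

<syntaxhighlight lang="python">
def precision(relevant, retrieved):
    """Fraction of the retrieved documents that are relevant."""
    retrieved = set(retrieved)
    return len(set(relevant) & retrieved) / len(retrieved)


def recall(relevant, retrieved):
    """Fraction of the relevant documents that are retrieved."""
    relevant = set(relevant)
    return len(relevant & set(retrieved)) / len(relevant)


# Example: 3 of the 4 retrieved documents are relevant,
# and 3 of the 5 relevant documents were found.
relevant_docs = {"d1", "d2", "d3", "d4", "d5"}
retrieved_docs = {"d1", "d2", "d3", "d9"}
print(precision(relevant_docs, retrieved_docs))  # 0.75
print(recall(relevant_docs, retrieved_docs))     # 0.6
</syntaxhighlight>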
For modern (Web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents and few users will be interested in reading all of them. [[Precision and recall#Precision|Precision]] at k documents (P@k) is still a useful metric (e.g., P@10, the fraction of relevant results among the top ten, roughly measures the quality of the first search results page), but it fails to take into account the positions of the relevant documents among the top k.
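A minimal sketch of P@k under the same set-based assumptions as above, taking the ranked result list in order (identifiers again illustrative):

<syntaxhighlight lang="python">
def precision_at_k(relevant, ranked, k):
    """P@k: fraction of the top-k ranked results that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k


ranked = ["d7", "d1", "d9", "d2", "d8", "d3", "d4", "d10", "d6", "d5"]
relevant = {"d1", "d2", "d3", "d4", "d5"}
print(precision_at_k(relevant, ranked, 10))  # 0.5 -- 5 relevant hits in the top 10
</syntaxhighlight>

Note that any permutation of the same top-10 results yields the same P@10, which is exactly the positional blindness described above.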
Virtually all modern evaluation metrics (e.g., [[Information retrieval#Mean average precision|mean average precision]], [[Information retrieval#Discounted cumulative gain|discounted cumulative gain]]) are designed for '''ranked retrieval''' without any explicit rank cutoff, taking into account the relative order of the documents retrieved by the search engines and giving more weight to documents returned at higher ranks.
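As an illustration of how these rank-sensitive metrics can be computed for a single query, here is a hedged sketch using the common conventions: average precision (which mean average precision then averages over queries) takes the mean of P@k at each rank holding a relevant document, and discounted cumulative gain divides graded relevance gains by a log2 discount. Function and variable names are illustrative, not from any cited source:

<syntaxhighlight lang="python">
import math


def average_precision(relevant, ranked):
    """AP for one query: mean of P@k taken at each rank k holding a relevant hit."""
    hits = 0
    total = 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0


def dcg(gains):
    """DCG: graded relevance gains discounted by log2(rank + 1), ranks from 1."""
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))


ranked = ["d1", "d9", "d2", "d8", "d3"]
print(average_precision({"d1", "d2", "d3"}, ranked))  # (1/1 + 2/3 + 3/5) / 3 ≈ 0.756
print(dcg([3, 0, 2, 0, 1]))  # 3/log2(2) + 2/log2(4) + 1/log2(6) ≈ 4.387
</syntaxhighlight>

Swapping a relevant document to a lower rank lowers both scores, which is the weighting toward higher ranks described above.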
==See also==
* [[Information retrieval]]
* [[Precision and recall]]
* [[Web search engine]]
==Further reading==
* Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. [http://www-csli.stanford.edu/~hinrich/information-retrieval-book.html Introduction to Information Retrieval]. Cambridge University Press, 2008.
* Stefan Büttcher, Charles L. A. Clarke, and Gordon V. Cormack. [http://www.ir.uwaterloo.ca/book/ Information Retrieval: Implementing and Evaluating Search Engines]. MIT Press, Cambridge, Mass., 2010.
[[Category:Information retrieval]]
[[Category:Searching]]