{{mergeto|Latent semantic analysis|date=July 2012}}
'''Latent semantic indexing''' ('''LSI''') is an indexing and retrieval method that uses a mathematical technique called [[singular value decomposition]] (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text.  LSI is based on the principle that words that are used in the same contexts tend to have similar meanings.  A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.<ref>Deerwester, S., et al, Improving Information Retrieval with Latent Semantic Indexing, Proceedings of the 51st Annual Meeting of the American Society for Information Science 25, 1988, pp. 36–40.</ref>
 
LSI is also an application of [[correspondence analysis]], a multivariate statistical technique developed by [[Jean-Paul Benzécri]]<ref>{{ cite book
| author = Benzécri, J.-P.
| publisher=Dunod |location= Paris, France
| year = 1973
| title = L'Analyse des Données. Volume II. L'Analyse des Correspondances
}}</ref> in the early 1970s, to a [[contingency table]] built from word counts in documents.
 
Called Latent Semantic Indexing because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bell Laboratories in the late 1980s.  The method, also called [[latent semantic analysis]] (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text; that structure can then be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches.  Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria.
 
__TOC__
 
== Benefits of LSI ==
 
LSI overcomes two of the most problematic constraints of Boolean keyword queries: multiple words that have similar meanings ([[synonymy]]) and words that have more than one meaning ([[polysemy]]).  Synonymy is often the cause of [[vocabulary mismatch|mismatches in the vocabulary]] used by the authors of documents and the users of information retrieval systems.<ref>{{cite doi|10.1145/32206.32212}}</ref><ref>{{cite doi|10.1145/1871437.1871474}}</ref>  As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.
 
LSI is also used to perform automated document categorization.  In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.<ref>Landauer, T., et al., Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report, M. I. Jordan, M. J. Kearns & S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10, Cambridge: MIT Press, 1998, pp. 45–51.</ref>    Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories.<ref>{{cite doi|10.1145/288627.288651}}</ref>  LSI uses ''example'' documents to establish the conceptual basis for each category.  During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.
 
Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI.  Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster.  This is very useful when dealing with an unknown collection of unstructured text.
 
Because it uses a strictly mathematical approach, LSI is inherently independent of language.  This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri.  LSI can also perform cross-linguistic concept searching and example-based categorization.  For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.
 
LSI is not restricted to working only with words.  It can also process arbitrary character strings.  Any object that can be expressed as text can be represented in an LSI vector space.<ref>Zukas, Anthony, Price, Robert J., Document Categorization Using Latent Semantic Indexing, White Paper, [[Content Analyst Company]], LLC</ref>  For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.<ref>{{cite doi|10.1093/bioinformatics/bth464}}</ref>
 
LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.).<ref>{{cite doi|10.1007/11427995_68}}</ref>  This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion.  LSI also deals effectively with sparse, ambiguous, and contradictory data.
 
Text does not need to be in sentence form for LSI to be effective.  It can work with lists, free-form notes, email, Web-based content, etc.  As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.
 
LSI has proven to be a useful solution to a number of conceptual matching problems.<ref>Ding, C., A Similarity-based Probability Model for Latent Semantic Indexing, Proceedings of the 22nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1999, pp. 59–65.</ref><ref>Bartell, B., Cottrell, G., and Belew, R., Latent Semantic Indexing is an Optimal Special Case of Multidimensional Scaling, Proceedings, ACM SIGIR Conference on Research and Development in Information Retrieval, 1992, pp. 161–167.</ref>  The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.<ref>Graesser, A., and Karnavat, A., Latent Semantic Analysis Captures Causal, Goal-oriented, and Taxonomic Structures, Proceedings of CogSci 2000, pp. 184–189.</ref>
 
== LSI Timeline ==
 
'''Mid-1960s''' – Factor analysis technique first described and tested (H. Borko and M. Bernick)
 
'''1988''' – Seminal paper on LSI technique published (Deerwester et al.)
 
'''1989''' – Original patent granted (Deerwester et al.)
 
'''1992''' – First use of LSI to assign articles to reviewers<ref>Dumais, S., and Nielsen, J., Automating the Assignment of Submitted Manuscripts to Reviewers, Proceedings of the Fifteenth Annual International Conference on Research and Development in Information Retrieval, 1992, pp. 233–244.</ref>  (Dumais and Nielsen)
 
'''1994''' – Patent granted for the cross-lingual application of LSI (Landauer et al.)
 
'''1995''' – First use of LSI for grading essays (Foltz, et al., Landauer et al.)
 
'''1999''' – First implementation of LSI technology for the intelligence community for analyzing unstructured text (SAIC)
 
'''2002''' – LSI-based product offering to intelligence-based government agencies (SAIC)
 
'''2005''' – First vertical-specific application – publishing – EDB (EBSCO, [[Content Analyst Company]])
 
== Mathematics of LSI ==
 
LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text.  In general, the process involves constructing a weighted term-document matrix, performing a '''Singular Value Decomposition''' on the matrix, and using the matrices produced by the decomposition to identify the concepts contained in the text.
 
=== Term Document Matrix ===
 
LSI begins by constructing a term-document matrix, <math>A</math>, to identify the occurrences of the <math>m</math> unique terms within a collection of <math>n</math> documents.  In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, <math>a_{ij}</math>, initially representing the number of times the associated term appears in the indicated document, <math>\mathrm{tf_{ij}}</math>.  This matrix is usually very large and very sparse.
 
Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data.  The weighting functions transform each cell, <math>a_{ij}</math> of <math>A</math>, to be the product of a local term weight, <math>l_{ij}</math>, which describes the relative frequency of a term in a document, and a global weight, <math>g_i</math>, which describes the relative frequency of the term within the entire collection of documents.
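
For illustration only, a raw count matrix for a small in-memory corpus might be built as in the following Python/NumPy sketch; the toy corpus, the <code>terms</code> vocabulary, and the <code>term_index</code> mapping are assumptions of the example, and a production system would use a real tokenizer and a sparse matrix representation.

<syntaxhighlight lang="python">
import numpy as np

# Toy corpus of n = 3 short "documents" (illustrative assumption).
documents = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs make good pets",
]

# Rows of A correspond to the m unique terms, columns to the n documents.
terms = sorted({word for doc in documents for word in doc.split()})
term_index = {term: i for i, term in enumerate(terms)}

# a_ij = tf_ij, the raw count of term i in document j.
A = np.zeros((len(terms), len(documents)))
for j, doc in enumerate(documents):
    for word in doc.split():
        A[term_index[word], j] += 1.0
</syntaxhighlight>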
 
Some common local weighting functions <ref>
Berry, M. W., and Browne, M., Understanding Search Engines: Mathematical Modeling and Text Retrieval, Society for Industrial and Applied Mathematics, Philadelphia, (2005).</ref> are defined in the following table.
 
{| class="wikitable" border="1" style="width:60%" cellpadding="25" cellspacing="5" align="center"
|-
|  style="width:22%" | '''Binary''' ||
| <math>l_{ij} = 1</math> if the term exists in the document, or else <math>0</math>
|-
|  style="width:22%" | '''TermFrequency''' ||
| <math>l_{ij} = \mathrm{tf}_{ij}</math>, the number of occurrences of term <math>i</math> in document <math>j</math>
|-
|  style="width:22%" | '''Log''' ||
| <math>l_{ij} = \log(\mathrm{tf}_{ij} + 1)</math>
|-
|  style="width:22%" | '''Augnorm''' ||
| <math>l_{ij} = \frac{\Big(\frac{\mathrm{tf}_{ij}}{\max_i(\mathrm{tf}_{ij})}\Big) + 1}{2}</math>
|}
 
Some common global weighting functions are defined in the following table.
 
{| class="wikitable" border="1" style="width:60%" cellpadding="25" cellspacing="5" align="center"
|-
| style="width:22%" | '''Binary''' ||
| <math>g_i = 1</math>
|-
| style="width:22%" | '''Normal''' ||
| <math>g_i = \frac{1}{\sqrt{\sum_j \mathrm{tf}_{ij}^2}}</math>
|-
| style="width:22%" | '''GfIdf''' ||
| <math>g_i = \mathrm{gf}_i / \mathrm{df}_i</math>, where <math>\mathrm{gf}_i</math> is the total number of times term <math>i</math> occurs in the whole collection, and <math>\mathrm{df}_i</math> is the number of documents in which term <math>i</math> occurs.
|-
| style="width:22%" | '''Idf''' ||
| <math>g_i = \log_2 \frac{n}{1+ \mathrm{df}_i}</math>
|-
| style="width:22%" | '''Entropy''' ||
| <math>g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}</math>, where <math>p_{ij} = \frac{\mathrm{tf}_{ij}}{\mathrm{gf}_i}</math>
|}
 
Empirical studies with LSI report that the Log Entropy weighting functions work well, in practice, with many data sets.<ref>Landauer, T., et al., Handbook of Latent Semantic Analysis, Lawrence Erlbaum Associates, 2007.</ref>  In other words, each entry <math>a_{ij}</math> of <math>A</math> is computed as:
 
<math>g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}</math>
 
<math>a_{ij} = g_i \ \log (\mathrm{tf}_{ij} + 1)</math>
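
As a minimal sketch only, the Log local weight and Entropy global weight above could be applied to a raw count matrix such as the one built earlier (<code>log_entropy_weight</code> is a hypothetical helper, and every term is assumed to occur at least once in the collection):

<syntaxhighlight lang="python">
import numpy as np

def log_entropy_weight(A):
    """Return (g, A_w): the global entropy weights g_i and the weighted matrix."""
    m, n = A.shape
    gf = A.sum(axis=1, keepdims=True)        # gf_i: total count of term i in the collection
    p = A / gf                               # p_ij = tf_ij / gf_i
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)   # treat 0 * log(0) as 0
    g = 1.0 + plogp.sum(axis=1, keepdims=True) / np.log(n)   # entropy global weight g_i
    return g, g * np.log(A + 1.0)            # a_ij = g_i * log(tf_ij + 1)
</syntaxhighlight>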
 
=== Rank-Reduced Singular Value Decomposition ===
 
A rank-reduced [[Singular Value Decomposition]] is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text.  The SVD forms the foundation for LSI.<ref>Berry, Michael W., Dumais, Susan T., O'Brien, Gavin W., Using Linear Algebra for Intelligent Information Retrieval, December 1994, SIAM Review 37:4 (1995), pp. 573–595.</ref>  It computes the term and document vector spaces by transforming the single term-frequency matrix, <math>A</math>, into three other matrices: an '''''m''''' by '''''r''''' term-concept vector matrix <math>T</math>, an '''''r''''' by '''''r''''' singular values matrix <math>S</math>, and an '''''n''''' by '''''r''''' concept-document vector matrix, <math>D</math>, which satisfy the following relations:
 
<math>A = TSD^T</math>
 
<math>T^T T = I_r \quad D^T D = I_r </math>
 
<math>S_{1,1} \geq S_{2,2} \geq \ldots \geq  S_{r,r} > 0 \quad S_{i,j} = 0 \; \text{where} \; i \neq j</math>
 
In the formula, '''A''' is the supplied '''''m''''' by '''''n''''' weighted matrix of term frequencies in a collection of text where '''''m''''' is the number of unique terms, and '''''n''''' is the number of documents.  '''T''' is a computed '''''m''''' by '''''r''''' matrix of term vectors where '''''r''''' is the rank of '''A'''—a measure of its unique dimensions '''≤ min(''m,n'')'''.  '''S''' is a computed '''''r''''' by '''''r''''' diagonal matrix of decreasing singular values, and '''D''' is a computed '''''n''''' by '''''r''''' matrix of document vectors.
 
The LSI modification to a standard SVD is to reduce the rank or truncate the singular value matrix '''S''' to size '''''k''''' ≪ '''''r''''', with '''''k''''' typically in the range of 100 to 300 dimensions, effectively reducing the term and document vector matrix sizes to '''''m''''' by '''''k''''' and '''''n''''' by '''''k''''' respectively.  The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of '''A'''.  This reduced set of matrices is often denoted with a modified formula such as:
 
<math>A \approx A_k = T_k S_k D_k^T</math>
 
Efficient LSI algorithms only compute the first '''''k''''' singular values and term and document vectors as opposed to computing a full SVD and then truncating it.
 
Note that this rank reduction is essentially the same as doing [[Principal Component Analysis]] (PCA) on the matrix '''A''', except that PCA subtracts off the means.  PCA provides cleaner mathematics, but loses the sparseness of the '''A''' matrix, which can make it infeasible for large lexicons.
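
For illustration only, a rank-<math>k</math> decomposition of a weighted matrix could be computed as in the following NumPy sketch, which truncates a full SVD for simplicity; as noted above, efficient implementations compute only the leading <math>k</math> factors, for example via a truncated or sparse SVD routine.

<syntaxhighlight lang="python">
import numpy as np

def lsi_decompose(A_w, k):
    """Rank-k truncated SVD: A_w is approximated by T_k S_k D_k^T."""
    T, s, Dt = np.linalg.svd(A_w, full_matrices=False)
    T_k = T[:, :k]              # m x k term vectors
    S_k = np.diag(s[:k])        # k x k diagonal matrix of singular values
    D_k = Dt[:k, :].T           # n x k document vectors
    return T_k, S_k, D_k

# Example usage with the toy matrices from the earlier sketches:
# g, A_w = log_entropy_weight(A)
# T_k, S_k, D_k = lsi_decompose(A_w, k=2)
</syntaxhighlight>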
 
== Querying and Augmenting LSI Vector Spaces ==
 
The computed '''T'''<sub>''k''</sub> and '''D'''<sub>''k''</sub> matrices define the term and document vector spaces, which, together with the computed singular values '''S'''<sub>''k''</sub>, embody the conceptual information derived from the document collection.  The similarity of terms or documents within these spaces is a function of how close they are to each other, typically measured by the cosine of the angle between the corresponding vectors.
 
The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index.  By a simple transformation of the '''A = T S D<sup>T</sup>''' equation into the equivalent '''D = A<sup>T</sup> T S<sup>−1</sup>''' equation, a new vector, '''''d''''', for a query or for a new document can be created by computing a new column in '''A''' and then multiplying the new column by '''T S<sup>−1</sup>'''.  The new column in '''A''' is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
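
As a sketch only, folding in a query and ranking the indexed documents by cosine similarity might look as follows, reusing the hypothetical <code>term_index</code>, global weights <code>g</code>, and truncated matrices from the earlier examples:

<syntaxhighlight lang="python">
import numpy as np

def fold_in(text, term_index, g, T_k, S_k):
    """Map a query or new document into the k-dimensional LSI document space."""
    q = np.zeros(len(term_index))
    for word in text.split():
        i = term_index.get(word)     # terms unknown to the original index are ignored
        if i is not None:
            q[i] += 1.0
    q = g.ravel() * np.log(q + 1.0)  # same local/global weighting as the original index
    return q @ T_k @ np.linalg.inv(S_k)      # d = q^T T_k S_k^-1

def rank_documents(d, D_k):
    """Rank indexed documents by cosine similarity to the folded-in vector d."""
    sims = (D_k @ d) / (np.linalg.norm(D_k, axis=1) * np.linalg.norm(d) + 1e-12)
    return np.argsort(-sims)         # document indices, best match first
</syntaxhighlight>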
 
A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored.  These terms will have no impact on the global weights and learned correlations derived from the original collection of text.  However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.
 
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called ''folding in''.  Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added.  When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed, or an incremental update method (such as the one described in <ref name="brand2006">{{cite journal | url=http://www.merl.com/reports/docs/TR2006-059.pdf |format=PDF| title=Fast Low-Rank Modifications of the Thin Singular Value Decomposition | author=Matthew Brand | journal=Linear Algebra and Its Applications | volume=415 | pages=20–30 | year=2006 | doi=10.1016/j.laa.2005.07.021 }}</ref>) must be used.
 
== Additional Uses of LSI ==
 
It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems.  As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.
 
LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization.<ref>Dumais, S., Latent Semantic Analysis, ARIST Review of Information Science and Technology, vol. 38, 2004, Chapter 4.</ref>  Below are some other ways in which LSI is being used:
 
* Information discovery<ref>Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery, the Sedona Conference, 2007, pp. 189–223.</ref>  (eDiscovery, Government/Intelligence community, Publishing)
* Automated document classification (eDiscovery, Government/Intelligence community, Publishing)<ref>Foltz, P. W. and Dumais, S. T. Personalized Information Delivery:  An analysis of information filtering methods, Communications of the ACM, 1992, 34(12), 51-60.</ref>
* Text summarization<ref>Gong, Y., and Liu, X., Creating Generic Text Summaries, Proceedings, Sixth International Conference on Document Analysis and Recognition, 2001, pp. 903–907.</ref>  (eDiscovery, Publishing)
* Relationship discovery<ref>Bradford, R., Efficient Discovery of New Information in Large Text Databases, Proceedings, IEEE International Conference on Intelligence and Security Informatics, Atlanta, Georgia, LNCS Vol. 3495, Springer, 2005, pp. 374–380.</ref>  (Government, Intelligence community, Social Networking)
* Automatic generation of link charts of individuals and organizations<ref>Bradford, R., Application of Latent Semantic Indexing in Generating Graphs of Terrorist Networks, in: Proceedings, IEEE International Conference on Intelligence and Security Informatics, ISI 2006, San Diego, CA, USA, May 23–24, 2006, Springer, LNCS vol. 3975, pp. 674–675.</ref>  (Government, Intelligence community)
* Matching technical papers and grants with reviewers<ref>Yarowsky, D., and Florian, R., Taking the Load off the Conference Chairs: Towards a Digital Paper-routing Assistant, Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very-Large Corpora, 1999, pp. 220–230.</ref>  (Government)
* Online customer support<ref>Caron, J., Applying LSA to Online Customer Support: A Trial Study, Unpublished Master's Thesis, May 2000.</ref> (Customer Management)
* Determining document authorship<ref>Soboroff, I., et al, Visualizing Document Authorship Using N-grams and Latent Semantic Indexing,  Workshop on New Paradigms in Information Visualization and Manipulation, 1997, pp. 43–48.</ref>  (Education)
* Automatic keyword annotation of images<ref>Monay, F., and Gatica-Perez, D., On Image Auto-annotation with Latent Space Models, Proceedings of the 11th ACM international conference on Multimedia, Berkeley, CA, 2003, pp. 275–278.</ref>
* Understanding software source code<ref>Maletic, J., and Marcus, A., Using Latent Semantic Analysis to Identify Similarities in Source Code to Support Program Understanding, Proceedings of 12th IEEE International Conference on Tools with Artificial Intelligence, Vancouver, British Columbia, November 13–15, 2000, pp. 46–53.</ref>  (Software Engineering)
* Filtering [[Spam (electronic)|spam]]<ref>Gee, K., Using Latent Semantic Indexing to Filter Spam, in: Proceedings, 2003 ACM Symposium on Applied Computing, Melbourne, Florida, pp. 460–464.</ref>  (System Administration)
* Information visualization<ref>Landauer, T., Laham, D., and Derr, M., From Paragraph to Graph: Latent Semantic Analysis for Information Visualization, Proceedings of the National Academy of Science, 101, 2004, pp. 5214–5219.</ref>
* [[Automated essay scoring|Essay scoring]]<ref>Foltz, Peter W., Laham, Darrell, and Landauer, Thomas K., Automated Essay Scoring: Applications to Educational Technology, Proceedings of EdMedia,  1999.</ref>  (Education)
* [[Literature-based discovery]]<ref>Gordon, M., and Dumais, S., Using Latent Semantic Indexing for Literature Based Discovery, Journal of the American Society for Information Science, 49(8), 1998, pp. 674–685.</ref>
 
LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation.  In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential.  Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.<ref>There Has to be a Better Way to Search, 2008, White Paper, Fios, Inc.</ref>
 
== Challenges to LSI ==
 
Early challenges to LSI focused on scalability and performance.  LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques.<ref>Karypis, G., Han, E., Fast Supervised Dimensionality Reduction Algorithm with Applications to Document Categorization and Retrieval, Proceedings of CIKM-00, 9th ACM Conference on Information and Knowledge Management.</ref>  However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome.  Real-world LSI applications involving more than 30 million documents, fully processed through the matrix and SVD computations, are not uncommon.
 
Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD.  As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts.  The actual number of dimensions that can be used is limited by the number of documents in the collection.  Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents).<ref>Bradford, R., An Empirical Study of Required Dimensionality for Large-scale Latent Semantic Indexing Applications, Proceedings of the 17th ACM Conference on Information and Knowledge Management, Napa Valley, California, USA, 2008, pp. 153–162.</ref>  However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection.<ref>Landauer, Thomas K., and Dumais, Susan T., Latent Semantic Analysis, Scholarpedia, 3(11):4356, 2008.</ref>
 
Checking the amount of variance in the data after computing the SVD can be used to determine the optimal number of dimensions to retain.  The variance contained in the data can be viewed by plotting the singular values (S) in a [[scree plot]].  Some LSI practitioners select the dimensionality associated with the knee of the curve as the cut-off point for the number of dimensions to retain.  Others argue that some quantity of the variance must be retained, and the amount of variance in the data should dictate the proper dimensionality to retain.  Seventy percent is often mentioned as the amount of variance in the data that should be used to select the optimal dimensionality for recomputing the SVD.<ref>Cangelosi, R., Goriely A., Component Retention In Principal Component Analysis With Application to Cdna Microarray Data, BMC Biology Direct 2(2) (2007).</ref><ref>Jolliffe, L. T., Principal Component Analysis, Springer-Verlag, New York, (1986).</ref><ref>Hu, X., Z. Cai, et al., LSA: First Dimension and Dimensional Weighting, 25th Annual Meeting of the Cognitive Science Society, Boston, MA.</ref>
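
As an illustrative sketch only, and using the common convention that the variance captured by each dimension is proportional to the squared singular value, the smallest dimensionality retaining a target share of the variance could be estimated as follows (<code>choose_k</code> and the 70% default are assumptions mirroring the rule of thumb above):

<syntaxhighlight lang="python">
import numpy as np

def choose_k(singular_values, target=0.70):
    """Smallest k whose leading singular values retain `target` of the variance."""
    var = np.asarray(singular_values, dtype=float) ** 2
    cumulative = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cumulative, target)) + 1
</syntaxhighlight>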
 
==See also==
 
* [[Latent semantic analysis]]
* [[Latent Semantic Structure Indexing]]
* [[Principal component analysis]]
* [[Correspondence analysis]]
 
== References ==
 
{{Reflist}}
 
==External links==
* [http://www.cs.utk.edu/~lsi/ Michael Berry’s site]
* [http://radimrehurek.com/gensim Gensim] contains a scalable Python+[[NumPy]] implementation of LSI, even for datasets larger than the available RAM.
* [http://scgroup.hpclab.ceid.upatras.gr/scgroup/Projects/TMG/ Text to Matrix Generator (TMG)]  MATLAB toolbox that can be used for various tasks in text mining (TM) specifically  i) indexing, ii) retrieval, iii) dimensionality reduction, iv) clustering, v) classification. Most of TMG is written in MATLAB and parts in Perl. It contains implementations of LSI, clustered LSI, NMF and other methods.
* [http://www.youtube.com/watch?v=QGd06MTRMHs Stanford University Andrew Ng Video on LSI]
 
== Further reading ==
 
*{{cite book|authors=Berry, M. W., Browne M.|title=Understanding Search Engines: Mathematical Modeling and Text Retrieval|location=Philadelphia|publisher=Society for Industrial and Applied Mathematics|year=2005|isbn=978-0898715811|url=http://www.mblazquez.es/blog-ccdoc-recuperacion/documentos/book_understanding-search-engines.pdf}}
*{{cite book|editors=Berry, M. W.|title=Survey of Text Mining: Clustering, Classification, and Retrieval|location=New York|publisher=Springer|year=2004|url=https://perso.uclouvain.be/vincent.blondel/publications/08-textmining.pdf|isbn=978-0387955636}}
*{{cite book|authors=Landauer, T., et al.|title=Handbook of Latent Semantic Analysis|publisher=Lawrence Erlbaum Associates|year=2007|isbn= 978-0805854183|url=http://books.google.de/books/about/Handbook_of_latent_semantic_analysis.html?id=jgVWCuFXePEC&redir_esc=y}}
*{{cite book|authors=Manning, C. D., Schutze H.|title=Foundations of Statistical Natural Language Processing|location=Cambridge, MA|publisher=The MIT Press|year=1999|url=http://nlp.stanford.edu/fsnlp/promo/contents.ps|isbn=9780262133609 }} [http://nlp.stanford.edu/fsnlp/ Companion webpage]
 
{{DEFAULTSORT:Latent Semantic Indexing}}
[[Category:Information retrieval]]
[[Category:Semantic Web]]
