Given a large collection of documents, one is always confronted with the problem of locating the desired information. The task of text retrieval (TR) can thus be loosely described as effectively finding the documents that contain the information a user needs. This usually involves converting all documents and the user's information need (the query) into some internal representation ('indexing' the documents and queries) and then matching the documents against the queries over these representations. TR techniques are becoming ever more important as the explosive growth of the Internet floods users with information.
Apart from the obvious applications of TR, its techniques can be tailored to solve several related problems:
This 'mode' of TR can be described as a 'subscription' to the information a user cares about. The user can explicitly formulate rules determining which documents are relevant, or implicitly inform the system by supplying sample documents of interest (so that the system 'learns' the user's interests); either way a user profile is established. Upon the arrival of new documents the system then evaluates whether each one conforms to the user's interests and acts accordingly.
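As a minimal sketch of this filtering mode, the profile below is learned from sample documents as a term-frequency centroid, and an incoming document is delivered only when its cosine similarity to the profile exceeds a threshold. The helper names, the toy documents, and the threshold value are illustrative assumptions, not a prescribed design.

    import math
    from collections import Counter

    def term_vector(text):
        """Bag-of-words term-frequency vector (no stemming, no stop list)."""
        return Counter(text.lower().split())

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u if t in v)
        norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    def learn_profile(sample_docs):
        """User profile = centroid of the sample documents' term vectors."""
        profile = Counter()
        for doc in sample_docs:
            profile.update(term_vector(doc))
        return profile

    def matches_interest(profile, new_doc, threshold=0.2):
        """Deliver the document only if it is similar enough to the profile."""
        return cosine(profile, term_vector(new_doc)) >= threshold

    profile = learn_profile(["hippocampus memory function",
                             "memory consolidation in the hippocampus"])
    print(matches_interest(profile, "new study on hippocampus and spatial memory"))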
In addition to basic retrieval driven by an information need, a TR system can be designed to go further and answer a user's questions. For example, a user could submit the question "what functions might be provided by hippocampus?" instead of asking the system to find all relevant documents about "functions of hippocampus" and then perusing them to find the answer.
In a similar scenario, a TR system can be designed to summarize a predefined set of topics, such as summarizing the "working progress of International Space Station" based on documents provided by a news service.
Generally, the task of text retrieval can be viewed as a three-stage process. In the first stage each document, including the query, is parsed, and the information it conveys is extracted and stored in a particular document representation. Next, matching occurs between the representation of each document and that of the query, and a list of documents ranked by their similarity to the query is computed. Finally, and optionally, the computed ranking is used to refine the query, and the process starts over from the first stage until a satisfactory result is obtained.
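The skeleton below makes the three stages explicit. It is only a structural sketch: index(), similarity(), and refine() are placeholders for whatever representation, matching, and feedback methods a concrete system chooses, and the number of feedback rounds is arbitrary.

    def retrieve(documents, query, index, similarity, refine, rounds=2):
        doc_reprs = [index(d) for d in documents]          # stage 1: representation
        for _ in range(rounds):
            query_repr = index(query)
            ranking = sorted(range(len(documents)),        # stage 2: matching and ranking
                             key=lambda i: similarity(doc_reprs[i], query_repr),
                             reverse=True)
            query = refine(query, ranking, documents)      # stage 3: query refinement
        return ranking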
There are several important issues here:
This is arguably the most important aspect of a TR system. The representation determines what part of a document's content is extracted from the original text and how. It could be purely statistical, purely symbolic (logical), hybrid, or something else entirely. The important goal is to extract only the content-bearing portion of the text so that indexing and matching can be done effectively and efficiently.
Another focus is on how to represent a document efficiently. Latent semantic indexing (LSI), for instance, reduces the dimensionality of vector-based representations using singular value decomposition.
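A minimal LSI sketch follows, assuming a small term-document matrix has already been built: truncated SVD maps documents (and, by folding in, queries) into a k-dimensional latent space where cosine similarity is computed. The toy matrix, the choice k = 2, and the example query are illustrative only.

    import numpy as np

    # rows = terms, columns = documents
    A = np.array([[2, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 2, 0, 1],
                  [0, 0, 1, 2]], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                      # number of latent dimensions kept
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    doc_vectors = (S_k @ Vt_k).T               # documents in the latent space

    def fold_in(query_term_vector):
        """Project a query's term vector into the same latent space."""
        return np.linalg.inv(S_k) @ U_k.T @ query_term_vector

    q = fold_in(np.array([1.0, 1.0, 0.0, 0.0]))    # query containing the first two terms
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    print(scores)                               # cosine similarity of each document to the query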
There are several reasons to treat queries differently from the ordinary documents in the collection. Firstly, it is obvious that misinterpreting a query does much more harm than misinterpreting a document, assuming the collection is sufficiently large. Secondly, queries are by nature very different from the documents in the collection. Here is a topic statement used in TREC-8:
    A relevant document will focus on the causes of the lack of integration in a significant way; that is, the mere mention of immigration difficulties is not relevant. Documents that discuss immigration problems unrelated to Germany are also not relevant.
It is obvious that a more sophisticated parsing method is needed, one that favors matches on content-bearing words (e.g. 'immigration') over functional words (e.g. 'document'). The query keywords may also need to be expanded semantically in order to take synonyms into account. Moreover, a flexible representation is needed to reflect the logic conveyed in the query.
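A hedged sketch of this kind of query processing: functional words are dropped via a stop list and the remaining content-bearing words are expanded with synonyms. The stop list and synonym table are tiny hand-made stand-ins for a real lexicon or thesaurus such as WordNet.

    STOP_WORDS = {"a", "the", "of", "in", "is", "that", "will", "on", "document", "relevant"}
    SYNONYMS = {"immigration": ["migration"], "causes": ["reasons", "origins"]}

    def expand_query(query):
        terms = [w.strip(".,;").lower() for w in query.split()]
        content = [t for t in terms if t and t not in STOP_WORDS]   # keep content-bearing words
        expanded = set(content)
        for t in content:
            expanded.update(SYNONYMS.get(t, []))                    # add synonyms
        return expanded

    print(expand_query("causes of the lack of integration"))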
Matching between documents and queries should faithfully reflect the real similarity between the two. Defining a good similarity measure, together with suitable weighting schemes, is therefore an important issue. In particular, when multiple ranking lists are available from different approaches, they must be combined into one; this problem is known as data fusion.
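The sketch below illustrates one standard fusion rule, CombSUM: each ranking list's scores are normalized to [0, 1] and then summed per document. The two example runs and their scores are illustrative assumptions.

    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = hi - lo or 1.0
        return {d: (s - lo) / span for d, s in scores.items()}

    def combsum(*ranking_lists):
        fused = {}
        for scores in map(normalize, ranking_lists):
            for doc, s in scores.items():
                fused[doc] = fused.get(doc, 0.0) + s        # CombSUM: sum of normalized scores
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

    run_a = {"d1": 3.2, "d2": 1.1, "d3": 0.4}      # e.g. scores from a vector-model run
    run_b = {"d2": 0.9, "d3": 0.8, "d4": 0.2}      # e.g. scores from a probabilistic run
    print(combsum(run_a, run_b))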
It is often the case that when a user submits a query she does not have a precise idea as to what she needs. A query can be stable but with a shifting focus, or variable in that its structure changes within a single retrieval session. An effective TR system should therefore allow the user to interact with it to refine her information need.
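One common way to support such interaction is Rocchio-style relevance feedback (named here as a representative technique, not as this document's prescribed method): the query vector is moved toward documents the user judged relevant and away from those judged non-relevant. The weights alpha, beta, and gamma below are conventional but tunable.

    from collections import Counter

    def rocchio(query_vec, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        new_q = Counter()
        for t, w in query_vec.items():
            new_q[t] += alpha * w
        for doc in relevant:                                   # pull toward relevant documents
            for t, w in doc.items():
                new_q[t] += beta * w / len(relevant)
        for doc in nonrelevant:                                # push away from non-relevant ones
            for t, w in doc.items():
                new_q[t] -= gamma * w / len(nonrelevant)
        return {t: w for t, w in new_q.items() if w > 0}       # keep positive weights only

    q = {"hippocampus": 1.0, "function": 1.0}
    rel = [{"hippocampus": 2.0, "memory": 1.0}]
    nonrel = [{"hippocampus": 1.0, "anatomy": 2.0}]
    print(rocchio(q, rel, nonrel))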
Over 30 years of research, various approaches have been adopted in the TR community. For document representation they can be categorized into two main frameworks: the vector model and the probability model. In the most popular vector-based model each document is represented by a vector, with each dimension recording the presence (or weight) of an indexed term. Various weighting schemes can then be adopted to approximate the 'importance' of a particular term. In the probability model the similarity between a query and a document is governed by statistical and probabilistic laws, for example Bayesian inference rules. On top of these two categories, documents can be indexed either by the word forms (terms) appearing in them, or by the concepts conveyed in them. Term-based approaches may include a word-stemming algorithm for conflating morphologically related words, while concept-based methods may consult a knowledge base (either general or domain-dependent) for semantic interpretation.
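The following sketch combines two of the term-based ingredients just mentioned: a crude suffix-stripping 'stemmer' for conflating word forms and the classic tf-idf weighting scheme for the vector model. A real system would use a proper stemmer (e.g. Porter's algorithm) and a tuned weighting formula; the documents and suffix list here are toy examples.

    import math
    from collections import Counter

    def crude_stem(word):
        """Strip a few common suffixes; a stand-in for a real stemming algorithm."""
        for suffix in ("ing", "ed", "es", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    def tokenize(text):
        return [crude_stem(w.lower().strip(".,;")) for w in text.split()]

    def tfidf_vectors(docs):
        tokenized = [Counter(tokenize(d)) for d in docs]
        n = len(docs)
        df = Counter(t for tf in tokenized for t in tf)          # document frequency of each term
        return [{t: f * math.log(n / df[t]) for t, f in tf.items()} for tf in tokenized]

    docs = ["the hippocampus supports memory functions",
            "memory consolidation involves the hippocampus",
            "the space station orbits the earth"]
    for vec in tfidf_vectors(docs):
        print(vec)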
The nature of TR tasks admits a multitude of approaches from different disciplines. Linguistics and formal language theory provide rich knowledge for parsing and interpreting text, while techniques from artificial intelligence, machine learning, knowledge representation, and pattern recognition shed light on how to model the content of the text and search over the representations effectively. The availability of large text collections also enables statistical approaches, which have already proved robust even with only modest linguistic knowledge.
Image features are explored extensively in the following domains:
Indexing mainly deals with the indexing structure, similarity distance and clustering.
User interface, relevance feedback, result merging, etc.