| problem (string, 1.61k–12k chars) | solution (6 classes) | dataset (1 class) | split (1 class) | __index_level_0__ (int64, 0–10.2k) |
|---|---|---|---|---|
Classify the node 'Node Similarity in Networked Information Spaces Networked information spaces contain information entities, corresponding to nodes, which are connected by associations, corresponding to links in the network. Examples of networked information spaces are: the World Wide Web, where information entities are web pages, and associations are hyperlinks; the scientific literature, where information entities are articles and associations are references to other articles. Similarity between information entities in a networked information space can be defined not only based on the content of the information entities, but also based on the connectivity established by the associations present. This paper explores the definition of similarity based on connectivity only, and proposes several algorithms for this purpose. Our metrics take advantage of the local neighbourhoods of the nodes in the network; knowledge of the entire network is not required, as long as a query engine is available for following links and extracting the necessary local neighbourhoods for similarity estimation. Two variations of similarity estimation between two nodes are described, one based on the separate local neighbourhoods of the nodes, and another based on the joint local neighbourhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on the citation graph of computer science. The immediate application of this work is in finding papers similar to a given paper on the Web.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
2-Hop Neighbour:
Clustering Categorical Data: An Approach Based on Dynamical Systems We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical data. By "categorical data," we mean tables with fields that cannot be naturally ordered by a metric --- e.g., the names of producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure arising from the cooccurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems. We discuss experiments on a variety of tables of synthetic and real data; we find that our iterative methods converge quickly to prominently correlated values of various categorical fields. 1 Introduction Much of the data in databases is categorical: fields in tables whose attributes cannot naturally be ordered as numerical values can. The pro...
2-Hop Neighbour:
A Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ad hoc retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite different from the algorithms developed at TREC. This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: finding the web page/site of an entity, e.g., companies, universities, organizations, individuals, etc. This form of querying is quite prevalent on the web. The results from the TREC algorithms are compared to four commercial web search engines. Results show that for finding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, link-based ranking
2-Hop Neighbour:
Target Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.
2-Hop Neighbour:
Towards a Highly-Scalable Metasearch Engine The World Wide Web has been expanding in a very fast rate. The coverage of the Web by each of the major search engines has been steadily decreasing despite their effort to index more web pages. Worse yet, as these search engines get larger, higher percentages of their indexed information are becoming obsolete. More and more people are having doubt about the scalability of centralized search engine technology. A more scalable alternative to search the Web is the metasearch engine approach. A metasearch engine can be considered as an interface on top of multiple local search engines to provide uniform access to many local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reflect the content of each search engine need to be co...
2-Hop Neighbour:
Collection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.
|
IR (Information Retrieval)
|
citeseer
|
train
| 0
|
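The separate-neighbourhood similarity described in this row's abstract lends itself to a compact illustration. The sketch below assumes only a `query_links` callable (standing in for the abstract's "query engine") and scores two nodes by the Jaccard overlap of their local neighbourhoods; it is a hypothetical variant in the spirit of the paper's separate-neighbourhood method, not its exact metric.

```python
from collections import deque

def local_neighbourhood(query_links, node, radius=1):
    """Breadth-first collection of all nodes within `radius` hops of
    `node`, using only a link-following query engine (no global graph)."""
    seen = {node}
    frontier = deque([(node, 0)])
    while frontier:
        current, depth = frontier.popleft()
        if depth == radius:
            continue
        for neighbour in query_links(current):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen

def neighbourhood_similarity(query_links, a, b, radius=1):
    """Connectivity-only similarity of nodes a and b: Jaccard overlap
    of their separate local neighbourhoods (hypothetical variant)."""
    na = local_neighbourhood(query_links, a, radius)
    nb = local_neighbourhood(query_links, b, radius)
    return len(na & nb) / len(na | nb)
```

Because each node's own id is in its neighbourhood, the union in the denominator is never empty.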
Classify the node 'Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Evaluating Topic-Driven Web Crawlers Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies. We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.
2-Hop Neighbour:
Accelerated Focused Crawling through Online Relevance Feedback The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded.
2-Hop Neighbour:
Information Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.
2-Hop Neighbour:
Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a "best-first" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an effective tool for building digital libraries.
2-Hop Neighbour:
Target Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.
2-Hop Neighbour:
Background Readings for Collection Synthesis
|
IR (Information Retrieval)
|
citeseer
|
train
| 13
|
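The exploration/exploitation balance this row's abstract investigates can be illustrated with a toy frontier scheduler. The epsilon-greedy rule below is a standard stand-in, not the specific mechanism the paper evaluates; `frontier` and `epsilon` are hypothetical names.

```python
import random

def pick_next_url(frontier, epsilon=0.2, rng=random.Random(0)):
    """Epsilon-greedy frontier selection: with probability epsilon,
    explore a random link; otherwise exploit the link whose (noisy)
    quality estimate is currently best. `frontier` holds
    (url, estimated_quality) pairs."""
    if not frontier:
        raise ValueError("empty frontier")
    if rng.random() < epsilon:
        choice = rng.randrange(len(frontier))   # explore
    else:
        choice = max(range(len(frontier)),
                     key=lambda i: frontier[i][1])  # exploit
    return frontier.pop(choice)[0]
```

Setting `epsilon` to 0 gives the purely greedy crawler; the paper's finding is that some exploration pays off under several quality measures, despite an early-crawl penalty.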
Classify the node 'Prometheus: A Methodology for Developing Intelligent Agents Abstract. As agents gain acceptance as a technology there is a growing need for practical methods for developing agent applications. This paper presents the Prometheus methodology, which has been developed over several years in collaboration with Agent Oriented Software. The methodology has been taught at industry workshops and university courses. It has proven effective in assisting developers to design, document, and build agent systems. Prometheus differs from existing methodologies in that it is a detailed and complete (start to end) methodology for developing intelligent agents which has evolved out of industrial and pedagogical experience. This paper describes the process and the products of the methodology illustrated by a running example. 1' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
A Survey of Agent-Oriented Methodologies. This article introduces the current agent-oriented methodologies. It discusses what approaches have been followed (mainly extending existing object-oriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey. 1 Introduction Agent technology has received a great deal of attention in the last few years and, as a result, the industry is beginning to get interested in using this technology to develop its own products. In spite of the different developed agent theories, languages, architectures and the successful agent-based applications, very little work for specifying (and applying) techniques to develop applications using agent technology has been done. The role of agent-oriented methodologies is to assist in all the phases of the life cycle of an agent-based application, including its management. This article reviews the current approaches to the development of an agent-oriented (AO) methodology. ...
1-Hop Neighbour:
JACK Intelligent Agents - Components for Intelligent Agents in Java This paper is organised as follows. Section 2 introduces JACK Intelligent Agents, presenting the approach taken by AOS to its design and outlining its major engineering characteristics. The BDI model is discussed briefly in Section 3. Section 4 gives an outline of how to build an application with JACK Intelligent Agents. Finally, in Section 5 we discuss how the use of this framework can be beneficial to both engineers and researchers. For brevity, we will refer to JACK Intelligent Agents simply as "JACK".
1-Hop Neighbour:
A Methodology and Modelling Technique for Systems of BDI Agents The construction of large-scale embedded software systems demands the use of design methodologies and modelling techniques that support abstraction, inheritance, modularity, and other mechanisms for reducing complexity and preventing error. If multi-agent systems are to become widely accepted as a basis for large-scale applications, adequate agentoriented methodologies and modelling techniques will be essential. This is not just to ensure that systems are reliable, maintainable, and conformant, but to allow their design, implementation, and maintenance to be carried out by software analysts and engineers rather than researchers. In this paper we describe an agent-oriented methodology and modelling technique for systems of agents based upon the Belief-Desire-Intention (BDI) paradigm. Our models extend existing Object-Oriented (OO) models. By building upon and adapting existing, well-understood techniques, we take advantage of their maturity to produce an approach that can be easily lear...
2-Hop Neighbour:
Agent UML Class Diagrams Revisited Multiagent system designers already use Agent UML in order to represent the interaction protocols [8] [2]. Agent UML is a graphical modeling language based on UML. As UML, Agent UML provides several types of representation covering the description of the system, of the components, the dynamics of the system and the deployment. Since agents and objects are not exactly the same, one can guess that the UML class diagram has to be changed for describing agents. The aim of this paper is to present how to extend Agent UML class diagrams in order to represent agents. We then compare our approach to Bauer's approach [1].
2-Hop Neighbour:
A Conceptual Framework for Agent Definition and Development The use of agents of many different kinds in a variety of fields of computer science and artificial intelligence is increasing rapidly and is due, in part, to their wide applicability. The richness of the agent metaphor that leads to many different uses of the term is, however, both a strength and a weakness: its strength lies in the fact that it can be applied in very many different ways in many situations for different purposes; the weakness is that the term agent is now used so frequently that there is no commonly accepted notion of what it is that constitutes an agent. This paper addresses this issue by applying formal methods to provide a defining framework for agent systems. The Z specification language is used to provide an accessible and unified formal account of agent systems, allowing us to escape from the terminological chaos that surrounds agents. In particular, the framework precisely and unambiguously provides meanings for common concepts and terms, enables alternative models of particular classes of system to be described within it, and provides a foundation for subsequent development of increasingly more refined concepts.
2-Hop Neighbour:
Architecture-Centric Object-Oriented Design Method for Multi-Agent Systems This paper introduces an architecture-centric object-oriented design method for MAS (Multi-Agent Systems) using the extended UML (Unified Modeling Language). The UML extension is based on design principles that are derived from characteristics of MAS and the concept of software architecture, which helps to design reusable and well-structured multi-agent architecture. The extension allows one to use the original object-oriented method without syntactic or semantic changes, which implies the preservation of OO productivity, i.e., the availability of developers and tools, the utilization of past experiences and knowledge, and the seamless integration with other systems. Keywords: multi-agent systems, architecture, object-oriented development methods 1. Introduction Software agents provide a new way of analyzing, designing, and implementing complex software systems. Currently, agent technology is used in a wide variety of applications ranging from comparatively small systems such as
2-Hop Neighbour:
CaseLP, A Rapid Prototyping Environment For Agent Based Software Intelligent agents and multi-agent systems are increasingly recognized as an innovative approach for analyzing, designing and implementing complex, heterogeneous and distributed software applications. The agent-based view offers a powerful and high level conceptualization that software engineers can exploit to considerably improve the way in which software is realized. Agent-based software engineering is a recent and very interesting research area. Due to its novelty, there is still no evidence of well-established practices for the development of agent-based applications and thus experimentation in this direction is very important. This dissertation
2-Hop Neighbour:
A Methodology for Agent-Oriented Analysis and Design . This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societal) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system). 1. Introduction Progress in software engineering over the past two decades has been made through the development of increasingly powerful and natural high-level abstractions with which to model and develop complex systems. Procedural abstraction, abstract data types, and, most recently, objects and components are all examples of such abstractions. It is our belief that agents represent a similar advance in abstraction: they may be used by software developers to more n...
|
Agents
|
citeseer
|
train
| 25
|
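Several neighbours in this row describe BDI (Belief-Desire-Intention) agents and methodologies for designing them. The loop below is a minimal, hypothetical sketch of BDI-style perceive/deliberate/act plan selection; it is not the Prometheus process or the JACK API.

```python
class BDIAgent:
    """Toy BDI-style agent: beliefs are facts, and each plan declares
    the belief that triggers it (hypothetical sketch, not JACK)."""

    def __init__(self, plans):
        self.beliefs = set()
        self.plans = plans          # list of (trigger_belief, action_fn)
        self.intentions = []

    def perceive(self, percepts):
        self.beliefs |= set(percepts)

    def deliberate(self):
        # Adopt as intentions the plans whose trigger is now believed.
        for trigger, action in self.plans:
            if trigger in self.beliefs:
                self.intentions.append(action)

    def act(self):
        while self.intentions:
            self.intentions.pop(0)(self.beliefs)

# usage (hypothetical plan): agent = BDIAgent([("dirty", lambda b: b.add("cleaned"))])
# agent.perceive({"dirty"}); agent.deliberate(); agent.act()
```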
Classify the node 'Analysis and extraction of useful information across networks of Web databases Contents: 1 Introduction; 2 Problem Statement; 3 Literature Review (3.1 Retrieving Text; 3.2 Understanding Music; 3.3 Identifying Images; 3.4 Extracting Video); 4 Work Completed and in Progress; 5 Research Plan and Time-line; A List of Published Work. 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main resources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible, no single person owns it and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
1-Hop Neighbour:
Invariant Fourier-Wavelet Descriptor For Pattern Recognition We present a novel set of descriptors for recognizing complex patterns such as roadsigns, keys, aircrafts, characters, etc. Given a pattern, we first transform it to polar coordinates (r, θ) using the centre of mass of the pattern as origin. We then apply the Fourier transform along the axis of polar angle θ and the wavelet transform along the axis of radius r. The features thus obtained are invariant to translation, rotation, and scaling. As an example, we apply the method to a database of 85 printed Chinese characters. The result shows that the Fourier-Wavelet descriptor is an efficient representation which can provide for reliable recognition. Feature Extraction, Fourier Transform, Invariant Descriptor, Multiresolution Analysis, Pattern Recognition, Wavelet Transform. 1 Introduction Feature extraction is a crucial processing step for pattern recognition (15). Some authors (5–7, 13) extract 1-D features from 2-D patterns. The advantage of this approach is that we can save spa...
2-Hop Neighbour:
Visual Ranking of Link Structures (Extended Abstract) Methods for ranking World Wide Web resources according to their position in the link structure of the Web are receiving considerable attention, because they provide the first effective means for search engines to cope with the explosive growth and diversification of the Web.
2-Hop Neighbour:
Searching the Web: General and Scientific Information Access The World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information has long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search. The amount of scientific information and the number of electronic journals on the Internet continues to increase. Researchers are increasingly making their work available online. This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices
2-Hop Neighbour:
User Modeling for Information Access Based on Implicit Feedback User modeling can be used in information filtering and retrieval systems to improve the representation of a user's information needs. User models can be constructed by hand, or learned automatically based on feedback provided by the user about the relevance of documents that they have examined. By observing user behavior, it is possible to infer implicit feedback without requiring explicit relevance judgments. Previous studies based on Internet discussion groups (USENET news) have shown reading time to be a useful source of implicit feedback for predicting a user's preferences. The study reported in this paper extends that work by providing framework for considering alternative sources of implicit feedback, examining whether reading time is useful for predicting a user's preferences for academic and professional journal articles, and exploring whether retention behavior can usefully augment the information that reading time provides. Two user studies were conducted in which undergradua...
2-Hop Neighbour:
Techniques for Specialized Search Engines It is emerging that it is very difficult for the major search engines to provide a comprehensive and up-to-date search service of the Web. Even the largest search engines index only a small proportion of static Web pages and do not search the Web's backend databases that are estimated to be 500 times larger than the static Web. The scale of such searching introduces both technical and economic problems. What is more, in many cases users are not able to retrieve the information they desire because of the simple and generic search interface provided by the major search engines. A necessary response to these search problems is the creation of specialized search engines. These search engines search just for information in a particular topic or category on the Web. Such search engines will have smaller and more manageable indexes and have a powerful domain-specific search interface. This paper discusses the issues in this area and gives an overview of the techniques for building specialized search engines. Keywords: specialized search engine, information retrieval, focused crawling, taxonomy, Web search. 1.
2-Hop Neighbour:
Categorisation by Context Assistance in retrieving documents on the World Wide Web is provided either by search engines, through keyword based queries, or by catalogues, which organise documents into hierarchical collections. Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material, and therefore it will be necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting information for indexing a document from the document itself. The paper describes the technique of categorisation by context, which exploits the context perceivable from the structure of HTML documents to extract useful information for classifying the documents they refer to. We present the results of experiments with a preliminary implementation of the technique. 1. INTRODUCTION Most Web search engines (e.g. Altavista^TM [Altavista], HotBot^TM [HotBot], Excite^TM [Excite]) perform search based on the content of docume...
|
IR (Information Retrieval)
|
citeseer
|
train
| 57
|
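The Fourier-Wavelet neighbour in this row spells out a concrete pipeline: a polar transform about the centre of mass, Fourier along the angle, wavelet along the radius. The NumPy sketch below implements only the translation- and rotation-invariant steps (a magnitude FFT over θ is rotation invariant because rotation becomes a circular shift); the pairwise radial averaging is a crude Haar-style stand-in for the wavelet stage, and the scale handling the abstract claims is omitted. It assumes a nonempty grayscale array and an even `n_r`.

```python
import numpy as np

def polar_fourier_descriptor(image, n_r=32, n_theta=64):
    """Descriptor sketch: resample around the centre of mass into
    (r, theta), take |FFT| along theta (rotation invariance), then
    average adjacent radial rings (Haar-style wavelet stand-in)."""
    ys, xs = np.nonzero(image)
    weights = image[ys, xs].astype(float)
    cy = np.average(ys, weights=weights)        # centre of mass gives
    cx = np.average(xs, weights=weights)        # translation invariance
    r_max = np.hypot(*image.shape) / 2
    rs = np.linspace(0, r_max, n_r, endpoint=False)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    # nearest-neighbour sampling into polar coordinates
    yy = np.clip((cy + rs[:, None] * np.sin(thetas)).astype(int),
                 0, image.shape[0] - 1)
    xx = np.clip((cx + rs[:, None] * np.cos(thetas)).astype(int),
                 0, image.shape[1] - 1)
    polar = image[yy, xx].astype(float)
    spectrum = np.abs(np.fft.fft(polar, axis=1))          # kill rotation
    return spectrum.reshape(n_r // 2, 2, n_theta).mean(axis=1)
```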
Classify the node 'Co-clustering documents and words using Bipartite Spectral Graph Partitioning Both document clustering and word clustering are important and well-studied problems. By using the vector space model, a document collection may be represented as a word-document matrix. In this paper, we present the novel idea of modeling the document collection as a bipartite graph between documents and words. Using this model, we pose the clustering probliem as a graph partitioning problem and give a new spectral algorithm that simultaneously yields a clustering of documents and words. This co-clustrering algorithm uses the second left and right singular vectors of an appropriately scaled word-document matrix to yield good bipartitionings. In fact, it can be shown that these singular vectors give a real relaxation to the optimal solution of the graph bipartitioning problem. We present several experimental results to verify that the resulting co-clustering algoirhm works well in practice and is robust in the presence of noise.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Concept Decompositions for Large Sparse Text Data using Clustering Abstract. Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors–a few thousand dimensions and a sparsity of 95 to 99 % is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain “fractal-like ” and “self-similar ” behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means
1-Hop Neighbour:
Criterion Functions for Document Clustering: Experiments and Analysis In recent years, we have witnessed a tremendous growth in the volume of text documents available on the Internet, digital libraries, news sources, and company-wide intranets. This has led to an increased interest in developing methods that can help users to effectively navigate, summarize, and organize this information with the ultimate goal of helping them to find what they are looking for. Fast and high-quality document clustering algorithms play an important role towards this goal as they have been shown to provide both an intuitive navigation/browsing mechanism by organizing large amounts of information into a small number of meaningful clusters as well as to greatly improve the retrieval performance either via cluster-driven dimensionality reduction, term-weighting, or query expansion. This ever-increasing importance of document clustering and the expanded range of its applications led to the development of a number of new and novel algorithms with different complexity-quality trade-offs. Among them, a class of clustering algorithms that have relatively low computational requirements are those that treat the clustering problem as an optimization process which seeks to maximize or minimize a particular clustering criterion function defined over the entire clustering solution.
1-Hop Neighbour:
Web Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1
2-Hop Neighbour:
Document Clustering using Word Clusters via the Information Bottleneck Method We present a novel implementation of the recently introduced information bottleneck method for unsupervised document clustering. Given a joint empirical distribution of words and documents, p(x, y), we first cluster the words, Y, so that the obtained word clusters, Ŷ, maximally preserve the information on the documents. The resulting joint distribution, p(X, Ŷ), contains most of the original information about the documents, I(X; Ŷ) ≈ I(X; Y), but it is much less sparse and noisy. Using the same procedure we then cluster the documents, X, so that the information about the word-clusters is preserved. Thus, we first find word-clusters that capture most of the mutual information about the set of documents, and then find document clusters, that preserve the information about the word clusters. We tested this procedure over several document collections based on subsets taken from the standard 20Newsgroups corpus. The results were assessed by calculating the correlation between the document clusters and the correct labels for these documents. Findings from our experiments show that this double clustering procedure, which uses the information bottleneck method, yields significantly superior performance compared to other common document distributional clustering algorithms. Moreover, the double clustering procedure improves all the distributional clustering methods examined here.
2-Hop Neighbour:
Clustering Hypertext With Applications To Web Searching: Clustering separates unrelated documents and groups related documents, and is useful for discrimination, disambiguation, summarization, organization, and navigation of unstructured collections of hypertext documents. We propose a novel clustering algorithm that clusters hypertext documents using words (contained in the document), out-links (from the document), and in-links (to the document). The algorithm automatically determines the relative importance of words, out-links, and in-links for a given collection of hypertext documents. We annotate each cluster using six information nuggets: summary, breakthrough, review, keywords, citation, and reference. These nuggets constitute high-quality information resources that are representatives of the content of the clusters, and are extremely effective in compactly summarizing and navigating the collection of hypertext documents. We employ web searching as an application to illustrate our results. Keywords: cluster annotation, feature comb...
2-Hop Neighbour:
Web Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs)) highlighted. A survey of the existing literature on "soft web mining" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing "soft web mining" systems is explained. An extensive bibliography is also provided.
2-Hop Neighbour:
A Comparison of Techniques to Find Mirrored Hosts on the WWW We compare several algorithms for identifying mirrored hosts on the World Wide Web. The algorithms operate on the basis of URL strings and linkage data: the type of information easily available from web proxies and crawlers. Identification of mirrored hosts can improve web-based information retrieval in several ways: First, by identifying mirrored hosts, search engines can avoid storing and returning duplicate documents. Second, several new information retrieval techniques for the Web make inferences based on the explicit links among hypertext documents -- mirroring perturbs their graph model and degrades performance. Third, mirroring information can be used to redirect users to alternate mirror sites to compensate for various failures, and can thus improve the performance of web browsers and proxies. (This work was presented at the Workshop on Organizing Web Space at the Fourth ACM Conference on Digital Libraries 1999.) We evaluated 4 classes of "top-down" algorithms for detecting ...
2-Hop Neighbour:
Background Readings for Collection Synthesis
|
IR (Information Retrieval)
|
citeseer
|
train
| 109
|
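The co-clustering algorithm in this row's abstract is specified precisely enough to sketch: scale the word-document matrix by its row and column degrees, take the second left and right singular vectors, and bipartition words and documents in the resulting joint 1-D embedding. The version below uses a balanced median split where one would normally run a small clustering step on the embedding, and assumes the matrix `A` has no empty rows or columns; it is a minimal sketch, not the paper's full multipartitioning algorithm.

```python
import numpy as np

def spectral_bipartition(A):
    """Bipartite spectral co-clustering sketch: normalize the
    word-by-document matrix A (words x docs), take the second
    left/right singular vectors, and split words and documents
    jointly on the 1-D embedding."""
    d1, d2 = A.sum(axis=1), A.sum(axis=0)        # word / doc degrees
    An = A / np.sqrt(np.outer(d1, d2))           # D1^{-1/2} A D2^{-1/2}
    U, s, Vt = np.linalg.svd(An, full_matrices=False)
    u2, v2 = U[:, 1], Vt[1, :]                   # second singular vectors
    z = np.concatenate([u2 / np.sqrt(d1), v2 / np.sqrt(d2)])
    labels = (z >= np.median(z)).astype(int)     # balanced stand-in split
    return labels[:A.shape[0]], labels[A.shape[0]:]   # words, docs
```

The returned label arrays assign each word and each document to one of the two co-clusters, which is the simultaneous word/document clustering the abstract describes.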
Classify the node 'Tumor Detection in Colonoscopic Images Using Hybrid Methods for on-Line Neural Network Training In this paper the effectiveness of a new Hybrid Evolutionary Algorithm in on-line Neural Network training for tumor detection is investigated. To this end, a Lamarck-inspired combination of Evolutionary Algorithms and Stochastic Gradient Descent is proposed. The Evolutionary Algorithm works on the termination point of the Stochastic Gradient Descent. Thus, the method consists in a Stochastic Gradient Descent-based on-line training stage and an Evolutionary Algorithm-based retraining stage. On-line training is considered eminently suitable for large (or even redundant) training sets and/or networks; it also helps escaping local minima and provides a more natural approach for learning nonstationary tasks. Furthermore, the notion of retraining aids the hybrid method to exhibit reliable and stable performance, and increases the generalization capability of the trained neural network. Experimental results suggest that the proposed hybrid strategy is capable to train on-line, efficiently and effectively. Here, an artificial neural network architecture has been successfully used for detecting abnormalities in colonoscopic video images.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Online Learning with Random Representations We consider the requirements of online learning---learning which must be done incrementally and in realtime, with the results of learning available soon after each new example is acquired. Despite the abundance of methods for learning from examples, there are few that can be used effectively for online learning, e.g., as components of reinforcement learning systems. Most of these few, including radial basis functions, CMACs, Kohonen's self-organizing maps, and those developed in this paper, share the same structure. All expand the original input representation into a higher dimensional representation in an unsupervised way, and then map that representation to the final answer using a relatively simple supervised learner, such as a perceptron or LMS rule. Such structures learn very rapidly and reliably, but have been thought either to scale poorly or to require extensive domain knowledge. To the contrary, some researchers (Rosenblatt, 1962; Gallant & Smith, 1987; Kanerva, 1988; Prager ...
1-Hop Neighbour:
Image Recognition and Neuronal Networks: Intelligent Systems for the Improvement of Imaging Information In this paper we have concentrated on describing issues related to the development and use of artificial neural network-based intelligent systems for medical image interpretation. Research in intelligent systems to-date remains centred on technological issues and is mostly application driven. However, previous research and experience suggests that the successful implementation of computerised systems (e.g., [34] [35]), and decision support systems in particular (e.g., [36]), in the area of healthcare relies on the successful integration of the technology with the organisational and social context within which it is applied. Therefore, the successful implementation of intelligent medical image interpretation systems should not only rely on their technical feasibility and effectiveness but also on organisational and social aspects that may rise from their applications, as clinical information is acquired, processed, used and exchanged between professionals. All these issues are critical in healthcare applications because they ultimately reflect on the quality of care provided.
2-Hop Neighbour:
Hybrid Methods Using Evolutionary Algorithms for On-line Training A novel hybrid evolutionary approach is presented in this paper for improving the performance of neural network classifiers in slowly varying environments. For this purpose, we investigate a coupling of Differential Evolution Strategy and Stochastic Gradient Descent, using both the global search capabilities of Evolutionary Strategies and the effectiveness of on-line gradient descent. The use of Differential Evolution Strategy is related to the concept of evolution of a number of individuals from generation to generation and that of on-line gradient descent to the concept of adaptation to the environment by learning. The hybrid algorithm is tested in two real-life image processing applications. Experimental results suggest that the hybrid strategy is capable to train on-line effectively leading to networks with increased generalization capability.
|
AI (Artificial Intelligence)
|
citeseer
|
train
| 110
|
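The two-stage hybrid in this row's abstract (an on-line SGD stage whose termination point seeds an evolutionary retraining stage) can be sketched directly. The differential-evolution loop below is a generic mutation-only DE/rand/1 stand-in for the paper's Evolutionary Algorithm, not its exact operators; `grad_fn` and `loss_fn` are assumed callables over a flat weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_stage(w, grad_fn, stream, lr=0.01):
    """On-line stage: one stochastic gradient step per example."""
    for x, y in stream:
        w = w - lr * grad_fn(w, x, y)
    return w

def de_retrain(w, loss_fn, pop_size=20, gens=50, F=0.5):
    """Retraining stage sketch: a small differential-evolution loop
    started from the SGD termination point."""
    pop = w + 0.1 * rng.standard_normal((pop_size, w.size))
    pop[0] = w                                   # keep the SGD solution
    fit = np.array([loss_fn(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = a + F * (b - c)              # DE/rand/1 mutation
            f = loss_fn(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)]
```

Seeding the population with the SGD solution is what makes this a retraining stage rather than a from-scratch evolutionary search.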
Classify the node 'Towards Detection of Human Motion Detecting humans in images is a useful application of computer vision. Loose and textured clothing, occlusion and scene clutter make it a difficult problem because bottom-up segmentation and grouping do not always work. We address the problem of detecting humans from their motion pattern in monocular image sequences; extraneous motions and occlusion may be present. We assume that we may not rely on segmentation, nor grouping and that the vision front-end is limited to observing the motion of key points and textured patches in between pairs of frames. We do not assume that we are able to track features for more than two frames. Our method is based on learning an approximate probabilistic model of the joint position and velocity of different body features. Detection is performed by hypothesis testing on the maximum a posteriori estimate of the pose and motion of the body. Our experiments on a dozen of walking sequences indicate that our algorithm is accurate and efficient.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
3D Hand Pose Reconstruction Using Specialized Mappings A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. The viewpoint is encoded by two real values, therefore 24 real values represent a hand pose. We test our system both on synthetic sequences and on sequences taken with a color camera. The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles and viewpoint from those features. Results are encouraging given the complexity of the task.
2-Hop Neighbour:
Rotation Invariant Neural Network-Based Face Detection In this paper, we present a neural network-based face detection system. Unlike similar systems which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; the first is a "router" network which processes each input window to determine its orientation and then uses this information to prepare the window for one or more "detector" networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces which are rotated out of the image plane, such as profiles and semi-profiles. This work was partially supported by grants from Hewlett-Packard Corporation, Siemens Corporate Research, Inc., the Department of the Army, Army Research Office under grant number DAAH04-94-G-0006, and by the Office of Naval Research under grant number...
2-Hop Neighbour:
Toward Scalability in ASL Recognition: Breaking Down Signs into Phonemes In this paper we present a novel approach to continuous, whole-sentence ASL recognition that uses phonemes instead of whole signs as the basic units. Our approach is based on a sequential phonological model of ASL. According to this model the ASL signs can be broken into movements and holds, which are both considered phonemes. This model does away with the distinction between whole signs and epenthesis movements that we made in previous work [13]. Instead, epenthesis movements are just like the other movements that constitute the signs. We subsequently train Hidden Markov Models (HMMs) to recognize the phonemes, instead of whole signs and epenthesis movements that we recognized previously [13]. Because the number of phonemes is limited, HMM-based training and recognition of the ASL signal becomes computationally more tractable and has the potential to lead to the recognition of large-scale vocabularies. We experimented with a 22 word vocabulary, and we achieved similar recognition r...
2-Hop Neighbour:
View-based Interpretation of Real-time Optical Flow for Gesture Recognition We have developed a real-time, view-based gesture recognition system. Optical flow is estimated and segmented into motion blobs. Gestures are recognized using a rule-based technique based on characteristics of the motion blobs such as relative motion and size. Parameters of the gesture (e.g., frequency) are then estimated using context specific techniques. The system has been applied to create an interactive environment for children. 1 Introduction For many applications, the use of hand and body gestures is an attractive alternative to the cumbersome interface devices for human-computer interaction. This is especially true for interacting in virtual reality environments, where the user is no longer confined to the desktop and should be able to move around freely. While special devices can be worn to achieve these goals, these can be expensive and unwieldy. There has been a recent surge in computer vision research to provide a solution that doesn't use such devices. This paper describe...
2-Hop Neighbour:
Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video Hidden Markov models (HMM's) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that demonstrate a realtime HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon. 1 Introduction While there are many different types of gestures, the most structured sets belong to the sign languages. In sign language, each gesture already has assigned meaning, and strong rules of context and grammar may be applied to make recognition tractable. To date, most work on sign language recognition has employed expensi...
2-Hop Neighbour:
View-independent Recognition of Hand Postures Since human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the modelbased approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set o...
|
ML (Machine Learning)
|
citeseer
|
train
| 127
|
Classify the node 'Reasoning with Examples: Propositional Formulae and Database Dependencies For humans, looking at how concrete examples behave is an intuitive way of deriving conclusions. The drawback with this method is that it does not necessarily give the correct results. However, under certain conditions example-based deduction can be used to obtain a correct and complete inference procedure. This is the case for Boolean formulae (reasoning with models) and for certain types of database integrity constraints (the use of Armstrong relations). We show that these approaches are closely related, and use the relationship to prove new results about the existence and sizes of Armstrong relations for Boolean dependencies. Furthermore, we exhibit close relations between the questions of finding keys in relational databases and that of finding abductive explanations. Further applications of the correspondence between these two approaches are also discussed. 1 Introduction One of the major tasks in database systems as well as artificial intelligence systems is to express some know...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Temporal Dependencies Generalized for Spatial and Other Dimensions Recently, there has been a lot of interest in temporal granularity, and its applications in temporal dependency theory and data mining. Generalization hierarchies used in multi-dimensional databases and OLAP serve a role similar to that of time granularity in temporal databases, but they also apply to non-temporal dimensions, like space. In this paper, we first generalize temporal functional dependencies for non-temporal dimensions, which leads to the notion of roll-up dependency (RUD). We show the applicability of RUDs in conceptual modeling and data mining. We then indicate that the notion of time granularity used in temporal databases is generally more expressive than the generalization hierarchies in multi-dimensional databases, and show how this surplus expressiveness can be introduced in non-temporal dimensions, which leads to the formalism of RUDs with negation. A complete axiomatization for reasoning about RUDs with negation is given. 1 Introduction Generalization hierarchi...
1-Hop Neighbour:
Generalizing Temporal Dependencies for Non-Temporal Dimensions Recently, there has been a lot of interest in temporal granularity, and its applications in temporal dependency theory and data mining. Generalization hierarchies used in multi-dimensional databases and OLAP serve a role similar to that of time granularity in temporal databases, but they also apply to non-temporal dimensions, like space.
2-Hop Neighbour:
Mining Knowledge in Geographical Data In this article, a short overview is provided to summarize recent studies on spatial data mining, including spatial data mining techniques, their strengths and weaknesses, how and when to apply them, and what challenges are yet to be faced.
2-Hop Neighbour:
A String-based Model for Infinite Granularities (Extended Abstract) In the last few years, the concept of time granularity has been defined by several researchers, and a glossary of time granularity concepts has been published. These definitions often view a time granularity as a (mostly infinite) sequence of time granules. Although this view is conceptually clean, it is extremely inefficient or even practically impossible to represent a time granularity in this manner. In this paper, we present a practical formalism for the finite representation of infinite granularities. The formalism is string-based, allows symbolic reasoning, and can be extended to multiple dimensions to accommodate, for example, space. Introduction In the last few years, formalisms to represent and to reason about temporal and spatial granularity have been developed in several areas of computer science. Although several researchers have used different definitions of time granularity, they comm...
2-Hop Neighbour:
Temporal FDs on Complex Objects In many applications, information about the past and future is just as important as information about the present. Incorporating the time dimension into existing database theory and practice is interesting and important. Temporal databases extend classical "snapshot" databases by supporting the storage and the access of time-related information. When history is taken into account, integrity constraints can place restrictions on the evolution of data in time. This paper deals with extending dependency theory for temporal databases with complex objects. In general, classical dependencies can be applied to temporal databases in a straightforward manner. From an abstr...
|
AI (Artificial Intelligence)
|
citeseer
|
train
| 136
|
Classify the node 'Data Visualization, Indexing and Mining Engine - A Parallel Computing Architecture for Information Processing Over the Internet ion : : : : : : : : : : : 9 3.2 Spatial Representation Of the System's Networks : : : : : : : : : : : : : : : : : : : : : : 10 3.3 Display And Interaction Mechanisms : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 11 3.3.1 Overviews, orientation, and network abstraction : : : : : : : : : : : : : : : : : : 11 3.3.2 Other Navigation And Orientation Tools : : : : : : : : : : : : : : : : : : : : : : : 12 3.3.3 Head-tracked Stereoscopic Display : : : : : : : : : : : : : : : : : : : : : : : : : : 12 4 Web Access of Geographic Information System Data 13 4.1 Functions Provided by GIS2WEB : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 13 4.2 Structure : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 14 5 ParaCrawler --- Parallel Web Searching With Machine Learning 15 5.1 A Machine Learning Approach Towards Web Searching : : : : : : : : : : : : : : : : : : 15 6 DUSIE --- Interactive Content-based Web Structuring 16 6.1 Creating t...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Evaluating Stereo and Motion Cues for Visualizing Information Nets in Three Dimensions This article concerns the benefits of presenting abstract data in 3D. Two experiments show that motion cues combined with stereo viewing can substantially increase the size of the graph that can be perceived. The first experiment was designed to provide quantitative measurements of how much more (or less) can be understood in 3D than in 2D. The 3D display used was configured so that the image on the monitor was coupled to the user's actual eye positions (and it was updated in real-time as the user moved) as well as being in stereo. Thus the effect was like a local "virtual reality" display located in the vicinity of the computer monitor. The results from this study show that head-coupled stereo viewing can increase the size of an abstract graph that can be understood by a factor of three; using stereo alone provided an increase by a factor of 1.6 and head coupling alone produced an increase by a factor of 2.2. The second experiment examined a variety of motion cues provided by head-coupled perspective (as in virtual reality displays), hand-guided motion and automatic rotation, respectively, both with and without stereo in each case. The results show that structured 3D motion and stereo viewing both help in understanding, but that the kind of motion is not particularly important; all improve performance, and all are more significant than stereo cues. These results provide strong reasons for using advanced 3D graphics for interacting with a large variety of information structures.
|
IR (Information Retrieval)
|
citeseer
|
train
| 157
|
Classify the node 'Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Intelligent Crawling on the World Wide Web with Arbitrary Predicates The enormous growth of the world wide web in recent years has made it important to perform resource discovery efficiently. Consequently, several new ideas have been proposed in recent years; among them a key technique is focused crawling which is able to crawl particular topical portions of the world wide web quickly without having to explore all web pages. In this paper, we propose the novel concept of intelligent crawling which actually learns characteristics of the linkage structure of the world wide web while performing the crawling. Specifically, the intelligent crawler uses the inlinking web page content, candidate URL structure, or other behaviors of the inlinking web pages or siblings in order to estimate the probability that a candidate is useful for a given crawl. This is a much more general framework than the focused crawling technique which is based on a pre-defined understanding of the topical structure of the web. The techniques discussed in this paper are applicable for crawling web pages which satisfy arbitrary user-defined predicates such as topical queries, keyword queries or any combinations of the above. Unlike focused crawling, it is not necessary to provide representative topical examples, since the crawler can learn its way into the appropriate topic. We refer to this technique as intelligent crawling because of its adaptive nature in adjusting to the web page linkage structure. The learning crawler is capable of reusing the knowledge gained in a given crawl in order to provide more efficient crawling for closely related predicates.
1-Hop Neighbour:
Adaptive Retrieval Agents: Internalizing Local Context and Scaling up to the Web This paper focuses on two machine learning abstractions springing from ecological models: (i) evolutionary adaptation by local selection, and (ii) selective query expansion by internalization of environmental signals. We first outline a number of experiments pointing to the feasibility and performance of these methods on a general class of graph environments. We then describe how these methods have been applied to the intelligent retrieval of information distributed across networked environments. In particular, the paper discusses a novel distributed evolutionary algorithm and representation used to construct populations of adaptive Web agents. These InfoSpiders search on-line for information relevant to the user, by traversing hyperlinks in an autonomous and intelligent fashion. They can adapt to the spatial and temporal regularities of their local context. Our results suggest that InfoSpiders could complement current search engine technology by starting up where search engines stop...
1-Hop Neighbour:
Web Crawling Agents for Retrieving Biomedical Information Autonomous agents for topic driven retrieval of information from the Web are currently a very active area of research. The ability to conduct real time searches for information is important for many users including biomedical scientists, health care professionals and the general public. We present preliminary research on different retrieval agents tested on their ability to retrieve biomedical information, whose relevance is assessed using both genetic and ontological expertise. In particular, the agents are judged on their performance in fetching information about diseases when given information about genes. We discuss several key insights into the particular challenges of agent based retrieval learned from our initial experience in the biomedical domain.
2-Hop Neighbour:
Background Readings for Collection Synthesis
2-Hop Neighbour:
Toward Learning Based Web Query Processing In this paper, we describe a novel Web query processing approach with learning capabilities. Under this approach, user queries are in the form of keywords and search engines are employed to find URLs of Web sites that might contain the required information. The first few URLs are presented to the user for browsing. Meanwhile, the query processor learns both the information required by the user and the way that the user navigates through hyperlinks to locate such information. With the learned knowledge, it processes the remaining URLs and produces precise query results in the form of segments of Web pages without user involvement. The preliminary experimental results indicate that the approach can process a range of Web queries with satisfactory performance. The architecture of such a query processor, techniques of modeling HTML pages, and knowledge for query processing are discussed. Experiments on the effectiveness of the approach, the required knowledge, and the training strategies are presented.
2-Hop Neighbour:
Information Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.
2-Hop Neighbour:
Computing Geographical Scopes of Web Resources Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, most current web search engines largely ignore the geographical scope of web resources. In this paper, we introduce techniques for automatically computing the geographical scope of web resources, based on the textual content of the resources, as well as on the geographical distribution of hyperlinks to them. We report an extensive experimental evaluation of our strategies using real web data. Finally, we describe a geographically-aware search engine that we have built using our techniques for determining the geographical scope of web resources. 1
2-Hop Neighbour:
Target Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.
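The exploration/exploitation balance that the node above investigates can be sketched as an epsilon-greedy frontier policy; the score and fetch callbacks are assumed to be supplied by the caller, and this is an illustrative simplification, not any of the cited crawlers:

```python
import heapq
import random

def crawl(seeds, score, fetch, max_pages=1000, eps=0.1):
    """Best-first crawl that, with probability eps, explores a random
    frontier link instead of exploiting the highest-scored one.

    score(url) -> estimated link quality; fetch(url) -> iterable of
    out-links. Both callbacks are assumed to be supplied by the caller.
    """
    frontier = [(-score(u), u) for u in seeds]  # max-heap via negated scores
    heapq.heapify(frontier)
    visited = set()
    while frontier and len(visited) < max_pages:
        if random.random() < eps:
            # Explore: take an arbitrary frontier entry, then re-heapify.
            i = random.randrange(len(frontier))
            frontier[i], frontier[-1] = frontier[-1], frontier[i]
            _, url = frontier.pop()
            heapq.heapify(frontier)
        else:
            # Exploit: take the most promising frontier entry.
            _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        for link in fetch(url):
            if link not in visited:
                heapq.heappush(frontier, (-score(link), link))
    return visited
```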
|
IR (Information Retrieval)
|
citeseer
|
train
| 164
|
Classify the node 'A Web Odyssey: from Codd to XML INTRODUCTION The Web presents the database area with vast opportunities and commensurate challenges. Databases and the Web are organically connected at many levels. Web sites are increasingly powered by databases. Collections of linked Web pages distributed across the Internet are themselves tempting targets for a database. The emergence of XML as the lingua franca of the Web brings some much needed order and will greatly facilitate the use of database techniques to manage Web information. This paper will discuss some of the developments related to the Web from the viewpoint of database theory. As we shall see, the Web scenario requires revisiting some of the basic assumptions of the area. To be sure, database theory remains as valid as ever in the classical setting, and the database industry will continue to represent a multi-billion dollar target of applicability for the foreseeable future. But the Web represents an opportunity of an entirely different scale. We are th' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Algorithmics and Applications of Tree and Graph Searching Modern search engines answer keyword-based queries extremely efficiently. The impressive speed is due to clever inverted index structures, caching, a domain-independent knowledge of strings, and thousands of machines. Several research efforts have attempted to generalize keyword search to keytree and keygraph searching, because trees and graphs have many applications in next-generation database systems. This paper surveys both algorithms and applications, giving some emphasis to our own work.
1-Hop Neighbour:
Benchmarking XML Management Systems: The XOO7 Way The effectiveness of existing XML query languages has been studied by many who focused on the comparison of linguistic features, implicitly reflecting the fact that most XML tools exist only on paper. In this paper, with a focus on efficiency and concreteness, we propose a pragmatic first step toward the systematic benchmarking of XML query processing platforms. We begin by identifying the necessary functionalities an XML data management system should support. We review existing approaches for managing XML data and the query processing capabilities of these approaches. We then compare three XML query benchmarks XMach-1, XMark and XOO7 and discuss the applicability, strengths and limitations of these benchmarks. We highlight the bias of these benchmarks towards the data centric view of XML and motivate our selection of XOO7 to extend with document centric queries. We complete XOO7 to capture the information retrieval capabilities of XML management systems. Finally we summarize our contributions and discuss future directions.
1-Hop Neighbour:
Interaction between Path and Type Constraints XML [7], which is emerging as an important standard for data exchange on the World-Wide Web, highlights the importance of semistructured data. Although the XML standard itself does not require any schema or type system, a number of proposals [6, 17, 19] have been developed that roughly correspond to data definition languages. These allow one to constrain the structure of XML data by imposing a schema on it. These and other proposals also advocate the need for integrity constraints, another form of constraints that should, for example, be capable of expressing inclusion constraints and inverse relationships. The latter have recently been studied as path constraints in the context of semistructured data [4, 9]. It is likely that future XML proposals will involve both forms of constraints, and it is therefore appropriate to understand the interaction between them. This paper investigates that interaction. In particular it studies constraint implication problems, which are important both i...
2-Hop Neighbour:
Indexing and Querying XML Data for Regular Path Expressions With the advent of XML as a standard for data representation and exchange on the Internet, storing and querying XML data becomes more and more important. Several XML query languages have been proposed, and the common feature of the languages is the use of regular path expressions to query XML data. This poses a new challenge concerning indexing and searching XML data, because conventional approaches based on tree traversals may not meet the processing requirements under heavy access requests. In this paper, we propose a new system for indexing and storing XML data based on a numbering scheme for elements. This numbering scheme quickly determines the ancestor-descendant relationship between elements in the hierarchy of XML data. We also propose several algorithms for processing regular path expressions, namely, (1) EE-Join for searching paths from an element to another, (2) EA-Join for scanning sorted elements and attributes to find element-attribute pairs, and (3) KC-Join for finding Kleene-Closure on repeated paths or elements. The EE-Join algorithm is highly effective particularly for searching paths that are very long or whose lengths are unknown. Experimental results from our prototype system implementation show that the proposed algorithms can process XML queries with regular path expressions by up to an or...
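The ancestor-descendant test such numbering schemes enable can be illustrated with pre/post-order interval labels, under which the test reduces to range containment; the toy document tree below is hypothetical, and the paper's exact scheme may differ:

```python
def label(tree, root):
    """Assign each node an (enter, exit) interval via depth-first traversal.

    tree: dict mapping a node to the list of its children.
    """
    labels, clock = {}, [0]
    def visit(node):
        clock[0] += 1
        enter = clock[0]
        for child in tree.get(node, []):
            visit(child)
        clock[0] += 1
        labels[node] = (enter, clock[0])
    visit(root)
    return labels

def is_ancestor(labels, a, d):
    # a is an ancestor of d iff a's interval strictly contains d's.
    return labels[a][0] < labels[d][0] and labels[d][1] < labels[a][1]

doc = {"book": ["title", "chapter"], "chapter": ["section"]}
L = label(doc, "book")
print(is_ancestor(L, "book", "section"))  # True: containment, not adjacency
```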
2-Hop Neighbour:
The XML Benchmark Project With standardization efforts of a query language for XML documents drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications pose to data management: validation, performance evaluation and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's abilities to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users to compare XML databases independent of their own, specific application scenario. To this end, the benchmark offers a set of queries each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload we propose consists of a scalable document database and a concise, yet comprehensive set of queries, which covers the major aspects of query processing. The queries' challenges range from stressing the textual character of the document to data analysis queries, but include also typical ad-hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. They are intended to give a first baseline, illustrating the state of the art.
2-Hop Neighbour:
A Performance Evaluation of Alternative Mapping Schemes for Storing XML Data in a Relational Database XML is emerging as one of the dominant data formats for data processing on the Internet. To query XML data, query languages like XQL, Lorel, XML-QL, or XML-GL have been proposed. In this paper, we study how XML data can be stored and queried using a standard relational database system. For this purpose, we present alternative mapping schemes to store XML data in a relational database and discuss how XML-QL queries can be translated into SQL queries for every mapping scheme. We present the results of comprehensive performance experiments that analyze the tradeoffs of the alternative mapping schemes in terms of database size, query performance and update performance. While our discussion is focussed on XML and XML-QL, the results of this paper are relevant for most semi-structured data models and most query languages for semi-structured data. 1 Introduction It has become clear that not all applications are met by the relational, object-relational, or object-oriented data models. ...
2-Hop Neighbour:
A Web Odyssey: from Codd to XML INTRODUCTION The Web presents the database area with vast opportunities and commensurate challenges. Databases and the Web are organically connected at many levels. Web sites are increasingly powered by databases. Collections of linked Web pages distributed across the Internet are themselves tempting targets for a database. The emergence of XML as the lingua franca of the Web brings some much needed order and will greatly facilitate the use of database techniques to manage Web information. This paper will discuss some of the developments related to the Web from the viewpoint of database theory. As we shall see, the Web scenario requires revisiting some of the basic assumptions of the area. To be sure, database theory remains as valid as ever in the classical setting, and the database industry will continue to represent a multi-billion dollar target of applicability for the foreseeable future. But the Web represents an opportunity of an entirely different scale. We are th
2-Hop Neighbour:
Representing and Querying Changes in Semistructured Data Semistructured data may be irregular and incomplete and does not necessarily conform to a fixed schema. As with structured data, it is often desirable to maintain a history of changes to data, and to query over both the data and the changes. Representing and querying changes in semistructured data is more difficult than in structured data due to the irregularity and lack of schema. We present a model for representing changes in semistructured data and a language for querying over these changes. We discuss implementation strategies for our model and query language. We also describe the design and implementation of a "query subscription service" that permits standing queries over changes in semistructured information sources. 1 Introduction Semistructured data is data that has some structure, but it may be irregular and incomplete and does not necessarily conform to a fixed schema (e.g, HTML documents). Recently, there has been increased interest in data models and query languages for s...
|
DB (Databases)
|
citeseer
|
train
| 183
|
Classify the node 'OBPRM: An Obstacle-Based PRM for 3D Workspaces In this paper we consider an obstacle-based PRM' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Path Planning Using Lazy PRM This paper describes a new approach to probabilistic roadmap planners (PRMs). The overall theme of the algorithm, called Lazy PRM, is to minimize the number of collision checks performed during planning and hence minimize the running time of the planner. Our algorithm builds a roadmap in the configuration space, whose nodes are the user-defined initial and goal configurations and a number of randomly generated nodes. Neighboring nodes are connected by edges representing paths between the nodes. In contrast with PRMs, our planner initially assumes that all nodes and edges in the roadmap are collision-free, and searches the roadmap at hand for a shortest path between the initial and the goal node. The nodes and edges along the path are then checked for collision. If a collision with the obstacles occurs, the corresponding nodes and edges are removed from the roadmap. Our planner either finds a new shortest path, or first updates the roadmap with new nodes and edges, and then searches for a shortest path. The above process is repeated until a collision-free path is returned.
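A minimal sketch of the lazy-evaluation loop described above, assuming networkx for the roadmap graph and caller-supplied sample, dist, and in_collision routines; this is a simplification of the published algorithm (in particular, adding new nodes on failure is omitted):

```python
import networkx as nx

def lazy_prm(start, goal, sample, dist, in_collision, n=500, k=10):
    """Lazy PRM sketch: build an optimistic roadmap, then verify lazily.

    sample() -> a random configuration (hashable, e.g. a tuple);
    dist(a, b) -> metric between configurations;
    in_collision(a, b) -> True if the local path from a to b collides.
    """
    nodes = [start, goal] + [sample() for _ in range(n)]
    G = nx.Graph()
    for u in nodes:
        # Connect each node to its k nearest neighbours, assumed collision-free.
        for v in sorted(nodes, key=lambda w: dist(u, w))[1:k + 1]:
            G.add_edge(u, v, weight=dist(u, v))
    while True:
        try:
            path = nx.shortest_path(G, start, goal, weight="weight")
        except nx.NetworkXNoPath:
            return None  # a full Lazy PRM would add nodes and retry
        for a, b in zip(path, path[1:]):
            if in_collision(a, b):
                G.remove_edge(a, b)  # discard only the offending edge
                break
        else:
            return path  # every edge on the path verified collision-free
```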
1-Hop Neighbour:
Providing Haptic 'Hints' to Automatic Motion Planners In this paper, we investigate methods for enabling a human operator and an automatic motion planner to cooperatively solve a motion planning query. Our work is motivated by our experience that automatic motion planners sometimes fail due to the difficulty of discovering `critical' configurations of the robot that are often naturally apparent to a human observer. Our goal is to develop techniques by which the automatic planner can utilize (easily generated) user-input, and determine `natural' ways to inform the user of the progress made by the motion planner. We show that simple randomized techniques inspired by probabilistic roadmap methods are quite useful for transforming approximate, user-generated paths into collision-free paths, and describe an iterative transformation method which enables one to transform a solution for an easier version of the problem into a solution for the original problem. We also show that simple visualization techniques can provide meaningful representatio...
1-Hop Neighbour:
Choosing Good Distance Metrics and Local Planners for Probabilistic Roadmap Methods This paper presents a comparative evaluation of different distance metrics and local planners within the context of probabilistic roadmap methods for motion planning. Both C-space and Workspace distance metrics and local planners are considered. The study concentrates on cluttered three-dimensional Workspaces typical, e.g., of mechanical designs. Our results include recommendations for selecting appropriate combinations of distance metrics and local planners for use in motion planning methods, particularly probabilistic roadmap methods. Our study of distance metrics showed that the importance of the translational distance increased relative to the rotational distance as the environment became more crowded. We find that each local planner makes some connections that none of the others do, indicating that better connected roadmaps will be constructed using multiple local planners. We propose a new local planning method we call rotate-at-s that outperforms the common straight-line in C-space method in crowded environments.
|
AI (Artificial Intelligence)
|
citeseer
|
train
| 188
|
Classify the node 'Updating Mental States from Communication . In order to perform effective communication agents must be able to foresee the effects of their utterances on the addressee's mental state. In this paper we investigate on the update of the mental state of an hearer agent as a consequence of the utterance performed by a speaker agent. Given an agent communication language with a STRIPSlike semantics, we propose a set of criteria that allow to bind the speaker's mental state to its uttering of a certain sentence. On the basis of these criteria, we give an abductive procedure that the hearer can adopt to partially recognize the speaker's mental state that led to a specific utterance. This procedure can be adopted by the hearer to update its own mental state and its image of the speaker's mental state. 1 Introduction In multi-agent systems, communication is necessary for the agents to cooperate and coordinate their activities or simply to avoid interfering with one another. If agents are not designed with embedded pre-compiled...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
BDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a Belief-Desire-Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...
1-Hop Neighbour:
Extending Multi-Agent Cooperation by Overhearing Much cooperation among humans happens following a common pattern: by chance or deliberately, a person overhears a conversation between two or more parties and steps in to help, for instance by suggesting answers to questions, by volunteering to perform actions, by making observations or adding information. We describe an abstract architecture to support a similar pattern in societies of artificial agents. Our architecture involves pairs of so-called service agents (or services) engaged in some tasks, and an unlimited number of suggestive agents (or suggesters). The latter have an understanding of the work behaviors of the former through a publicly available model, and are able to observe the messages they exchange. Depending on their own objectives, the understanding they have available, and the observed communication, the suggesters try to cooperate with the services, by initiating assisting actions, and by sending suggestions to the services. These in effect may induce a change in the services' behavior. To test our architecture, we developed an experimental, multi-agent Web site. The system has been implemented by using a BDI toolkit, JACK Intelligent Agents. Keywords: autonomous agents, multiagent systems, AI architectures, distributed AI.
2-Hop Neighbour:
An Architecture for Mobile BDI Agents BDI (Belief, Desire, Intention) is a mature and commonly adopted architecture for Intelligent Agents. BDI Agents are autonomous entities able to work in teams and react to changing environmental conditions. However, the current computational model adopted by BDI has problems which, amongst other limitations, prevent the development of mobile agents. In this paper, we discuss an architecture, TOMAS (Transaction Oriented Multi Agent System), that addresses these issues by combining BDI and the distributed nested transaction paradigms. An algorithm is presented which enable agents in TOMAS to become mobile. 1 Introduction Intelligent Agents are a very active area of AI research [WJ95] [Sho93]. Of the various agent architectures which have been proposed, BDI (Belief, Desire, Intention) [RG92] is probably the most mature and has been adopted by a few industrial applications. BDI Agents are autonomous entities able to work in teams and react to changing environmental conditions. Mobile m...
2-Hop Neighbour:
A Framework For Designing, Modeling and Analyzing Agent Based Software Systems The agent paradigm is gaining popularity because it brings intelligence, reasoning and autonomy to software systems. Agents are being used in an increasingly wide variety of applications from simple email filter programs to complex mission control and safety systems. However there appears to be very little work in defining practical software architecture, modeling and analysis tools that can be used by software engineers. This should be contrasted with object-oriented paradigm that is supported by models such as UML and CASE tools that aid during the analysis, design and implementation phases of object-oriented software systems. In our research we are developing a framework and extensions to UML to address this need. Our approach is rooted in the BDI formalism, but stresses the practical software design methods instead of reasoning about agents. In this paper we describe our preliminary ideas Index Terms: Agent-Oriented programming, ObjectOriented programming, BDI, UML 1.
2-Hop Neighbour:
BDI Design Principles and Cooperative Implementation - A Report on RoboCup Agents This report discusses two major views on BDI deliberation for autonomous agents. The first view is a rather conceptual one, presenting general BDI design principles, namely heuristic options, decomposed reasoning and layered planning, which enable BDI deliberation in realtime domains. The second view is focused on the practical application of the design principles in RoboCup Simulation League. This application not only evaluates the usefulness in deliberation but also the usefulness in rapid cooperative implementation. We compare this new approach, which has been used in the Vice World Champion team AT Humboldt 98, to the old approach of AT Humboldt 97, and we outline the extensions for AT Humboldt 99, which are still under work. Conditions faced by deliberation in multi agent contexts differ significantly from the basic assumption of classical AI search and planning. Traditional game playing methods for example assume a static well-known setting and a fixed round-based interaction of ...
|
Agents
|
citeseer
|
train
| 314
|
Classify the node 'The 3W Model and Algebra for Unified Data Mining Real data mining/analysis applications call for a framework which adequately supports knowledge discovery as a multi-step process, where the input of one mining operation can be the output of another. Previous studies, primarily focusing on fast computation of one specific mining task at a time, ignore this vital issue. Motivated by this observation, we develop a unified model supporting all major mining and analysis tasks. Our model consists of three distinct worlds, corresponding to intensional and extensional dimensions, and to data sets. The notion of dimension is a centerpiece of the model. Equipped with hierarchies, dimensions integrate the output of seemingly dissimilar mining and analysis operations in a clean manner. We propose an algebra, called the dimension algebra, for manipulating (intensional) dimensions, as well as operators that serve as "bridges" between the worlds. We demonstrate by examples that several real data mining processes can be captured ...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Optimization of Constrained Frequent Set Queries with 2-variable Constraints Currently, there is tremendous interest in providing ad-hoc mining capabilities in database management systems. As a first step towards this goal, in [15] we proposed an architecture for supporting constraint-based, human-centered, exploratory mining of various kinds of rules including associations, introduced the notion of constrained frequent set queries (CFQs), and developed effective pruning optimizations for CFQs with 1-variable (1-var) constraints. While 1-var constraints are useful for constraining the antecedent and consequent separately, many natural examples of CFQs illustrate the need for constraining the antecedent and consequent jointly, for which 2-variable (2-var) constraints are indispensable. Developing pruning optimizations for CFQs with 2-var constraints is the subject of this paper. But this is a difficult problem because: (i) in 2var constraints, both variables keep changing and, unlike 1-var constraints, there is no fixed target for pruning; (ii) as we show, "conv...
|
DB (Databases)
|
citeseer
|
train
| 317
|
Classify the node 'Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Breadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.
1-Hop Neighbour:
MySpiders: Evolve your own intelligent Web crawlers The dynamic nature of the World Wide Web makes it a challenge to find information that is both relevant and recent. Intelligent agents can complement the power of search engines to meet this challenge. We present a Web tool called MySpiders, which implements an evolutionary algorithm managing a population of adaptive crawlers who browse the Web autonomously. Each agent acts as an intelligent client on behalf of the user, driven by a user query and by textual and linkage clues in the crawled pages. Agents autonomously decide which links to follow, which clues to internalize, when to spawn offspring to focus the search near a relevant source, and when to starve. The tool is available to the public as a threaded Java applet. We discuss the development and deployment of such a system. Keywords: web information retrieval, topic-driven crawlers, online search, InfoSpiders, MySpiders, applet
1-Hop Neighbour:
Improved Algorithms for Topic Distillation in a Hyperlinked Environment This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query, to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.
2-Hop Neighbour:
A Comparison of Techniques to Find Mirrored Hosts on the WWW We compare several algorithms for identifying mirrored hosts on the World Wide Web. The algorithms operate on the basis of URL strings and linkage data: the type of information easily available from web proxies and crawlers. Identification of mirrored hosts can improve web-based information retrieval in several ways: First, by identifying mirrored hosts, search engines can avoid storing and returning duplicate documents. Second, several new information retrieval techniques for the Web make inferences based on the explicit links among hypertext documents -- mirroring perturbs their graph model and degrades performance. Third, mirroring information can be used to redirect users to alternate mirror sites to compensate for various failures, and can thus improve the performance of web browsers and proxies. We evaluated 4 classes of "top-down" algorithms for detecting ...
2-Hop Neighbour:
Mining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...
2-Hop Neighbour:
Topical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describes the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content is high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.
2-Hop Neighbour:
Learning to Create Customized Authority Lists The proliferation of hypertext and the popularity of Kleinberg's HITS algorithm have brought about an increased interest in link analysis. While HITS and its older relatives from the Bibliometrics provide a method for finding authoritative sources on a particular topic, they do not allow individual users to inject their own opinions on what sources are authoritative. This paper presents a technique for learning a user's internal model of authority. We present experimental results based on Cora on-line index, a database of approximately one million on-line computer science literature references. 1. Introduction Bibliometrics (White & McCain, 1989; Small, 1973) involves studying the structure that emerges from sets of linked documents. Traditionally, these links have taken the form of citations among journal articles, although Kleinberg (1997) and others (e.g., Brin & Page, 1998) have found that they adapt well to sets of hyperlinked documents. Bibliometric techniques exis...
2-Hop Neighbour:
Effective Site Finding using Link Anchor Information Link-based ranking methods have been described in the literature and applied in commercial Web search engines. However, according to recent TREC experiments, they are no better than traditional content-based methods. We conduct a different type of experiment, in which the task is to find the main entry point of a specific Web site. In our experiments, ranking based on link anchor text is twice as effective as ranking based on document content, even though both methods used the same BM25 formula. We obtained these results using two sets of 100 queries on an 18.5 million document set and another set of 100 on a 0.4 million document set. This site finding effectiveness begins to explain why many search engines have adopted link methods. It also opens a rich new area for effectiveness improvement, where traditional methods fail.
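The BM25 formula that both rankings in that experiment share can be written as a short scoring function; k1 and b take their usual defaults, the document statistics are assumed precomputed, and the IDF term below is one common positive-valued variant, not necessarily the exact one used in the paper:

```python
import math

def bm25(query_terms, tf, doc_len, avg_len, df, N, k1=1.2, b=0.75):
    """Score one document for a query with the BM25 ranking formula.

    tf: term -> frequency in this document; df: term -> number of documents
    in the collection containing the term; N: collection size.
    """
    score = 0.0
    for t in query_terms:
        f = tf.get(t, 0)
        if f == 0 or df.get(t, 0) == 0:
            continue
        idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))  # kept positive
        norm = f + k1 * (1 - b + b * doc_len / avg_len)
        score += idf * f * (k1 + 1) / norm
    return score

# Scoring an anchor-text "document" versus a content document uses the same
# function; only tf, doc_len, and the collection statistics change.
```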
|
IR (Information Retrieval)
|
citeseer
|
train
| 360
|
Classify the node 'Web Usage Mining - Languages and Algorithms We propose two new XML applications, XGMML and LOGML. XGMML is a graph description language and LOGML is a web-log report description language. We generate a web graph in XGMML format for a web site using the web robot of the WWWPal system (developed for web visualization and organization). We generate web-log reports in LOGML format for a web site from web log files and the web graph. In this paper, we further illustrate the usefulness of these two XML applications with a web data mining example. Moreover, we show the simplicity with which this mining algorithm can be specified and implemented efficiently using our two XML applications. We provide sample results, namely frequent patterns of users in a web site, with our web data mining algorithm.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Web Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. Web mining research is at the crossroads of several research communities, such as database, information retrieval, and, within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusion when comparing research efforts from different points of view. In this paper, we survey the research in the area of Web mining, point out some confusion regarding the usage of the term Web mining, and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.
2-Hop Neighbour:
Workshop on Intelligent Information Integration (III'99)
2-Hop Neighbour:
Information Extraction with HMMs and Shrinkage Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling time series data, and have been applied with success to many language-related tasks such as part of speech tagging, speech recognition, text segmentation and topic detection. This paper describes the application of HMMs to another language related task -- information extraction -- the problem of locating textual sub-segments that answer a particular information need. In our work, the HMM state transition probabilities and word emission probabilities are learned from labeled training data. As in many machine learning problems, however, the lack of sufficient labeled training data hinders the reliability of the model. The key contribution of this paper is the use of a statistical technique called "shrinkage" that significantly improves parameter estimation of the HMM emission probabilities in the face of sparse training data. In experiments on seminar announcements and Reuters acquisitions articles, shrinkage is shown to r...
2-Hop Neighbour:
Web Log Data Warehousing and Mining for Intelligent Web Caching We introduce intelligent web caching algorithms that employ predictive models of web requests; the general idea is to extend the LRU policy of web and proxy servers by making it sensitive to web access models extracted from web log data using data mining techniques. Two approaches have been studied in particular, frequent patterns and decision trees. The experimental results of the new algorithms show substantial improvement over existing LRU-based caching techniques, in terms of hit rate. We designed and developed a prototypical system, which supports data warehousing of web log data, extraction of data mining models and simulation of the web caching algorithms.
2-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
2-Hop Neighbour:
Report on the CONALD Workshop on Learning from Text and the Web Moo], organization and presentation of documents in information retrieval systems [GS, Hof], collaborative filtering [dVN], lexicon learning [GBGH], query reformulation [KK], text generation [Rad] and analysis of the statistical properties of text [MA]. In short, the state of the art in learning from text and the web is that a broad range of methods are currently being applied to many important and interesting tasks. There remain numerous open research questions, however. Broadly, the goals of the work presented at the workshop fall into two overlapping categories: (i) making textual information available in a structured format so that it can be used for complex queries and problem solving, and (ii) assisting users in finding, organizing and managing information represented in text sources. As an example of research aimed at the former goal, Muslea, Minton and Knoblock [MMK] have developed an approach to learning wrappers for semi-structured Web sources, such as restau
|
IR (Information Retrieval)
|
citeseer
|
train
| 377
|
Classify the node 'A Fast Multi-Dimensional Algorithm for Drawing Large Graphs We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are "smoother" and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, e#cient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC. 1 Introduction Graphs are common in many applications, from data structures to networks, from software engineering...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
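The node abstract above describes a hierarchical force-directed layout. As a rough illustration of the force-directed core only, here is a minimal NumPy sketch under our own assumptions; the paper's maximal-independent-set filtration, intelligent initial placement, and high-dimensional projection are omitted, and all names are ours.

```python
# Minimal force-directed layout sketch (spring + repulsion), NumPy only.
# Assumptions: graph given as an edge list over nodes 0..n-1; this omits
# the paper's multilevel filtration and high-dimensional projection step.
import numpy as np

def force_directed_layout(n, edges, dim=2, iters=200, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n, dim))           # random initial placement
    for _ in range(iters):
        # pairwise repulsion ~ 1 / distance^2, pushing every node apart
        diff = pos[:, None, :] - pos[None, :, :]  # (n, n, dim)
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        np.fill_diagonal(dist, np.inf)            # no self-repulsion
        force = (diff / dist[..., None] ** 3).sum(axis=1)
        # spring attraction along edges toward unit edge length
        for u, v in edges:
            d = pos[u] - pos[v]
            pull = (np.linalg.norm(d) - 1.0) * d
            force[u] -= pull
            force[v] += pull
        pos += step * force
    return pos

# toy usage: lay out a 4-cycle
print(force_directed_layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```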
1-Hop Neighbour:
Visual Ranking of Link Structures (Extended Abstract) Methods for ranking World Wide Web resources according to their position in the link structure of the Web are receiving considerable attention, because they provide the first effective means for search engines to cope with the explosive growth and diversification of the Web.
2-Hop Neighbour:
What can you do with a Web in your Pocket? The amount of information available online has grown enormously over the past decade. Fortunately, computing power, disk capacity, and network bandwidth have also increased dramatically. It is currently possible for a university research project to store and process the entire World Wide Web. Since there is a limit on how much text humans can generate, it is plausible that within a few decades one will be able to store and process all the human-generated text on the Web in a shirt pocket. The Web is a very rich and interesting data source. In this paper, we describe the Stanford WebBase, a local repository of a significant portion of the Web. Furthermore, we describe a number of recent experiments that leverage the size and the diversity of the WebBase. First, we have largely automated the process of extracting a sizable relation of books ((title, author) pairs) from hundreds of data sources spread across the World Wide Web using a technique we call Dual Iterative Pattern Relation Extraction. Second, we have developed a global ranking of Web pages called PageRank based on the link structure of the Web that has properties that are useful for search and navigation. Third, we have used PageRank to develop a novel search engine called Google, which also makes heavy use of anchor text. All of these experiments rely significantly on the size and diversity of the WebBase.
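The PageRank idea named in this abstract can be illustrated with a few lines of power iteration. This is a toy sketch of the standard random-surfer formulation, not the WebBase implementation; the damping factor d=0.85 and the adjacency-list representation are our assumptions (d=0.85 is the commonly cited default).

```python
import numpy as np

def pagerank(out_links, d=0.85, tol=1e-10):
    """out_links[u] = list of pages that u links to (indices 0..n-1)."""
    n = len(out_links)
    rank = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1.0 - d) / n)
        for u, targets in enumerate(out_links):
            if targets:                      # spread u's rank over its links
                share = d * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:                            # dangling page: spread everywhere
                new += d * rank[u] / n
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# toy 3-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0
print(pagerank([[1, 2], [2], [0]]))
```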
2-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
2-Hop Neighbour:
Automatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.
2-Hop Neighbour:
Improved Algorithms for Topic Distillation in a Hyperlinked Environment This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query, to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.
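The "connectivity analysis" this abstract augments is the hubs-and-authorities (HITS) iteration. Below is a minimal sketch of that base iteration only; the paper's actual contribution (content weighting and the three fixes) is not reproduced here, though replacing the 0/1 adjacency entries with query-relevance weights would move in that direction.

```python
import numpy as np

def hits(adj, iters=50):
    """adj: matrix with adj[u, v] = 1 if page u links to page v.
    Returns (hub, authority) score vectors."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs             # good pages are linked by good hubs
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths               # good hubs link to good pages
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# toy usage on a 3-node link graph
adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
print(hits(adj))
```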
|
AI (Artificial Intelligence)
|
citeseer
|
train
| 410
|
Classify the node 'Exploiting Redundancy in Question Answering Our goal is to automatically answer brief factual questions of the form "When was the Battle of Hastings?" or "Who wrote The Wind in the Willows?". Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to "vote" for the most likely answer. The ...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
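The redundancy idea in the node abstract above — frequent candidate answers across many retrieved passages "vote" for the final answer — reduces to a counting step. A minimal sketch follows; the regex-based candidate extraction is our stand-in, since the paper's extraction rules are not specified here.

```python
from collections import Counter
import re

def vote_answer(passages, candidate_pattern=r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b"):
    """Count pattern-matched candidates across retrieved passages and
    return the most frequent one with its vote count."""
    votes = Counter()
    for p in passages:
        votes.update(re.findall(candidate_pattern, p))
    return votes.most_common(1)[0] if votes else None

passages = [
    "The Battle of Hastings took place in 1066.",
    "In 1066, William won the Battle of Hastings.",
]
print(vote_answer(passages, r"\b1\d{3}\b"))   # vote over year-like candidates
```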
1-Hop Neighbour:
Incorporating Quality Metrics in Centralized/Distributed Information Retrieval on the World Wide Web Most information retrieval systems on the Internet rely primarily on similarity ranking algorithms based solely on term frequency statistics. Information quality is usually ignored. This leads to the problem that documents are retrieved without regard to their quality. We present an approach that combines similarity-based ranking with quality ranking in centralized and distributed search environments. Six quality metrics, including the currency, availability, information-to-noise ratio, authority, popularity, and cohesiveness, were investigated. Search effectiveness was significantly improved when the currency, availability, information-to-noise ratio and page cohesiveness metrics were incorporated in centralized search. The improvement seen when the availability, information-to-noise ratio, popularity, and cohesiveness metrics were incorporated in site selection was also significant. Finally, incorporating the popularity metric in information fusion resulted in a significant...
1-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
1-Hop Neighbour:
Results and Challenges in Web Search Evaluation A frozen 18.5 million page snapshot of part of the Web has been created to enable and encourage meaningful and reproducible evaluation of Web search systems and techniques. This collection is being used in an evaluation framework within the Text Retrieval Conference (TREC) and will hopefully provide convincing answers to questions such as, "Can link information result in better rankings?", "Do longer queries result in better answers?", and, "Do TREC systems work well on Web data?" The snapshot and associated evaluation methods are described and an invitation is extended to participate. Preliminary results are presented for an effectiveness comparison of six TREC systems working on the snapshot collection against five well-known Web search systems working over the current Web. These suggest that the standard of document rankings produced by public Web search engines is by no means state-of-the-art. Keywords: Evaluation; Search...
2-Hop Neighbour:
FEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but it also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback, so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutually beneficial learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so as to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed.
2-Hop Neighbour:
Background Readings for Collection Synthesis
2-Hop Neighbour:
Evaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.
2-Hop Neighbour:
Application of ART2 Networks and Self-Organizing Maps to Collaborative Filtering Since the World Wide Web has become widespread, more and more applications exist that are suitable for the application of social information filtering techniques. In collaborative filtering, preferences of a user are estimated through mining data available about the whole user population, implicitly exploiting analogies between users that show similar characteristics.
2-Hop Neighbour:
Topic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.
|
IR (Information Retrieval)
|
citeseer
|
train
| 423
|
Classify the node 'Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
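The exploration/exploitation balance discussed in the node abstract above can be illustrated with an epsilon-greedy frontier: with probability eps the crawler follows a random frontier link, otherwise the best-scored one. A minimal sketch with assumed scores standing in for the noisy lexical/link quality estimates; the paper itself evaluates richer strategies.

```python
import heapq, random

def next_url(frontier, eps=0.2, rng=random.Random(0)):
    """frontier: list used as a heap of (-score, url), so popping gives the
    best-scored link. With probability eps, explore a random frontier link;
    otherwise exploit the highest-scored one."""
    if rng.random() < eps:
        i = rng.randrange(len(frontier))
        frontier[i], frontier[-1] = frontier[-1], frontier[i]
        item = frontier.pop()
        heapq.heapify(frontier)          # restore heap order after the swap
        return item[1]
    return heapq.heappop(frontier)[1]

# toy usage: three candidate links with quality estimates
frontier = [(-0.9, "a"), (-0.4, "b"), (-0.7, "c")]
heapq.heapify(frontier)
print(next_url(frontier))
```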
1-Hop Neighbour:
Topic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.
1-Hop Neighbour:
Automatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.
1-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
2-Hop Neighbour:
Generating a Topically Focused Virtual Reality Internet Surveys highlight that Internet users are frequently frustrated by failing to locate useful information, and by difficulty in browsing anarchically linked web-structures. We present a new Internet browsing application (called VR-net) that addresses these problems. It first identifies semantic domains consisting of tightly interconnected web-page groupings. The second part populates a 3D virtual world with these information sources, representing all relevant pages plus appropriate structural relations. Users can then easily browse around a semantically focused virtual library. 1 Introduction The Internet is probably the most significant global information resource ever created, allowing access to an almost unlimited amount of information. In this paper we describe two inter-related difficulties suffered by Internet users, and their combined influence on web use. We then introduce an integrated "search and browse" solution tool that directly tackles both issues. We also examin...
2-Hop Neighbour:
Exploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains. This sort of search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is being used to extract concepts which impose a directorylike structure on the documents similar to those found in classified directories. Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.
2-Hop Neighbour:
Accelerated Focused Crawling through Online Relevance Feedback The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded.
2-Hop Neighbour:
WebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. The architecture and performance of WebSail are also discussed.
2-Hop Neighbour:
Text and Image Metasearch on the Web As the Web continues to increase in size, the relative coverage of Web search engines is decreasing, and search tools that combine the results of multiple search engines are becoming more valuable. This paper provides details of the text and image metasearch functions of the Inquirus search engine developed at the NEC Research Institute. For text metasearch, we describe features including the use of link information in metasearch, and provide statistics on the usage and performance of Inquirus and the Web search engines. For image metasearch, Inquirus queries multiple image search engines on the Web, downloads the actual images, and creates image thumbnails for display to the user. Inquirus handles image search engines that return direct links to images, and engines that return links to HTML pages. For the engines that return HTML pages, Inquirus analyzes the text on the pages in order to predict which images are most likely to correspond to the query. The individual image search engin...
|
IR (Information Retrieval)
|
citeseer
|
train
| 431
|
Classify the node 'Diffusion-snakes using statistical shape knowledge We present a novel extension of the Mumford-Shah functional that allows us to incorporate statistical shape knowledge at the computational level of image segmentation. Our approach exhibits various favorable properties: non-local convergence, robustness against noise, and the ability to take into consideration both shape evidence in given image data and knowledge about learned shapes. In particular, the latter property distinguishes our approach from previous work on contour-evolution based image segmentation. Experimental results confirm these properties.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Calibrating Parameters of Cost Functionals We propose a new framework for calibrating parameters of energy functionals, as used in image analysis. The method learns parameters from a family of correct examples, given a probabilistic construct for generating wrong examples from correct ones. We introduce a measure of frustration to penalize cases in which wrong responses are preferred to correct ones, and we design a stochastic gradient algorithm which converges to parameters which minimize this measure of frustration. We also present a first set of experiments in this context, and introduce extensions to deal with data-dependent energies. Keywords: Learning, variational method, parameter estimation, image reconstruction, Bayesian image models 1 Description of the method Many problems in computer vision are addressed through the minimization of a cost functional U. This function is typically defined on a large, finite set \Omega (for example the set of pictures with fixed dimensions), and the minimizer of x ...
2-Hop Neighbour:
Level Lines as Global Minimizers of Energy Functionals in Image Segmentation We propose a variational framework for determining global minimizers of rough energy functionals used in image segmentation. Segmentation is achieved by minimizing an energy model, which is comprised of two parts: the first part is the interaction between the observed data and the model, the second is a regularity term. The optimal boundaries are the set of curves that globally minimize the energy functional.
|
ML (Machine Learning)
|
citeseer
|
train
| 437
|
Classify the node 'Unsupervised Learning from Dyadic Data Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This includes event co-occurrences, histogram data, and single stimulus preference data as special cases. Dyadic data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for unsupervised learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures and unifies probabilistic modeling and structure discovery. Mixture models provide both, a parsimonious yet flexible parameterization of probability distributions with good generalization performance on sparse data, as well as structural information about data-inherent grouping structure. We propose an annealed version of the standard Expectation Maximization algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Estimating Dependency Structure as a Hidden Variable This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. 1 INTRODUCTION A fundamental feature of a good model is the ability to uncover and exploit independencies in the data it is presented with. For many commonly used models, such as neural nets and belief networks, the dependency structure encoded in the model is fixed, in the sense that it is not allowed to vary depending on actual values of the variables or with the current case. However, dependency structures that are conditional on values of variables abound in the world around us. Consider for example bitmaps of handwritten digits. They obviously contain many dependencies between pixels; however, the pattern of these dependencies will vary acr...
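The "EM and Minimum Spanning Tree" combination in the abstract above rests on the Chow-Liu step: the best single tree is a maximum-weight spanning tree over pairwise mutual information. A sketch of that step alone, for binary variables, with the surrounding EM loop omitted; the union-find Kruskal implementation is our own choice.

```python
import numpy as np
from itertools import combinations

def chow_liu_edges(data):
    """data: (samples x variables) binary array. Returns the edges of the
    maximum spanning tree over pairwise mutual information (Chow-Liu)."""
    n_vars = data.shape[1]
    def mi(x, y):
        m = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p = np.mean((x == a) & (y == b))
                if p > 0:
                    m += p * np.log(p / (np.mean(x == a) * np.mean(y == b)))
        return m
    weights = sorted(((mi(data[:, i], data[:, j]), i, j)
                      for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = []
    for w, i, j in weights:               # Kruskal, heaviest edges first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

# toy usage on random binary data over 4 variables
rng = np.random.default_rng(0)
print(chow_liu_edges(rng.integers(0, 2, size=(100, 4))))
```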
1-Hop Neighbour:
Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two--mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.
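The tempered EM fitting described in this abstract reduces, without tempering, to alternating a posterior over latent aspects with re-estimation of the multinomials. A bare NumPy sketch of plain (untempered) EM for the aspect model, with our own variable names and a toy-sized input:

```python
import numpy as np

def plsa(counts, k, iters=50, seed=0):
    """counts: (docs x words) co-occurrence matrix. Returns P(z|d), P(w|z).
    Plain EM for the aspect model; the paper's tempering is omitted."""
    rng = np.random.default_rng(seed)
    nd, nw = counts.shape
    p_z_d = rng.random((nd, k)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((k, nw)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(iters):
        # E-step: P(z|d,w) proportional to P(z|d) * P(w|z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]      # (nd, k, nw)
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate multinomials from expected counts
        nz = counts[:, None, :] * post                    # (nd, k, nw)
        p_w_z = nz.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = nz.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# toy usage: 2 documents, 3 words, 2 aspects
counts = np.array([[4, 2, 0], [0, 3, 5]])
pzd, pwz = plsa(counts, k=2)
print(pzd.round(2), pwz.round(2))
```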
1-Hop Neighbour:
Empirical Risk Approximation: An Induction Principle for Unsupervised Learning Unsupervised learning algorithms are designed to extract structure from data without reference to explicit teacher information. The quality of the learned structure is determined by a cost function which guides the learning process. This paper proposes Empirical Risk Approximation as a new induction principle for unsupervised learning. The complexity of the unsupervised learning models is automatically controlled by the two conditions for learning: (i) the empirical risk of learning should uniformly converge towards the expected risk; (ii) the hypothesis class should retain a minimal variety for consistent inference. The maximal entropy principle with deterministic annealing as an efficient search strategy arises from the Empirical Risk Approximation principle as the optimal inference strategy for large learning problems. Parameter selection of learnable data structures is demonstrated for the case of k-means clustering. 1 What is unsupervised learning? Learning algorithms are desi...
2-Hop Neighbour:
The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics is mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.
2-Hop Neighbour:
Generative Models for Cold-Start Recommendations Systems for automatically recommending items (e.g., movies, products, or information) to users are becoming increasingly important in e-commerce applications, digital libraries, and other domains where personalization is highly valued. Such recommender systems typically base their suggestions on (1) collaborative data encoding which users like which items, and/or (2) content data describing item features and user demographics. Systems that rely solely on collaborative data fail when operating from a cold start---that is, when recommending items (e.g., first-run movies) that no member of the community has yet seen. We develop several generative probabilistic models that circumvent the cold-start problem by mixing content data with collaborative data in a sound statistical manner. We evaluate the algorithms using MovieLens movie ratings data, augmented with actor and director information from the Internet Movie Database. We find that maximum likelihood learning with the expectation maximization (EM) algorithm and variants tends to overfit complex models that are initialized randomly. However, by seeding parameters of the complex models with parameters learned in simpler models, we obtain greatly improved performance. We explore both methods that exploit a single type of content data (e.g., actors only) and methods that leverage multiple types of content data (e.g., both actors and directors) simultaneously.
2-Hop Neighbour:
Learning to Perceive the World as Articulated: An Approach for Hierarchical Learning in Sensory-Motor Systems This paper describes how agents can learn an internal model of the world structurally by focusing on the problem of behavior-based articulation. We develop an on-line learning scheme -- the so-called mixture of recurrent neural net (RNN) experts -- in which a set of RNN modules becomes self-organized as experts on multiple levels in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching of activated modules in the lower level actually represents the articulation of the sensory-motor flow. In the meanwhile, a set of RNNs in the higher level competes to learn the sequences of module switching in the lower level, by which articulation at a further more abstract level can be achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. Our dynamical systems analysis clarified the mechanism of the articulation; the possible correspondence between the articulation...
2-Hop Neighbour:
GTM: The Generative Topographic Mapping Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline. 1 Introduction Many data sets exhibit significant correlations between the variables. One way to capture such structure is to model the distribution of the data in term...
2-Hop Neighbour:
Probabilistic Models for Unified Collaborative and Content-Based Recommendation in Sparse-Data Environments Recommender systems leverage product and community information to target products to consumers. Researchers have developed collaborative recommenders, content-based recommenders, and a few hybrid systems. We propose a unified probabilistic framework for merging collaborative and content-based recommendations. We extend Hofmann's (1999) aspect model to incorporate three-way co-occurrence data among users, items, and item content. The relative influence of collaboration data versus content data is not imposed as an exogenous parameter, but rather emerges naturally from the given data sources. However, global probabilistic models coupled with standard EM learning algorithms tend to drastically overfit in the sparsedata situations typical of recommendation applications. We show that secondary content information can often be used to overcome sparsity. Experiments on data from the ResearchIndex library of Computer Science publications show that appropriate mixture models incorporating secondary data produce significantly better quality recommenders than k-nearest neighbors (k-NN). Global probabilistic models also allow more general inferences than local methods like k-NN.
|
ML (Machine Learning)
|
citeseer
|
train
| 468
|
Classify the node 'Modeling Sociality In The BDI Framework . We present a conceptual model for how the social nature of agents impacts upon their individual mental states. Roles and social relationships provide an abstraction upon which we develop the notion of social mental shaping . 1 Introduction Belief-Desire-Intention (BDI) architectures for deliberative agents are based on the physical symbol system assumption that agents maintain and reason about internal representations of their world [2]. However, while such architectures conceptualise individual intentionality and behaviour, they say nothing about the social aspects of agents being situated in a multi-agent system. The main reason for this limitation is that mental attitudes are taken to be internal to a particular agent (or team) and are modeled as a relation between the agent (or a team) and a proposition. The purpose of this paper is, therefore, to extend BDI models in order to investigate the problem of how the social nature of agents can impact upon their individual mental ...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Social Mental Shaping: Modelling the Impact of Sociality on the Mental States of Autonomous Agents This paper presents a framework that captures how the social nature of agents that are situated in a multi-agent environment impacts upon their individual mental states. Roles and social relationships provide an abstraction upon which we develop the notion of social mental shaping. This allows us to extend the standard Belief-DesireIntention model to account for how common social phenomena (e.g. cooperation, collaborative problem-solving and negotiation) can be integrated into a unified theoretical perspective that reflects a fully explicated model of the autonomous agent's mental state. Keywords: Multi-agent systems, agent interactions, BDI models, social influence. 3 1.
1-Hop Neighbour:
Formalizing Collaborative Decision-making and Practical Reasoning in Multi-agent Systems In this paper, we present an abstract formal model of decision-making in a social setting that covers all aspects of the process, from recognition of a potential for cooperation through to joint decision. In a multi-agent environment, where self-motivated autonomous agents try to pursue their own goals, a joint decision cannot be taken for granted. In order to decide effectively, agents need the ability to (a) represent and maintain a model of their own mental attitudes, (b) reason about other agents' mental attitudes, and (c) influence other agents' mental states. Social mental shaping is advocated as a general mechanism for attempting to have an impact on agents' mental states in order to increase their cooperativeness towards a joint decision. Our approach is to specify a novel, high-level architecture for collaborative decision-making in which the mentalistic notions of belief, desire, goal, intention, preference and commitment play a central role in guiding the individual agent's and the group's decision-making behaviour. We identify preconditions that must be fulfilled before collaborative decision-making can commence and prescribe how cooperating agents should behave, in terms of their own decision-making apparatus and their interactions with others, when the decision-making process is progressing satisfactorily. The model is formalized through a new, many-sorted, multi-modal logic.
2-Hop Neighbour:
Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...
2-Hop Neighbour:
Cooperative Plan Selection Through Trust Cooperation plays a fundamental role in multi-agent systems in which individual agents must interact for the overall system to function effectively.
2-Hop Neighbour:
Agents That Reason and Negotiate By Arguing The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents). 1 Introduction An increasing number of software app...
|
Agents
|
citeseer
|
train
| 486
|
Classify the node 'Prometheus: A Methodology for Developing Intelligent Agents Abstract. As agents gain acceptance as a technology there is a growing need for practical methods for developing agent applications. This paper presents the Prometheus methodology, which has been developed over several years in collaboration with Agent Oriented Software. The methodology has been taught at industry workshops and university courses. It has proven effective in assisting developers to design, document, and build agent systems. Prometheus differs from existing methodologies in that it is a detailed and complete (start to end) methodology for developing intelligent agents which has evolved out of industrial and pedagogical experience. This paper describes the process and the products of the methodology illustrated by a running example. 1' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
A Survey of Agent-Oriented Methodologies This article introduces the current agent-oriented methodologies. It discusses what approaches have been followed (mainly extending existing object-oriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey. 1 Introduction Agent technology has received a great deal of attention in the last few years and, as a result, the industry is beginning to get interested in using this technology to develop its own products. In spite of the different developed agent theories, languages, architectures and the successful agent-based applications, very little work for specifying (and applying) techniques to develop applications using agent technology has been done. The role of agent-oriented methodologies is to assist in all the phases of the life cycle of an agent-based application, including its management. This article reviews the current approaches to the development of an agent-oriented (AO) methodology. ...
1-Hop Neighbour:
JACK Intelligent Agents - Components for Intelligent Agents in Java This paper is organised as follows. Section 2 introduces JACK Intelligent Agents, presenting the approach taken by AOS to its design and outlining its major engineering characteristics. The BDI model is discussed briefly in Section 3. Section 4 gives an outline of how to build an application with JACK Intelligent Agents. Finally, in Section 5 we discuss how the use of this framework can be beneficial to both engineers and researchers. For brevity, we will refer to JACK Intelligent Agents simply as "JACK".
1-Hop Neighbour:
A Methodology and Modelling Technique for Systems of BDI Agents The construction of large-scale embedded software systems demands the use of design methodologies and modelling techniques that support abstraction, inheritance, modularity, and other mechanisms for reducing complexity and preventing error. If multi-agent systems are to become widely accepted as a basis for large-scale applications, adequate agentoriented methodologies and modelling techniques will be essential. This is not just to ensure that systems are reliable, maintainable, and conformant, but to allow their design, implementation, and maintenance to be carried out by software analysts and engineers rather than researchers. In this paper we describe an agent-oriented methodology and modelling technique for systems of agents based upon the Belief-Desire-Intention (BDI) paradigm. Our models extend existing Object-Oriented (OO) models. By building upon and adapting existing, well-understood techniques, we take advantage of their maturity to produce an approach that can be easily lear...
2-Hop Neighbour:
Paradigma: Agent Implementation through Jini One of the key problems of recent years has been the divide between theoretical work in agent-based systems and its practical complement which have, to a large extent, developed along different paths. The Paradigma implementation framework has been designed with the aim of narrowing this gap. It relies on an extensive formal agent framework implemented using recent advances in Java technology. Specifically, Paradigma uses Jini connectivity technology to enable the creation of on-line communities in support of the development of agent-based systems. 1 Introduction In a networked environment that is highly interconnected, interdependent and heterogeneous, we are faced with an explosion of information and available services that are increasingly hard to manage. Agent-based systems can provide solutions to these problems as a consequence of their dynamics of social interaction; communication and cooperation can be used to effectively model problem domains through the interaction of agent...
2-Hop Neighbour:
An Overview of the Multiagent Systems Engineering Methodology . To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents. 1.
2-Hop Neighbour:
Formalisms for Multi-Agent Systems This report is the result of a panel discussion at the First UK Workshop on Foundations of Multi-Agent Systems (FoMAS '96). All members of the panel are authors, listed alphabetically. The use of logic as a knowledge representation language, for direct manipulation within an agent system, is exemplified in the work of Konolige on formalisms for modelling belief, and logic as a programming language is evidenced in the work of Fisher on Concurrent METATEM. All of these strands of work can claim some measure of success. However, a common failing of formal work (both in AI and multi-agent systems) is that its role is not clear. Formal agent theories are agent specifications, not only in the sense of providing descriptions and constraints on agent behaviour, but also in the sense that one understands the term `specification' from mainstream software engineering, namely that they provide a base from which to design, implement and verify agent systems. Agents are a natural next step for software engineering; they represent a fundamentally new way of considering complex distributed systems, containing societies of cooperating autonomous components. If we aim to build such systems, then principled techniques will be required for their design and implementation. We aim to assist the development of such systems by providing formalisms and notations that can be used to specify the desirable behaviour of agents and multi-agent systems; a requirement is that we should be able to move in a principled way from specifications of such systems to implementations. The properties identified by using a formalism serve to measure and evaluate implementations of agent systems. Some properties currently seem to be unimplementable, because they deal with an idealised aspect of agency, such as knowledge. Still, t...
2-Hop Neighbour:
Organisational Rules as an Abstraction for the Analysis and Design of Multi-Agent Systems Multi-agent systems... In this paper we introduce three additional organisational concepts - organisational rules, organisational structures, and organisational patterns - and discuss why we believe they are necessary for the complete specification of computational organisations. In particular, we focus on the concept of organisational rules and introduce a formalism, based on temporal logic, to specify them. This formalism is then used to drive the definition of the organisational structure and the identification of the organisational patterns. Finally, the paper sketches some guidelines for a methodology for agent-oriented systems based on our expanded set of organisational abstractions.
2-Hop Neighbour:
Organisational Abstractions for the Analysis and Design of Multi-Agent Systems Abstract. The architecture of a multi-agent system can naturally be viewed as a computational organisation. For this reason, we believe organisational abstractions should play a central role in the analysis and design of such systems. To this end, the concepts of agent roles and role models are increasingly being used to specify and design multi-agent systems. However, this is not the full picture. In this paper we introduce three additional organisational concepts — organisational rules, organisational structures, and organisational patterns — that we believe are necessary for the complete specification of computational organisations. We view the introduction of these concepts as a step towards a comprehensive methodology for agent-oriented systems. 1
|
Agents
|
citeseer
|
train
| 509
|
Classify the node 'Analysis and extraction of useful information across networks of Web databases Contents: 1 Introduction; 2 Problem Statement; 3 Literature Review (3.1 Retrieving Text, 3.2 Understanding Music, 3.3 Identifying Images, 3.4 Extracting Video); 4 Work Completed and in Progress; 5 Research Plan and Time-line; A List of Published Work. 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main resources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible: no single person owns it and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
1-Hop Neighbour:
Invariant Fourier-Wavelet Descriptor For Pattern Recognition We present a novel set of descriptors for recognizing complex patterns such as roadsigns, keys, aircraft, characters, etc. Given a pattern, we first transform it to polar coordinates (r, θ) using the centre of mass of the pattern as origin. We then apply the Fourier transform along the axis of polar angle θ and the wavelet transform along the axis of radius r. The features thus obtained are invariant to translation, rotation, and scaling. As an example, we apply the method to a database of 85 printed Chinese characters. The result shows that the Fourier-Wavelet descriptor is an efficient representation which can provide for reliable recognition. Keywords: Feature Extraction, Fourier Transform, Invariant Descriptor, Multiresolution Analysis, Pattern Recognition, Wavelet Transform. 1 Introduction Feature extraction is a crucial processing step for pattern recognition (15). Some authors (5-7, 13) extract 1-D features from 2-D patterns. The advantage of this approach is that we can save spa...
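The descriptor pipeline in this abstract (polar resampling about the centroid, Fourier along θ, wavelet along r) can be sketched directly. Nearest-pixel sampling and a one-level Haar transform below are our simplifying assumptions, not the paper's exact recipe.

```python
import numpy as np

def fourier_wavelet_descriptor(img, n_r=32, n_t=64):
    """Sketch: resample a binary pattern to polar (r, theta) about its
    centroid, take |FFT| along theta (rotation invariance) and a one-level
    Haar average/difference along r. Centroid origin gives translation
    invariance; normalizing by the maximum radius gives scale invariance."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                    # centre of mass
    r_max = np.hypot(ys - cy, xs - cx).max() + 1e-9
    r = np.linspace(0, 1, n_r)[:, None] * r_max      # scale-normalized radii
    t = np.linspace(0, 2 * np.pi, n_t, endpoint=False)[None, :]
    yy = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    xx = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    polar = img[yy, xx].astype(float)                # (n_r, n_t) polar samples
    spec = np.abs(np.fft.fft(polar, axis=1))         # rotation -> phase only
    lo = (spec[0::2] + spec[1::2]) / 2               # Haar approximation
    hi = (spec[0::2] - spec[1::2]) / 2               # Haar detail
    return np.concatenate([lo.ravel(), hi.ravel()])

# toy usage: a bar-shaped pattern
img = np.zeros((64, 64))
img[20:44, 28:36] = 1
print(fourier_wavelet_descriptor(img).shape)
```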
2-Hop Neighbour:
Learning Search Engine Specific Query Transformations for Question Answering We introduce a method for learning query transformations that improves the ability to retrieve answers to questions from an information retrieval system. During the training stage the method involves automatically learning phrase features for classifying questions into different types, automatically generating candidate query transformations from a training set of question/answer pairs, and automatically evaluating the candidate transforms on target information retrieval systems such as real-world general purpose search engines. At run time, questions are transformed into a set of queries, and re-ranking is performed on the documents retrieved. We present a prototype search engine, Tritus, that applies the method to web search engines. Blind evaluation on a set of real queries from a web search engine log shows that the method significantly outperforms the underlying web search engines as well as a commercial search engine specializing in question answering. Keywords Web search, quer...
2-Hop Neighbour:
Text-Based Content Search and Retrieval in ad hoc P2P Communities We consider the problem of content search and retrieval in peer-to-peer (P2P) communities. P2P computing is a potentially powerful model for information sharing between ad hoc groups of users because of its low cost of entry and natural model for resource scaling with community size. As P2P communities grow in size, however, locating information distributed across the large number of peers becomes problematic. We present a distributed text-based content search and retrieval algorithm to address this problem. Our algorithm is based on a state-of-the-art text-based document ranking algorithm: the vector-space model instantiated with the TFxIDF ranking rule. A naive application of TFxIDF would require each peer in a community to collect an inverted index of the entire community. This is costly both in terms of bandwidth and storage. Instead, we show how TFxIDF can be approximated given compact summaries of peers' local inverted indexes. We make three contributions: (a) we show how the TFxIDF rule can be adapted to use the index summaries, (b) we provide a heuristic for adaptively determining the set of peers that should be contacted for a query, and (c) we show that our algorithm tracks TFxIDF's performance very closely, regardless of how documents are distributed throughout the community. Furthermore, our algorithm preserves the main flavor of TFxIDF by retrieving close to the same set of documents for any given query.
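For reference, the TFxIDF vector-space ranking that the abstract's algorithm approximates looks like this in its plain, centralized form; the paper's per-peer summaries and peer-selection heuristic are not reproduced here, and the tokenization is assumed to have been done already.

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank tokenized documents against a query by TFxIDF with cosine-style
    document-length normalization. Returns document indices, best first."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))     # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    scores = []
    for d in docs:
        tf = Counter(d)
        norm = math.sqrt(sum((tf[t] * idf[t]) ** 2 for t in tf)) or 1.0
        scores.append(sum(tf[t] * idf.get(t, 0.0) for t in query) / norm)
    return sorted(range(n), key=lambda i: -scores[i])

# toy usage
docs = [["peer", "search"], ["text", "retrieval", "search"], ["music"]]
print(tfidf_rank(["search", "retrieval"], docs))      # -> [1, 0, 2]
```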
2-Hop Neighbour:
Rank Aggregation Revisited The rank aggregation problem is to combine many different rank orderings on the same set of candidates, or alternatives, in order to obtain a "better" ordering. Rank aggregation has been studied extensively in the context of social choice theory, where several "voting paradoxes" have been discovered. The problem
2-Hop Neighbour:
Application of ART2 Networks and Self-Organizing Maps to Collaborative Filtering Since the World Wide Web has become widespread, more and more applications exist that are suitable for the application of social information filtering techniques. In collaborative filtering, preferences of a user are estimated through mining data available about the whole user population, implicitly exploiting analogies between users that show similar characteristics.
2-Hop Neighbour:
Collection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.
| solution: IR (Information Retrieval) | dataset: citeseer | split: train | __index_level_0__: 519 |
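The P2P retrieval abstract in the row above rests on the vector-space model with TFxIDF ranking. As a reference point, here is a minimal, self-contained sketch of plain TF-IDF weighting and cosine ranking — a toy centralized version with invented documents, not the distributed approximation the paper describes.

```python
import math
from collections import Counter

# Toy corpus of invented documents -- for illustration only.
docs = [
    "peer to peer content search and retrieval",
    "distributed inverted index for text retrieval",
    "ranking documents with the vector space model",
]
tokenized = [d.split() for d in docs]
n = len(docs)

# Document frequency and inverse document frequency for each corpus term.
df = Counter(t for tokens in tokenized for t in set(tokens))
idf = {t: math.log(n / df[t]) for t in df}

def weight(tokens):
    """TF-IDF vector (term -> weight); terms unseen in the corpus get 0."""
    tf = Counter(tokens)
    return {t: tf[t] * idf.get(t, 0.0) for t in tf}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

query = weight("vector space retrieval".split())
for i in sorted(range(n), key=lambda i: cosine(query, weight(tokenized[i])), reverse=True):
    print(round(cosine(query, weight(tokenized[i])), 3), docs[i])
```

The index summaries in the paper exist precisely to avoid shipping the full `df` table between peers; this sketch computes it directly from the whole corpus.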
Classify the node 'The THISL Broadcast News Retrieval System This paper describes the THISL spoken document retrieval system for British and North American Broadcast News. The system is based on the ABBOT large vocabulary speech recognizer, using a recurrent network acoustic model, and a probabilistic text retrieval system. We discuss the development of a realtime British English Broadcast News system, and its integration into a spoken document retrieval system. Detailed evaluation is performed using a similar North American Broadcast News system, to take advantage of the TREC SDR evaluation methodology. We report results on this evaluation, with particular reference to the effect of query expansion and of automatic segmentation algorithms. 1. INTRODUCTION THISL is an ESPRIT Long Term Research project in the area of speech retrieval. It is concerned with the construction of a system which performs good recognition of broadcast speech from television and radio news programmes, from which it can produce multimedia indexing data. The principal obj...' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
The Cambridge University Spoken Document Retrieval System This paper describes the spoken document retrieval system that we have been developing and assesses its performance using automatic transcriptions of about 50 hours of broadcast news data. The recognition engine is based on the HTK broadcast news transcription system and the retrieval engine is based on the techniques developed at City University. The retrieval performance over a wide range of speech transcription error rates is presented and a number of recognition error metrics that more accurately reflect the impact of transcription errors on retrieval accuracy are defined and computed. The results demonstrate the importance of high accuracy automatic transcription. The final system is currently being evaluated on the 1998 TREC-7 spoken document retrieval task. 1.
1-Hop Neighbour:
Speaker Tracking in Broadcast Audio Material in the Framework of the THISL Project In this paper, we present a first approach to build an automatic system for broadcast news speaker-based segmentation. Based on a Chop-and-Recluster method, this system is developed in the framework of the THISL project. A metric-based segmentation is used for the Chop procedure and different distances have been investigated. The Recluster procedure relies on a bottom-up clustering of segments obtained beforehand and represented by non-parametric models. Various hierarchical clustering schemes have been tested. Some experiments on BBC broadcast news recordings show that the system can detect real speaker changes with high accuracy (mean error ≈ 0.7 s) and a fair false alarm rate (mean false alarm rate ≈ 5.5%). The Recluster procedure can produce homogeneous clusters, but it is not yet robust enough to tackle overly complex classification tasks. 1. INTRODUCTION THISL (THematic Indexing of Spoken Language) is an ESPRIT Long Term Research project that is investigating the development ...
2-Hop Neighbour:
General Query Expansion Techniques For Spoken Document Retrieval This paper presents some developments in query expansion and document representation of our Spoken Document Retrieval (SDR) system since the 1998 Text REtrieval Conference (TREC-7). We have shown that a modification of the document representation combining several techniques for query expansion can improve Average Precision by 17 % relative to a system similar to that which we presented at TREC-7 [1]. These new experiments have also confirmed that the degradation of Average Precision due to a Word Error Rate (WER) of 25 % is relatively small (around 2 % relative). We hope to repeat these experiments when larger document collections become available to evaluate the scalability of these techniques. 1.
2-Hop Neighbour:
Improving Retrieval on Imperfect Speech Transcriptions This paper presents the results from adding several forms of query expansion to our retrieval system running on transcriptions of broadcast news from the 1997 TREC-7 spoken document retrieval track. 1 Introduction Retrieving documents which originated as speech is complicated by the presence of errors in the transcriptions. If some method of increasing retrieval performance despite these errors could be found, then even low-accuracy recognisers could be used as part of a successful spoken document retrieval (SDR) system. This paper presents results using four query expansion techniques described in [3] on 8 different sets of transcriptions generated for the 1997 TREC-7 SDR evaluation. The baseline retrieval system and the techniques used for query expansion are described in section 2, the transcriptions on which the experiments were performed in section 3 and results and further discussion are offered in section 4. 2 Retrieval Systems 2.1 Baseline System (BL) Our baseline system ...
| solution: IR (Information Retrieval) | dataset: citeseer | split: train | __index_level_0__: 558 |
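Several neighbour abstracts in the THISL row above credit retrieval gains on noisy speech transcriptions to query expansion. A common form is pseudo-relevance feedback: run the original query, take the top-ranked documents, and add their most frequent terms to the query. The sketch below shows only that idea; the overlap scoring function, stopword list, and example snippets are simplified assumptions, not the systems described in the papers.

```python
from collections import Counter

# Invented transcription snippets standing in for retrieved documents.
docs = [
    "the prime minister announced the election date in parliament",
    "election results are expected after the parliament vote",
    "weather forecast predicts rain across the north",
]
STOPWORDS = {"the", "in", "are", "after", "across", "a", "of"}

def score(query_terms, doc):
    """Naive overlap score: number of query terms present in the document."""
    tokens = set(doc.split())
    return sum(1 for t in query_terms if t in tokens)

def expand(query_terms, docs, top_docs=2, new_terms=3):
    """Pseudo-relevance feedback: add frequent terms from the top-ranked docs."""
    ranked = sorted(docs, key=lambda d: score(query_terms, d), reverse=True)
    counts = Counter(
        t for d in ranked[:top_docs] for t in d.split()
        if t not in STOPWORDS and t not in query_terms
    )
    return list(query_terms) + [t for t, _ in counts.most_common(new_terms)]

print(expand(["election"], docs))
# e.g. ['election', 'parliament', 'prime', 'minister'] -- the added terms
# come from the two documents that matched the original query.
```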
Classify the node 'Analysis and extraction of useful information across networks of Web databases Contents: 1 Introduction; 2 Problem Statement; 3 Literature Review (3.1 Retrieving Text, 3.2 Understanding Music, 3.3 Identifying Images, 3.4 Extracting Video); 4 Work Completed and in Progress; 5 Research Plan and Time-line; A List of Published Work. 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main sources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible: no single person owns it, and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
1-Hop Neighbour:
Invariant Fourier-Wavelet Descriptor For Pattern Recognition We present a novel set of descriptors for recognizing complex patterns such as road signs, keys, aircraft, characters, etc. Given a pattern, we first transform it to polar coordinates (r, θ) using the centre of mass of the pattern as origin. We then apply the Fourier transform along the axis of the polar angle θ and the wavelet transform along the axis of the radius r. The features thus obtained are invariant to translation, rotation, and scaling. As an example, we apply the method to a database of 85 printed Chinese characters. The result shows that the Fourier-Wavelet descriptor is an efficient representation which can provide for reliable recognition. Feature Extraction, Fourier Transform, Invariant Descriptor, Multiresolution Analysis, Pattern Recognition, Wavelet Transform. 1 Introduction Feature extraction is a crucial processing step for pattern recognition (15). Some authors (5–7, 13) extract 1-D features from 2-D patterns. The advantage of this approach is that we can save spa...
2-Hop Neighbour:
An Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.
2-Hop Neighbour:
Exploiting Redundancy in Question Answering Our goal is to automatically answer brief factual questions of the form "When was the Battle of Hastings?" or "Who wrote The Wind in the Willows?". Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to "vote" for the most likely answer. The ...
2-Hop Neighbour:
Detection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.
2-Hop Neighbour:
Inverted files and dynamic signature files for optimisation of Web Directories Web directories are taxonomies for the classification of Web documents. This kind of IR system presents a specific type of search where the document collection is restricted to one area of the category graph. This paper introduces a specific data architecture for Web directories which improves the performance of restricted searches. That architecture is based on a hybrid data structure composed of an inverted file with multiple embedded signature files. Two variants based on the proposed model are presented: hybrid architecture with total information and hybrid architecture with partial information. The validity of this architecture has been analysed by means of developing both variants to be compared with a basic model. The performance of the restricted queries was clearly improved, especially for the hybrid model with partial information, which yielded a positive response under any load of the search system.
2-Hop Neighbour:
Towards Web-Scale Web Archeology Web-scale Web research is difficult. Information on the Web is vast in quantity, unorganized and uncatalogued, and available only over a network with varying reliability. Thus, Web data is difficult to collect, to store, and to manipulate efficiently. Despite these difficulties, we believe performing Web research at Web-scale is important. We have built a suite of tools that allow us to experiment on collections that are an order of magnitude or more larger than are typically cited in the literature. Two key components of our current tool suite are a fast, extensible Web crawler and a highly tuned, in-memory database of connectivity information. A Web page repository that supports easy access to and storage for billions of documents would allow us to study larger data sets and to study how the Web evolves over time.
| solution: IR (Information Retrieval) | dataset: citeseer | split: train | __index_level_0__: 591 |
Classify the node 'Discovering Web Access Patterns and Trends by Applying OLAP and Data Mining Technology on Web Logs As a confluence of data mining and WWW technologies, it is now possible to perform data mining on web log records collected from the Internet web page access history. The behaviour of the web page readers is imprinted in the web server log files. Analyzing and exploring regularities in this behaviour can improve system performance, enhance the quality and delivery of Internet information services to the end user, and identify population of potential customers for electronic commerce. Thus, by observing people using collections of data, data mining can bring considerable contribution to digital library designers. In a joint effort between the TeleLearning-NCE project on Virtual University and NCE-IRIS project on data mining, we have been developing the knowledge discovery tool, WebLogMiner, for mining web server log files. This paper presents the design of the WebLogMiner, reports the current progress, and outlines the future work in this direction.' into one of the following categories:
Agents
ML (Machine Learning)
IR (Information Retrieval)
DB (Databases)
HCI (Human-Computer Interaction)
AI (Artificial Intelligence).
Refer to the neighbour nodes for context.
1-Hop Neighbour:
Discovering And Mining User Web-Page Traversal Patterns As the popularity of WWW explodes, a massive amount of data is gathered by Web servers in the form of Web access logs. This is a rich source of information for understanding Web user surfing behavior. Web Usage Mining, also known as Web Log Mining, is an application of data mining algorithms to Web access logs to find trends and regularities in Web users' traversal patterns. The results of Web Usage Mining have been used in improving Web site design, business and marketing decision support, user profiling, and Web server system performance. In this thesis we study the application of assisted exploration of OLAP data cubes and scalable sequential pattern mining algorithms to Web log analysis. In multidimensional OLAP analysis, standard statistical measures are applied to assist the user at each step to explore the interesting parts of the cube. In addition, a scalable sequential pattern mining algorithm is developed to discover commonly traversed paths in large data sets. Our experimental and performance studies have demonstrated the effectiveness and efficiency of the algorithm in comparison to previously developed sequential pattern mining algorithms. In conclusion, some further research avenues in web usage mining are identified as well.
1-Hop Neighbour:
Web Log Data Warehousing and Mining for Intelligent Web Caching We introduce intelligent web caching algorithms that employ predictive models of web requests; the general idea is to extend the LRU policy of web and proxy servers by making it sensitive to web access models extracted from web log data using data mining techniques. Two approaches have been studied in particular, frequent patterns and decision trees. The experimental results of the new algorithms show substantial improvement over existing LRU-based caching techniques, in terms of hit rate. We designed and developed a prototypical system, which supports data warehousing of web log data, extraction of data mining models and simulation of the web caching algorithms.
1-Hop Neighbour:
From Resource Discovery to Knowledge Discovery on the Internet More than 50 years ago, at a time when modern computers didn't exist yet, Vannevar Bush wrote about a multimedia digital library containing human collective knowledge and filled with "trails" linking materials of the same topic. At the end of World War II, Vannevar urged scientists to build such a knowledge store and make it useful, continuously extendable and more importantly, accessible for consultation. Today, the closest to the materialization of Vannevar's dream is the World-Wide Web hypertext and multimedia document collection. However, the ease of use and accessibility of the knowledge described by Vannevar is yet to be realized. Since the 60s, extensive research has been accomplished in the information retrieval field, and free-text search was finally adopted by many text repository systems in the late 80s. The advent of the World-Wide Web in the 90s helped text search become routine as millions of users use search engines daily to pinpoint resources on the Internet. However, r...
2-Hop Neighbour:
The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
2-Hop Neighbour:
Web Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. Web mining research is at the crossroads of research from several communities, such as database, information retrieval, and, within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusion when comparing research efforts from different points of view. In this paper, we survey the research in the area of Web mining, point out some confusion regarding the usage of the term Web mining, and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.
| solution: DB (Databases) | dataset: citeseer | split: train | __index_level_0__: 593 |
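The WebLogMiner row above turns on a simple pipeline: parse server log records, then aggregate them for OLAP-style analysis. Below is a minimal sketch of the first step. The log lines are invented records in Apache Common Log Format; other server configurations may use different field layouts, so the regular expression here is an assumption.

```python
import re
from collections import Counter

# Two invented records in Apache Common Log Format.
LOG_LINES = [
    '192.168.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
    '192.168.0.2 - - [10/Oct/2000:13:56:01 -0700] "GET /papers/p1.pdf HTTP/1.0" 200 4521',
]

# host, identity, user, [timestamp], "method path protocol", status, bytes
CLF = re.compile(r'(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

page_hits = Counter()
for line in LOG_LINES:
    m = CLF.match(line)
    if not m:
        continue  # skip malformed records
    host, _, _, ts, method, path, status, size = m.groups()
    if status == "200" and method == "GET":
        page_hits[path] += 1  # count successful page requests per URL

for path, hits in page_hits.most_common():
    print(path, hits)
```

From counts like these, per-page, per-host, and per-time-period dimensions can be rolled up into the kind of OLAP cube the abstracts describe.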
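Finally, a short usage sketch for working with this dataset programmatically, using the Hugging Face `datasets` library. The repository ID below is a placeholder assumption — substitute the actual dataset path.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with this dataset's actual path.
ds = load_dataset("your-username/citeseer-node-classification", split="train")

# Columns match the preview: problem, solution, dataset, split, __index_level_0__.
row = ds[0]
print(row["solution"])        # e.g. "IR (Information Retrieval)"
print(row["problem"][:120])   # start of the classification prompt
```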