messages (listlengths: 2 to 2)
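Each row below is a two-turn chat record: a user message containing the node-classification prompt (the node text, the candidate categories, and the 1-hop/2-hop neighbour context) and an assistant message carrying the label inside <answer> tags. As a rough illustration of that structure, the sketch below parses one such record; the file name rows.jsonl, the string-splitting heuristics, and the helper names are assumptions made for this example, not part of a documented schema.

```python
import json
import re

def parse_record(record: list[dict]) -> dict:
    """Split a two-turn record into prompt text, candidate labels, and gold label."""
    user_turn = next(m for m in record if m["role"] == "user")
    assistant_turn = next(m for m in record if m["role"] == "assistant")

    prompt = user_turn["content"]
    # Candidate categories sit between "categories:" and "Refer to" in the prompt.
    # This is a heuristic based on the preview above, not a guaranteed format.
    cats_block = prompt.split("categories:", 1)[1].split("Refer to", 1)[0]
    categories = [c.strip().rstrip(".") for c in cats_block.splitlines() if c.strip()]

    # The gold label is wrapped in <answer>...</answer> in the assistant turn.
    match = re.search(r"<answer>(.*?)</answer>", assistant_turn["content"], re.S)
    label = match.group(1).strip() if match else None

    return {"prompt": prompt, "categories": categories, "label": label}

if __name__ == "__main__":
    # Assumed usage: one JSON-encoded record (a list of two message dicts) per line.
    with open("rows.jsonl", encoding="utf-8") as fh:
        for line in fh:
            row = parse_record(json.loads(line))
            print(row["label"], "| candidates:", ", ".join(row["categories"]))
```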
[ { "content": "Classify the node 'Capturing Natural Hand Articulation Vision-based motion capturing of hand articulation is a challenging task, since the hand presents a motion of high degrees of freedom. Model-based approaches could be taken to approach this problem by searching in a high dimensional hand state space, and matching projections of a hand model and image observations. However, it is highly inefficient due to the curse of dimensionality. Fortunately, natural hand articulation is highly constrained, which largely reduces the dimensionality of hand state space. This paper presents a model-based method to capture hand articulation by learning hand natural constraints. Our study shows that natural hand articulation lies in a lower dimensional configurations space characterized by a union of linear manifolds spanned by a set of basis configurations. By integrating hand motion constraints, an efficient articulated motion-capturing algorithm is proposed based on sequential Monte Carlo techniques. Our experiments show that this algorithm is robust and accurate for tracking natural hand movements. This algorithm is easy to extend to other articulated motion capturing tasks.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nVisual Interpretation of Hand Gestures for Human-Computer Interaction: A Review The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. We survey the literature on vision-based hand gesture recognition within the context of its role in HCI. The number of approaches to video-based hand gesture recognition has grown in recent years. Thus, the need for systematization and analysis of different aspects of gestural interaction has developed. We discuss a complete model of hand gestures that possesses both spatial and dynamic properties of human hand gestures and can accommodate for all their natural types. Two classes of models that have been employed for interpretation of hand gestures for HCI are considered. The first utilizes 3D models of the human hand, while the second relies on the appearance of the human hand in the image. Investigation of model parameters and analysis feat...\n\n1-Hop Neighbour:\nView-independent Recognition of Hand Postures Since human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the modelbased approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of small labeled training set. Experiments show that D-EM outperforms many other learning methods. 
Based on this approach, we implement a gesture interface to recognize a set o...\n\n2-Hop Neighbour:\nFeedback From Video For Virtual Reality Navigation Important preconditions for wide acceptance of virtual reality systems include their comfort, ease and naturalness to use. Most existing trackers suer from discomfortrelated issues. For example, body-based trackers (such as hand controllers, joysticks or helmet attachments) restrict spontaneity and naturalness of motion, whereas groundbased devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. Controls have similar problems. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed, and a robust skin-color segmentation for handling illumination variations.\n\n2-Hop Neighbour:\nExtraction and Classification of Visual Motion Patterns for Hand Gesture Recognition We present a new method for extracting and classifying motion patterns to recognize hand gestures. First, motion segmentation of the image sequence is generated based on a multiscale transform and attributed graph matching of regions across frames. This produces region correspondences and their affine transformations. Second, color information of motion regions is used to determine skin regions. Third, human head and palm regions are identified based on the shape and size of skin areas in motion. Finally, affine transformations defining a region's motion between successive frames are concatenated to construct the region's motion trajectory. Gestural motion trajectories are then classified by a time-delay neural network trained with backpropagation learning algorithm. Our experimental results show that hand gestures can be recognized well using motion patterns. 1 Introduction This paper is concerned with the problem of detecting two-dimensional motion across image frames and classifyi...\n\n2-Hop Neighbour:\nText Classification from Labeled and Unlabeled Documents using EM This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. 
Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.\n\n2-Hop Neighbour:\n3D Hand Pose Reconstruction Using Specialized Mappings A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. The viewpoint is encoded by two real values, therefore 24 real values represent a hand pose. We test our system both on synthetic sequences and on sequences taken with a color camera. The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles and viewpoint from those features. Results are encouraging given the complexity of the task.\n\n2-Hop Neighbour:\nGestural Interface to a Visual Computing Environment for Molecular Biologists In recent years there has been tremendous progress in 3D, immersive display and virtual reality (VR) technologies. Scientific visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for \"natural\" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR visual computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability....", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Relationlog System User Manual - Release 1.0 This document introduces the Relationlog system through the use of examples. All examples used here are available as part of the Relationlog release in the directory named demo. 2 INSTALLING RELATIONLOG 3 2 Installing Relationlog' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nRelationlog: A Typed Extension to Datalog with Sets and Tuples This paper presents a novel logic programming based language for nested relational and complex value models called Relationlog. It stands in the same relationship to the nested relational and complex value models as Datalog stands to the relational model. The main novelty of the language is the introduction of powerful mechanisms, namely, partial and complete set terms, for representing and manipulating both partial and complete information on nested sets, tuples and relations. They generalize the set grouping and set enumeration mechanisms of LDL and allow the user to directly encode the open and closed world assumptions on nested sets, tuples, and relations. They allow direct inference and access to deeply embedded values in a complex value relation as if the relation is normalized, which greatly increases the ease of use of the language. As a result, the extended relational algebra operations can be represented in Relationlog directly, and more importantly, recursively in a way similar to Datalog. Like Datalog, Relationlog has a well-defined Herbrand model-theoretic semantics, which captures the intended semantics of nested sets, tuples and relations, and also a well-defined proof-theoretic semantics which coincides with its model-theoretic semantics.\n\n2-Hop Neighbour:\nDesign and Implementation of the ROL System ROL is a deductive object-oriented database system developed at the University of Regina. It eectively integrates important features of deductive databases and object-oriented databases in a uniform framework and provides a uniform rule-based declarative language for dening, manipulating and querying a database. This paper describes the latest implementation of ROL. 1 Introduction In the past decade a lot of interests arose in integrating deductive and object-oriented databases to gain the best of the two approaches such as recursion, declarative querying, and rm logical foundations from deductive approaches, and object identity, complex objects, classes, class hierarchy, property inheritance with overriding and schema from object-oriented approach. A number of deductive object-oriented database languages have been proposed, such as O-logic [17], revised O-logic [11], C-logic [8], IQL [2], IQL2[1], Flogic [10], LOGRES [7], LLO [16], LOL [6], CORAL++[19], Datalog method [3], DLT ...\n\n2-Hop Neighbour:\nQuery Processing in Relationlog . Relationlog is a persistent deductive database system that supports eective storage, ecient access and inference of large amounts of data with complex structures. In this paper, we describe query processing in the Relationlog system. In particular, we illustrate the extended semi-naive and magic-set techniques used in Relationlog. 
1 Introduction During the past decades, the nested relational and complex object models [1, 5, 11, 12, 15, 18, 21, 22] were developed to extend the applicability of the traditional relational model to more complex, non-business applications such as CAD, image processing and text retrieval [2]. Another important direction of intense research has been in using a logic programming based language Datalog [8, 23] as a database query language. Such a language provides a simple and natural way to express queries on a relational database and is more expressive than the traditional relational languages. In the past several years, there have been some eorts...\n\n2-Hop Neighbour:\nPartial and Complete Tuples and Sets in Deductive Databases . In a nested relational or complex object database, nested tuples and sets are used to represent real world objects. For various reasons, such tuples and sets can be partial or complete. In this paper, we discuss how to support them in deductive databases. In particular, we present a deductive database language RLOG II that supports partial and complete tuples and sets based on Relationlog. This work provides a firm logical foundation for nested relational and complex object databases that have both partial and complete tuples and sets. 1 Introduction In a nested relational database or complex object database, nested tuples and sets are used to represent real world objects. A relation is just a set of tuples and a database is a tuple of relations. For various reasons, the information about real world objects in a database may be incomplete. Hence, both tuples and sets can be partial or complete. In the past several years, a sub-problem, that is, a database which contains partial an...\n\n2-Hop Neighbour:\nDesign and Implementation of the OLOG Deductive Object-Oriented Database Management System . OLOG is a novel deductive database system for advanced intelligent information system applications. It directly supports eective storage, ecient access and inference of large amount of persistent data with complex structures. It provides a SQL-like data denition language and data manipulation language, and a declarative rule-based query language. It combines the best of the deductive, object-oriented, and objectrelational approaches in a uniform framework. This paper describes the design and implementation of the OLOG system. 1 Introduction Deductive, object-oriented, and object-relational databases are three important extensions of the traditional relational database technology. Deductive databases stem from the integration of logic programming and relational databases. It oers representational and operational uniformity, reasoning capabilities, recursion, declarative querying, ecient secondary storage access, etc. However, deductive databases based on relational databas...\n\n2-Hop Neighbour:\nA Rule-based Query Language for HTML With the recent popularity of the web, enormous amount of information is now available on line. Most web documents available over the web are in HTML format and are hierarchically structured in nature. How to query such web documents based on their internal hierarchical structure becomes more and more important. In this paper, we present a rule-based language called WebQL to support effective and flexible web queries. Unlike other web query languages, WebQL is a high level declarative query language with a logical semantics. It allows us to query web documents based on their internal hierarchical structures. 
It supports not only negation and recursion, but also query result restructuring in a natural way. We also describe the implementation of the system that supports the WebQL query language.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Knowledge Management through Ontologies Most enterprises agree that knowledge is an essential asset for success and survival on a increasingly competitive and global market. This awareness is one of the main reasons for the exponential growth of knowledge management in the past decade. Our approach to knowledge management is based on ontologies, and makes knowledge assets intelligently accessible to people in organizations. Most company-vital knowledge resides in the heads of people, and thus successful knowledge management does not only consider technical aspects, but also social ones. In this paper, we describe an approach to intelligent knowledge management that explicitly takes into account the social issues involved. The proof of concept is given by a large-scale initiative involving knowledge management of a virtual organization. 1 Introduction According to Information Week (Angus et al., 1998) \"the business problem that knowledge management is designed to solve is that knowledge acquired through experience doesn't ge...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n1-Hop Neighbour:\nBuilding a XML-based Corporate Memory . This paper emphasizes the interest of XML meta-language for corporate knowledge management. Taking into account the advantages of the World Wide Web and of ontologies for knowledge management, we present OSIRIX, a tool enabling enterprise-ontology- guided search in XML documents that may consitute a part of a corporate memory. Keywords: XML, World Wide Web, knowledge management, document-based corporate memory, enterprise ontology, information retrieval. 1 Introduction Extending the definitions proposed by [28] [20], we consider a corporate memory as an explicit, disembodied, persistent representation of knowledge and information in an organization, in order to facilitate its access and reuse by members of the organization, for their tasks. 
We consider its building as relying on the following steps [11]: (1) Detection of needs in corporate memory, (2) Construction of the corporate memory, (3) Diffusion of the corporate memory, (4) Use of the corporate memory, (5) Evaluation of...\n\n1-Hop Neighbour:\nCommunity Webs (C-Webs): Technological Assessment and System Architecture this paper, our presentation mainly relies on examples taken from one of the potential C-Web applications, namely C-Web Portals for cultural communities.\n\n2-Hop Neighbour:\nLearning to Classify Text from Labeled and Unlabeled Documents In many important text classification problems, acquiring class labels for training documents is costly, while gathering large quantities of unlabeled data is cheap. This paper shows that the accuracy of text classifiers trained with a small number of labeled documents can be improved by augmenting this small training set with a large pool of unlabeled documents. We present a theoretical argument showing that, under common assumptions, unlabeled data contain information about the target function. We then introduce an algorithm for learning from labeled and unlabeled text based on the combination of Expectation-Maximization with a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents; it then trains a new classifier using the labels for all the documents, and iterates to convergence. Experimental results, obtained using text from three different realworld tasks, show that the use of unlabeled data reduces classification error by up to 33%.\n\n2-Hop Neighbour:\nA Machine Learning Approach to Building Domain-Specific Search Engines Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with general, Web-wide search engines. Unfortunately, they are also difficult and timeconsuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that enables efficient spidering, populates topic hierarchies, and identifies informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers available at www.cora.justresearch.com. 1 Introduction As the amount of information on the World Wide Web grows, it becomes increasingly difficult to find just what wewant. While general-purpose search engines suchas AltaVista and HotBot offer high coverage, they often provi...\n\n2-Hop Neighbour:\nFact or fiction: Content classification for digital libraries The World-Wide Web (WWW) is a vast repository of information, much of which is valuable but very often hidden to the user. The anarchic nature of the WWW presents unique challenges when it comes to information extraction and categorization. We view the WWW as a valuable resource for the gathering of information for Digital Libraries. In this paper we will describe the process of extracting and classifying information from the WWW for the purpose of integrating it into digital libraries. Our eorts focus on ways to automatically classify news articles according to whether they present opinions or reported facts. We describe and evaluate a system in development that automatically classies and recommends Web news articles from sports and politics domains. 
1\n\n2-Hop Neighbour:\nLearning Comprehensible Conceptual User Models for User Adaptive Meta Web Search In course of the OySTER project our goal is to induce conceptual user models that allow for a transparent query refinement and information filtering in the domain of Www meta--search. User models which describe a user's interest with respect to an underlying ontology allow for a manual user model editing process and also pose a well defined problem for a conceptual inductive learning task. OySTER is a research prototype that is currently being developed at the university of Osnabruck. Introduction User Modeling and Machine Learning. User models represent assumptions about a user. User modeling systems infer user models from user interaction, store user models and induce new assumptions by reasoning about the models. These models are used within the system in order to adapt to the user. Furthermore, these models shall be accessible to the user --- they should be both understandable and manually modifyable. Incorporating machine learning into this framework often leads to intertwine...\n\n2-Hop Neighbour:\nInducing Conceptual User Models Usually, performance is the primary objective in systems that make use of user modeling (Um) techniques. But since machine learning (Ml) in user modeling addresses several issues in the context of human computer interaction (Hci), the requirements on \"performance\" are manifold. Thus, machine learning for user modeling (Ml4Um) has to meet several demands in order to satisfy the aims of involved disciplines. In this article we describe the application of a rather unusual Ml method to Um, namely inductive logic programming (Ilp). Though not primarily associated with efficient learning methods, we motivate the use of Ilp by showing translucency of derived user models and the explanatory potential of such models during a user adapted filtering process. In course of the OySTER project our goal is to induce conceptual user models that allow for a transparent query refinement and information filtering in the domain of Www meta--search. 1 Introduction: Why Conceptual User Models?...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Discovering Seeds of New Interest Spread from Premature Pages Cited by Multiple Communities The World Wide Web is a great source of new topics significant for trend birth and creation. In this paper, we propose a method for discovering topics, which stimulate communities of people into earnest communications on the topics' meaning, and grow into a trend of popular interest. Here, the obtained are web pages which absorb attentions of people from multiple interest-communities. It is shown by a experiments to a small group of people, that topics in such pages can trigger the growth of peoples' interests, beyond the bounds of existing communities.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. 
Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nLearning to Predict Rare Events in Event Sequences Learning to predict rare events from sequences of events with categorical features is an important, real-world, problem that existing statistical and machine learning methods are not well suited to solve. This paper describes timeweaver, a genetic algorithm based machine learning system that predicts rare events by identifying predictive temporal and sequential patterns. Timeweaver is applied to the task of predicting telecommunication equipment failures from 110,000 alarm messages and is shown to outperform existing learning methods. Introduction An event sequence is a sequence of timestamped observations, each described by a fixed set of features. In this paper we focus on the problem of predicting rare events from sequences of events which contain categorical (non-numerical) features. Predicting telecommunication equipment failures from alarm messages is one important problem which has these characteristics. For AT&T, where most traffic is handled by 4ESS switches, the specific ...\n\n2-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.\n\n2-Hop Neighbour:\nPredicting Telecommunication Equipment Failures from Sequences of Network Alarms The computer and telecommunication industries rely heavily on knowledge-based expert systems to manage the performance of their networks. These expert systems are developed by knowledge engineers, who must first interview domain experts to extract the pertinent knowledge. This knowledge acquisition process is laborious and costly, and typically is better at capturing qualitative knowledge than quantitative knowledge. This is a liability, especially for domains like the telecommunication domain, where enormous amounts of data are readily available for analysis. Data mining holds tremendous promise for the development of expert systems for monitoring network performance since it provides a way of automatically identifying subtle, yet important, patterns in data. This case study describes a project in which a temporal data mining system called Timeweaver is used to identify faulty telecommunication equipment from logs of network alarm messages. Project Overview Managing the p...\n\n2-Hop Neighbour:\nWebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. 
WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. The architecture and performance of WebSail are also discussed.\n\n2-Hop Neighbour:\nWorld Wide Web Information Retrieval Using Web Connectivity Information Gathering, processing and distributing information from the World Wide Web will be a vital technology for the next century. Web search techniques have played a critical role in the development of information systems. Due to the diverse nature of web documents, traditional search techniques must be improved. Hyperlink structure based methods have proved to be powerful ways of exploring the relationships between web documents. In this project, a prototype web search engine was developed to exploit the link structure of web documents, based on the use of the Companion algorithm. The prototype consists of a web spider, local database, and search software. The system was written using the Java programming language. Our spider crawls and downloads web pages using Lynx, then saves the hyperlinks into an Oracle database. JDBC is used to implement the database processing. Search software makes a vicinity graph for the query URL and returns the most related pages after calculating the hub and authority weights. Finally, HTML web pages provide user interfaces and communicate with CGI using the Perl language. iii ACKNOWLEDGMENTS The author would like to express thanks to all of the members of his M.S. committee for their useful comments on the thesis, assistance in scheduling the defense date and kind help during the final defense period. The author would like to express his deepest appreciation to Dr. Wen-Chen Hu, his thesis mentor, for the depth of the training and the appropriate guidance he has provided. The author would also like to acknowledge the Department of Computer Science and Software Engineering of Auburn University for financial support. Finally, thanks especially go to the author's wife Qifang, his son, Alex, and his father and mother for their support and love. ...\n\n2-Hop Neighbour:\nClustering Categorical Data: An Approach Based on Dynamical Systems We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical data. By \"categorical data,\" we mean tables with fields that cannot be naturally ordered by a metric --- e.g., the names of producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure arising from the cooccurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems. We discuss experiments on a variety of tables of synthetic and real data; we find that our iterative methods converge quickly to prominently correlated values of various categorical fields. 1 Introduction Much of the data in databases is categorical: fields in tables whose attributes cannot naturally be ordered as numerical values can. The pro...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Learning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMultistrategy Learning for Information Extraction Information extraction (IE) is the problem of filling out pre-defined structured summaries from text documents. We are interested in performing IE in non-traditional domains, where much of the text is often ungrammatical, such as electronic bulletin board posts and Web pages. We suggest that the best approach is one that takes into account many different kinds of information, and argue for the suitability of a multistrategy approach. We describe learners for IE drawn from three separate machine learning paradigms: rote memorization, term-space text classification, and relational rule induction. By building regression models mapping from learner confidence to probability of correctness and combining probabilities appropriately, it is possible to improve extraction accuracy over that achieved by any individual learner. We describe three different multistrategy approaches. Experiments on two IE domains, a collection of electronic seminar announcements from a university computer science de...\n\n1-Hop Neighbour:\nAn Integrated Ontology for the WWW . Knowledge-intensive processing of WWW information should be founded on clear and uniform conceptualisation. An integrated ontology covering different aspects of the WWW (documents, sites, network addressing, HTML code) has been laid down, upon which a knowledge base of the WWW domain is being built. This knowledge base should support \"intelligent\" metasearch of the Web, in particular, postprocessing of hit-lists returned by external search engines. 1 Introduction During the last few years, the World-Wide Web has become one of the most widespread technologies of information presentation. It is thus not surprising that many Knowledge Engineering (KE) projects focus on it: some use HTML as a cheap, ready-made user-interface, other thrive to mine valuable information hidden inside existing WWW pages. 
A necessary prerequisite of mutual comprehensibility and knowledge reuse among different KE communities and projects dealing with the Web is a clear and unified conceptualisation, wh...\n\n1-Hop Neighbour:\nLearning to Classify Text from Labeled and Unlabeled Documents In many important text classification problems, acquiring class labels for training documents is costly, while gathering large quantities of unlabeled data is cheap. This paper shows that the accuracy of text classifiers trained with a small number of labeled documents can be improved by augmenting this small training set with a large pool of unlabeled documents. We present a theoretical argument showing that, under common assumptions, unlabeled data contain information about the target function. We then introduce an algorithm for learning from labeled and unlabeled text based on the combination of Expectation-Maximization with a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents; it then trains a new classifier using the labels for all the documents, and iterates to convergence. Experimental results, obtained using text from three different realworld tasks, show that the use of unlabeled data reduces classification error by up to 33%.\n\n2-Hop Neighbour:\nEmploying EM and Pool-Based Active Learning for Text Classification This paper shows how a text classifier's need for labeled training data can be reduced by a combination of active learning and Expectation Maximization (EM) on a pool of unlabeled data. Query-by-Committee is used to actively select documents for labeling, then EM with a naive Bayes model further improves classification accuracy by concurrently estimating probabilistic labels for the remaining unlabeled documents and using them to improve the model. We also present a metric for better measuring disagreement among committee members; it accounts for the strength of their disagreement and for the distribution of the documents. Experimental results show that our method of combining EM and active learning requires only half as many labeled training examples to achieve the same accuracy as either EM or active learning alone. Keywords: text classification active learning unsupervised learning information retrieval 1 Introduction In many settings for learning text classifiers, obtaining lab...\n\n2-Hop Neighbour:\nUsing Unlabeled Data to Improve Text Classification One key difficulty with text classification learning algorithms is that they require many hand-labeled examples to learn accurately. This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers. By assuming that documents are created by a parametric generative model, Expectation-Maximization (EM) finds local maximum a posteriori models and classifiers from all the data -- labeled and unlabeled. These generative models do not capture all the intricacies of text; however on some domains this technique substantially improves classification accuracy, especially when labeled data are sparse. Two problems arise from this basic approach. First, unlabeled data can hurt performance in domains where the generative modeling assumptions are too strongly violated. 
In this case the assumptions can be made more representative in two ways: by modeling sub-topic class structure, and by modeling super-topic hierarchical class relationships. By doing so, model probability and classification accuracy come into correspondence, allowing unlabeled data to improve classification performance. The second problem is that even with a representative model, the improvements given by unlabeled data do not sufficiently compensate for a paucity of labeled data. Here, limited labeled data provide EM initializations that lead to low-probability models. Performance can be significantly improved by using active learning to select high-quality initializations, and by using alternatives to EM that avoid low-probability local maxima.\n\n2-Hop Neighbour:\nContent-Based Book Recommending Using Learning for Text Categorization Recommender systems improve access to relevant products and information by making personalized suggestions based on previous examples of a user's likes and dislikes. Most existing recommender systems use collaborative filtering methods that base recommendations on other users' preferences. By contrast, content-based methods use information about an item itself to make suggestions. This approach has the advantage of being able to recommend previously unrated items to users with unique interests and to provide explanations for its recommendations. We describe a content-based book recommending system that utilizes information extraction and a machine-learning algorithm for text categorization. Initial experimental results demonstrate that this approach can produce accurate recommendations. KEYWORDS: Recommender systems, information filtering, machine learning, text categorization INTRODUCTION There is a growing interest in recommender systems that suggest music, films, books, and othe...\n\n2-Hop Neighbour:\nCommunity Webs (C-Webs): Technological Assessment and System Architecture this paper, our presentation mainly relies on examples taken from one of the potential C-Web applications, namely C-Web Portals for cultural communities.\n\n2-Hop Neighbour:\nWeb Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Flexible and Scalable Cost-Based Query Planning in Mediators: A Transformational Approach The Internet provides access to a wealth of information. For any given topic or application domain there are a variety of available information sources. However, current systems, such as search engines or topic directories in the World Wide Web, offer only very limited capabilities for locating, combining, and organizing information. Mediators, systems that provide integrated access and database-like query capabilities to information distributed over heterogeneous sources, are critical to realize the full potential of meaningful access to networked information. Query planning, the task of generating a cost-efficient plan that computes a user query from the relevant information sources, is central to mediator systems. However, query planning is a computationally hard problem due to the large number of possible sources and possible orderings on the operations to process the data. Moreover, the choice of sources, data processing operations, and their ordering, strongly affects the plan c...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCost-based Query Scrambling for Initial Delays Remote data access from disparate sources across a wide-area network such as the Internet is problematic due to the unpredictable nature of the communications medium and the lack of knowledge about the load and potential delays at remote sites. Traditional, static, query processing approaches break down in this environment because they are unable to adapt in response to unexpected delays. Query scrambling has been proposed to address this problem. Scrambling modifies query execution plans on-the-fly when delays are encountered during runtime. In its original formulation, scrambling was based on simple heuristics, which although providing good performance in many cases, were also shown to be susceptible to problems resulting from bad scrambling decisions. In this paper we address these shortcomings by investigating ways to exploit query optimization technology to aid in making intelligent scrambling choices. We propose three different approaches to using query optimization for scramblin...\n\n1-Hop Neighbour:\nSimplifying Data Access: The Energy Data Collection (EDC) Project The massive amount of statistical and text data available from government agencies has created a set of daunting challenges to both research and analysis communities. These problems include heterogeneity, size, distribution, and control of terminology. At the Digital Government Research Center we are investigating solutions to these key problems. In this paper we focus on (1) ontological mappings for terminology standardization, (2) data integration across data bases with high speed query processing, and (3) interfaces for query input and presentation of results. This collaboration between researchers from Columbia University and the Information Sciences Institute of the University of Southern California employs technology developed at both locations, in particular the SENSUS ontology, the SIMS multi-database access planner, the LKB automated dictionary and terminology analysis system, and others. 
The pilot application targets gasoline data from the Bureau of Labor Statistics, the Energy Information Administration of the Department of Energy, the Census Bureau, and other government agencies. 1\n\n1-Hop Neighbour:\nFast Approximate Evaluation of OLAP Queries for Integrated Statistical Data We have developed a mediator architecture that integrates statistical information about energy products from several government agencies, such as the Bureau of Labor Statistics, the Energy Information Administration, and the California Energy Commission. Our architecture has a dual mode of operation. First, our system can retrieve live data from databases and web sources from these agencies. This allows the users to obtain completely up-to-date data. However, for complex analytical queries that typically require large amounts of data and processing, live access does not offer the level of interactivity that some users require. Second, our system can warehouse the information from the data sources to allow for complex analytical queries to be executed much more efficiently. However, the data would be only as recent as the last update to the data warehouse. In this paper we describe the architecture and focus on how to perform analytical queries against the data warehouse very efficiently. We present results using a fast wavelet-based technique for progressive evaluation of range-sum queries. This technique allows for returning an approximate result to the query very efficiently and for fast convergence to the exact result. We envision users exploring many complex queries using the very fast approximate results as guidance and only obtaining the exact results for those queries that are deemed of interest. We present experimental results showing the efficiency of both approximate and exact queries. 1\n\n2-Hop Neighbour:\nOn Reconfiguring Query Execution Plans in Distributed Object-Relational DBMS Massive database sizes and growing demands for decision support and data mining result in long-running queries in extensible Object-Relational DBMS, particularly in decision support and data warehousing analysis applications. Parallelization of query evaluation is often required for acceptable performance. Yet queries are frequently processed suboptimally due to (1) only coarse or inaccurate estimates of the query characteristics and database statistics available prior to query evaluation; (2) changes in system configuration and resource availability during query evaluation. In a distributed environment, dynamically reconfiguring query execution plans (QEPs), which adapts QEPs to the environment as well as the query characteristics, is a promising means to significantly improve query evaluation performance. Based on an operator classification, we propose an algorithm to coordinate the steps in a reconfiguration and introduce alternatives for execution context checkpointing and restorin...\n\n2-Hop Neighbour:\nEfficiently Ordering Query Plans for Data Integration interface to a multitude of data sources. Given a user query formulated in this interface, the system translates it into a set of query plans. Each plan is a query formulated over the data sources, and specifies a way to access sources and combine data to answer the user query.\n\n2-Hop Neighbour:\nMaximizing Coverage of Mediated Web Queries Over the Web, mediators are built on large collections of sources to provide integrated access to Web content (e.g., meta-search engines). 
In order to minimize the expense of visiting a large number of sources, mediators need to choose a subset of sources to contact when processing queries. As fewer sources participate in processing a mediated query, the coverage of the query goes down. In this paper, we study this trade-off and develop techniques for mediators to maximize the coverage for their queries while at the same time visiting a subset of their sources. We formalize the problem; study its complexity; propose algorithms to solve it; and analyze the theoretical performance guarantees of the algorithms. We also study the performance of our algorithms through simulation experiments. 1 Introduction Web sources often provide limited information \"coverage.\" For instance, one type of information source is search engines, such as Lycos [27], Northern Light [29] and Yahoo [30]....\n\n2-Hop Neighbour:\nAdaptive Query Processing for Internet Applications As the area of data management for the Internet has gained in popularity, recent work has focused on effectively dealing with unpredictable, dynamic data volumes and transfer rates using adaptive query processing techniques. Important requirements of the Internet domain include: (1) the ability to process XML data as it streams in from the network, in addition to working on locally stored data; (2) dynamic scheduling of operators to adjust to I/O delays and flow rates; (3) sharing and re-use of data across multiple queries, where possible; (4) the ability to output results and later update them. An equally important consideration is the high degree of variability in performance needs for different query processing domains: perhaps an ad-hoc query application should optimize for display of incomplete and partial incremental results, whereas a corporate data integration application may need the best time-to-completion and may have very strict data \"freshness\" guarantees. The goal of...\n\n2-Hop Neighbour:\nEddies: Continuously Adaptive Query Processing In large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. Assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. As a result, traditional static query optimization and execution techniques are ineffective in these environments. In this paper we introduce a query processing mechanism called an eddy, which continuously reorders operators in a query plan as it runs. We characterize the moments of symmetry during which pipelined joins can be easily reordered, and the synchronization barriers that require inputs from different sources to be coordinated. By combining eddies with appropriate join algorithms, we merge the optimization and execution phases of query processing, allowing each tuple to have a flexible ordering of the query operators. This flexibility is controlled by a combination of fluid dynamics and a simple learning algorithm. Our initial implementation demonstrates prom...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Supporting Internet-Scale Multi-Agent Systems ts a model of AgentScape from the agent perspective, that is, the location comprising the middleware and the resources are represented by a location manager agent and resource objects. Calls from an agent to the middleware are modeled by requests to the location manager agent to, for example, create an agent or move an agent. Information about resources residing at the location can be retrieved by binding to the resource objects, which are local distributed objects. These objects can be accessed only within the location they reside, not from outside the location. For development of agent applications, an application programming interface (API) and a runtime system (RTS) are provided, see Fig. 1. The default API and RTS can be extended to provide a higher-level application programming interface with, for example, a model that offers more structure and semantics to the agent application developer. Within AgentScape, management of large-scale agent systems is an important issue, includi' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n1-Hop Neighbour:\nThe Open Agent Architecture: A Framework for Building Distributed Software Systems The Open Agent Architecture (OAA), developed and used for several years at SRI International, makes it possible for software services to be provided through the cooperative efforts of distributed collections of autonomous agents. Communication and cooperation between agents are brokered by one or more facilitators, which are responsible for matching requests, from users and agents, with descriptions of the capabilities of other agents. Thus, it is not generally required that a user or agent know the identities, locations, or number of other agents involved in satisfying a request. OAA is structured so as to minimize the effort involved in creating new agents and \"wrapping\" legacy applications, written in various languages and operating on various platforms; to encourage the reuse of existing agents; and to allow for dynamism and flexibility in the makeup of agent communities. 
Distinguishing features of OAA as compared with related work include extreme flexibility in using facilitator-b...\n\n1-Hop Neighbour:\nAgent Factory: Generative Migration of Mobile Agents in Heterogeneous Environments In most of today's agent systems migration of agents requires homogeneity in the programming language and/or agent platform in which an agent has been designed. In this paper an approach is presented with which heterogeneity is possible: agents can migrate between non-identical platforms, and need not be written in the same language. Instead of migrating the \"code\" (including data and state) of an agent, a blueprint of an agent's functionality is transferred. An agent factory generates new code on the basis of this blueprint. This approach of generative mobility not only has implications for interoperability but also for security, as discussed in this paper.\n\n2-Hop Neighbour:\nA Business Process Agent The architecture of a process agent is a three-layer BDI, hybrid, multi-agent architecture. These process agents are intended to deal with corporate cultural, or political, sensitivities as well as with corporate rules. These agents adapt their behaviour on the basis of the likelihood of plan success, and on estimates of the time, cost and value of choosing a plan. 1 Introduction An intelligent multi-agent system is a society of autonomous cooperating components each of which maintains an ongoing interaction with its environment. Intelligent agents should be autonomous, cooperative and adaptive. A process agent architecture is designed specifically for business process applications. Typically the cost of bringing a business process to its conclusion is substantially due to the cost of the human processing involved. If this is so then a process management system should make its decisions in a thorough and considered way, and should have no reason, based on cost, for not doing s...\n\n2-Hop Neighbour:\nExtending a Multi-Agent System for Genomic Annotation . The explosive growth in genomic (and soon, expression and proteomic) data, exemplified by the Human Genome Project, is a fertile domain for the application of multi-agent information gathering technologies. Furthermore, hundreds of smaller-profile, yet still economically important organisms are being studied that require the efficient and inexpensive automated analysis tools that multiagent approaches can provide. In this paper we give a progress report on the use of the DECAF multi-agent toolkit to build reusable information gathering systems for bioinformatics. We will briefly summarize why bioinformatics is a classic application for information gathering, how DECAF supports it, and recent extensions underway to support new analysis paths for genomic information. 1\n\n2-Hop Neighbour:\nA Logic of BDI Agents with Procedural Knowledge In this paper, we present a new logic for specifying the behaviour of multi-agent systems. In this logic, agents are viewed as BDI systems, in that their state is characterised in terms of beliefs, desires, and intentions: the semantics of the BDI component of the logic are based on the wellknown system of Rao and Georgeff. In addition, agents have available to them a library of plans, representing their `know-how': procedural knowledge about how to achieve their intentions. These plans are, in effect, programs, that specify how a group of agents can work in parallel to achieve certain ends. The logic provides a rich set of constructs for describing the structure and execution of plans. 
Some properties of the logic are investigated, (in particular, those relating to plans), and some comments on future work are presented. 1 Introduction There is currently much international interest in computer systems that go under the banner of intelligent agents [17]. Crudely, an intelligent agent i...\n\n2-Hop Neighbour:\nFacilitating Message Exchange though Middle Agents To utilize services provided by other agents, a requesting agent needs to locate and communicate with these service providers. Specifically, in order to interoperate with the providers, the requesting agent should know: 1) the service provider's interface; 2) the ontology that defines concepts used by the provider; and 3) the agent communication language (ACL) the agent uses so that it can parse and understand the communication. Currently deployed Multi-Agent Systems (MAS) encode the interface description and the ontology within a service provider's capability description (or advertisement) that is registered with a Middle Agent; however, this assumes a common ACL between communicating agents. We demonstrate how agents can communicate with each other using a template-based shallow parsing approach to constructing and decomposing messages, thus relaxing assumptions on the ACLs and message formats used.\n\n2-Hop Neighbour:\nLearning to Notice: Adaptive Models of Human Operators Agent-based technologies have been used for a number of years to model human operators in complex simulated environments. The BDI agent framework has proven to be particularly suited to this sort of modelling, due to its \"natural\" composition of beliefs, goals and plans. However one of the weaknesses of the BDI agent model, and many other human operator models (agent-based or otherwise), is its inability to support agent learning. Human operators naturally adapt their behaviour over time, particularly to avoid repeating mistakes. This paper introduces an enhancement to the BDI framework which is based on a descriptive psychological model of decision making called \"recognition-primed decision making.\" This enhancement allows the development of agents that adapt their behaviour in real-time, in the same manner as a person would, providing more realistic human operator models.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Results and Challenges in Web Search Evaluation A frozen 18.5 million page snapshot of part of the Web has been created to enable and encourage meaningful and reproducible evaluation of Web search systems and techniques. This collection is being used in an evaluation framework within the Text Retrieval Conference (TREC) and will hopefully provide convincing answers to questions such as, \"Can link information result in better rankings?\", \"Do longer queries result in better answers?\", and, \"Do TREC systems work well on Web data?\" The snapshot and associated evaluation methods are described and an invitation is extended to participate. Preliminary results are presented for an effectivess comparison of six TREC systems working on the snapshot collection against five well-known Web search systems working over the current Web. These suggest that the standard of document rankings produced by public Web search engines is by no means state-of-the-art. 1999 Published by Elsevier Science B.V. All rights reserved. Keywords: Evaluation; Search...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web, ” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000 % over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n1-Hop Neighbour:\nA Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ###### retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite dierentfrom the algorithms developed at TREC. This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: nding the web page/site of an entity, #### companies, universities, organizations, individuals, etc. This form of querying is quite prevalentonthe web. 
The results from the TREC algorithms are compared to four commercial web search engines. Results show that for finding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, link-based ranking 1.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nCreating a Semantic web Interface with Virtual Reality Novel initiatives amongst the Internet community such as Internet2 [1] and Qbone [2] are based on the use of high bandwidth and powerful computers. However the experience amongst the majority of Internet users is light-years from these emerging technologies. We describe the construction of a distributed high performance search engine, utilizing advanced threading techniques on a diskless Linux cluster. The resulting Virtual Reality scene is passed to a standard client machine for viewing. This search engine bridges the gap between the Internet of today, and the Internet of the future. Keywords: Internet Searching, High Performance VRML, Visualization. 1.\n\n2-Hop Neighbour:\nChart of Darkness: Mapping a Large Intranet We introduce and define the concept of dark matter on the Web. Dark matter for a person or Web crawler consists of pages that they cannot reach and view, but which another observer can. Dark matter is important to our understanding of the Web in that the portion of the Web any of us can see depends on our viewpoint. Different observers see different overlapping sections of the Web. However, no one can see all of the Web, even if they want to. We categorise the various types of dark matter that exist and how they may be discovered. Formal definitions of what constitutes lightness and darkness on the Web are formulated in terms of reachability. 
Our case study of dark matter within the Australian National University's intranet is reported. We estimate that 87% of the ANU intranet's information is dark to our local search service, and 37% is potentially loadable Web data unreachable to almost every Web user. Finally, we discuss some of the implications of dark matter for estimating the size of the Web and for general Web searching.\n\n2-Hop Neighbour:\nWebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. The architecture and performance of WebSail are also discussed.\n\n2-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.\n\n2-Hop Neighbour:\nDocument Classification as an Internet service: Choosing the best classifier This project investigates some of the issues involved in a new proposal for expanding the scope of the field of Data Mining by providing mining models as services on the Internet. This idea can widely increase the reach and accessibility of Data Mining to common people because one of the primary stumbling blocks in the adoption of mining is the extremely high level of expertise and data resources needed in building a robust mining model. We feel this task should be left to the specialists with access to data and resources, who can provide their most up to date model as a service on the Internet for public use.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Agent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA multi-agent system for advising and monitoring students navigating instructional Web sites A growing community of teachers, at all levels of the educational system, provides course material in the form of hypertext/multimedia documents. In most cases this is done by creating a course Web site. This paper explores the issues related to the design of software systems that aid teachers in monitoring how students use their sites and proactively advise students navigating the sites. In connection to these functions two important topics in current applications of technology to education are discussed. Firstly the definition of a set of criteria allowing the evaluation of the appropriateness of multi-media and hypertext technologies vis vis to classic course support material and in particular textbooks. Secondly the issue of the utility and acceptability of proactive user interfaces such as interface agents or personal assistant agents. A multi agent system capable of advising and monitoring students navigating instructional Web sites is introduced and it is used as a basis for discussion of the above two topics. The system generates and uses a set of indicators evaluating how much use is made of hypertext and multimedia tools as well as indicators of usefulness and cognitive support of the proactive user interface. Keywords: Tutoring systems, Multi agent systems, World Wide Web, Autonomous Interface Agents, Digital Course Material, XML. 1.\n\n1-Hop Neighbour:\nAgentSpeak(L): BDI Agents speak out in a logical computable language Abstract. Belief-Desire-Intention (BDI) agents have been investigated by many researchers from both a theoretical specification perspectiveand a practical design perspective. However, there still remains a large gap between theory and practice. The main reason for this has been the complexity of theorem-proving or modelchecking in these expressive specification logics. Hence, the implemented BDI systems have tended to use the three major attitudes as data structures, rather than as modal operators. In this paper, we provide an alternative formalization of BDI agents by providing an operational and proof-theoretic semantics of a language AgentSpeak(L). 
This language can be viewed as an abstraction of one of the implemented BDI systems (i.e., PRS) and allows agent programs to be written and interpreted in a manner similar to that of horn-clause logic programs. We show how to perform derivations in this logic using a simple example. These derivations can then be used to prove the properties satisfied by BDI agents. 1\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n2-Hop Neighbour:\nExtending a Multi-Agent System for Genomic Annotation . The explosive growth in genomic (and soon, expression and proteomic) data, exemplified by the Human Genome Project, is a fertile domain for the application of multi-agent information gathering technologies. Furthermore, hundreds of smaller-profile, yet still economically important organisms are being studied that require the efficient and inexpensive automated analysis tools that multiagent approaches can provide. In this paper we give a progress report on the use of the DECAF multi-agent toolkit to build reusable information gathering systems for bioinformatics. We will briefly summarize why bioinformatics is a classic application for information gathering, how DECAF supports it, and recent extensions underway to support new analysis paths for genomic information. 1\n\n2-Hop Neighbour:\nVirtual Enterprise Design - BDI Agents vs. Objects Current research identifying architectures for a virtual enterprise has moved from information modelling to role modelling. Thus, a high level of autonomy results from the distribution of responsibilities, capabilities, and knowledge among different business units in the virtual enterprise at the design stage. Current trends tend towards using object-oriented technology as an effective abstract system design and implementation methodology. We argue that applying the software agent paradigm to the virtual enterprise provides various advantages on both the design and operational levels. We further show that the Belief Desire Intention agent architecture has additional abilities of mapping real world business unit autonomy and interaction. We also introduce the Belief Desire Intention agent paradigm capability of facilitating highly flexible (agile) enterprise design and implementation. 1\n\n2-Hop Neighbour:\nReal-Time Scheduling for Distributed Agents Providing an environment for a software agent to execute is very similar to building an operating system for the execution of general purpose applications. 
In the same fashion that an operating system provides a set of services for the execution of a user request, an agent framework provides a similar set of services for the execution of agent actions. Such services include the ability to communicate with other agents, maintaining the current state of an executing agent, and selecting an execution path from a set of possible execution paths. The particular focus of this paper is the study of Soft Real-Time agent scheduling in the context of a framework for the execution of intelligent software agents; a characterization of agent performance; and development of an environment for testing and comparing the performance of agent activities. The agent architecture used for this study, DECAF (Distributed Environment Centered Agent Framework), is a software toolkit for the rapid d...\n\n2-Hop Neighbour:\nA Logical Framework for Multi-Agent Systems and Joint Attitudes We present a logical framework for reasoning about multi-agent systems. This framework uses Giunchiglia et al.'s notion of a logical context to define a methodology for the modular specification of agents and systems of agents. In particular, the suggested methodology possesses important features from the paradigm of object-oriented (OO) design. We are particularly interested in the specification of agent behaviours via BDI theories---i.e., theories of belief, desire and intention. We explore various issues arising from the BDI specification of systems of agents and illustrate how our framework can be used to specify bottom-level agent behaviour via the specification of top-level intentions, or to reason about complex \"emergent behaviour\" by specifying the relationship between simple interacting agents. 1 Introduction The formal specification of autonomous reasoning agents has recently received much attention in the AI community, particular under the paradigm of agent-oriented progr...\n\n2-Hop Neighbour:\nIntelligent agents in virtual enterprises Decreasing innovation cycles, changing market situations as well as growing specialisation in individual market segments demand new ways of economic thinking, increasingly forcing enterprises into cooperations, sometimes even with direct competitors. Presently discussed and designated as the corporate and cooperation model of the future is the so-called virtual enterprise. In this paper, we advocate the use of intelligent agents as a useful metaphor and as a software engineering methodology for the design and the operation of virtual enterprises. We focus on how agents can support the cooperative process of setting up virtual enterprises through the Internet by performing tasks such as presentation, information retrieval and extraction, and the participation in auctions in electronic markets. This paper does not describe completed research; it rather offers a perspective of the high potential of agent-based technology for one of tomorrow's key industrial areas by presenting the main objectives of the new research project AVE", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Keyword Searching and Browsing in Databases using BANKS With the growth of the Web, there has been a rapid increase in the number of users who need to access online databases without having a detailed knowledge of the schema or of query languages; even relatively simple query languages designed for non-experts are too complicated for them. We describe BANKS, a system which enables keyword-based search on relational databases, together with data and schema browsing. BANKS enables users to extract information in a simple manner without any knowledge of the schema or any need for writing complex queries. A user can get information by typing a few keywords, following hyperlinks, and interacting with controls on the displayed results. BANKS models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. Answers to a query are modeled as rooted trees connecting tuples that match individual keywords in the query. Answers are ranked using a notion of proximity coupled with a notion of prestige of nodes based on inlinks, similar to techniques developed for Web search. We present an efficient heuristic algorithm for finding and ranking query results. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBBQ: A Visual Interface for Integrated Browsing and Querying of XML In this paper we present BBQ (Blended Browsing and Querying), a graphic user interface for seamlessly browsing and querying XML data sources. BBQ displays the structure of multiple data sources using a paradigm that resembles drilling-down in Windows' directory structures. BBQ allows queries incorporating one or more of the sources. Queries are constructed in a query-by-example (QBE) manner, where DTDs play the role of schema. The queries are arbitrary conjunctive queries with GROUPBY, and their results can be subsequently used and refined. To support query refinement, BBQ introduces virtual result views: standalone virtual data sources that (i) are constructed by user queries, from elements in other data sources, and (ii) can be used in subsequent queries as first-class data sources themselves. Furthermore, BBQ allows users to query data sources with loose or incomplete schema, and can augment such schema with a DTD inference mechanism.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. 
This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nYarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...\n\n2-Hop Neighbour:\nHigh-Performance Web Crawling SRC’s charter is to advance the state of the art in computer systems by doing basic and applied research in support of our company’s business objectives. Our interests and projects span scalable systems (including hardware, networking, distributed systems, and programming-language technology), the Internet (including the Web, e-commerce, and information retrieval), and human/computer interaction (including user-interface technology, computer-based appliances, and mobile computing). SRC was established in 1984 by Digital Equipment Corporation. We test the value of our ideas by building hardware and software prototypes and assessing their utility in realistic settings. Interesting systems are too complex to be evaluated solely in the abstract; practical use enables us to investigate their properties in depth. This experience is useful in the short term in refining our designs and invaluable in the long term in advancing our knowledge. Most of the major advances in information systems have come through this approach, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical character. Some of\n\n2-Hop Neighbour:\nRank Aggregation Revisited The rank aggregation problem is to combine many different rank orderings on the same set of candidates, or alternatives, in order to obtain a \"better\" ordering. Rank aggregation has been studied extensively in the context of social choice theory, where several \"voting paradoxes\" have been discovered. The problem\n\n2-Hop Neighbour:\nText-Based Content Search and Retrieval in ad hoc P2P Communities We consider the problem of content search and retrieval in peer-to-peer (P2P) communities. 
P2P computing is a potentially powerful model for information sharing between ad hoc groups of users because of its low cost of entry and natural model for resource scaling with community size. As P2P communities grow in size, however, locating information distributed across the large number of peers becomes problematic. We present a distributed text-based content search and retrieval algorithm to address this problem. Our algorithm is based on a state-of-the-art text-based document ranking algorithm: the vector-space model instantiated with the TFxIDF ranking rule. A naive application of TFxIDF would require each peer in a community to collect an inverted index of the entire community. This is costly both in terms of bandwidth and storage. Instead, we show how TFxIDF can be approximated given compact summaries of peers' local inverted indexes. We make three contributions: (a) we show how the TFxIDF rule can be adapted to use the index summaries, (b) we provide a heuristic for adaptively determining the set of peers that should be contacted for a query, and (c) we show that our algorithm tracks TFxIDF's performance very closely, regardless of how documents are distributed throughout the community. Furthermore, our algorithm preserves the main flavor of TFxIDF by retrieving close to the same set of documents for any given query.\n\n2-Hop Neighbour:\nWeb Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Web Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n1-Hop Neighbour:\nWorkshop on Intelligent Information Integration (III'99)\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. 
Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nA Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ad-hoc retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite different from the algorithms developed at TREC. This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: finding the web page/site of an entity, e.g. companies, universities, organizations, individuals, etc. This form of querying is quite prevalent on the web. The results from the TREC algorithms are compared to four commercial web search engines. Results show that for finding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, link-based ranking 1.\n\n2-Hop Neighbour:\nSynchronizing a database to Improve Freshness In this paper we study how to refresh a local copy of an autonomous data source to maintain the copy up-to-date. As the size of the data grows, it becomes more difficult to maintain the copy \"fresh,\" making it crucial to synchronize the copy effectively. We define two freshness metrics, change models of the underlying data, and synchronization policies. We analytically study how effective the various policies are. We also experimentally verify our analysis, based on data collected from 270 web sites for more than 4 months, and we show that our new policy improves the \"freshness\" very significantly compared to current policies in use.\n\n2-Hop Neighbour:\nExploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains. This sort of search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is being used to extract concepts which impose a directory-like structure on the documents similar to those found in classified directories. 
Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.\n\n2-Hop Neighbour:\nSearching the World Wide Web in Low-Connectivity Communities The Internet has the potential to deliver information to communities around the world that have no other information resources. High telephone and ISP fees- in combination with lowbandwidth connections- make it unaffordable for many people to browse the Web online. We are developing the TEK system to enable users to search the Web using only email. TEK stands for &quot;Time Equals Knowledge, &quot; since the user exchanges time (waiting for email) for knowledge. The system contains three components: 1) the client, which provides a graphical interface for the end user, 2) the server, which performs the searches from MIT, and 3) a reliable email-based communication protocol between the client and the server. The TEK search engine differs from others in that it is designed to return low-bandwidth results, which are achieved by special filtering, analysis, and compression on the server side. We believe that TEK will bring Web resources to people who otherwise would not be able to afford them.\n\n2-Hop Neighbour:\nAn Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Meeting Plan Recognition Requirements for Real-Time Air-Mission Simulations In this paper, the potential synergy between instancebased pattern recognition and means-end (possible world) reasoning is explored, for supporting plan recognition in multi-aeroplane air-mission simulations. A combination of graph matching, induction, probabilistic principles and dynamic programming are applied to traces of aeroplane behaviour during flight manoeuvres. These satisfy the real-time constraints of the simulation. This enables the agents to recognise what other agents are doing and to abstract about their activity, at the instrumentation level. A means-end-reasoning model is then used to deliberate about and invoke standard operating procedures, based on recognised activity. The reasoning model constrains the recognition process by framing queries according to what a pilot would expect during the execution of the current plan(s). Results from experiments involving the dMARS procedural reasoning system and the CLARET pattern matching and induction system are described for ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nTowards Flexible Multi-Agent Decision-Making Under Time Pressure Abstract — Autonomous agents need considerable computational resources to perform rational decision-making. These demands are even more severe when other agents are present in the environment. In these settings, the quality of an agent’s alternative behaviors depends not only on the state of the environment, but also on the actions of other agents, which in turn depend on the others ’ beliefs about the world, their preferences, and further on the other agents’ beliefs about others, and so on. The complexity becomes prohibitive when large number of agents are present and when decisions have to be made under time pressure. In this paper we investigate strategies intended to tame the computational burden by using off-line computation in conjunction with on-line reasoning. We investigate two approaches. First, we use rules compiled off-line to constrain alternative actions considered during on-line reasoning. This method minimizes overhead, but is not sensitive to changes in realtime demands of the situation at hand. Second, we use performance profiles computed off-line and the notion of urgency (i.e., the value of time) computed on-line to choose the amount of information to be included during on-line deliberation. This method can adjust to various levels of real-time demands, but incurs some overhead associated with iterative deepening. We test our framework with experiments in a simulated anti-air defense domain. The experiments show that both procedures are effective in reducing computation time while offering good performance under time pressure.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. 
The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n1-Hop Neighbour:\nPlan Recognition in Military Simulation: Incorporating Machine Learning with Intelligent Agents A view of plan recognition shaped by both operational and computational requirements is presented. Operational requirements governing the level of fidelity and nature of the reasoning process combine with computational requirements including performance speed and software engineering effort to constrain the types of solutions available to the software developer. By adopting machine learning to provide spatio-temporal recognition of environmental events and relationships, an agent can be provided with a mechanism for mental state recognition qualitatively different from previous research. An architecture for integrating machine learning into a BDI agent is suggested and the results from the development of a prototype provide proof-of-concept. 1 Introduction This paper proposes machine-learning as a tool to assist in the construction of agents capable of plan recognition. This paper focuses on the beliefs-desires-intentions (BDI) class of agents. These agents have been ...\n\n2-Hop Neighbour:\nDynamic Reconfiguration in Collaborative Problem Solving In this article we will describe our research efforts in coping with a trade-off that can be often found in the control and optimization of todays business processes. Though centralized control may achieve nearto -optimum results in optimizing the system behavior, there are usually social, technical and security restrictions on applying centralized control. Distributed control on the other hand may cope with these restrictions but also entails sub-optimality and communicational overhead. Our concept of composable agents tries to allow a dynamic and fluent transition between globalization and localization in business process control by adapting to the current real-world system structure. We are currently evaluating this concept in the framework of patient flow control at Charit'e Berlin. Introduction Research in Distributed Artificial Intelligence (DAI, (Bond & Gasser 1988)) has been traditionally divided into Distributed Problem Solving (DPS) and Multi Agent Systems (MAS). However, r...\n\n2-Hop Neighbour:\nIntelligent agents in virtual enterprises Decreasing innovation cycles, changing market situations as well as growing specialisation in individual market segments demand new ways of economic thinking, increasingly forcing enterprises into cooperations, sometimes even with direct competitors. Presently discussed and designated as the corporate and cooperation model of the future is the so-called virtual enterprise. In this paper, we advocate the use of intelligent agents as a useful metaphor and as a software engineering methodology for the design and the operation of virtual enterprises. 
We focus on how agents can support the cooperative process of setting up virtual enterprises through the Internet by performing tasks such as presentation, information retrieval and extraction, and the participation in auctions in electronic markets. This paper does not describe completed research; it rather offers a perspective of the high potential of agent-based technology for one of tomorrow's key industrial areas by presenting the main objectives of the new research project AVE\n\n2-Hop Neighbour:\nSpecification of Heterogeneous Agent Architectures Agent-based software applications need to incorporate agents having heterogeneous architectures in order for each agent to optimally perform its task. HEMASL is a simple meta-language used to specify intelligent agents and multi-agent systems when different and heterogeneous agent architectures must be used. HEMASL specifications are based on an agent model that abstracts several existing agent architectures. The paper describes some of the features of the language, presents examples of its use and outlines its operational semantics. We argue that adding HEMASL to CaseLP, a specification and prototyping environment for MAS, can enhance its flexibility and usability. 1 Introduction Intelligent agents and multi-agent systems (MAS) are increasingly being acknowledged as the \"new\" modelling techniques to be used to engineer complex and distributed software applications [17, 9]. Agent-based software development is concerned with the realization of software applications modelled ...\n\n2-Hop Neighbour:\nAn Agent-Based Approach to Distributed Simulation, Distributed Prototyping and Software Integration Due to nowadays huge availability of data and software, a software developer must be able not only to invent good algorithms and implement them efficiently, but also to assemble existing components to create, timely and economically, a new application. Prototyping is a software engineering paradigm particularly suitable for the compositional approach to software development. A working prototype embedding the heterogeneous software which will be used in the final application proves useful for at least two reasons: (i) the prototype is definitely closer to the final application; (ii) the re-usability of the legacy software can be evaluated before the final application is built. The distribution of the software to be integrated within the prototype must be taken into account, as well as the distribution of the prototype execution. This would help to gain in efficiency and closeness to the final application. The aim of this thesis is to exploit the \"multi-agent system\" abstracti...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Textual Case-Based Reasoning Framework for Knowledge Management Applications Knowledge management (KM) systems manipulate organizational knowledge by storing and redistributing corporate memories that are acquired from the organization's members. In this paper, we introduce a textual casebased reasoning (TCBR) framework for KM systems that manipulates organizational knowledge embedded in artifacts (e.g., best practices, alerts, lessons learned). The TCBR approach acquires knowledge from human users (via knowledge elicitation) and from text documents (via knowledge extraction) using template-based information extraction methods, a subset of natural language, and a domain ontology. Organizational knowledge is stored in a case base and is distributed in the context of targeted processes (i.e., within external distribution systems). The knowledge artifacts in the case base have to be translated into the format of the external distribution systems. A domain ontology supports knowledge elicitation and extraction, storage of knowledge artifacts in a case base, and artifact translation.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Role of Information Extraction for Textual CBR Abstract. The benefits of CBR methods in domains where cases are text depend on the underlying text representation. Today, most TCBR approaches are limited to the degree that they are based on efficient, but weak IR methods. These do not allow for reasoning about the similarities between cases, which is mandatory for many CBR tasks beyond text retrieval, including adaptation or argumentation. In order to carry out more advanced CBR that compares complex cases in terms of abstract indexes, NLP methods are required to derive a better case representation. This paper discusses how state-of-the-art NLP/IE methods might be used for automatically extracting relevant factual information, preserving information captured in text structure and ascertaining negation. It also presents our ongoing research on automatically deriving abstract indexing concepts from legal case texts. We report progress toward integrating IE techniques and ML for generalizing from case texts to our CBR case representation. 1\n\n2-Hop Neighbour:\nMaintaining Unstructured Case Bases With the dramatic proliferation of case based reasoning systems in commercial applications, many case bases are now becoming legacy systems. They represent a significant portion of an organization's assets, but they are large and difficult to maintain. One of the contributing factors is that these case bases are often large and yet unstructured; they are represented in natural language text. Adding to the complexity is the fact that the case bases are often authored and updated by different people from a variety ofknowledge sources, making it highly likely for a case base to contain redundant and inconsistent knowledge. In this paper, we present methods and a system for maintaining large and unstructured case bases. We focus on two difficult problems in case-base maintenance: redundancy and inconsistency detection. These two problems are particularly pervasive when one deals with an unstructured case base. We will discuss both algorithms and a system for solving these problems. 
As the ability to contain the knowledge acquisition problem is of paramount importance, our methods allow one to express relevant domain expertise for detecting both redundancy and inconsistency naturally and effortlessly. Empirical evaluations of the system prove the effectiveness of the methods in several large domains.\n\n2-Hop Neighbour:\nSupporting Dialogue Inferencing in Conversational Case-Based Reasoning . Dialogue inferencing is the knowledge-intensive process of inferring aspects of a user's problem from its partial description. Conversational case-based reasoning (CCBR) systems, which interactively and incrementally elicit a user's problem description, suffer from poor retrieval efficiency (i.e., they prompt the user with questions that the user has already implicitly answered) unless they perform dialogue inferencing. The standard method for dialogue inferencing in CCBR systems requires library designers to supply explicit inferencing rules. This approach is problematic (e.g., maintenance is difficult). We introduce an alternative approach in which the CCBR system guides the library designer in building a domain model. This model and the partial problem description are then given to a query retrieval system (PARKA-DB) to infer any implied answers during a conversation. In an initial empirical evaluation in the NaCoDAE CCBR tool, our approach improved retrieval efficiency without sa...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Rule Discovery From Time Series We consider the problem of finding rules relating patterns in a time series to other patterns in that series, or patterns in one series to patterns in another series. A simple example is a rule such as \"a period of low telephone call activity is usually followed by a sharp rise in call volume\". Examples of rules relating two or more time series are \"if the Microsoft stock price goes up and Intel falls, then IBM goes up the next day,\" and \"if Microsoft goes up strongly for one day, then declines strongly on the next day, and on the same days Intel stays about level, then IBM stays about level.\" Our emphasis is in the discovery of local patterns in multivariate time series, in contrast to traditional time series analysis which largely focuses on global models. Thus, we search for rules whose conditions refer to patterns in time series. However, we do not want to define beforehand which patterns are to be used; rather, we want the patterns to be formed from the data in t...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nIdentifying Distinctive Subsequences in Multivariate Time Series by Clustering Most time series comparison algorithms attempt to discover what the members of a set of time series have in common. We investigate a different problem, determining what distinguishes time series in that set from other time series obtained from the same source. In both cases the goal is to identify shared patterns, though in the latter case those patterns must be distinctive as well. An efficient incremental algorithm for identifying distinctive subsequences in multivariate, real-valued time series is described and evaluated with data from two very different sources: the response of a set of bandpass filters to human speech and the sensors of a mobile robot. 1 Introduction Given two or more sequences of discrete tokens, a dynamic programming algorithm exists for finding the longest common subsequence they share (Cormen, Leiserson, & Rivest 1990). This basic algorithm has been adapted in various ways to find patterns shared by real-valued time series as well (Kruskall & Sankoff 1983). ...\n\n1-Hop Neighbour:\nLearning Comprehensible Descriptions of Multivariate Time Series Supervised classification is one of the most active areas of machine learning research. Most work has focused on classification in static domains, where an instantaneous snapshot of attributes is meaningful. In many domains, attributes are not static; in fact, it is the way they vary temporally that can make classification possible. Examples of such domains include speech recognition, gesture recognition and electrocardiograph classification. While it is possible to use ad-hoc, domain-specific techniques for \"attening\" the time series to a learner-friendly representation, this fails to take into account both the special problems and special heuristics applicable to temporal data and often results in unreadable concept descriptions. Though traditional time series techniques can sometimes produce accurate classi ers, few can provide comprehensible descriptions. We propose a general architecture for classification and description of multivariate time series. 
It employs event primitives to ana...\n\n2-Hop Neighbour:\nBoosting Interval-Based Literals: Variable Length and Early Classification In previous works, a system for supervised time series classification has been presented. It is based on boosting very simple classifiers: only one literal. The used predicates are based on temporal intervals. There are two types of predicates: i) relative predicates, such as "increases" and "stays", and ii) region predicates, such as "always" and "sometime", which operate over regions in the domain of the variable.\n\n2-Hop Neighbour:\nTime Series Classification by Boosting Interval Based Literals A supervised classification method for temporal series, even multivariate, is presented. It is based on boosting very simple classifiers: clauses with one literal in the body. The background predicates are based on temporal intervals. Two types of predicates are used: i) relative predicates, such as "increases" and "stays", and ii) region predicates, such as "always" and "sometime", which operate over regions in the dominion of the variable. Experiments on different data sets, several of them obtained from the UCI repositories, show that the proposed method is highly competitive with previous approaches. Keywords: time series classification, interval based literals, boosting, machine learning. 1", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Mining the Web to Create Minority Language Corpora The Web is a valuable source of language specific resources but the process of collecting, organizing and utilizing these resources is difficult. We describe CorpusBuilder, an approach for automatically generating Web-search queries for collecting documents in a minority language. It differs from pseudo-relevance feedback in that retrieved documents are labeled by an automatic language classifier as relevant or irrelevant, and this feedback is used to generate new queries. We experiment with various query-generation methods and query-lengths to find inclusion/exclusion terms that are helpful for retrieving documents in the target language and find that using odds-ratio scores calculated over the documents acquired so far was one of the most consistently accurate query-generation methods. We also describe experiments using a handful of words elicited from a user instead of initial documents and show that the methods perform similarly. Experiments applying the same approach to multiple languages are also presented showing that our approach generalizes to a variety of languages. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMining the Web for Bilingual Text STRAND (Resnik, 1998) is a language- independent system for automatic discovery of text in parallel translation on the World Wide Web. This paper extends the preliminary STRAND results by adding automatic language identification, scaling up by orders of magnitude, and formally ewluating performance. The most recent end-product is an automaticaJly acquired parallel corpus comprising 2491 English-French document pairs, approximately 1.5 million words per language.\n\n1-Hop Neighbour:\nDocument Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. y Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...\n\n1-Hop Neighbour:\nAutomatic Discovery of Language Models for Text Databases The proliferation of text databases within large organizations and on the Internet makes it difficult for a person to know which databases to search. Given language models that describe the contents of each database, a database selection algorithm such as GlOSS can provide assistance by automatically selecting appropriate databases for an information need. 
Current practice is that each database provides its language model upon request, but this cooperative approach has important limitations. This paper demonstrates that cooperation is not required. Instead, the database selection service can construct its own language models by sampling database contents via the normal process of running queries and retrieving documents. Although random sampling is not possible, it can be approximated with carefully selected queries. This sampling approach avoids the limitations that characterize the cooperative approach, and also enables additional capabilities. Experimental results demonstrate th...\n\n2-Hop Neighbour:\nCo-clustering documents and words using Bipartite Spectral Graph Partitioning Both document clustering and word clustering are important and well-studied problems. By using the vector space model, a document collection may be represented as a word-document matrix. In this paper, we present the novel idea of modeling the document collection as a bipartite graph between documents and words. Using this model, we pose the clustering problem as a graph partitioning problem and give a new spectral algorithm that simultaneously yields a clustering of documents and words. This co-clustering algorithm uses the second left and right singular vectors of an appropriately scaled word-document matrix to yield good bipartitionings. In fact, it can be shown that these singular vectors give a real relaxation to the optimal solution of the graph bipartitioning problem. We present several experimental results to verify that the resulting co-clustering algorithm works well in practice and is robust in the presence of noise.\n\n2-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroid-based scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nConcept Decompositions for Large Sparse Text Data using Clustering Abstract. Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors–a few thousand dimensions and a sparsity of 95 to 99 % is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm.
As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain “fractal-like ” and “self-similar ” behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means\n\n2-Hop Neighbour:\nParallel Strands: A Preliminary Investigation into Mining the Web for Bilingual Text . Parallel corpora are a valuable resource for machine translation, but at present their availability and utility is limited by genreand domain-specificity, licensing restrictions, and the basic difficulty of locating parallel texts in all but the most dominant of the world's languages. A parallel corpus resource not yet explored is the World Wide Web, which hosts an abundance of pages in parallel translation, offering a potential solution to some of these problems and unique opportunities of its own. This paper presents the necessary first step in that exploration: a method for automatically finding parallel translated documents on the Web. The technique is conceptually simple, fully language independent, and scalable, and preliminary evaluation results indicate that the method may be accurate enough to apply without human intervention. 1 Introduction In recent years large parallel corpora have taken on an important role as resources in machine translation and multilingual natural la...\n\n2-Hop Neighbour:\nProbe, Count, and Classify: Categorizing Hidden-Web Databases The contents of many valuable web-accessible databases are only accessible through search interfaces and are hence invisible to traditional web \"crawlers.\" Recent studies have estimated the size of this \"hidden web\" to be 500 billion pages, while the size of the \"crawlable\" web is only an estimated two billion pages. Recently, commercial web sites have started to manually organize web-accessible databases into Yahoo!-like hierarchical classification schemes. In this paper, we introduce a method for automating this classification process by using a small number of query probes. To classify a database, our algorithm does not retrieve or inspect any documents or pages from the database, but rather just exploits the number of matches that each query probe generates at the database in question. We have conducted an extensive experimental evaluation of our technique over collections of real documents, including over one hundred web-accessible databases. Our experiments show that our system has low overhead and achieves high classification accuracy across a variety of databases. 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human -rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. 
Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nScaling Personalized Web Search Recent web search techniques augment traditional text matching with a global notion of “importance ” based on the linkage structure of the web, such as in Google’s PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance—for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques. 1\n\n2-Hop Neighbour:\nGenerating a Topically Focused VirtualReality Internet Surveys highlight that Internet users are frequently frustrated by failing to locate useful information, and by difficulty in browsing anarchically linked web-structures. We present a new Internet browsing application (called VR-net) that addresses these problems. It first identifies semantic domains consisting of tightly interconnected web-page groupings. The second part populates a 3D virtual world with these information sources, representing all relevant pages plus appropriate structural relations. Users can then easily browse through around a semantically focused virtual library. 1 Introduction The Internet is a probably the most significant global information resource ever created, allowing access to an almost unlimited amount of information. In this paper we describe two inter-related difficulties suffered by Internet users, and their combined influence on web use. We then introduce an integrated \"search and browse\" solution tool that directly tackles both issues. We also examin...\n\n2-Hop Neighbour:\nTheseus: Categorization by Context Introduction The traditional approach to document categorization is categorization by content, since information for categorizing a document is extracted from the document itself. In a hypertext environment like the Web, the structure of documents and the link topology can be exploited to perform what we call categorization by context [Attardi 98]: the context surrounding a link in an HTML document is used for categorizing the document referred by the link. Categorization by context is capable of dealing also with multimedia material, since it does not rely on the ability to analyze the content of documents. Categorization by context leverages on the categorization activity implicitly performed when someone places or refers to a document on the Web. By focusing the analysis to the documents used by a group of people, one can build a catalogue tuned to the need of that group. 
Categorization by context is based on the following assumptions: 1\n\n2-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs)) highlighted. A survey of the existing literature on "soft web mining" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing "soft web mining" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nChart of Darkness: Mapping a Large Intranet We introduce and define the concept of dark matter on the Web. Dark matter for a person or Web crawler consists of pages that they cannot reach and view, but which another observer can. Dark matter is important to our understanding of the Web in that the portion of the Web any of us can see depends on our viewpoint. Different observers see different overlapping sections of the Web. However, no one can see all of the Web, even if they want to. We categorise the various types of dark matter that exist and how they may be discovered. Formal definitions of what constitutes lightness and darkness on the Web are formulated in terms of reachability. Our case study of dark matter within the Australian National University's intranet is reported. We estimate that 87% of the ANU intranet's information is dark to our local search service, and 37% is potentially loadable Web data unreachable to almost every Web user. Finally, we discuss some of the implications of dark matter for estimating the size of the Web and for general Web searching.\n\n2-Hop Neighbour:\nSearching the World Wide Web in Low-Connectivity Communities The Internet has the potential to deliver information to communities around the world that have no other information resources. High telephone and ISP fees - in combination with low-bandwidth connections - make it unaffordable for many people to browse the Web online. We are developing the TEK system to enable users to search the Web using only email. TEK stands for "Time Equals Knowledge," since the user exchanges time (waiting for email) for knowledge. The system contains three components: 1) the client, which provides a graphical interface for the end user, 2) the server, which performs the searches from MIT, and 3) a reliable email-based communication protocol between the client and the server. The TEK search engine differs from others in that it is designed to return low-bandwidth results, which are achieved by special filtering, analysis, and compression on the server side. We believe that TEK will bring Web resources to people who otherwise would not be able to afford them.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Information Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both "citing" and "cited" publications can be followed. In addition to simple browsing and keyword searches, the agent ...\n\n1-Hop Neighbour:\nInformation Retrieval on the Web: Selected Topics In this paper we review studies on the growth of the Internet and technologies which are useful for information search and retrieval on the Web. In the first section, we present data on the Internet from several different sources, e.g., current as well as projected number of users, hosts and Web sites. Although the numerical figures vary, the overall trends cited by the sources are consistent and point to exponential growth during the coming decade. And Internet users are increasingly using search engines and search services to find specific information of interest. However, users are not satisfied with the performance of the current generation of search engines; the slow speed of retrieval, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. The main body of our paper focuses on linear algebraic models and techniques for solving these problems.
keywords: clustering, indexing, information retrieval, Internet, late...\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity analysis. 1\n\n2-Hop Neighbour:\nRank Aggregation Methods for the Web We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can effectively combat "spam," a serious problem in Web searches. Experiments show that our methods are simple, efficient, and effective. Keywords: rank aggregation, ranking functions, metasearch, multi-word queries, spam 1.\n\n2-Hop Neighbour:\nTowards Web-Scale Web Archeology Web-scale Web research is difficult. Information on the Web is vast in quantity, unorganized and uncatalogued, and available only over a network with varying reliability. Thus, Web data is difficult to collect, to store, and to manipulate efficiently. Despite these difficulties, we believe performing Web research at Web-scale is important. We have built a suite of tools that allow us to experiment on collections that are an order of magnitude or more larger than are typically cited in the literature. Two key components of our current tool suite are a fast, extensible Web crawler and a highly tuned, in-memory database of connectivity information. A Web page repository that supports easy access to and storage for billions of documents would allow us to study larger data sets and to study how the Web evolves over time.\n\n2-Hop Neighbour:\nTopical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.
1 Introduction Most web pages are linked to others with related content...\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nThe Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics is mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Detection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n1-Hop Neighbour:\nExperiences with Selecting Search Engines Using Metasearch Search engines are among the most useful and high profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve and how to use them. This paper describes and evaluates SavvySearch, a meta-search engine designed to intelligently select and interface with multiple remote search engines. The primary meta-search issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired meta-index approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the meta-index approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. 
1 Introduction Search engines are powerful tools for assisting the otherwise unmanageable task of navigating the rapidly ex...\n\n2-Hop Neighbour:\nConcept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.\n\n2-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n2-Hop Neighbour:\nVisual Information Retrieval from Large Distributed On-line Repositories ion --- VIR systems differ in the level of abstraction in which content is indexed. For example, images may be indexed at various levels, such as at the feature-level (e.g., color, texture, and shape), object-level (e.g., moving foreground object), syntax-level (e.g., video shot), and semantic-level (e.g., image subject), and so forth. Most automatic VIR systems aim at lowlevel features, while the high-level indexes are usually generated manually. Interaction among different levels is an exciting but unsolved issue. . Generality --- VIR systems differ in their specificity of the domain of visual information. For example, customized feature sets can be developed to incorporate specific domain knowledge, such as those in medical and remote-sensing applications. Other, more general VIR systems aim at indexing unconstrained visual information such as that on the Internet. . Content Collection --- VIR systems differ in the methods in which new visual information is ad...\n\n2-Hop Neighbour:\nA Content-Based Image Meta-Search Engine using Relevance Feedback Search engines are the most powerful resources for finding information on the rapidly expanding World-Wide Web. 
Finding the desired search engines and learning how to use them, however, can be very time consuming. Metasearch engines, which integrate a group of such search tools, enable users to access information across the world in a transparent and more efficient manner. The recent emergence of visual information retrieval (VIR) systems on the Web is leading to the same efficiency problem. This paper describes MetaSEEk, a meta-search engine used for retrieving images based on their visual content on the Web. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. MetaSEEk has been developed to explore the issues involved in querying large, distributed, on-line visual information system sources. We compare MetaSEE...\n\n2-Hop Neighbour:\nServer Selection on the World Wide Web We evaluate server selection methods in a Web environment, modeling a digital library which makes use of existing Web search servers rather than building its own index. The evaluation framework portrays the Web realistically in several ways. Its search servers index real Web documents, are of various sizes, cover different topic areas and employ different retrieval methods. Selection is based on statistics extracted from the results of probe queries submitted to each server. We evaluate published selection methods and a new method for enhancing selection based on expected search server effectiveness. Results show CORI to be the most effective of three published selection methods. CORI selection steadily degrades with fewer probe queries, causing a drop in early precision of as much as 0.05 (one relevant document out of 20). Modifying CORI selection based on an estimation of expected effectiveness disappointingly yields no significant improvement in effectiveness. However, modifying COR...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Background Readings for Collection Synthesis' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach toweb searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We haveimplemented both algorithms and measured their runtime performance. To evaluate the e ectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \\What's Related \" service [12]. Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. 
Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nInCommonSense - Rethinking Web Search Results The World Wide Web is a rich annotation system which allows people to relate to documents and sites from different perspectives. People describe, comment, relate or mock other Web pages in the context of their document. This richness is currently not reflected in snippets presented by Web search engines, where a search result is represented by the text found in the Web document alone. This paper proposes a new method for representing documents in Web search engines' results. This method is based on recent trends in search engine technology and provides descriptions of the retrieved documents, assembled from people's commentary and annotations on the Web. This paper suggests a new way for automatically retrieving and reusing people's annotations on the Web, incorporating these annotations into a search engine for creating a hybrid directory-search-engine, allowing for both automatic retrieval and on-the-fly human authored summaries. I. INTRODUCTION It is common knowledge that many use...\n\n2-Hop Neighbour:\nMining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...\n\n2-Hop Neighbour:\nJoins that Generalize: Text Classification Using WHIRL WHIRL is an extension of relational databases that can perform \"soft joins\" based on the similarity of textual identifiers; these soft joins extend the traditional operation of joining tables based on the equivalence of atomic values. This paper evaluates WHIRL on a number of inductive classification tasks using data from the World Wide Web. We show that although WHIRL is designed for more general similaritybased reasoning tasks, it is competitive with mature inductive classification systems on these classification tasks. 
In particular, WHIRL generally achieves lower generalization error than C4.5, RIPPER, and several nearest-neighbor methods. WHIRL is also fast---up to 500 times faster than C4.5 on some benchmark problems. We also show that WHIRL can be efficiently used to select from a large pool of unlabeled items those that can be classified correctly with high confidence. Introduction Consider the problem of exploratory analysis of data obtained from the Internet. Assuming that o...\n\n2-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nWorld Wide Web Information Retrieval Using Web Connectivity Information Gathering, processing and distributing information from the World Wide Web will be a vital technology for the next century. Web search techniques have played a critical role in the development of information systems. Due to the diverse nature of web documents, traditional search techniques must be improved. Hyperlink structure based methods have proved to be powerful ways of exploring the relationships between web documents. In this project, a prototype web search engine was developed to exploit the link structure of web documents, based on the use of the Companion algorithm. The prototype consists of a web spider, local database, and search software. The system was written using the Java programming language. Our spider crawls and downloads web pages using Lynx, then saves the hyperlinks into an Oracle database. JDBC is used to implement the database processing. Search software makes a vicinity graph for the query URL and returns the most related pages after calculating the hub and authority weights. Finally, HTML web pages provide user interfaces and communicate with CGI using the Perl language. iii ACKNOWLEDGMENTS The author would like to express thanks to all of the members of his M.S. committee for their useful comments on the thesis, assistance in scheduling the defense date and kind help during the final defense period. The author would like to express his deepest appreciation to Dr. Wen-Chen Hu, his thesis mentor, for the depth of the training and the appropriate guidance he has provided. The author would also like to acknowledge the Department of Computer Science and Software Engineering of Auburn University for financial support. Finally, thanks especially go to the author's wife Qifang, his son, Alex, and his father and mother for their support and love. ...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Logical Case Memory Systems: Foundations And Learning Issues The focus of this paper is on the introduction of a quite general type of case-based reasoning systems called logical case memory systems. The development of the underlying concepts has been driven by investigations in certain problems of case-based learning. Therefore, the present development of the target concepts is accompanied by an in-depth discussion of related learning problems. Logical case memory systems provide some formal framework for the investigation and for the application of structural similarity concepts. Those concepts have some crucial advantage over traditional numerical similarity concepts: The result of determining a new case's similarity to some formerly experienced case can be directly taken as a basis for performing case adaptation. Essentially, every logical case memory system consists of two constituents, some partially ordered case base and some partially ordered set of predicates. Cases are terms, in a logical sense. Given some problem case, every predicat...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Necessity Of User Guidance In Case-Based Knowledge Acquisition The intention of the present paper is to justify both theoretically and experimentally that user guidance is inevitable in case-based knowledge acquisition. The methodology of our approach is simple: We choose some paradigmatic idea of case-based learning which can be very briefly expressed as follows: Given any CBR system, apply it. Whenever it works sucessfully, do not change it. Whenever it fails on some input case, add this experience to the case base. Don't do anything else. Then, we perform a number of knowledge acquisition experiments. They clearly exhibit essential limitations of knowledge acquisition from randomly chosen cases. As a consequence, we develop scenarios of user guidance. Based on these theoretical concepts, we prove a few theoretical results characterizing the power of our approach. Next, we perform a new series of more constrained results which support our theoretical investigations. This paper is based on more than 1 000 000 runs of case-based knowledge acquisi...\n\n1-Hop Neighbour:\nOn Case-Based Learnability of Languages Case-based reasoning is deemed an important technology to alleviate the bottleneck of knowledge acquisition in Artificial Intelligence (AI). In case-based reasoning, knowledge is represented in the form of particular cases with an appropriate similarity measure rather than any form of rules. The case-based reasoning paradigm adopts the view that an AI system is dynamically changing during its life-cycle which immediately leads to learning considerations. Within the present paper, we investigate the problem of case-based learning of indexable classes of formal languages. Prior to learning considerations, we study the problem of case-based representability and show that every indexable class is case-based representable with respect to a fixed similarity measure. Next, we investigate several models of case-based learning and systematically analyze their strengths as well as their limitations. 
Finally, the general approach to case-based learnability of indexable classes of form...\n\n1-Hop Neighbour:\nExtended Experimental Explorations Of The Necessity Of User Guidance In Case-Based Learning This is an extended report focussing on experimental results to explore the necessity of user guidance in case-based knowledge acquisition. It is covering a collection of theoretical investigations as well. The methodology of our approach is quite simple: We choose a well-understood area which is tailored to case-based knowledge acquisition. Furthermore, we choose a prototypical case-based learning algorithm which is obviously suitable for the problem domain under consideration. Then, we perform a number of knowledge acquisition experiments. They clearly exhibit essential limitations of knowledge acquisition from randomly chosen cases. As a consequence, we develop scenarios of user guidance. Based on these theoretical concepts, we prove a few theoretical results characterizing the power of our approach. Next, we perform a new series of more constrained results which support our theoretical investigations. The main experiments deal with the difficulties of learning from randomly arrange...\n\n2-Hop Neighbour:\nApplying Formal Concepts to Learning Systems Validation In the problem area of evaluating complex software systems, there are two distinguished areas of research, development, and application identified by the two buzzwords validation and verification, respectively. From the perspective adopted by the authors, verification is usually more formally based and, thus, can be supported by formal reasoning tools like theorem provers, for instance. The scope of verification approaches is limited by the difficulty of finding a sufficiently complete formalization to built upon. In paramount realistic problem domains, validation seems to be more appropriate, although it is less stringent in character and, therefore, validation results are often less definite. The aim of this paper is to exemplify a validation approach based on a clear and thoroughly formal theory. In this way, validation and verification should be brought closer to each other. To allow for precise and sufficiently clear results, the authors have selected the applicatio...\n\n2-Hop Neighbour:\nGood Examples in Learning Containment Decision Lists this paper. By our very specific approaches and results in a very particular setting we intend to go a small step towards a better understanding and partial answering of questions like above.\n\n2-Hop Neighbour:\nTIC - A Toolkit for Validation in Formal Language Learning Quite often, heuristics and common sense suggest directions for improving well--known learning algorithms. However it seems not an easy task to verify that the modifications are indeed helpful. This is made more complicated through various additional influences inherent in different application domains. In order to obtain a faithful impression of phenomena that are intrinsic to the algorithms, the role of specific domains should be minimized. Our validation toolkit TIC allows to explore the behaviour of various algorithms for learning formal languages. This is a well-examined and standardized application domain. TIC is operated by interactive as well as automatic control. Motivation and Introduction Today, a lot of different learning approaches and algorithms do exist. There are \"classical\" as well as \"brand new\" approaches, and all of them come in many versions and refinements. 
On the one hand this indicates a desirable improvement of methods, but on the other hand it ...\n\n2-Hop Neighbour:\nSolving Stabilization Problems in Case-Based Knowledge Acquisition Case-based reasoning is widely deemed an important methodology towards alleviating the bottleneck of knowledge acquisition. The key idea is to collect cases representing a human's or a system's experience directly rather than trying to construct generalizations. Episodic knowledge accumulated this way may be used flexibly for different purposes by determining similarities between formerly solved problems and current situations under investigation. But the flexibility of case-based reasoning brings with it a number of disadvantages. One crucial difficulty is that every new experience might seem worth to be memorized. As a result, a case-based reasoning system may substantially suffer from collecting a huge amount of garbage without being able to separate the chaff from the wheat. This paper presents a case study in case-based learning. Some target concept has to be learned by collecting cases and tuning similarity concepts. It is extremely difficult to avoid collecting a huge amount of ...\n\n2-Hop Neighbour:\nTypes of Incremental Learning This paper is intended to introduce a closer look at incremental learning by developing the two concepts of informationally incremental learning and operationally incremental learning. These concept are applied to the problem of learning containment decision lists for demonstrating its relevance. 1 Introduction The intention of the present paper is to introduce two new notions in incremental learning which allow a classification of phenomena finer than known so far in the area. These concepts are denoted by the phrases informationally incremental learning and operationally incremental learning, respectively. Roughly spoken, informationally incremental algorithms are required to work incrementally as usual, i.e. they have no permission to look back at the whole history of information presented during the learning process. Operationally incremental learning algorithms may have permission to look back, but they are not allowed to use information of the past in some effective way. ...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Control Architecture for Flexible Internet Auction Servers The flexibility to support both high activity and low activity auctions is required by any system that allows bidding by both humans and software agents. We present the control architecture of the Michigan Internet AuctionBot, and discuss some of the system engineering issues that arose in its design. 1 Introduction The Michigan Internet AuctionBot is a highly configurable auction server built to support research on electronic commerce and multiagent negotiation [3]. The first generation architecture was simple and robust, and allowed us to concentrate on other aspects of the system. However, several inefficiencies made it problematic to run auctions with very fast interactions. We have redesigned the core AuctionBot architecture in order to improve overall performance, while still meeting the original goal: a system that is configurable, maintainable, and capable of conducting a large number of simultaneous auctions. In AuctionBot architecture nomenclature, we say an auction is open ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Replicable Web-Based Negotiation Server For E-Commerce This paper describes our ongoing R&D effort in developing a replicable, Web-based negotiation server to conduct bargaining-type negotiations between clients (i.e., buyers and sellers) in e-commerce. Multiple copies of this server can be paired with existing Web-servers to provide negotiation capabilities. Each client can select a trusted negotiation server to represent his/her interests. Web-based GUI tools are used by clients in a build-time registration process to specify the requirements, constraints, negotiation strategic rules, and preference scoring methods related to the buying or selling of a product. The registration information is used by the negotiation servers to conduct negotiations automatically on behalf of the clients. In this paper, we present the architecture of the negotiation server and the framework for automated negotiations, and describe a number of communication primitives, which make up the negotiation protocol. We have developed a constraint satisfaction processor (CSP) to evaluate a negotiation proposal against the registered constraints. An Event-Trigger-Rule (ETR) server manages events and triggers the execution of strategic rules, which may relax constraints, notify clients, or perform other operations. Strategic rules can be added and modified at run-time to deal with the dynamic nature of negotiations. A cost-benefit analysis performs quantitative analysis of alternative negotiation conditions. We have implemented a prototype system to demonstrate automated negotiations among buyers and suppliers in a supply chain management system.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Web Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Machine Learning Approach to Building Domain-Specific Search Engines Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with general, Web-wide search engines. Unfortunately, they are also difficult and timeconsuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that enables efficient spidering, populates topic hierarchies, and identifies informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers available at www.cora.justresearch.com. 1 Introduction As the amount of information on the World Wide Web grows, it becomes increasingly difficult to find just what wewant. While general-purpose search engines suchas AltaVista and HotBot offer high coverage, they often provi...\n\n1-Hop Neighbour:\nMining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. 
But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...\n\n1-Hop Neighbour:\nWeb Log Data Warehousing and Mining for Intelligent Web Caching We introduce intelligent web caching algorithms that employ predictive models of web requests; the general idea is to extend the LRU policy of web and proxy servers by making it sensible to web access models extracted from web log data using data mining techniques. Two approaches have been studied in particular, frequent patterns and decision trees. The experimental results of the new algorithms show substantial improvement over existing LRU-based caching techniques, in terms of hit rate. We designed and developed a prototypical system, which supports data warehousing of web log data, extraction of data mining models and simulation of the web caching algorithms.\n\n2-Hop Neighbour:\nA Query Paradigm to Discover the Relation between Text and Images This paper studies the relation between images and text in image databases. An analysis of this relation results in the definition of three distinct query modalities: (1) linguistic scenario: images are part of a whole including a self-contained linguistic discourse, and their meaning derives from their interaction with the linguistic discourse. A typical case of this scenario is constituted by images on the World Wide Web; (2) closed world scenario: images are defined in a limited domain, and their meaning is anchored by conventions and norms in that domain. (3) user scenario: the linguistic discourse is provided by the user. This is the case of highly interactive systems with relevance feedback. This paper deals with image databases of the first type. It shows how the relation between images (or parts of images) and text can be inferred, and exploited for search. The paper develops a similarity model in which the similarity between two images is given by both their visual similarity...\n\n2-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n2-Hop Neighbour:\nThe Shape of the Web and Its Implications for Searching the Web With the rapid growth of the number of web pages, designing a search engine that can retrieve high quality information in response to a user query is a challenging task. 
Automated search engines that rely on keyword matching usually return too many low quality matches and they take a long time to run. It is argued in the literature that link-following search methods can substantially increase the search quality, provided that these methods use an accurate assumption about useful patterns in the hyperlink topology of the web. Recent work in the field has focused on detecting identi able patterns in the web graph and exploiting this information to improve the performance of search algorithms. We survey relevant work in this area and comment on the implications of these patterns for other areas such as advertisement and marketing.\n\n2-Hop Neighbour:\nCiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both \"citing\" and \"cited\" publications can be followed. In addition to simple browsing and keyword searches, the agent ...\n\n2-Hop Neighbour:\nLearning Hidden Markov Model Structure for Information Extraction Statistical machine learning techniques, while well proven in fields such as speech recognition, are just beginning to be applied to the information extraction domain. We explore the use of hidden Markov models for information extraction tasks, specifically focusing on how to learn model structure from data and how to make the best use of labeled and unlabeled data. We show that a manually-constructed model that contains multiple states per extraction field outperforms a model with one state per field, and discuss strategies for learning the model structure automatically from data. We also demonstrate that the use of distantly-labeled data to set model parameters provides a significant improvement in extraction accuracy. Our models are applied to the task of extracting important fields from the headers of computer science research papers, and achieve an extraction accuracy of 92.9%. Introduction Hidden Markov modeling is a powerful statistical machine learning technique that is just ...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Joins that Generalize: Text Classification Using WHIRL WHIRL is an extension of relational databases that can perform \"soft joins\" based on the similarity of textual identifiers; these soft joins extend the traditional operation of joining tables based on the equivalence of atomic values. This paper evaluates WHIRL on a number of inductive classification tasks using data from the World Wide Web. We show that although WHIRL is designed for more general similaritybased reasoning tasks, it is competitive with mature inductive classification systems on these classification tasks. In particular, WHIRL generally achieves lower generalization error than C4.5, RIPPER, and several nearest-neighbor methods. WHIRL is also fast---up to 500 times faster than C4.5 on some benchmark problems. We also show that WHIRL can be efficiently used to select from a large pool of unlabeled items those that can be classified correctly with high confidence. Introduction Consider the problem of exploratory analysis of data obtained from the Internet. Assuming that o...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Web-based Information System that Reasons with Structured Collections of Text The degree to which information sources are pre-processed by Web-based information systems varies greatly. In search engines like Altavista, little pre-processing is done, while in \"knowledge integration\" systems, complex site-specific \"wrappers\" are used integrate different information sources into a common database representation. In this paper we describe an intermediate between these two models. In our system, information sources are converted into a highly structured collection of small fragments of text. Databaselike queries to this structured collection of text fragments are approximated using a novel logic called WHIRL, which combines inference in the style of deductive databases with ranked retrieval methods from information retrieval. WHIRL allows queries that integrate information from multiple Web sites, without requiring the extraction and normalization of object identifiers that can be used as keys; instead, operations that in conventional databases require equality tests...\n\n1-Hop Neighbour:\nWeight Adjustment Schemes for a Centroid Based Classifier In recent years we have seen a tremendous growth in the volume of text documents available on the Internet, digital libraries, news sources, and company-wide intra-nets. Automatic text categorization, which is the task of assigning text documents to pre-specified classes (topics or themes) of documents, is an important task that can help both in organizing as well as in finding information on these huge resources. Similarity based categorization algorithms such as k-nearest neighbor, generalized instance set and centroid based classification have been shown to be very effective in document categorization. A major drawback of these algorithms is that they use all features when computing the similarities. In many document data sets, only a small number of the total vocabulary may be useful for categorizing documents. A possible approach to overcome this problem is to learn weights for different features (or words in document data sets). 
In this report we present two fast iterativ...\n\n1-Hop Neighbour:\nText Categorization Using Weight Adjusted k-Nearest Neighbor Classification . Automatic text categorization is an important task that can help people finding information on huge online resources. Text categorization presents unique challenges due to the large number of attributes present in the data set, large number of training samples, attribute dependency, and multi-modality of categories. Existing classification techniques have limited applicability in the data sets of these natures. In this paper, we present a Weight Adjusted k-Nearest Neighbor (WAKNN) classification that learns feature weights based on a greedy hill climbing technique. We also present two performance optimizations of WAKNN that improve the computational performance by a few orders of magnitude, but do not compromise on the classification quality. We experimentally evaluated WAKNN on 52 document data sets from a variety of domains and compared its performance against several classification algorithms, such as C4.5, RIPPER, Naive-Bayesian, PEBLS and VSM. Experimental results ...\n\n2-Hop Neighbour:\nRAW: A Relational Algebra for the Web The main idea underlying the paper is to extend the relational algebra such that it becomes possible to process queries against the World-Wide Web. These extensions are minor in that we tried to keep them at the domain level. Additionally to the known domains (int, bool, float, string), we introduce three new domains to deal with URLs, html-documents or fragments thereof, and path expressions. Over these domains we define several functions that are accessible from the algebra within the subscripts of the relational operators. The approach allows us to reuse the operators of the relational algebra without major modifications. Indead, the only extension necessary is the introduction of a map operator. Further, two modifications to the scan and the indexscan are necessary. Finally, the indexscan which has the functionality of a typical meta-search engine is capable of computing a unified rank based on the tuple order provided by the underlying search engines. 1 Introduction The Web [2] w...\n\n2-Hop Neighbour:\nA Hierarchical Approach to Wrapper Induction With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of easier extraction tasks. We introduce an inductive algorithm, stalker, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that stalker does significantly better then other approaches; on one hand, stalker requires up to two orders of magnitude fewer examples than other algorithms, while on the other hand...\n\n2-Hop Neighbour:\nDistributional Clustering of Words for Text Classification This paper applies Distributional Clustering (Pereira et al. 1993) to document classification. The approach clusters words into groups based on the distribution of class labels associated with each word. 
Thus, unlike some other unsupervised dimensionality-reduction techniques, such as Latent Semantic Indexing, we are able to compress the feature space much more aggressively, while still maintaining high document classification accuracy. Experimental results obtained on three real-world data sets show that we can reduce the feature dimensionality by three orders of magnitude and lose only 2% accuracy---significantly better than Latent Semantic Indexing (Deerwester et al. 1990), class-based clustering (Brown et al. 1992), feature selection by mutual information (Yang and Pederson 1997), or Markovblanket -based feature selection (Koller and Sahami 1996). We also show that less aggressive clustering sometimes results in improved classification accuracy over classification without clusteri...\n\n2-Hop Neighbour:\nImproving Short-Text Classification Using Unlabeled Background Knowledge to Assess Document Similarity We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature. 1. Introduction The task of classifying textual data that has been culled from sites on the World Wide Web is both difficult and intensively studied (Cohen & Hirsh, 1998; Joachims, 1998; Nigam et al., 1999). Applications of various machine learning techniqu...\n\n2-Hop Neighbour:\nDistributional Clustering of Words for Text Classification This paper applies Distributional Clustering (Pereira et al. 1993) to document classification. The approach clusters words into groups based on the distribution of class labels associated with each word. Thus, unlike some other unsupervised dimensionality-reduction techniques, such as Latent Semantic Indexing, we are able to compress the feature space much more aggressively, while still maintaining high document classification accuracy. Experimental results obtained on three real-world data sets show that we can reduce the feature dimensionality by three orders of magnitude and lose only 2% accuracy---significantly better than Latent Semantic Indexing (Deerwester et al. 1990), class-based clustering (Brown et al. 1992), feature selection by mutual information (Yang and Pederson 1997), or Markovblanket -based feature selection (Koller and Sahami 1996). We also show that less aggressive clustering sometimes results in improved classification accuracy over classification without clusteri...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'On the Expressivity and Complexity of Temporal Conceptual Modelling The contribution of this paper is twofold. On the one hand, it introduces T DLR, a novel temporal logic for temporal conceptual modelling, motivated as the obvious generalisation of the successful DLR Description Logic. Tight decidability and complexity results are proved for T DLR and the monodic fragment of it (T DLR ). Moreover, the decidability of conjunctive query containment under T DLR constraints is proved. On the other hand, the paper provides a formal semantic characterisation of all the important temporal conceptual modelling constructs (for valid time representation) as found in the literature. To the best of our knowledge, this is the first systematic formalisation of the constructs present in most temporal conceptual modelling systems. This systematic characterisation as T DLR theories is an evidence of the adequacy of the T DLR temporal Description Logic for temporal conceptual modelling. 1 Introduction In this paper the novel T DLR temporal logic is introduced...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nUnifying Class-Based Representation Formalisms The notion of class is ubiquitous in computer science and is central in many formalisms for the representation of structured knowledge used both in knowledge representation and in databases. In this paper we study the basic issues underlying such representation formalisms and single out both their common characteristics and their distinguishing features. Such investigation leads us to propose a unifying framework in which we are able to capture the fundamental aspects of several representation languages used in different contexts. The proposed formalism is expressed in the style of description logics, which have been introduced in knowledge representation as a means to provide a semantically well-founded basis for the structural aspects of knowledge representation systems. The description logic considered in this paper is a subset of first order logic with nice computational characteristics. It is quite expressive and features a novel combination of constructs that has not been studied before. The distinguishing constructs are number restrictions, which generalize existence and functional dependencies, inverse roles, which allow one to refer to the inverse of a relationship, and possibly cyclic assertions, which are necessary for capturing real world\n\n1-Hop Neighbour:\nConcept Based Design of Data Warehouses: The DWQ Demonstrators The ESPRIT Project DWQ (Foundations of Data Warehouse Quality) aimed at improving the quality of DW design and operation through systematic enrichment of the semantic foundations of data warehousing. Logic-based knowledge representation and reasoning techniques were developed to control accuracy, consistency, and completeness via advanced conceptual modeling techniques for source integration, data reconciliation, and multi-dimensional aggregation. This is complemented by quantitative optimization techniques for view materialization, optimizing timeliness and responsiveness without losing the semantic advantages from the conceptual approach. At the operational level, query rewriting and materialization refreshment algorithms exploit the knowledge developed at design time. 
The demonstration shows the interplay of these tools under a shared metadata repository, based on an example extracted from an application at Telecom Italia. 1 Overview of the Demonstration The demonstration follows ...\n\n1-Hop Neighbour:\nDecidable Fragments of First-Order Temporal Logics In this paper, we introduce a new fragment of the first-order temporal language, called the monodic fragment, in which all formulas beginning with a temporal operator (Since or Until) have at most one free variable. We show that the satisfiability problem for monodic formulas in various linear time structures can be reduced to the satisfiability problem for a certain fragment of classical first-order logic. This reduction is then used to single out a number of decidable fragments of first-order temporal logics and of two-sorted first-order logics in which one sort is intended for temporal reasoning. Besides standard first-order time structures, we consider also those that have only finite first-order domains, and extend the results mentioned above to temporal logics of finite domains. We prove decidability in three different ways: using decidability of monadic second-order logic over the intended flows of time, by an explicit analysis of structures with natural numbers time, and by a composition method that builds a model from pieces in finitely many steps. 1\n\n2-Hop Neighbour:\nFirst-order expressivity for S5-models: modal vs. two-sorted languages this paper we are going to prove some results on the expressive power of the standard first-order modal language by comparing it with its extensional counterpart. We thereby restrict our attention to the case where the modal language is interpreted on S5-models. Moreover, we decided to deal exclusively with constant domain models, that is with models in which the domains of all worlds are the same. It is worth mentioning, however, that our method can be applied to logics based on varying domain models as well. Before we describe the results of this paper in more detail, we hasten to add that there exists some work done by other authors to which our results are related. In [8, 9, 10] Fine proves, among other things, a number of preservation results for modal first-order formulas which are relevant for certain philosophical distinctions. For instance, he provides a semantical characterization of de dicto formulas (within S5) and investigates conditions under which de re formulas are eliminable in certain extensions of S5 (see also [6, 18]). In [3] it was shown 2 that a remarkable portion of classical model theory can be transferred to the domain of modal logic. More closely related to our work are [14] and [15]. They discuss a number of formulas from the two-sorted language that are not expressible in the modal language.\n\n2-Hop Neighbour:\nA Framework for Ontology Integration One of the basic problems in the development of techniques for the semantic web is the integration of ontologies. Indeed, the web is constituted by a variety of information sources, each expressed over a certain ontology, and in order to extract information from such sources, their semantic integration and reconciliation in terms of a global ontology is required. In this paper, we address the fundamental problem of how to specify the mapping between the global ontology and the local ontologies. 
We argue that for capturing such mapping in an appropriate way, the notion of query is a crucial one, since it is very likely that a concept in one ontology corresponds to a view (i.e., a query) over the other ontologies. As a result query processing in ontology integration systems is strongly related to view-based query answering in data integration. 1\n\n2-Hop Neighbour:\nReasoning over Conceptual Schemas and Queries in Temporal Databases This paper introduces a new logical formalism, intended for temporal conceptual modelling, as a natural combination of the wellknown description logic DLR and pointbased linear temporal logic with Since and Until. The expressive power of the resulting DLRUS logic is illustrated by providing a systematic formalisation of the most important temporal entity-relationship data models appeared in the literature. We define a query language (where queries are nonrecursive Datalog programs and atoms are complex DLRUS expressions) and investigate the problem of checking query containment under the constraints defined by DLRUS conceptual schemas, as well as the problems of schema satisfiability and logical implication. Although it is shown that reasoning in full DLRUS is undecidable, we identify the decidable (in a sense, maximal) fragment DLR US by allowing applications of temporal operators to formulas and entities only (but not to relation expressions). We obtain the following hierarchy of complexity results: (a) reasoning in DLR US with atomic formulas is EXPTIME-complete, (b) satisfiability and logical implication of arbitrary DLR US formulas is EXPSPACE-complete, and (c) the problem of checking query containment of non-recursive Datalog queries under DLR US constraints is decidable in 2EXPTIME.\n\n2-Hop Neighbour:\nReasoning over Conceptual Schemas and Queries in Temporal Databases This paper introduces a new logical formalism, intended for temporal conceptual modelling, as a natural combination of the wellknown description logic DLR and pointbased linear temporal logic with Since and Until. The expressive power of the resulting DLRUS logic is illustrated by providing a systematic formalisation of the most important temporal entity-relationship data models appeared in the literature. We define a query language (where queries are nonrecursive Datalog programs and atoms are complex DLRUS expressions) and investigate the problem of checking query containment under the constraints defined by DLRUS conceptual schemas, as well as the problems of schema satisfiability and logical implication. Although it is shown that reasoning in full DLRUS is undecidable, we identify the decidable (in a sense, maximal) fragment DLR US by allowing applications of temporal operators to formulas and entities only (but not to relation expressions). We obtain the following hierarchy of complexity results: (a) reasoning in DLR US with atomic formulas is EXPTIME-complete, (b) satisfiability and logical implication of arbitrary DLR US formulas is EXPSPACE-complete, and (c) the problem of checking query containment of non-recursive Datalog queries under DLR US constraints is decidable in 2EXPTIME.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Dynamics in Wastewater Treatment: A Framework for Understanding Formal Constructs in Complex Technical Settings . Based on the study of unskilled work in a Danish wastewater treatment plant, the problem of formalisation of work is discussed and extended to technical processes. Five symmetrical levels of dynamics in complex technical work arrangements are proposed as a tool for understanding the limits of formalisation and for designing formal constructs in such settings. The analysis is based on concepts of heterogeneity, granularity of goals and motives, and process and structure. Introduction An inevitable problem in the design of CSCW systems is that work is not standing still. Rather, work settings are dynamic: routines evolve over time and unusual situations force deviations from the routine. For this reason, purely formal constructs and descriptions of work have proven inadequate when designing effective, real world CSCW systems. Suchman and Wynn (1984) set off the debate about the role of formalism in CSCW, and their empirical studies clearly illustrated that there is more to office wo...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAugmenting reality in mobile substrates - On the design of computer support for process control The paper investigates augmented reality as a perspective on the design of computer support for process control in a distributed environment. Based on empirical studies of work in a wastewater treatment plant, three technical approaches on augmented reality --- augmenting the user; the object of work; and the environment --- are examined in terms of a collection of design scenarios .We conclude that these approaches when used, as metaphors rather than a consistent theoretical framework, may inform design of mobile support for process control work. Keywords: Augmented reality, process control, mobile computing, human-computer interaction, participatory design, workplace studies. BRT Keywords: AB, FA, FC, GA, HD Introduction Advanced technical (process) settings, such as modern wastewater treatment plants, are characterised by being highly distributed and dynamic. A possible strategy for supporting work in such settings is through the introduction of mobile technology. In this paper, ...\n\n2-Hop Neighbour:\nReinventing the Familiar: Exploring an Augmented Reality Design Space for Air Traffic Control This paper describes our exploration of a design space for an augmented reality prototype. We began by observing air traffic controllers and their interactions with paper flight strips. We then worked with a multi-disciplinary team of researchers and controllers over a period of a year to brainstorm and prototype ideas for enhancing paper flight strips. We argue that augmented reality is more promising (and simpler to implement) than the current strategies that seek to replace flight strips with keyboard/monitor interfaces. We also argue that an exploration of the design space, with active participation from the controllers, is essential not only for designing particular artifacts, but also for understanding the strengths and limitations of augmented reality in general. 
Keywords: Augmented Reality, Design Space, Interactive Paper, Participatory Design, Video Prototyping INTRODUCTION Air traffic control is a complex, collaborative activity, with well-established and successful work p...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Context in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. 
This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1\n\n2-Hop Neighbour:\nBuilding Efficient and Effective Metasearch Engines Frequently a user's information needs are stored in the databases of multiple search engines. It is inconvenient and inefficient for an ordinary user to invoke multiple search engines and identify useful documents from the returned results. To support unified access to multiple search engines, a metasearch engine can be constructed. When a metasearch engine receives a query from a user, it invokes the underlying search engines to retrieve useful information for the user. Metasearch engines have other benefits as a search tool such as increasing the search coverage of the Web and improving the scalability of the search. In this article, we survey techniques that have been proposed to tackle several underlying challenges for building a good metasearch engine. Among the main challenges, the database selection problem is to identify search engines that are likely to return useful documents to a given query. The document selection problem is to determine what documents to retrieve from each identified search engine. The result merging problem is to combine the documents returned from multiple search engines. We will also point out some problems that need to be further researched.\n\n2-Hop Neighbour:\nFEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 
1 Introduction As the world wide web rapidly evo...\n\n2-Hop Neighbour:\nEvaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.\n\n2-Hop Neighbour:\nA Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ###### retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite dierentfrom the algorithms developed at TREC. This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: nding the web page/site of an entity, #### companies, universities, organizations, individuals, etc. This form of querying is quite prevalentonthe web. The results from the TREC algorithms are compared to four commercial web search engines. Results show that for nding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, linkbased ranking 1.\n\n2-Hop Neighbour:\nExploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. 
We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Virtual Enterprise Design - BDI Agents vs. Objects Current research identifying architectures for a virtual enterprise has moved from information modelling to role modelling. Thus, a high level of autonomy results from the distribution of responsibilities, capabilities, and knowledge among different business units in the virtual enterprise at the design stage. Current trends tend towards using object-oriented technology as an effective abstract system design and implementation methodology. We argue that applying the software agent paradigm to the virtual enterprise provides various advantages on both the design and operational levels. We further show that the Belief Desire Intention agent architecture has additional abilities of mapping real world business unit autonomy and interaction. We also introduce the Belief Desire Intention agent paradigm capability of facilitating highly flexible (agile) enterprise design and implementation. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nTowards socially sophisticated BDI agents We present an approach to social reasoning that integrates prior work on norms and obligations with the BDI approach to agent architectures. Norms and obligations can be used to increase the eficiency of agent reasoning, and their explicit representation supports reasoning about a wide range of behaviour types in a single framework. We propose a modified BDI interpreter loop that takes norms and obligations into account in an agent's deliberation.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n2-Hop Neighbour:\nSupporting Internet-Scale Multi-Agent Systems ts a model of AgentScape from the agent perspective, that is, the location comprising the middleware and the resources are represented by a location manager agent and resource objects. Calls from an agent to the middleware are modeled by requests to the location manager agent to, for example, create an agent or move an agent. Information about resources residing at the location can be retrieved by binding to the resource objects, which are local distributed objects. These objects can be accessed only within the location they reside, not from outside the location. For development of agent applications, an application programming interface (API) and a runtime system (RTS) are provided, see Fig. 1. 
The default API and RTS can be extended to provide a higher-level application programming interface with, for example, a model that offers more structure and semantics to the agent application developer. Within AgentScape, management of large-scale agent systems is an important issue, includi\n\n2-Hop Neighbour:\nReal-Time Scheduling for Distributed Agents Providing an environment for a software agent to execute is very similar to building an operating system for the execution of general purpose applications. In the same fashion that an operating system provides a set of services for the execution of a user request, an agent framework provides a similar set of services for the execution of agent actions. Such services include the ability to communicate with other agents, maintaining the current state of an executing agent, and selecting an execution path from a set of possible execution paths. The particular focus of this paper is the study of Soft Real-Time agentscheduling in the context of a framework for the execution of intelligent software agents; acharacterization of agent performance; and developmentofanenvironment for testing and comparing the performance of agent activities. The agent architecture used for this study, DECAF #Distributed Environment Centered AgentFramework#, is a software toolkit for the rapid d...\n\n2-Hop Neighbour:\nAgent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.\n\n2-Hop Neighbour:\nThe BOID Architecture - Conflicts Between Beliefs, Obligations, Intentions and Desires In this paper we introduce the so-called Beliefs-Obligations-Intentions-Desires or BOID architecture. It contains feedback loops to consider all eects of actions before committing to them, and mechanisms to resolve conflicts between the outputs of its four components. Agent types such as realistic or social agents correspond to specific types of conflict resolution embedded in the BOID architecture.\n\n2-Hop Neighbour:\nDynamic Reconfiguration in Collaborative Problem Solving In this article we will describe our research efforts in coping with a trade-off that can be often found in the control and optimization of todays business processes. Though centralized control may achieve nearto -optimum results in optimizing the system behavior, there are usually social, technical and security restrictions on applying centralized control. Distributed control on the other hand may cope with these restrictions but also entails sub-optimality and communicational overhead. 
Our concept of composable agents tries to allow a dynamic and fluent transition between globalization and localization in business process control by adapting to the current real-world system structure. We are currently evaluating this concept in the framework of patient flow control at Charit'e Berlin. Introduction Research in Distributed Artificial Intelligence (DAI, (Bond & Gasser 1988)) has been traditionally divided into Distributed Problem Solving (DPS) and Multi Agent Systems (MAS). However, r...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Evaluating Topic-Driven Web Crawlers Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies. We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1\n\n2-Hop Neighbour:\nWorld Wide Web Information Retrieval Using Web Connectivity Information Gathering, processing and distributing information from the World Wide Web will be a vital technology for the next century. Web search techniques have played a critical role in the development of information systems. Due to the diverse nature of web documents, traditional search techniques must be improved. Hyperlink structure based methods have proved to be powerful ways of exploring the relationships between web documents. In this project, a prototype web search engine was developed to exploit the link structure of web documents, based on the use of the Companion algorithm. The prototype consists of a web spider, local database, and search software. The system was written using the Java programming language. Our spider crawls and downloads web pages using Lynx, then saves the hyperlinks into an Oracle database. JDBC is used to implement the database processing. Search software makes a vicinity graph for the query URL and returns the most related pages after calculating the hub and authority weights. Finally, HTML web pages provide user interfaces and communicate with CGI using the Perl language. iii ACKNOWLEDGMENTS The author would like to express thanks to all of the members of his M.S. committee for their useful comments on the thesis, assistance in scheduling the defense date and kind help during the final defense period. The author would like to express his deepest appreciation to Dr. Wen-Chen Hu, his thesis mentor, for the depth of the training and the appropriate guidance he has provided. 
The author would also like to acknowledge the Department of Computer Science and Software Engineering of Auburn University for financial support. Finally, thanks especially go to the author's wife Qifang, his son, Alex, and his father and mother for their support and love. ...\n\n2-Hop Neighbour:\nRank Aggregation Methods for the Web We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. Wedevelop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can effectively combat \"spam,\" a serious problem in Web searches. Experiments show that our methods are simple, efficient, and effective. Keywords: rank aggregation, ranking functions, metasearch, multi-word queries, spam 1.\n\n2-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.\n\n2-Hop Neighbour:\nTopical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. 1 Introduction Most web pages are linked to others with related content...\n\n2-Hop Neighbour:\nThe Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. 
Furthermore, the relationships between topics are mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'SNePS: A Logic for Natural Language Understanding and Commonsense Reasoning The use of logic for knowledge representation and reasoning systems is controversial. There are, indeed, several ways that standard First Order Predicate Logic is inappropriate for modelling natural language understanding and commonsense reasoning. However, a more appropriate logic can be designed. This chapter presents several aspects of such a logic. 1 Introduction My colleagues, students, and I have been engaged in a long-term project to build a natural language using intelligent agent. While our approach to natural language understanding (NLU) and commonsense reasoning (CSR) has been logic-based, we have thought that the logics developed for metamathematics, e.g. (Kleene, 1950), are not the best ones for our purpose. Instead, we have designed new logics, better suited for NLU and CSR. The current version of these logics constitutes the formal language and inference mechanism of the knowledge representation/reasoning (KRR) system, SNePS 2.4 (Shapiro and The SNePS Implementation Gr...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nEmbodied Cassie We have enhanced a computational cognitive agent by embodying it with real and simulated bodies operating in real and simulated worlds. This has allowed us to experiment with various ways that embodiment influences the creation and meaning of the agent's beliefs and other terms in its knowledge base, including: symbol-grounding by perception and action; firstperson privileged knowledge; the representation and use of indexicals; having a personal sense of time; and low-level bodily awareness. Introduction We have been engaged in a series of projects in which Cassie, the SNePS cognitive agent (Shapiro & Rapaport 1987; Shapiro 1989; Shapiro & Rapaport 1991; 1992; Shapiro & The SNePS Implementation Group 1998), has been incorporated into a hardware or softwaresimulated cognitive robot. The capabilities of the embodied Cassie have included: input and output in fragments of English; reasoning; performance of primitive and composite acts; and vision. In this paper, I give an overview of the...", "role": "user" }, { "content": "<answer>Artificial Intelligence</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Maintaining the Illusion of Interacting Within a 3D Virtual Space It is widely thought to more or less a degree, that a sense of presence may be induced in users of new and emerging media technologies, such as, the Internet, digital television and cinema (supporting interaction), teleconferencing and 3D virtual reality systems. In this paper, it is argued that presence presupposes that participants are absorbed in the illusion of interacting within the visual spaces created by these media. That is, prior to the possibility of any inducement of presence, participants need to be absorbed in the illusion conveyed by the media. Without this, participants' attention is broken and the illusion is lost. Hence, the potential to induce presence in participants ceases. To encourage participants to lose sight of the means of representation and be drawn into the illusion conveyed by these media, this paper proposes the development of design principles to increase participants' experience. In an attempt to inform design principles, this paper focuses on another artificial although highly successful visual medium - film. By way of example, this paper concentrates on one medium, virtual reality, and proposes design principles that attempt to maintain the illusion of interacting within 3D virtual space. This attempts to provide a platform through the resourceful blend of hardware and software Virtual Reality (VR) enabling technologies on which to support a well designed virtual environment and hence, from which the inducement of presence in participants may develop.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCo-operative Evaluation of a Desktop Virtual Reality System A summative usability evaluation of a desktop virtual reality (VR) system was developed and a preliminary study then conducted. The purpose of the study was twofold. Firstly, to test whether the traditional evaluation technique, co-operative evaluation, is effective in the evaluation of desktop VR systems. Co-operative evaluation is a variation on a `think-aloud' verbal protocol, whereby, in addition to concurrently 'thinking-aloud', users are encouraged to ask any questions about an evaluation, relating to the computer-based system, the application, or the tasks that they are required to perform during the evaluation. As well as this, the evaluator may ask questions of the user at any time during the evaluation. Results from the study indicate that this additional probing technique enables an evaluator to elicit further usability problems that may not have otherwise been exteriorized by the user. Additionally, a method is developed which attempts to turn round the qualitative 'think-aloud' type data into quantitative data. This provides a way of evaluating empirical 'think-aloud' evaluation methods and will be useful for comparing their effectiveness to evaluate 3D virtual reality systems.\n\n1-Hop Neighbour:\nGuiding User Navigation in Virtual Environments Using Awareness of Virtual Off-Screen Space Navigation in virtual environments can be difficult. One contributing factor is the problem of user disorientation. Two major causes of this are the lack of navigation cues in the environment and problems with navigating too close to or through virtual world objects. 
Previous work has developed guidelines, informed by cinematography conventions, for the construction of virtual environments to aid user comprehension of virtual \"space\" to reduce user disorientation. This paper describes the validation of these guidelines via a user study involving a navigation task in a virtual \"maze\". Results suggest that the use of the guidelines can help reduce the incidences of user disorientation. However, the guidelines seemed to have little impact on users' abilities to construct 'cognitive maps' of the environment.\n\n2-Hop Neighbour:\nImproving Interaction with Virtual Environments Introduction Virtual environments (VEs) provide a computer-based interface to a real-life or abstract space, using 3D graphics and 3D interaction techniques. VEs represent a novel interface style which offers new possibilities and challenges to human-computer interface design. However, studies of the design of VEs (Kaur et al., 1996) show that designers lack a coherent approach to design, especially interaction design. Designers appear to be pre-occupied with difficult technical issues and think little about supporting user interaction. However, major interaction problems have been found with current VEs, such as disorientation, perceptual misjudgements and difficulty finding and understanding available interactions (McGovern, 1993; COVEN, 1997). These common problems have been known to result in user frustration and a low usability and acceptability for the VE (Kaur et al., 1996; Miller 1994). Guidance is needed on interaction design for VEs to avoid such usability problems.\n\n2-Hop Neighbour:\nUsing the Resources Model in Virtual Environment Design this paper we take a step back from the formal specification of VEs to investigate where requirements and design information are located within these environments and how it can be structured and analysed. More specifically, we are interested in considering VEs in terms of distributed cognition (DC) [5, 7, 14, 20].\n\n2-Hop Neighbour:\nEvaluating Guidelines for Reducing User Disorientation When Navigating in Virtual Environments Navigation in virtual environments can be difficult. One contributing factor is user disorientation.", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Cost-Based Object Buffer Replacement Algorithm for Object-Oriented Database Systems Many object-oriented database systems manage object buffers to provide fast access to objects. Traditional buffer replacement algorithms based on fixed-length pages simply assume that the cost incurred by operating a buffer is proportional to the number of buffer faults. However, this assumption no longer holds in an object buffer where objects are of variable-lengths and the cost of replacing an object varies for each object. In this paper, we propose a cost-based replacement algorithm for object buffers. The proposed algorithm replaces the objects that have minimum costs per unit time and unit space. The cost model extends the previous page-based one to include the replacement costs and the sizes of objects. The performance tests show that the proposed algorithm is almost always superior to the LRU-2 algorithm and in some cases is more than twice as fast. The idea of cost-based replacement can be applied to any buffer management architectures that adopt earlier algorithms. It is espe...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Optimization and Global Minimization Methods Suitable for Neural Networks Neural networks are usually trained using local, gradient-based procedures. Such methods frequently find suboptimal solutions being trapped in local minima. Optimization of neural structures and global minimization methods applied to network cost functions have strong influence on all aspects of network performance. Recently genetic algorithms are frequently combined with neural methods to select best architectures and avoid drawbacks of local minimization methods. Many other global minimization methods are suitable for that purpose, although they are used rather rarely in this context. This paper provides a survey of such global methods, including some aspects of genetic algorithms. Contents: 1. Introduction; 2. Monte Carlo and its improvements; 3. Simulated annealing and its variants (3.1 Adaptive Simulated Annealing; 3.2 Alopex; ...)' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMinimal Distance Neural Methods A general framework for minimal distance methods is presented. Radial Basis Functions (RBFs) and Multilayer Perceptrons (MLPs) neural networks are included in this framework as special cases. New versions of minimal distance methods are formulated. A few of them have been tested on real-world datasets obtaining very encouraging results.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Multi-Agent Systems by Incremental Gradient Reinforcement Learning The agents are situated with local and scalable perceptions, have identical capabilities, are possibly heterogeneous, cooperate, and do not directly communicate; each agent learns its behavior on its own. Bloc merging (the problem): reward +3 if blocs are merged; actions N, W, E, S; perceptions dir(agent), dir(yellow bloc), dir(blue bloc), near(yellow bloc), near(blue bloc), total 1024/4 (the MDP with 2 agents and 2 cubes for an 8x8 world has 15,248,024 states). An agent learns a policy, but the observation-action process < O, A > is not Markovian: convergence is not assured, and stochastic policies should perform better [SJJ94]. Multi-agent framework: each agent considers other agents as part of the environment; all agents learn, therefore evolve, giving unpredictable transitions. Q-learning and Baxter's gradient descent are considered.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nGeneral Principles Of Learning-Based Multi-Agent Systems We consider the problem of how to design large decentralized multi-agent systems (MAS’s) in an automated fashion, with little or no hand-tuning. Our approach has each agent run a reinforcement learning algorithm. This converts the problem into one of how to automatically set/update the reward functions for each of the agents so that the global goal is achieved. In particular we do not want the agents to “work at cross-purposes ” as far as the global goal is concerned. We use the term artificial COllective INtelligence (COIN) to refer to systems that embody solutions to this problem. In this paper we present a summary of a mathematical framework for COINs. We then investigate the real-world applicability of the core concepts of that framework via two computer experiments: we show that our COINs perform near optimally in a difficult variant of Arthur’s bar problem [1] (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance for our COINs in the leader-follower problem. 1\n\n1-Hop Neighbour:\nAdaptive Combination of Behaviors in an Agent Hierarchical structures [PS97] often require specific manual preparations (the shape of the structure), and factored representations [Sal00] are not scalable but restricted to a given environment (number of objects, size of environment...). Hypothesis: a complex behavior is often guided by a set of basic motivations (i.e. goals). Idea: make use of basic behaviors associated with the basic motivations to recombine them in one complex behavior. Scene decomposition: perceived objects O1, O2 and O3; basic behaviors such as avoid holes (b_a, {hole}) and push tiles (b_p, {hole, tile}) are combined into (behavior, config) pairs, e.g. the behavior pushing tile O1 in hole O2. A basic generic behavior b is defined by
a type of configuration {#T obj 1\n\n2-Hop Neighbour:\nInfinite-Horizon Policy-Gradient Estimation Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter 2 [0; 1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.\n\n2-Hop Neighbour:\nCoordinated Reinforcement Learning We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection activities and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods differ from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appears at the core of both the learning algorithm and the execution architecture. Our experimental results, comparing our approach to other RL methods, illustrate both the quality of the policies obtained and the additional benefits of coordination.\n\n2-Hop Neighbour:\nMulti-Layer Methods and the Optimal Optimizer Multi-Layer Methods are methods that act on several layers simultaneously. Examples of multi-layer methods are found in multi-agent systems (global and per-agent behavior), in learning (e.g. boosting, bias tuning), in self-adaptive methods (such as evolution strategies), in hybrid approaches, and in optimization (e.g. multiple runs, result pooling). We give a formal definition of what a multi-layer method is. We discuss the relationship with the no free lunch theorem, to show that such a thing as the optimal optimizer exists, and how multi-layer methods can be used to approximate it.\n\n2-Hop Neighbour:\nIncremental Reinforcement Learning for designing Multi-Agent Systems Designing individual agents so that, when put together, they reach a given global goal is not an easy task. 
One solution to automatically build such large Multi-Agent Systems is to use decentralized learning: each agent learns by itself its own behavior. To that purpose, Reinforcement Learning methods are very attractive as they do not require a solution of the problem to be known before hand. Nevertheless, many hard points need to be solved for such a learning process to be viable. Among others, the credit assignement problem, combinatorial explosion and local perception of the world seem the most crucial and prevent optimal behavior. In this paper, we propose a framework based on a gradual learning of harder and harder tasks until the desired global behavior is reached. The applicability of our paradigm is tested on computer experiments where many agents have to coordinate to reach a global goal. Our results show that incremental learning leads to better performances than more classical techniques. We then discuss several improvements which could lead to even better performances.\n\n2-Hop Neighbour:\nHierarchical Optimization of Policy-Coupled Semi-Markov Decision Processes One general strategy for approximately solving large Markov decision processes is \"divide-and-conquer\": the original problem is decomposed into sub-problems which interact with each other, but yet can be solved independently by taking into account the nature of the interaction. In this paper we focus on a class of \"policy-coupled\" semi-Markov decision processes (SMDPs), which arise in many nonstationary real-world multi-agent tasks, such as manufacturing and robotics. The nature of the interaction among sub-problems (agents) is more subtle than that studied previously: the components of a sub-SMDP, namely the available states and actions, transition probabilities and rewards, depend on the policies used in solving the \"neighboring\" sub-SMDPs. This \"strongly-coupled\" interaction among subproblems causes the approach of solving each sub-SMDP in parallel to fail. We present a novel approach whereby many variants of each sub-SMDP are solved, explicitly taking into account the different mod...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Empirical Performance Analysis of Linear Discriminant Classifiers In face recognition literature, holistic template matching systems and geometrical local feature based systems have been pursued . In the holistic approach, PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) are popular ones. More recently, the combination of PCA and LDA has been proposed as a superior alternative over pure PCA and LDA. In this paper, we illustrate the rationales behind these methods and the pros and cons of applying them to pattern classification task. A theoretical performance analysis of LDA suggests applying LDA over the principal components from the original signal space or the subspace. The improved performance of this combined approach is demonstrated through experiments conducted on both simulated data and real data. 1 Introduction Statistical pattern recognition techniques have been successfully applied to many problems, including speech recognition, automatic target recognition and image classification. For a given pattern classificat...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nDiscriminant Analysis based Feature Extraction We propose a new feature extraction scheme called Discriminant Component Analysis. The new scheme decomposes a signal into orthonormal bases such that for each base there is an eigenvalue representing the discriminatory power of projection in that direction. The bases and eigenvalues are obtained based on certain classification criterion. For simplicity, a criterion used in Fisher's Discriminant Analysis (DA) is chosen and is applied iteratively to implement the scheme. We illustrate the motivation of this new scheme and show how it can be used to construct new distance metrics. We then argue that these new distance metrics are more robust than DA based metrics. Finally, very good classification performance on simulation data and real face images are demonstrated using these new distance metrics. 1 Introduction It is important that for different applications, we use different representations for the same signal [1]. For example, PCA (Principal Component Analysis) or wavelet decompos...\n\n1-Hop Neighbour:\nThe FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to (1) assess the state of the art, (2) identify future areas of research, and (3) measure algorithm performance. 1 Introduction Over the last decade, face recognition has become an active area of research in computer vision, neuroscience, and psychology. 
Progress has advanced to the point that face-recognition systems are being demonstrated in real-...\n\n2-Hop Neighbour:\nFace Recognition Identifying a human individual from his or her face is one of the most nonintrusive modalities in biometrics. However, it is also one of the most challenging ones. This chapter discusses why it is challenging and the factors that a practitioner can take advantage of in developing a practical face recognition system. Some major existing approaches are discussed along with some algorithmic considerations. A face recognition algorithm is presented as an example along with some experimental data. Some possible future research directions are outlined at the end of the chapter. 1.1 INTRODUCTION Face recognition from images is a sub-area of the general object recognition problem. It is of particular interest in a wide variety of applications. Applications in law enforcement for mugshot identification, verification for personal identification such as driver's licenses and credit cards, gateways to limited access areas, surveillance of crowd behavior are all potential applications of a succes...\n\n2-Hop Neighbour:\nAn Incremental Learning Algorithm with Automatically Derived Discriminating Features We propose a new technique which incrementally derive discriminating features in the input space. This technique casts both classification problems (class labels as outputs) and regression problems (numerical values as outputs) into a unified regression problem. The virtual labels are formed by clustering in the output space. We use these virtual labels to extract discriminating features in the input space. This procedure is performed recursively. We organize the resulting discriminating subspace in a coarse-to-fine fashion and store the information in a decision tree. Such an incrementally hierarchical discriminating regression (IHDR) decision tree can be realized as a hierarchical probability distribution model. We also introduce a sample size dependent negativelog -likelihood (NLL) metric to deal with large-sample size cases, small-sample size cases, and unbalanced-sample size cases. This is very essential since the number of training samples per class are different at each internal node of the IHDR tree. We report experimental results for two types of data: face image data along with comparison with some major appearance-based method and decision trees, hall way images with driving directions as outputs for the automatic navigation problem -- a regression application.\n\n2-Hop Neighbour:\nRecognition of Partially Occluded and/or Imprecisely Localized Faces Using a Probabilistic Approach New face recognition approaches are needed, because although much progress has been recently achieved in the field (e.g. within the eigenspace domain), still many problems are to be robustly solved. Two of these problems are occlusions and the imprecise localization of faces (which ultimately imply a failure in identification) . While little has been done to account for the first problem, almost nothing has been proposed to account for the second. This paper presents a probabilistic approach that attempts to solve both problems while using an eigenspace representation. To resolve the localization problem, we need to find the subspace (within the feature space, e.g. eigenspace) that represents this error for each of the training image. To resolve the occlusion problem, each face is divided into n local regions which are analyzed in isolation. 
In contrast with other previous approaches, where a simple voting space is used, we present a probabilistic method that analyzes how \"good\" a loca...\n\n2-Hop Neighbour:\nSFS Based View Synthesis for Robust Face Recognition Sensitivity to variations in pose is a challenging problem in face recognition using appearance-based methods. More specifically, the appearance of a face changes dramatically when viewing and/or lighting directions change. Various approaches have been proposed to solve this difficult problem. They can be broadly divided into three classes: 1) multiple image based methods where multiple images of various poses per person are available, 2) hybrid methods where multiple example images are available during learning but only one database image per person is available during recognition, and 3) single image based methods where no example based learning is carried out. In this paper, we present a method that comes under class 3. This method based on shape-from-shading (SFS) improves the performance of a face recognition system in handling variations due to pose and illumination via image synthesis. 1 Introduction Face recognition has become one of the most active areas of research in image...\n\n2-Hop Neighbour:\nHallucinating Faces In most surveillance scenarios there is a large distance between the camera and the objects of interest in the scene. Surveillance cameras are also usually set up with wide fields of view in order to image as much of the scene as possible. The end result is that the objects in the scene normally appear very small in surveillance imagery. It is generally possible to detect and track the objects in the scene, however, for tasks such as automatic face recognition and license plate reading, resolution enhancement techniques are often needed. Although numerous resolution enhancement algorithms have been proposed in the literature, most of them are limited by the fact that they make weak, if any, assumptions about the scene. We propose an algorithm that can be used to learn a prior on the spatial distribution of the image gradient for frontal images of faces. We proceed to show how such a prior can be incorporated into a super-resolution algorithm to yield 4-8 fold improvements in resolution #...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCategorisation by Context Assistance in retrieving of documents on the World Wide Web is provided either by search engines, through keyword based queries, or by catalogues, which organise documents into hierarchical collections. Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material, and therefore it will be necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting information for indexing a document from the document itself. The paper describes the technique of categorisation by context, which exploits the context perceivable from the structure of HTML documents to extract useful information for classifying the documents they refer to. We present the results of experiments with a preliminary implementation of the technique. 1. INTRODUCTION Most Web search engines (e.g. Altavista^TM [Altavista], HotBot^TM [HotBot], Excite^TM [Excite]) perform search based on the content of docume...\n\n1-Hop Neighbour:\nDocument Classification as an Internet service: Choosing the best classifier This project investigates some of the issues involved in a new proposal for expanding the scope of the field of Data Mining by providing mining models as services on the Internet. This idea can widely increase the reach and accessibility of Data Mining to common people because one of the primary stumbling blocks in the adoption of mining is the extremely high level of expertise and data resources needed in building a robust mining model. 
We feel this task should be left to the specialists with access to data and resources, who can provide their most up to date model as a service on the Internet for public use.\n\n2-Hop Neighbour:\nThe Order of Things: Activity-Centred Information Access This paper focuses on the representation and access of Web-based information, and how to make such a representation adapt to the activities or interests of individuals within a community of users. The heterogeneous mix of information on the Web restricts the coverage of traditional indexing techniques and so limits the power of search engines. In contrast to traditional methods, and in a way that extends collaborative filtering approaches, the path model centres representation on usage histories rather than content analysis. By putting activity at the centre of representation and not the periphery, the path model concentrates on the reader not the author and the browser not the site. We describe metrics of similarity based on the path model, and their application in a URL recommender tool and in visualising sets of URLs. Keywords: heterogeneous data, activity, indexing, collaborative filtering, information retrieval, access and visualization. 1 Introduction As Tim Berners-Lee pointe...\n\n2-Hop Neighbour:\nData Mining Models as Services on the Internet The goal of this article is to raise a debate on the usefulness of providing data mining models as services on the internet. These services can be provided by anyone with adequate data and expertise and made available on the internet for anyone to use. For instance, Yahoo or Altavista, given their huge categorized document collection, can train a document classifier and provide the model as a service on the internet. This way data mining can be made accessible to a wider audience instead of being limited to people with the data and the expertise. A host of practical problems need to be solved before this idea can be made to work. We identify them and close with an invitation for further debate and investigation. 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Active Perception and Map Learning for Robot Navigation This paper describes a simulated on-line mapping system for robot navigation. This system allows the autonomous creation of topological maps enhanced with metrical information provided by internal (odometry) and external (vision and sonars) sensors. Within such maps, the robot's position is represented and calculated probabilistically according to algorithms that are inspired by Hidden Markov Models. The visual system is very simple and does not allow reliable recognition of speci c places but, used jointly with odometry, sonar recordings and an active perception system, it allows reliable localization even when the robot starts exploring its environment, and when it is passively translated from one place to another. Advantages and drawbacks of the current system are discussed, together with means to remediate the latter.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLearning to Perceive the World as Articulated: An Approach for Hierarchical Learning in Sensory-Motor Systems This paper describes how agents can learn an internal model of the world structurally by focusing on the problem of behavior-based articulation. We develop an on-line learning scheme -- the so-called mixture of recurrent neural net (RNN) experts -- in which a set of RNN modules becomes self-organized as experts on multiple levels in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching of activated modules in the lower level actually represents the articulation of the sensory-motor flow. In the meanwhile, a set of RNNs in the higher level competes to learn the sequences of module switching in the lower level, by which articulation at a further more abstract level can be achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. Our dynamical systems analysis clarified the mechanism of the articulation; the possible correspondence between the articulation...\n\n2-Hop Neighbour:\nEstimating Dependency Structure as a Hidden Variable This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. 1 INTRODUCTION A fundamental feature of a good model is the ability to uncover and exploit independencies in the data it is presented with. For many commonly used models, such as neural nets and belief networks, the dependency structure encoded in the model is fixed, in the sense that it is not allowed to vary depending on actual values of the variables or with the current case. However, dependency structures that are conditional on values of variables abound in the world around us. Consider for example bitmaps of handwritten digits. They obviously contain many dependencies between pixels; however, the pattern of these dependencies will vary acr...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBreadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. 
The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1\n\n2-Hop Neighbour:\nC4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.\n\n2-Hop Neighbour:\nDesign and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 1 1\n\n2-Hop Neighbour:\nSelf-Organization and Identification of Web Communities The vast improvement in information access is not the only advantage resulting from the increasing percentage of hyperlinked human knowledge available on the Web. Additionally, much potential exists for analyzing interests and relationships within science and society. However, the Web's decentralized and unorganized nature hampers content analysis. Millions of individuals operating independently and having a variety of backgrounds, knowledge, goals and cultures author the information on the Web. Despite the Web's decentralized, unorganized, and heterogeneous nature, our work shows that the Web self-organizes and its link structure allows efficient identification of communities. This self-organization is significant because no central authority or process governs the formation and structure of hyperlinks.\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. 
Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nStable Algorithms for Link Analysis The Kleinberg HITS and the Google PageRank algorithms are eigenvector methods for identifying \"authoritative\" or \"influential\" articles, given hyperlink or citation information. That such algorithms should give reliable or consistent answers is surely a desideratum, and in [10], we analyzed when they can be expected to give stable rankings under small perturbations to the linkage patterns. In this paper, we extend the analysis and show how it gives insight into ways of designing stable link analysis methods. This in turn motivates two new algorithms, whose performance we study empirically using citation data and web hyperlink data. 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Methodology and Modelling Technique for Systems of BDI Agents The construction of large-scale embedded software systems demands the use of design methodologies and modelling techniques that support abstraction, inheritance, modularity, and other mechanisms for reducing complexity and preventing error. If multi-agent systems are to become widely accepted as a basis for large-scale applications, adequate agent-oriented methodologies and modelling techniques will be essential. This is not just to ensure that systems are reliable, maintainable, and conformant, but to allow their design, implementation, and maintenance to be carried out by software analysts and engineers rather than researchers. In this paper we describe an agent-oriented methodology and modelling technique for systems of agents based upon the Belief-Desire-Intention (BDI) paradigm. Our models extend existing Object-Oriented (OO) models. By building upon and adapting existing, well-understood techniques, we take advantage of their maturity to produce an approach that can be easily lear...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nModelling and Design of Multi-Agent Systems Abstract. Agent technologies are now being applied to the development of large-scale commercial and industrial software systems. Such systems are complex, involving hundreds, perhaps thousands of agents, and there is a pressing need for system modelling techniques that permit their complexity to be effectively managed, and principled methodologies to guide the process of system design. Without adequate techniques to support the design process, such systems will not be sufficiently reliable, maintainable or extensible, will be difficult to comprehend, and their elements will not be re-usable. In this paper, we present techniques for modelling agents and multi-agent systems which adapt and extend existing Object-Oriented representation techniques, and a methodology which provides a clear conceptual framework to guide system design and specification. We have developed these techniques for systems of agents based upon a particular Belief-Desire-Intention architecture, but have sought to provide a framework for the description of agent systems that is sufficiently general to be applicable to other agent architectures, and which may be extended in various ways. 1\n\n1-Hop Neighbour:\nAn Overview of the Multiagent Systems Engineering Methodology To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents. 1.\n\n1-Hop Neighbour:\nOrganisational Rules as an Abstraction for the Analysis and Design of Multi-Agent Systems Multi-agent systems... 
In this paper we introduce three additional organisational concepts - organisational rules, organisational structures, and organisational patterns - and discuss why we believe they are necessary for the complete specification of computational organisations. In particular, we focus on the concept of organisational rules and introduce a formalism, based on temporal logic, to specify them. This formalism is then used to drive the definition of the organisational structure and the identification of the organisational patterns. Finally, the paper sketches some guidelines for a methodology for agent-oriented systems based on our expanded set of organisational abstractions.\n\n2-Hop Neighbour:\nMultiagent Systems Engineering: A Methodology For Analysis And Design Of Multiagent Systems (table-of-contents fragment) I. INTRODUCTION: 1.1 Background, 1.2 Problem, 1.3 Goal, 1.4 Assumptions, 1.5 Areas of Collaboration\n\n2-Hop Neighbour:\nHeterogeneous Database Integration Using Agent-Oriented Information Systems : The Department of Defense (DOD) has an extensive family of models used to simulate the mission level interaction of weapon systems. Interoperability and reuse of the underlying data files used to create simulation scenarios pose great challenges in this regard. Unlike traditional data integration methods common to federated database research, the emerging field of agent-oriented information systems (AOIS) views data as the central focus of an application while also providing an overall architectural framework for application development. We develop an AOIS solution relevant to this problem domain by combining object-oriented data modeling (OMT), a persistent programming language using a commercial object-oriented database (ObjectStore), and an agent-oriented analysis and design methodology (MaSE). Requirements from a contractor-led effort at the Air Force Research Laboratory (AFRL) known as CERTCORT are the basis for analysis and design of our system. We implement prototypical information-layer applications to conceptually demonstrate the reusability and integration of scenarios across simulation models. Keywords: AOIS, Agents, Modeling and Simulations, Heterogeneous Database Integration 1.\n\n2-Hop Neighbour:\nDesign & Specification of Dynamic, Mobile, and Reconfigurable Multiagent Systems Multiagent Systems use the power of collaborative software agents to solve complex distributed problems. There are many Agent-Oriented Software Engineering (AOSE) methodologies available to assist system designers to create multiagent systems.
However, none of these methodologies can specify agents with dynamic properties such as cloning, mobility or agent instantiation. This thesis starts the process to bridge the gap between AOSE methodologies and dynamic agent platforms by incorporating mobility into the current Multiagent Systems Engineering (MaSE) methodology. Mobility was specified within all components composing a mobile agent class. An agent component was also created that integrated the behavior of the components within an agent class and was transformed to handle most of the move responsibilities for a mobile agent. Those agent component and component mobility transformations were integrated into agentTool as a proof-of-concept and a demonstration system built on the mobility specifications was implemented for execution on the Carolina mobile agent platform.\n\n2-Hop Neighbour:\nA survey of Agent-Oriented Software Engineering Agent-Oriented Software Engineering is one of the most recent contributions to the field of Software Engineering. It has several benefits compared to existing development approaches, in particular the ability to let agents represent high-level abstractions of active entities in a software system. This paper gives an overview of recent research and industrial applications of both general high-level methodologies and on more specific design methodologies for industry-strength software engineering.\n\n2-Hop Neighbour:\nAgent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Second-Order Networks for Wall-Building Agents This paper describes robust neurocontrollers for groups of agents that perform construction tasks. They enable agents to balance multiple goals, perform sequences of actions and survive while building walls, corridors, intersections, and briar patches.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nGoal Directed Adaptive Behavior in Second-Order Neural Networks: The MAXSON family of architectures The paper presents a neural network architecture (MAXSON) based on second-order connections that can learn a multiple goal approach/avoid task using reinforcement from the environment. It also enables an agent to learn vicariously, from the successes and failures of other agents. The paper shows that MAXSON can learn certain spatial navigation tasks much faster than traditional Q-learning, as well as learn goal directed behavior, increasing the agent's chances of long-term survival. The paper shows that an extension of MAXSON (V-MAXSON) enables agents to learn vicariously, and this improves the overall survivability of the agent population.\n\n1-Hop Neighbour:\nMultiple Goal Q-Learning: Issues and Functions This paper addresses the concerns of agents using reinforcement learning to learn to achieve multiple simultaneous goals. It proves that an algorithm based on acting upon the maximal goal at any one time will, in many cases, not not produce the Maximal Expected Utility for the agent. The paper then examines the type of function approximator necessary for the agent's reinforcement learning system, and concludes that a bi-linear function is the best compromise between expressive power and speed of learning.\n\n1-Hop Neighbour:\nGoal Directed Adaptive Behavior in Second-Order Neural Networks: Leaning and Evolving in the MAXSON architecture The paper presents a neural network architecture (MAXSON) based on second-order connections that can learn a multiple goal approach/avoid task using reinforcement from the environment. It also enables an agent to learn vicariously, from the successes and failures of other agents. The paper shows that MAXSON can learn certain spatial navigation tasks much faster than traditional Q-learning, as well as learn goal directed behavior, increasing the agent's chances of long-term survival. The paper shows that an extension of MAXSON (V-MAXSON) enables agents to learn vicariously, and this improves the overall survivability of the agent population.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
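The wall-building record above, and its MAXSON neighbours, repeatedly use tabular Q-learning as the baseline for comparison. For reference only, the sketch below shows that baseline on a toy chain world; the environment, reward scheme and hyperparameters are invented for the example and are not taken from those papers.

# Tabular Q-learning on a toy chain world (the comparison baseline, not the MAXSON network itself).
import random

N_STATES = 6                       # states 0..5; reaching state 5 is the goal
ACTIONS = ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(500):               # training episodes
    s, done = 0, False
    while not done:
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # Q-learning update
        s = s2

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
# end of sketch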
[ { "content": "Classify the node 'Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an e#ective tool for building digital libraries.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nHigh-Performance Web Crawling SRC’s charter is to advance the state of the art in computer systems by doing basic and applied research in support of our company’s business objectives. 
Our interests and projects span scalable systems (including hardware, networking, distributed systems, and programming-language technology), the Internet (including the Web, e-commerce, and information retrieval), and human/computer interaction (including user-interface technology, computer-based appliances, and mobile computing). SRC was established in 1984 by Digital Equipment Corporation. We test the value of our ideas by building hardware and software prototypes and assessing their utility in realistic settings. Interesting systems are too complex to be evaluated solely in the abstract; practical use enables us to investigate their properties in depth. This experience is useful in the short term in refining our designs and invaluable in the long term in advancing our knowledge. Most of the major advances in information systems have come through this approach, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical character. Some of\n\n1-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nInverted files and dynamic signature files for optimisation of Web Directories Web directories are taxonomies for the classification of Web documents. This kind of IR systems present a specific type of search where the document collection is restricted to one area of the category graph. This paper introduces a specific data architecture for Web directories which improves the performance of restricted searches. That architecture is based on a hybrid data structure composed of an inverted file with multiple embedded signature files. Two variants based on the proposed model are presented: hybrid architecture with total information and hybrid architecture with partial information. The validity of this architecture has been analysed by means of developing both variants to be compared with a basic model. The performance of the restricted queries was clearly improved, specially the hybrid model with partial information, which yielded a positive response under any load of the search system.\n\n2-Hop Neighbour:\nWebBase : A repository of web pages In this paper, we study the problem of constructing and maintaining a large shared repository of web pages. We discuss the unique characteristics of such a repository, propose an architecture, and identify its functional modules. 
We focus on the storage manager module, and illustrate how traditional techniques for storage and indexing can be tailored to meet the requirements of a web repository. To evaluate design alternatives, we also present experimental results from a prototype repository called WebBase, that is currently being developed at Stanford University. Keywords : Repository, WebBase, Architecture, Storage management 1 Introduction A number of important applications require local access to substantial portions of the web. Examples include traditional text search engines [Google] [Avista], related page services [Google] [Alexa], and topic-based search and categorization services [Yahoo]. Such applications typically access, mine or index a local cache or repository of web...\n\n2-Hop Neighbour:\nEvaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.\n\n2-Hop Neighbour:\nFEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 1 Introduction As the world wide web rapidly evo...\n\n2-Hop Neighbour:\nInferring Web Communities Through Relaxed Cocitation and Dense Bipartite Graphs Community forming is one of the important activity in the Web. The Web harbors a large number of communities. A community is a group of content creators that manifests itself as a set of interlinked pages. Given a large collection of pages our aim is to find potential communities in the Web. In the literature, Ravi Kumar et al. [18] proposed a trawling method to find potential communities by abstracting a core of the community as a group of pages that form a complete bipartite graph (CBG) (web-page as a node and link as an edge between two nodes). 
The trawling approach extracts a small group of pages that form a CBG, which is a signature of a potential community.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
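The focused-crawling record above describes a best-first crawl in which off-topic pages may still be expanded for a bounded number of hops (tunneling) before the path is abandoned. Below is a minimal sketch of that idea; relevance(), fetch_links(), the relevance threshold and the tunnel depth are placeholders chosen for the example, not the paper's actual scoring functions or parameter values.

# Best-first focused crawl with bounded tunneling (illustrative sketch).
import heapq

THRESHOLD = 0.5        # assumed relevance cut-off for "on topic"
MAX_TUNNEL = 3         # assumed maximum run of consecutive off-topic hops

def focused_crawl(seeds, relevance, fetch_links, budget=100):
    frontier = [(-1.0, url, 0) for url in seeds]   # (-score, url, off-topic run length)
    heapq.heapify(frontier)
    seen, collected = set(seeds), []
    while frontier and len(collected) < budget:
        _, url, run = heapq.heappop(frontier)
        score = relevance(url)
        if score >= THRESHOLD:
            collected.append(url)
            run = 0                                # back on topic: reset the tunnel
        else:
            run += 1
            if run > MAX_TUNNEL:                   # terminate this tunneling path
                continue
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link, run))
    return collected

if __name__ == "__main__":
    toy_web = {"s": ["a", "b"], "a": ["c"], "b": [], "c": ["d"], "d": []}
    on_topic = {"s", "a", "c"}
    print(focused_crawl(["s"],
                        relevance=lambda u: 1.0 if u in on_topic else 0.0,
                        fetch_links=lambda u: toy_web.get(u, [])))
# end of sketch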
[ { "content": "Classify the node 'The Eel Programming Language and Internal Concurrency in Logic Agents This paper describes work done on creating the logic programming language Eel. The language is designed for implementing agents with a behaviour based, concurrent internal architecture. The paper also suggests a new such architecture which improves on the ones currently available. It gives examples of how parts of that architecture are implemented in Eel and comments that the Agent Oriented Programming paradigm currently contains two different metaphors for concurrency. Eel's event based approach to process communication and process initiation introduces an explicit representation of state to a logic program. A new declarative approach to object states is demonstrated as a part of the object oriented implementation of the suggested agent architecture. As well as being a programming language, Eel is a formalism which is well suited for logic based machine learning of behaviour and interaction. This paper briefly outlines the scope for using such learning to improve on exist...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nComputational Logic and Multi-Agent Systems: a Roadmap Agent-based computing is an emerging computing paradigm that has proved extremely successful in dealing with a number of problems arising from new technological developments and applications. In this paper we report the role of computational logic in modeling intelligent agents, by analysing existing agent theories, agent-oriented programming languages and applications, as well as identifying challenges and promising directions for future research. 1 Introduction In the past ten years the field of agent-based computing has emerged and greatly expanded, due to new technological developments such as ever faster and cheaper computers, fast and reliable interconnections between them as well as the emergence of the world wide web. These developments have at the same time opened new application areas, such as electronic commerce, and posed new problems, such as that of integrating great quantities of information and building complex software, embedding legacy code. The establishment o...\n\n2-Hop Neighbour:\nCommunicating Agents We study the problem of endowing logic-based agents that can reason about their own beliefs as well as the beliefs of other agents with communication skills. We show how communication performatives from existing agent communication languages as well as their preconditions and effects can be expressed within logic-based agents in terms of the agents' beliefs. We illustrate the resulting language for programming logic-based agents by means of examples. 1 Introduction In an earlier paper [6], we propose an approach to logic-based agents by combining the approach to agents by Kowalski and Sadri [11] and the approach to meta-reasoning by Costantini et al. [5, 4]. Similarly to Kowalski and Sadri's agents, the agents in [6] are hybrid in that they exhibit both rational (or deliberative) and reactive behaviour. The reasoning core of the agents is a proof procedure that combines forward and backward reasoning.
Backward reasoning is used primarily for planning, problem solving and other deliber...\n\n2-Hop Neighbour:\nReasoning Agents In Dynamic Domains The paper discusses an architecture for intelligent agents based on the use of A-Prolog - a language of logic programs under the answer set semantics. A-Prolog is used to represent the agent's knowledge about the domain and to formulate the agent's reasoning tasks. We outline how these tasks can be reduced to answering questions about properties of simple logic programs and demonstrate the methodology of constructing these programs. Keywords: Intelligent agents, logic programming and nonmonotonic reasoning. 1 INTRODUCTION This paper is a report on the attempt by the authors to better understand the design of software components of intelligent agents capable of reasoning, planning and acting in a changing environment. The class of such agents includes, but is not limited to, intelligent mobile robots, softbots, immobots, intelligent information systems, expert systems, and decision-making systems. The ability to design intelligent agents (IA) is crucial for such diverse tasks as ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Performance Evaluation of Alternative Mapping Schemes for Storing XML Data in a Relational Database XML is emerging as one of the dominant data formats for data processing on the Internet. To query XML data, query languages like XQL, Lorel, XML-QL, or XML-GL have been proposed. In this paper, we study how XML data can be stored and queried using a standard relational database system. For this purpose, we present alternative mapping schemes to store XML data in a relational database and discuss how XML-QL queries can be translated into SQL queries for every mapping scheme. We present the results of comprehensive performance experiments that analyze the tradeoffs of the alternative mapping schemes in terms of database size, query performance and update performance. While our discussion is focussed on XML and XML-QL, the results of this paper are relevant for most semi-structured data models and most query languages for semi-structured data. 1 Introduction It has become clear that not all applications are met by the relational, object-relational, or object-oriented data models. ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBenchmarking XML Management Systems: The XOO7 Way The effectiveness of existing XML query languages has been studied by many who focused on the comparison of linguistic features, implicitly reflecting the fact that most XML tools exist only on paper. In this paper, with a focus on efficiency and concreteness, we propose a pragmatic first step toward the systematic benchmarking of XML query processing platforms. We begin by identifying the necessary functionalities an XML data management system should support. We review existing approaches for managing XML data and the query processing capabilities of these approaches. We then compare three XML query benchmarks XMach-1, XMark and XOO7 and discuss the applicability, strengths and limitations of these benchmarks. We highlight the bias of these benchmarks towards the data centric view of XML and motivate our selection of XOO7 to extend with document centric queries. We complete XOO7 to capture the information retrieval capabilities of XML management systems. Finally we summarize our contributions and discuss future directions.\n\n1-Hop Neighbour:\nPath Materialization Revisited: An Efficient Storage Model for XML Data XML is emerging as a new major standard for representing data on the world wide web. Several XML storage models have been proposed to store XML data in different database management systems. The unique feature of model-mapping-based approaches is that no DTD information is required for XML data storage. In this paper, we present a new model-mapping-based storage model, called XParent. Unlike the existing work on model-mapping-based approaches that emphasized on converting XML documents to/from database schema and translation of XML queries into SQL queries, in this paper, we focus ourselves on the effectiveness of storage models in terms of query processing. We study the key issues that affect query performance, namely, storage schema design (storing XML data across multiple tables) and path materialization (storing path information in databases). We show that similar but different storage models significantly affect query performance. A performance study is conducted using three data sets and query sets.
The experimental results are presented. Keywords: Semistructured data, XML database 1\n\n1-Hop Neighbour:\nIndexing Semistructured Data This paper describes techniques for building and exploiting indexes on semistructured data: data that may not have a fixed schema and that may be irregular or incomplete. We first present a general framework for indexing values in the presence of automatic type coercion. Then based on Lore, a DBMS for semistructured data, we introduce four types of indexes and illustrate how they are used during query processing. Our techniques and indexing structures are fully implemented and integrated into the Lore prototype. 1 Introduction We call data that is irregular or that exhibits type and structural heterogeneity semistructured, since it may not conform to a rigid, predefined schema. Such data arises frequently on the Web, or when integrating information from heterogeneous sources. In general, semistructured data can be neither stored nor queried in relational or object-oriented database management systems easily and efficiently. We are developing Lore 1 , a database management system d...\n\n2-Hop Neighbour:\nThe XML Benchmark Project With standardization efforts of a query language for XML documents drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications pose to data management: validation, performance evaluation and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's abilities to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users to compare XML databases independent of their own, specific application scenario. To this end, the benchmark offers a set of queries each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload we propose consists of a scalable document database and a concise, yet comprehensive set of queries, which covers the major aspects of query processing. The queries' challenges range from stressing the textual character of the document to data analysis queries, but include also typical ad-hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. They are intended to give a first baseline, illustrating the state of the art.\n\n2-Hop Neighbour:\nImproving Index Structures for Structured Document Retrieval Structured document retrieval has established itself as a new research area in the overlap between Database Systems and Information Retrieval. This work proposes a filtering technique, that can be added to already existing index structures of many structured document retrieval systems. This new technique takes the contextual structure information of query and document database into account and reduces the occurrence sets returned by the original index structure drastically. This improves the performance of query evaluation. A measure is introduced that allows to quantify the added value of the proposed index structure. Based on this measure a heuristic is presented that allows to include only valuable context information in the index structure.
1 Introduction With the growing importance of Information Retrieval in the presence of a vast amount of structured documents in formalisms like SGML ([ISO86]) or the future WWW language XML ([W3C99]), sophisticated and efficient indexing techn...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
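The XML-storage record above compares schemes that shred XML documents into relational tables and translate queries into SQL. The sketch below shows one generic node/parent mapping and a path query rewritten as a chain of self-joins, using SQLite; it is a simplified stand-in for the specific schemas the record discusses (such as XParent), not a reproduction of any of them.

# Shred an XML document into a generic node table and answer a path query with SQL joins.
import sqlite3
import xml.etree.ElementTree as ET
from itertools import count

doc = "<bib><book><title>XML</title><year>2000</year></book></bib>"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)")
ids = count(1)

def shred(elem, parent_id=None):
    # store every element as an (id, parent, tag, text) row
    nid = next(ids)
    con.execute("INSERT INTO node VALUES (?, ?, ?, ?)",
                (nid, parent_id, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, nid)

shred(ET.fromstring(doc))

# The path /bib/book/title becomes a chain of self-joins over the node table.
rows = con.execute("""
    SELECT t.text FROM node b
    JOIN node bk ON bk.parent = b.id AND bk.tag = 'book'
    JOIN node t  ON t.parent = bk.id AND t.tag = 'title'
    WHERE b.tag = 'bib' AND b.parent IS NULL
""").fetchall()
print(rows)   # expected: [('XML',)]
# end of sketch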
[ { "content": "Classify the node 'Recognizing User's Context from Wearable Sensors: Baseline System INTRODUCTION We describe a baseline system for training and classifying natural situations. It is a baseline system because it will provide the reference implementation of the context classifier against which we can compare more sophisticated machine learning techniques. It should be understood that this system is a precursor to a system for understanding all types of observable context not just location. We are less interested in obtaining high precision and recall rates than we are in obtaining appropriate model structures for doing higher order tasks like clustering and prediction on a user's life activities. II. BACKGROUND There has been some excellent work on recognizing various kinds of user situations via wearable sensors. Starner [6] uses HMMs and omnidirectional and directional cameras to determine the user's location in a building and current action during a physical game. Aoki also uses a head mounted directional camera to d' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nRealtime Personal Positioning System for Wearable Computers Context awareness is an important functionality for wearable computers. In particular, the computer should know where the person is in the environment. This paper proposes an image sequence matching technique for the recognition of locations and previously visited places. As in single word recognition in speech recognition, a dynamic programming algorithm is proposed for the calculation of the similarity of different locations. The system runs on a stand alone wearable computer such as a Libretto PC. Using a training sequence a dictionary of locations is created automatically. These locations are then be recognized by the system in realtime using a hatmounted camera. 1. Introduction Obtaining user location is one of the important functions for wearable computers in two applications. One is automatic self-summary, and the other is contextaware user interface. In self-summary, the user is wearing a small camera and a small computer, capturing and recording every event of his/her daily ...\n\n1-Hop Neighbour:\nVisual Contextual Awareness in Wearable Computing Small, body-mounted video cameras enable a different style of wearable computing interface. As processing power increases, a wearable computer can spend more time observing its user to provide serendipitous information, manage interruptions and tasks, and predict future needs without being directly commanded by the user. This paper introduces an assistant for playing the real-space game Patrol. This assistant tracks the wearer's location and current task through computer vision techniques and without off-body infrastructure. In addition, this paper continues augmented reality research, started in 1995, for binding virtual data to physical locations. 1. Introduction For most computer systems, even virtual reality systems, sensing techniques are a means of getting input directly from the user. However, wearable computers offer a unique opportunity to re-direct sensing technology towards recovering more general user context. 
Wearable computers have the potential to \"see\" as the user sees...\n\n1-Hop Neighbour:\nFinding Location Using Omnidirectional Video on a Wearable Computing Platform In this paper we present a framework for a navigation system in an indoor environment using only omnidirectional video. Within a Bayesian framework we seek the appropriate place and image from the training data to describe what we currently see and infer a location. The posterior distribution over the state space conditioned on image similarity is typically not Gaussian. The distribution is represented using sampling and the location is predicted and verified over time using the Condensation algorithm. The system does not require complicated feature detection, but uses a simple metric between two images. Even with low resolution input, the system may achieve accurate results with respect to the training data when given favorable initial conditions. 1. Introduction and Previous Work Recognizing location is a difficult but often essential part of identifying a wearable computer user's context. Location sensing may be used to provide mobility aids for the blind [13], spatially-based not...\n\n2-Hop Neighbour:\nLearning Visual Models of Social Engagement We introduce a face detector for wearable computers that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. Using this method we describe a wearable system that perceives \"social engagement,\" i.e., when the wearer begins to interact with other individuals. Our experimental system proved > 90% accurate when tested on wearable video data captured at a professional conference. Over 300 individuals were captured during social engagement, and the data was separated into independent training and test sets. A metric for balancing the performance of face detection, localization, and recognition in the context of a wearable interface is discussed. Recognizing social engagement with a user's wearable computer provides context data that can be useful in determining when the user is interruptible. In addition, social engagement detection may be incorporated into a user interface to improve the quality of mobile face recognition software. For example, the user may cue the face recognition system in a socially graceful way by turning slightly away and then toward a speaker when conditions for recognition are favorable. 1\n\n2-Hop Neighbour:\nReal-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video Hidden Markov models (HMM's) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that demonstrate a realtime HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon. 1 Introduction While there are many different types of gestures, the most structured sets belong to the sign languages. In sign language, each gesture already has assigned meaning, and strong rules of context and grammar may be applied to make recognition tractable. 
To date, most work on sign language recognition has employed expensi...\n\n2-Hop Neighbour:\nA Wearable Spatial Conferencing Space Wearable computers provide constant access to computing and communications resources. In this paper we describe how the computing power of wearables can be used to provide spatialized 3D graphics and audio cues to aid communication. The result is a wearable augmented reality communication space with audio enabled avatars of the remote collaborators surrounding the user. The user can use natural head motions to attend to the remote collaborators, can communicate freely while being aware of other side conversations and can move through the communication space. In this way the conferencing space can support dozens of simultaneous users. Informal user studies suggest that wearable communication spaces may offer several advantages, both through the increase in the amount of information it is possible to access and the naturalness of the interface. 1: Introduction One of the broad trends emerging in human-computer interaction is the increasing portability of computing and communication fac...\n\n2-Hop Neighbour:\nSymbiotic Interfaces For Wearable Face Recognition We introduce a wearable face detection method that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. Using this method we describe a wearable system that perceives “social engagement,” i.e., when the wearer begins to interact with other individuals. One possible application is improving the interfaces of portable consumer electronics, such as cellular phones, to avoid interrupting the user during face-to-face interactions. Our experimental system proved > 90% accurate when tested on wearable video data captured at a professional conference. Over three hundred individuals were captured, and the data was separated into independent training and test sets. A goal is to incorporate user interface in mobile machine recognition systems to improve performance. The user may provide real-time feedback to the system or may subtly cue the system through typical daily activities, such as turning to face a speaker, as to when conditions for recognition are favorable. 1\n\n2-Hop Neighbour:\nWearable Computing Meets Ubiquitous Computing: Reaping the best of both worlds This paper describes what we see as fundamental difficulties in both the pure ubiquitous computing and pure wearable computing paradigms when applied to context-aware applications. In particular, ubiquitous computing and smart room systems tend to have difficulties with privacy and personalization, while wearable systems have trouble with localized information, localized resource control, and resource management between multiple people. These difficulties are discussed, and a peer-to-peer network of wearable and ubiquitous computing components is proposed as a solution. This solution is demonstrated through several implemented applications. 1 Introduction Ubiquitous computing and wearable computing have been posed as polar opposites even though they are often applied in very similar applications. Here we first outline the advantages and disadvantages of each and propose that the two perspectives have complementary problems. We then attempt to demonstrate that the failing of both ubiquitous...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Comparison of Usage Evaluation and Inspection Methods for Assessing Groupware Usability Many researchers believe that groupware can only be evaluated by studying real collaborators in their real contexts, a process that tends to be expensive and timeconsuming. Others believe that it is more practical to evaluate groupware through usability inspection methods. Deciding between these two approaches is difficult, because it is unclear how they compare in a real evaluation situation. To address this problem, we carried out a dual evaluation of a groupware system, with one evaluation applying userbased techniques, and the other using inspection methods. We compared the results from the two evaluations and concluded that, while the two methods have their own strengths, weaknesses, and trade-offs, they are complementary. Because the two methods found overlapping problems, we expect that they can be used in tandem to good effect, e.g., applying the discount method prior to a field study, with the expectation that the system deployed in the more expensive field study has a better chance of doing well because some pertinent usability problems will have already been addressed. Keywords Evaluation, groupware usability, inspection evaluation techniques, usage evaluation techniques.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCollabLogger: A Tool for Visualizing Groups At Work The CollabLogger is a visual tool that supports usability analyses of human-computer interaction in a team environment. Participants in our computer-mediated activity were engaged in a small-scale manufacturing testbed project. Interactions of the group were mediated by Teamwave Workplace 1 and the members performed both synchronous and asynchronous activities depending on their availability, project requirements, and due to chance meetings in the collaborative space. The software was instrumented to log users' interactions with the system and each other. The CollabLogger addresses the problem of helping investigators analyze the volumes of log data that groupware tools can generate. Visual tools are powerful when large amounts of diverse data present themselves. The place-based collaboration environment offered by Teamwave Workplace provided a level of organization that allowed us to create a visual interface with which to perform exploratory sequential data analysis. Preliminary ...\n\n1-Hop Neighbour:\nAdapting the Locales Framework for Heuristic Evaluation of Groupware Heuristic evaluation is a rapid, cheap and effective way for identifying usability problems in single user systems. However, current heuristics do not provide guidance for discovering problems specific to groupware usability. In this paper, we take the Locales Framework and restate it as heuristics appropriate for evaluating groupware. These are: 1) Provide locales; 2) Provide awareness within locales; 3) Allow individual views; 4) Allow people to manage and stay aware of their evolving interactions; and 5) Provide a way to organize and relate locales to one another. To see if these new heuristics are useful in practice, we used them to inspect the interface of Teamwave Workplace, a commercial groupware product. We were successful in identifying the strengths of Teamwave as well as both major and minor interface problems. 
KEY WORDS: Groupware evaluation, heuristic evaluation, inspection methods, locales framework. 1 INTRODUCTION HCI researchers and practitioners now have a good rep...\n\n1-Hop Neighbour:\nUsing a Room Metaphor to Ease Transitions in Groupware Many groupware systems contain gaps that hinder or block natural social interaction or that do not let people easily move between different styles of work. We believe that the adoption of a room metaphor can ease people's transitions across these gaps, allowing them to work together more naturally. Using the TeamWave Workplace system as an example, we show how particular gaps are removed. First, we ease a person's transition between single user and groupware applications by making rooms suitable for both individual and group activity. Second, people can move fluidly between asynchronous and synchronous work because room artifacts persist. People can leave messages, documents and annotations for others, or work on them together when occupying the room at the same time. Third, we ease the difficulty of initiating real time work by providing people with awareness of others who may be available for real-time interactions, and by automatically establishing connections as users enter a commo...\n\n2-Hop Neighbour:\nRooms, Protocols, and Nets: Metaphors for Computer Supported Cooperative Learning of Distributed Groups : We discuss an integrative design for computer supported cooperative learning (CSCL) environments. Three common problems of CSCL are addressed: How to achieve social orientation and group awareness, how to coordinate goal-directed interaction, and how to construct a shared knowledge base. With respect to each problem, we propose a guiding metaphor which links theoretical, technical, and usability requirements. If appropriately implemented, each metaphor resolves one problem: Virtual rooms support social orientation, learning protocols guide interactions aimed at knowledge acquisition, and learning nets represent socially shared knowledge. Theoretically, the metaphor of virtual rooms originates in work on virtual spaces in human computer interaction, learning protocols are related to speech act theory, and learning nets are based on models of knowledge representation. A prototype system implementing the virtual room metaphor is presented. We argue that by further integrating these thre...\n\n2-Hop Neighbour:\nGroup Task Analysis for Groupware Usability Evaluations Techniques for inspecting the usability of groupware applications have recently been proposed. These techniques focus on the mechanics of collaboration rather than the work context in which a system is used, and offer time and cost savings by not requiring actual users or fully-functional prototypes. Although these techniques are valuable, adding information about task and work context could improve the quality of inspection results. We introduce a method for analysing group tasks that can be used to add context to discount groupware evaluation techniques. Our method allows for the specification of collaborative scenarios and tasks by considering the mechanics of collaboration, levels of coupling during task performance, and variability in task execution. We describe how this type of task analysis could be used in a new inspection technique based on cognitive walkthrough.\n\n2-Hop Neighbour:\nMining Usability Information from Log Files: AMulti-Pronged Approach rooms is configurable by its occupants in how they organize various tools housing their data, documents, and graphics. 
The TW system provides for synchronous and asynchronous user interactions, but importantly these interactions are in the context of relevant data. The work in this experiment was characterized by several full group meetings (for planning and coordination) interspersed with periods of individual activity (asychronous work) and smaller coordination meetings of two or three team members around the \"hand-off\" of output from a task used as input for another task. Collected Data The native version of TW produces a server-based log-file that contains information about the identity of users entering the distributed application, the identity of the rooms through which users navigate, file uploads, and message passing between users. This set of interactions was deemed too rudimentary for capturing the type of data needed for usability analysis. Since the source co", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'WSQ/DSQ: A Practical Approach for Combined Querying of Databases and the Web www-db.stanford.edu We present WSQ/DSQ (pronounced “wisk-disk”), a new approach for combining the query facilities of traditional databases with existing search engines on the Web. WSQ, for Web-Supported (Database) Queries, leverages results from Web searches to enhance SQL queries over a relational database. DSQ, for Database-Supported (Web) Queries, uses information stored in the database to enhance and explain Web searches. This paper focuses primarily on WSQ, describing a simple, low-overhead way to support WSQ in a relational DBMS, and demonstrating the utility of WSQ with a number of interesting queries and results. The queries supported by WSQ are enabled by two virtual tables, whose tuples represent Web search results generated dynamically during query execution. WSQ query execution may involve many high-latency calls to one or more search engines, during which the query processor is idle. We present a lightweight technique called asynchronous iteration that can be integrated easily into a standard sequential query processor to enable concurrency between query processing and multiple Web search requests. Asynchronous iteration has broader applications than WSQ alone, and it opens up many interesting query optimization issues. We have developed a prototype implementation of WSQ by extending a DBMS with virtual tables and asynchronous iteration; performance results are reported. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nEvaluating Top-k Queries over Web-Accessible Databases A query to a web search engine usually consists of a list of keywords, to which the search engine responds with the best or “top ” k pages for the query. This top-k query model is prevalent over multimedia collections in general, but also over plain relational data for certain applications. For example, consider a relation with information on available restaurants, including their location, price range for one diner, and overall food rating. A user who queries such a relation might simply specify the user’s location and target price range, and expect in return the best 10 restaurants in terms of some combination of proximity to the user, closeness of match to the target price range, and overall food rating. Processing top-k queries efficiently is challenging for a number of reasons. One critical such reason is that, in many web applications, the relation attributes might not be available other than through external web-accessible form interfaces, which we will have to query repeatedly for a potentially large set of candidate objects. In this article, we study how to process top-k queries efficiently in this setting, where the attributes for which users specify target values might be handled by external, autonomous sources with a variety of access interfaces. We present a sequential algorithm for processing such queries, but observe that any sequential top-k query processing strategy is bound to require unnecessarily long query processing times, since web accesses exhibit high and variable latency. Fortunately, web sources can be probed in parallel, and each source can typically process concurrent requests, although sources may impose some restrictions on the type and number of probes that they are willing to accept. 
We adapt our sequential query processing technique and introduce an efficient algorithm that maximizes source-access parallelism to minimize query response time, while satisfying source-access constraints. We evaluate\n\n1-Hop Neighbour:\nPartial Answers for Unavailable Data Sources Abstract. Many heterogeneous database system products and prototypes exist today; they will soon be deployed in a wide variety of environments. Most existing systems suffer from an Achilles ’ heel: they ungracefully fail in presence of unavailable data sources. If some data sources are unavailable when accessed, these systems either silently ignore them or generate an error. This behavior is improper in environments where there is a non-negligible probability that data sources cannot be accessed (e.g., Internet). In case some data sources cannot be accessed when processing a query, the complete answer to this query cannot be computed; some work can however be done with the data sources that are available. In this paper, we propose a novel approach where, in presence of unavailable data sources, the answer to a query is a partial answer. A partial answer is a representation of the work that has been done in case the complete answer to a query cannot be computed, and of the work that remains to be done in order to obtain this complete answer. The use of a partial answer is twofold. First, it contains an incremental query that allows to obtain the complete answer without redoing the work that has already been done. Second, the application program can extract information from a partial answer through the use of a secondary query, which we call a parachute query. In this paper, we present a framework for partial answers and we propose three algorithms for the evaluation of queries in presence of unavailable sources, the construction of incremental queries and the evaluation of parachute queries. 1\n\n1-Hop Neighbour:\nHyperQueries: Dynamic Distributed Query Processing on the Internet In this paper we propose a new framework for dynamic distributed query processing based on so-called HyperQueries which are essentially query evaluation sub-plans \"sitting behind \" hyperlinks. We illustrate the flexibility of this distributed query processing architecture in the context of B2B electronic market places. Architecting an electronic market place as a data warehouse by integrating all thedatafromall participating enterprises in one centralized repository incurs severe problems. Using HyperQueries, application integration is achieved via dynamic distributed query evaluation plans. The electronic market place serves as an intermediary between clients and providers executing their sub-queries referenced via hyperlinks. The hyperlinks are embedded within data objects of the intermediary 's database. Retrieving such a virtual object will automatically initiate the execution of the referenced HyperQuery in order to materialize the entire object. Thus, sensitive data remains under the full control of the data providers. 1\n\n2-Hop Neighbour:\nMaximizing Coverage of Mediated Web Queries Over the Web, mediators are built on large collections of sources to provide integrated access to Web content (e.g., meta-search engines). In order to minimize the expense of visiting a large number of sources, mediators need to choose a subset of sources to contact when processing queries. As fewer sources participate in processing a mediated query, the coverage of the query goes down. 
In this paper, we study this trade-off and develop techniques for mediators to maximize the coverage for their queries while at the same time visiting a subset of their sources. We formalize the problem; study its complexity; propose algorithms to solve it; and analyze the theoretical performance guarantees of the algorithms. We also study the performance of our algorithms through simulation experiments. 1 Introduction Web sources often provide limited information \"coverage.\" For instance, one type of information source is search engines, such as Lycos [27], Northern Light [29] and Yahoo [30]....\n\n2-Hop Neighbour:\nOptimal Aggregation Algorithms for Middleware Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). Each object is assigned an overall grade, that is obtained by combining the attribute grades using a fixed monotone aggregation function, or combining rule, such as min or average. To determine the top k objects, that is, k objects with the highest overall grades, the naive algorithm must access every object in the database, to find its grade under each attribute. Fagin has given an algorithm (\"Fagin's Algorithm\", or FA) that is much more efficient. For some monotone aggregation functions, FA is optimal with high probability in the worst case. We analyze an elegant and remarkably simple algorithm (\"the threshold algorithm\", or TA) that is optimal in a much stronger sense than FA. We show that TA is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability worst-case sense, but over every database. Unlike FA, which requires large buffers (whose size may grow unboundedly as the database size grows), TA requires only a small, constant-size buffer. TA allows early stopping, which yields, in a precise sense, an approximate version of the top k answers. We distinguish two types of access: sorted access (where the middleware system obtains the grade of an object in some sorted list by proceeding through the list sequentially from the top), and random access (where the middleware system requests the grade of an object in a list, and obtains it in one step). We consider the scenarios where ra...\n\n2-Hop Neighbour:\nNiagaraCQ: A Scalable Continuous Query System for Internet Databases Continuous queries are persistent queries that allow users to receive new results when they become available. While continuous query systems can transform a passive web into an active environment, they need to be able to support millions of queries due to the scale of the Internet. No existing systems have achieved this level of scalability. NiagaraCQ addresses this problem by grouping continuous queries based on the observation that many web queries share similar structures. Grouped queries can share the common computation, tend to fit in memory and can reduce the I/O cost significantly. Furthermore, grouping on selection predicates can eliminate a large number of unnecessary query invocations. Our grouping technique is distinguished from previous group optimization approaches in the following ways. First, we use an incremental group optimization strategy with dynamic re-grouping. New queries are added to existing query groups, without having to regroup already installed queries.
Second, we use a query-split scheme that requires minimal changes to a general-purpose query engine. Third, NiagaraCQ groups both change-based and timer-based queries in a uniform way. To insure that NiagaraCQ is scalable, we have also employed other techniques including incremental evaluation of continuous queries, use of both pull and push models for detecting heterogeneous data source changes, and memory caching. This paper presents the design of NiagaraCQ system and gives some experimental results on the system’s performance and scalability. 1.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
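The "Optimal Aggregation Algorithms for Middleware" entry above describes the threshold algorithm (TA) for finding the top k objects under a monotone aggregation function. A minimal sketch of that idea follows, assuming toy in-memory sorted lists in place of sorted access, a dictionary lookup in place of random access, and min as the combining rule; the function and variable names are illustrative and not taken from the paper.

```python
# Minimal sketch of the threshold algorithm (TA) for top-k aggregation.
# Toy in-memory data; names and structure are illustrative only.
import heapq

def threshold_algorithm(sorted_lists, grades, agg, k):
    """sorted_lists: per-attribute lists of (object, grade), highest grade first.
    grades: dict object -> tuple of grades (simulates random access).
    agg: monotone aggregation function over grades, e.g. min.
    Returns the k objects with the highest aggregated grade."""
    seen = set()
    top_k = []  # min-heap of (overall_grade, object)
    depth = 0
    while True:
        last_row = []
        for lst in sorted_lists:
            if depth < len(lst):
                obj, grade = lst[depth]
                last_row.append(grade)
                if obj not in seen:
                    seen.add(obj)
                    overall = agg(grades[obj])  # random access for the other grades
                    heapq.heappush(top_k, (overall, obj))
                    if len(top_k) > k:
                        heapq.heappop(top_k)
        threshold = agg(last_row)  # aggregate of the grades seen at this depth
        depth += 1
        # Stop once k buffered objects are at least as good as any unseen object can be.
        if len(top_k) == k and top_k[0][0] >= threshold:
            return sorted(top_k, reverse=True)
        if depth >= max(len(lst) for lst in sorted_lists):
            return sorted(top_k, reverse=True)

# Tiny example: two attributes, min as the combining rule.
grades = {"a": (0.9, 0.8), "b": (0.7, 0.9), "c": (0.6, 0.3)}
lists = [sorted(((o, g[i]) for o, g in grades.items()), key=lambda x: -x[1])
         for i in range(2)]
print(threshold_algorithm(lists, grades, min, 2))  # -> [(0.8, 'a'), (0.7, 'b')]
```

Round-robin sorted access maintains a threshold from the last grades seen at each depth, so the scan can stop as soon as k buffered objects are at least as good as anything not yet seen.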
[ { "content": "Classify the node 'Internet search for Indian languages With the Internet growing at an exponential rate, no single search engine can index all of the web. It is therefore necessary to build specialized search engines that fulfill particular needs of a community of people. An example is Citeseer, which indexes research papers on the web. Also, as the web is increasingly hosting web pages in different languages, it is essential to be able to search for information stored in a specific language. For a search engine aimed at information in a particular language, an easy to use user interface is as essential as good response time and relevance of results. We introduce Shodh, a search engine for an Indian language. A prototype has been developed for selected set of pages and results are satisfactory. The user interface of the search engine includes both querying facilities as well as display of query results in the same language in which the information is stored. Contents 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web, ” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000 % over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. 
Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nThe XML Benchmark Project With standardization efforts of a query language for XML documents drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications pose to data management: validation, performance evaluation and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's abilities to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users to compare XML databases independent of their own, specific application scenario. To this end, the benchmark offers a set of queries each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload we propose consists of a scalable document database and a concise, yet comprehensive set of queries, which covers the major aspects of query processing. The queries' challenges range from stressing the textual character of the document to data analysis queries, but include also typical ad-hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. They are intended to give a first baseline, illustrating the state of the art.\n\n2-Hop Neighbour:\nEstimating the Usefulness of Search Engines In this paper, we present a statistical method to estimate the usefulness of a search engine for any given query. The estimates can be used by a metasearch engine to choose local search engines to invoke. For a given query, the usefulness of a search engine in this paper is defined to be a combination of the number of documents in the search engine that are sufficiently similar to the query and the average similarity of these documents. Experimental results indicate that the proposed estimation method is quite accurate. 1 Introduction Many search engines have been created on the Internet to help ordinary users find desired data. Each search engine has a corresponding database that defines the set of documents that can be searched by the search engine. Usually, an index for all documents in the database is created and stored in the search engine to speed up query processing. The amount of data in the Internet is huge (it is believed that by the end of 1997, there were more than 300 mil...\n\n2-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. 
The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nC4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.\n\n2-Hop Neighbour:\nClustering Categorical Data: An Approach Based on Dynamical Systems We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical data. By \"categorical data,\" we mean tables with fields that cannot be naturally ordered by a metric --- e.g., the names of producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure arising from the cooccurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems. We discuss experiments on a variety of tables of synthetic and real data; we find that our iterative methods converge quickly to prominently correlated values of various categorical fields. 1 Introduction Much of the data in databases is categorical: fields in tables whose attributes cannot naturally be ordered as numerical values can. The pro...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
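The Shodh entry above centers on indexing and querying pages in an Indian language. The sketch below shows only the generic core such a system needs, a Unicode-aware tokenizer feeding an in-memory inverted index with conjunctive lookup; it is a loose illustration over made-up documents, not Shodh's actual design or interface.

```python
# Loose sketch (not Shodh's design): Unicode-aware tokenizer plus a tiny
# in-memory inverted index. Document texts and identifiers are made up.
import re
from collections import defaultdict

def tokenize(text):
    # In Python 3, \w already matches Unicode word characters, so
    # Devanagari and other non-Latin scripts are kept as terms.
    return [t.lower() for t in re.findall(r"\w+", text)]

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids

    def add(self, doc_id, text):
        for term in tokenize(text):
            self.postings[term].add(doc_id)

    def search(self, query):
        # Conjunctive (AND) query: intersect the posting lists.
        terms = tokenize(query)
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

index = InvertedIndex()
index.add("doc1", "खोज इंजन internet search")  # mixed-script toy document
index.add("doc2", "internet browsing")
print(index.search("internet search"))          # -> {'doc1'}
```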
[ { "content": "Classify the node 'Breadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBackground Readings for Collection Synthesis\n\n1-Hop Neighbour:\nDesign and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 1 1\n\n1-Hop Neighbour:\nTopic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.\n\n2-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. 
These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.\n\n2-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n2-Hop Neighbour:\nBuilding Domain-Specific Search Engines with Machine Learning Techniques Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with the general, Web-wide search engines. For example, www.campsearch.com allows complex queries by agegroup, size, location and cost over summer camps. Unfortunately, these domain-specific search engines are difficult and time consuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that automates efficient spidering, populating topic hierarchies, and identifying informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers. It already contains over 33,000 papers and is publicly available at www.cora.jprc.com. 1 Introduction As the amount of information on the World ...\n\n2-Hop Neighbour:\nWeb Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 
1\n\n2-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach to web searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We have implemented both algorithms and measured their runtime performance. To evaluate the effectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \"What's Related\" service [12]. Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
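The breadth-first-crawling entry above argues that BFS order tends to reach high-PageRank pages early. The following sketch reproduces the shape of that measurement on a made-up toy graph: compute PageRank by plain power iteration, crawl the graph breadth-first from a seed, and compare the average rank of the pages found in the first half of the crawl with the second half. It is only an illustration, not the paper's 328-million-page experiment.

```python
# Toy illustration: BFS crawl order vs. PageRank of the discovered pages.
# The link graph and seed are invented for the example.
from collections import deque

graph = {
    "home": ["a", "b"], "a": ["b", "c"], "b": ["home", "c"],
    "c": ["d"], "d": [], "orphan": ["home"],
}

def pagerank(graph, damping=0.85, iters=50):
    n = len(graph)
    rank = {p: 1.0 / n for p in graph}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in graph}
        for page, links in graph.items():
            if links:
                share = damping * rank[page] / len(links)
                for target in links:
                    new[target] += share
            else:  # dangling page: spread its rank uniformly
                for target in graph:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

def bfs_crawl(graph, seed):
    order, frontier, seen = [], deque([seed]), {seed}
    while frontier:
        page = frontier.popleft()
        order.append(page)
        for link in graph.get(page, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

ranks = pagerank(graph)
order = bfs_crawl(graph, "home")
half = len(order) // 2
early = sum(ranks[p] for p in order[:half]) / half
late = sum(ranks[p] for p in order[half:]) / (len(order) - half)
print(order, round(early, 3), round(late, 3))
```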
[ { "content": "Classify the node 'A Methodology to Retrieve Text Documents from Multiple Databases In this paper, we present a methodology for finding the n most similar documents across multiple text databases for any given query and for any positive integer n. This methodology consists of two steps. First, the contents of databases are indicated approximately by database representatives. Databases are ranked using their representatives in a certain order with respect to the given query. We provide a necessary and sufficient condition to rank the databases optimally. In order to satisfy this necessary and sufficient condition, we provide three estimation methods. One estimation method is intended for short queries; the other two are for all queries. Second, we provide an algorithm, OptDocRetrv, to retrieve documents from the databases according to their rank and in a particular way. We show that if the databases containing the n most similar documents for a given query are ranked ahead of other databases, our methodology will guarantee the retrieval of the n most similar d...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nConcept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.\n\n1-Hop Neighbour:\nExperiences with Selecting Search Engines Using Metasearch Search engines are among the most useful and high profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve and how to use them. This paper describes and evaluates SavvySearch, a meta-search engine designed to intelligently select and interface with multiple remote search engines. The primary meta-search issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired meta-index approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the meta-index approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. 
1 Introduction Search engines are powerful tools for assisting the otherwise unmanageable task of navigating the rapidly ex...\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web, ” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000 % over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n2-Hop Neighbour:\nA Content-Based Image Meta-Search Engine using Relevance Feedback Search engines are the most powerful resources for finding information on the rapidly expanding World-Wide Web. Finding the desired search engines and learning how to use them, however, can be very time consuming. Metasearch engines, which integrate a group of such search tools, enable users to access information across the world in a transparent and more efficient manner. The recent emergence of visual information retrieval (VIR) systems on the Web is leading to the same efficiency problem. This paper describes MetaSEEk, a meta-search engine used for retrieving images based on their visual content on the Web. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. MetaSEEk has been developed to explore the issues involved in querying large, distributed, on-line visual information system sources. We compare MetaSEE...\n\n2-Hop Neighbour:\nTowards a Highly-Scalable Metasearch Engine The World Wide Web has been expanding in a very fast rate. The coverage of the Web by each of the major search engines has been steadily decreasing despite their effort to index more web pages. Worse yet, as these search engines get larger, higher percentages of their indexed information are becoming obsolete. More and more people are having doubt about the scalability of centralized search engine technology. A more scalable alternative to search the Web is the metasearch engine approach. A metasearch engine can be considered as an interface on top of multiple local search engines to provide uniform access to many local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. 
In order to enable accurate selection, metadata that reflect the content of each search engine need to be co...\n\n2-Hop Neighbour:\nDigital Libraries and Autonomous Citation Indexing The World Wide Web is revolutionizing the way that researchers access scientific information. Articles are increasingly being made available on the homepages of authors or institutions, at journal Web sites, or in online archives. However, scientific information on the Web is largely disorganized. This article introduces the creation of digital libraries incorporating Autonomous Citation Indexing (ACI). ACI autonomously creates citation indices similar to the Science Citation Index R . An ACI system autonomously locates articles, extracts citations, identifies identical citations that occur in different formats, and identifies the context of citations in the body of articles. ACI can organize the literature and provide most of the advantages of traditional citation indices, such as literature search using citation links, and the evaluation of articles based on citation statistics. Furthermore, ACI can provide significant advantages over traditional citation indices. No manual effort is required for indexing, which should result in a reduction in cost and an increase in the availability of citation indices. An ACI system can also provide more comprehensive and up-to-date indices of the literature by indexing articles on the Web, technical reports, conference papers, etc. Furthermore, ACI makes it easy to browse the context of citations to given articles, allowing researchers to quickly and easily see what subsequent researchers have said about a given article. digital libraries incorporating ACI may significantly improve scientific dissemination and feedback.\n\n2-Hop Neighbour:\nWeb Metasearch as Belief Aggregation Web metasearch requires a mechanism for combining rank-ordered lists of ratings returned by multiple search engines in response to a given user query. We view this as being analogous to the need for combining degrees of belief in probabilistic and uncertain reasoning in artificial intelligence. This paper describes a practical method for performing web metasearch based on a novel transformationbased theory of belief aggregation. The consensus ratings produced by this method take into account the item ratings/rankings output by individual search engines as well as the user's preferences. Copyright c fl 2000, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. Introduction Web search engines (WSE) use tools ranging from simple text-based search to more sophisticated methods that attempt to understand the intended meanings of both queries and data items. There has been much work in this area in recent years. The link structure of the web has...\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. 
While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
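The entry above ranks text databases by compact representatives before retrieving documents from them in rank order. The sketch below is a simplified stand-in for that two-step process: the representative is just an average term-frequency vector and the similarity estimate is a toy sum of matched weights, not the paper's estimation methods; database names and documents are invented.

```python
# Simplified stand-in for representative-based database selection:
# summarize each database, rank databases against the query, then fetch
# documents in rank order until n results are collected.
from collections import Counter

def representative(documents):
    """Average term frequency across a database's documents."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    total = sum(len(doc.split()) for doc in documents)
    return {term: c / total for term, c in counts.items()}

def estimated_similarity(query, rep):
    return sum(rep.get(term, 0.0) for term in query.lower().split())

def retrieve_top_n(query, databases, n):
    reps = {name: representative(docs) for name, docs in databases.items()}
    ranked = sorted(databases,
                    key=lambda name: estimated_similarity(query, reps[name]),
                    reverse=True)
    results = []
    for name in ranked:                      # visit databases in rank order
        for doc in databases[name]:
            if all(t in doc.lower() for t in query.lower().split()):
                results.append((name, doc))
                if len(results) == n:
                    return results
    return results

databases = {
    "db_web": ["search engines index the web", "web crawling at scale"],
    "db_bio": ["protein folding simulation", "gene expression analysis"],
}
print(retrieve_top_n("web search", databases, 2))
```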
[ { "content": "Classify the node 'JRes: A Resource Accounting Interface for Java With the spread of the Internet the computing model on server systems is undergoing several important changes. Recent research ideas concerning dynamic operating system extensibility are finding their way into the commercial domain, resulting in designs of extensible databases and Web servers. In addition, both ordinary users and service providers must deal with untrusted downloadable executable code of unknown origin and intentions. Across the board, Java has emerged as the language of choice for Internet-oriented software. We argue that, in order to realize its full potential in applications dealing with untrusted code, Java needs a flexible resource accounting interface. The design and prototype implementation of such an interface --- JRes --- is presented in this paper. The interface allows to account for heap memory, CPU time, and network resources consumed by individual threads or groups of threads. JRes allows limits to be set on resources available to threads and it can invoke...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nDesign And Implementation Of The J-Seal2 Mobile Agent Kernel J-SEAL2 is a secure, portable, and efficient execution environment for mobile agents. The core of the system is a micro-kernel fulfilling the same functions as a traditional operating system kernel: protection, communication, domain termination, and resource control. This paper describes the key concepts of the J-SEAL2 micro-kernel and how they are implemented in pure Java.\n\n1-Hop Neighbour:\nHow to Monitor and Control Resource Usage in Mobile Agent Systems The Mobile Agent technology has already shown its advantages, but at the same time has already remarked new problems currently limiting its diffusion in commercial environments. A key issue is to control the operations that foreign mobile agents are authorized to perform on hosting execution environments. It is necessary not only to rule the MA access to resources but also to control resource usage of admitted agents at execution time, for instance to protect against possible denial-of-service attacks. The paper presents a solution framework for the on-line monitoring and control of Java-based MA platforms. In particular, it describes the design and implementation of MAPI, an on-line monitoring component that we have integrated within the SOMA system. The paper shows how to use MAPI as the building block of a distributed monitoring tool that gives application- and kernel-level information about the state of mobile agents and their resource usage, thus enabling the enforcement of management policies on MA resource consumption. 1.\n\n1-Hop Neighbour:\nHive: Distributed Agents for Networking Things Hive is a distributed agents platform, a decentralized system for building applications by networking local system resources. This paper presents the architecture of Hive, concentrating on the idea of an \"ecology of distributed agents\" and its implementation in a practical Java based system. Hive provides ad-hoc agent interaction, ontologies of agent capabilities, mobile agents, and a graphical interface to the distributed system. 
We are applying Hive to the problems of networking \"Things That Think,\" putting computation and communication in everyday places such as your shoes, your kitchen, or your own body. TTT shares the challenges and potentials of ubiquitous computing and embedded network applications. We have found that the flexibility of a distributed agents architecture is well suited for this application domain, enabling us to easily build applications and to reconfigure our systems on the fly. Hive enables us to make our environment and network more alive. This paper is dedic...\n\n2-Hop Neighbour:\nFirst Steps Towards an Event-Based Infrastructure for Smart Things In this paper, we examine requirements for an infrastructure that supports implementation and deployment of smart things in the real world. We describe a case study (RFID Chef) where kitchen items and ingredients, equipped with remotely accessible electronic tags, drive an interactive context-aware recipe finder through the use of an event-based infrastructure.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n1-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. 
And, we show how to apply PageRank to search and to user navigation.\n\n1-Hop Neighbour:\nWeb Interaction and the Navigation Problem in Hypertext written for Encyclopedia of Microcomputers The web has become a ubiquitous tool, used in day-to-day work, to find information and conduct business, and it is revolutionising the role and availability of information. One of the problems encountered in web interaction, which is still unsolved, is the navigation problem, whereby users can \"get lost in hyperspace\", meaning that when following a sequence of links, i.e. a trail of information, users tend to become disoriented in terms of the goal of their original query and the relevance to the query of the information they are currently browsing. Herein we build statistical foundations for tackling the navigation problem based on a formal model of the web in terms of a probabilistic automaton, which can also be viewed as a finite ergodic Markov chain. In our model of the web the probabilities attached to state transitions have two interpretations, namely, they can denote the proportion of times a user followed a link, and alternatively they can denote the expected utility of following a link. Using this approach we have developed two techniques for constructing a web view based on the two interpretations of the probabilities of links, where a web view is a collection of relevant trails. The first method we describe is concerned with finding frequent user behaviour patterns. A collection of trails is taken as input and an ergodic Markov chain is produced as output with the probabilities of transitions corresponding to the frequency the user traversed the associated links. The second method we describe is a reinforcement learning algorithm that attaches higher probabilities to links whose expected trail relevance is higher. The user's home page and a query are taken as input and an ergodic Markov chain is produced as output with the probabilities of...\n\n2-Hop Neighbour:\nComputing the Entropy of User Navigation in the Web Navigation through the web, colloquially known as “surfing”, is one of the main activities of users during web interaction. When users follow a navigation trail they often tend to get disoriented in terms of the goals of their original query and thus the discovery of typical user trails could be useful in providing navigation assistance. Herein we give a theoretical underpinning of user navigation in terms of the entropy of an underlying Markov chain modelling the web topology. We present a novel method for online incremental computation of the entropy and a large deviation result regarding the length of a trail to realise the said entropy. We provide an error analysis for our estimation of the entropy in terms of the divergence between the empirical and actual probabilities. We then indicate applications of our algorithm in the area of web data mining. Finally, we present an extension of our technique to higher-order Markov chains by a suitable reduction of a higher-order Markov chain model to a first-order one. Key words. Web user navigation, Web data mining, navigation problem, Markov chain, entropy 1\n\n2-Hop Neighbour:\nDocument Classification with Unsupervised Artificial Neural Networks Text collections may be regarded as an almost perfect application arena for unsupervised neural networks. This is because many operations computers have to perform on text documents are classification tasks based on noisy patterns. 
In particular we rely on self-organizing maps which produce a map of the document space after their training process. From geography, however, it is known that maps are not always the best way to represent information spaces. For most applications it is better to provide a hierarchical view of the underlying data collection in form of an atlas where, starting from a map representing the complete data collection, different regions are shown at finer levels of granularity. Using an atlas, the user can easily \"zoom\" into regions of particular interest while still having general maps for overall orientation. We show that a similar display can be obtained by using hierarchical feature maps to represent the contents of a document archive. These neural networks have a layered architecture where each layer consists of a number of individual self-organizing maps. By this, the contents of the text archive may be represented at arbitrary detail while still having the general maps available for global orientation.\n\n2-Hop Neighbour:\nTheseus: Categorization by Context Introduction The traditional approach to document categorization is categorization by content, since information for categorizing a document is extracted from the document itself. In a hypertext environment like the Web, the structure of documents and the link topology can be exploited to perform what we call categorization by context [Attardi 98]: the context surrounding a link in an HTML document is used for categorizing the document referred by the link. Categorization by context is capable of dealing also with multimedia material, since it does not rely on the ability to analyze the content of documents. Categorization by context leverages on the categorization activity implicitly performed when someone places or refers to a document on the Web. By focusing the analysis to the documents used by a group of people, one can build a catalogue tuned to the need of that group. Categorization by context is based on the following assumptions: 1\n\n2-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n2-Hop Neighbour:\nScaling Personalized Web Search Recent web search techniques augment traditional text matching with a global notion of “importance” based on the linkage structure of the web, such as in Google’s PageRank algorithm. 
For more refined searches, this global notion of importance can be specialized to create personalized views of importance—for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques. 1", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
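The "Scaling Personalized Web Search" entry in the row above biases PageRank's teleport step toward a user-chosen page set. A minimal power-iteration sketch of such a personalized vector follows, assuming a made-up graph in which every page has at least one outlink; none of the paper's partial-vector machinery is shown.

```python
# Minimal power-iteration sketch of a personalized PageRank vector:
# the teleport step jumps only to a user-chosen set of preferred pages.
def personalized_pagerank(graph, preferred, damping=0.85, iters=100):
    """graph: page -> list of outlinks (assumed non-empty for every page).
    preferred: non-empty set of pages the teleport step returns to."""
    teleport = {p: (1.0 / len(preferred) if p in preferred else 0.0)
                for p in graph}
    rank = dict(teleport)                  # start from the preference vector
    for _ in range(iters):
        new = {p: (1 - damping) * teleport[p] for p in graph}
        for page, links in graph.items():
            share = damping * rank[page] / len(links)
            for target in links:
                new[target] += share
        rank = new
    return rank

graph = {"news": ["sports", "tech"], "sports": ["news"],
         "tech": ["news", "sports"]}
print(personalized_pagerank(graph, preferred={"tech"}))
```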
[ { "content": "Classify the node 'A Layered Approach to Learning Client Behaviors in the RoboCup Soccer Server In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and Multiagent Learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level skill that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations. 1 Introduction In the past few years, Multiagent Systems (MAS) has emerge...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCMUnited-97: RoboCup-97 Small-Robot World Champion Team Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specificobjectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.\n\n1-Hop Neighbour:\nMulti-Robot Learning in a Cooperative Observation Task . An important need in multi-robot systems is the development of mechanisms that enable robot teams to autonomously generate cooperative behaviors. This paper rst briey presents the Cooperative Multi-robot Observation of Multiple Moving Targets (CMOMMT) application as a rich domain for studying the issues of multi-robot learning of new behaviors. We discuss the results of our handgenerated algorithm for CMOMMT, and then describe our research in generating multi-robot learning techniques for the CMOMMT application, comparing the results to the hand-generated solutions. Our results show that, while the learning approach performs better than random, naive approaches, much room still remains to match the results obtained from the hand-generated approach. 
The ultimate goal of this research is to develop techniques for multi-robot learning and adaptation that will generalize to cooperative robot applications in many domains, thus facilitating the practical use of multi-robot teams in a wid...\n\n1-Hop Neighbour:\nThe RoboCup Synthetic Agent Challenge 97 RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multiagent domain. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for machine learning, planning, and multi-agent researchers --- it not only supplies a concrete domain to evaluate their techniques, but also challenges researchers to evolve these techniques to face key constraints fundamental to this domain: real-time, uncertainty, and teamwork. 1 Introduction RoboCup (The World Cup Robot Soccer) is an attempt to promote AI and robotics research by providing a common task, Soccer, for evaluation of various theories, algorithms, and agent architectur...\n\n2-Hop Neighbour:\nLearning of Kick in Artificial Soccer Soccer simulation is a suitable domain for research in artificial intelligence. This paper describes a new ball kicking skill, that uses case based learning. In many situations a single kick command is not sufficient to reach the desired ball movement. Hence a skill is needed, that finds a suitable sequence of kicks. The new kicking skill was developed for the AT Humboldt artificial soccer team.\n\n2-Hop Neighbour:\nCAVEStudy: an Infrastructure for Computational Steering in Virtual Reality Environments We present the CAVEStudy system that enables scientists to interactively steer a simulation from a virtual reality (VR) environment. No modification to the source code is necessary. CAVEStudy allows interactive and immersive analysis of a simulation running on a remote computer. Using a high-level description of the simulation, the system generates the communication layer (based on CAVERNSoft) needed to control the execution and to gather data at runtime. We describe three case-studies implemented with CAVEStudy: soccer simulation, diode laser simulation, and molecular dynamics. 1. Introduction High-speed networks and high performance graphics open opportunities for completely new types of applications. As a result, the world of scientific computing is moving away from the batch-oriented management to interactive programs. Also, virtual reality (VR) systems are now commercially available, but so far scientists mainly use them for off-line visualization of data sets produced by a simu...\n\n2-Hop Neighbour:\nThe CMUnited-98 Champion Small-Robot Team Abstract. In this chapter, we present the main research contributions of our champion CMUnited-98 small robot team. The team is a multiagent robotic system with global perception, and distributed cognition and action. We describe the main features of the hardware design of the physical robots, including differential drive, robust mechanical structure, and a kicking device. We briefly review the CMUnited-98 global vision processing algorithm, which is the same as the one used by the previous champion CMUnited-97 team. 
We introduce our new robot motion algorithm which reactively generates motion control to account for the target point, the desired robot orientation, and obstacle avoidance. Our robots exhibit successful collision-free motion in the highly dynamic robotic soccer environment. At the strategic and decision-making level, we present the role-based behaviors of the CMUnited-98 robotic agents. Team collaboration is remarkably achieved through a new algorithm that allows for team agents to anticipate possible collaboration opportunities. Robots position themselves strategically in open positions that increase passing opportunities. The chapter terminates with a summary of the results of the RoboCup-98 games in which the CMUnited-98 small robot team scored a total of 25 goals and suffered 6 goals in the 5 games that it played. 1\n\n2-Hop Neighbour:\nMan Multi-Agent Interaction in VR: a Case Study with RoboCup. We describe a Virtual Reality system that allows users at different locations to interact with a multi-agent system in a natural way. We use RoboCup (robot soccer) as a case study. A human player who is immersed in a CAVE can interact with the RoboCup simulation in its natural domain, by playing along with a virtual soccer game. The system supports distributed collaboration by allowing humans at different geographic locations to participate and interact in real time. The most difficult problem we address is how to deal with the latency that is induced by the multi-agent simulation and by the wide-area network between different CAVEs. Our navigation software anticipates the movements of the human player and optimizes the interaction (navigation, kicking). Also, it sends a minimal amount of state information over the wide-area network. 1. Introduction Multi-agent systems are becoming increasingly important in our society. The majority of such systems is in some way related to Internet ...", "role": "user" }, { "content": "<answer>Artificial Intelligence</answer>", "role": "assistant" } ]
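The layered-learning entry above trains a low-level ball-control skill first and then a higher-level multi-player skill on top of it. The toy sketch below shows only that structure: a low-level pass-success estimator is fit from logged samples and a higher-level receiver-selection behavior consults it. The features, sample data, and the k-nearest-neighbour learner are stand-ins chosen for brevity, not the paper's neural-network training.

```python
# Toy illustration of layered learning: a learned low-level estimator is
# used as a primitive by a higher-level decision. Data and features are invented.
import math

# --- low layer: pass-success estimator trained from logged samples
training = [  # (pass_distance, opponent_distance_to_lane, succeeded)
    (5.0, 4.0, 1), (8.0, 1.0, 0), (12.0, 5.0, 1),
    (15.0, 0.5, 0), (6.0, 2.5, 1), (20.0, 1.5, 0),
]

def predict_success(pass_dist, opp_dist, k=3):
    """k-nearest-neighbour vote over the logged samples."""
    nearest = sorted(training,
                     key=lambda s: math.hypot(s[0] - pass_dist, s[1] - opp_dist))[:k]
    return sum(label for _, _, label in nearest) / k

# --- higher layer: choose a receiver using the learned low-level estimate
def choose_receiver(candidates):
    """candidates: name -> (pass_distance, opponent_distance_to_lane)."""
    return max(candidates, key=lambda name: predict_success(*candidates[name]))

candidates = {"teammate_7": (7.0, 3.0), "teammate_9": (14.0, 1.0)}
print(choose_receiver(candidates))  # -> teammate_7 on this toy data
```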
[ { "content": "Classify the node 'Towards Flexible Teamwork in Persistent Teams: Extended Report Teamwork is a critical capability in multi-agent environments. Many such environments mandate that the agents and agent-teams must be persistent i.e., exist over long periods of time. Agents in such persistent teams are bound together by their long-term common interests and goals. This paper focuses on flexible teamwork in such persistent teams. Unfortunately, while previous work has investigated flexible teamwork, persistent teams remain unexplored. For flexible tamwork, one promising approach that has emerged is model-based, i.e., providing agents with general models of teamwork that explicitly specify their commitments in teamwork. Such models enable agents to autonomously reason about coordination. Unfortunately, for persistent teams, such models may lead to coordination and communication actions that while locally optimal, are highly problematic for the team's long-term goals. We present a decisiontheoretic technique to enable persistent teams to overcome such limitations of the m...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nControlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...\n\n2-Hop Neighbour:\nThe RoboCup Synthetic Agent Challenge 97 RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multiagent domain. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for machine learning, planning, and multi-agent researchers --- it not only supplies a concrete domain to evalute their techniques, but also challenges researchers to evolve these techniques to face key constraints fundamental to this domain: real-time, uncertainty, and teamwork. 
1 Introduction RoboCup (The World Cup Robot Soccer) is an attempt to promote AI and robotics research by providing a common task, Soccer, for evaluation of various theories, algorithms, and agent architectur...\n\n2-Hop Neighbour:\nUsing an Explicit Teamwork Model and Learning in RoboCup: An Extended Abstract Stacy Marsella, Jafar Adibi, Yaser Al-Onaizan, Ali Erdem, Randall Hill Gal A. Kaminka, Zhun Qiu, Milind Tambe Information Sciences Institute and Computer Science Department University of Southern California 4676 Admiralty Way, Marina del Rey, CA 90292, USA [email protected] 1 Introduction The RoboCup research initiative has established synthetic and robotic soccer as testbeds for pursuing research challenges in Artificial Intelligence and robotics. This extended abstract focuses on teamwork and learning, two of the multiagent research challenges highlighted in RoboCup. To address the challenge of teamwork, we discuss the use of a domain-independent explicit model of teamwork, and an explicit representation of team plans and goals. We also discuss the application of agent learning in RoboCup. The vehicle for our research investigations in RoboCup is ISIS (ISI Synthetic), a team of synthetic soccer-players that successfully participated in the simulation league of RoboCup'97, by win...\n\n2-Hop Neighbour:\nInvestigating Interactions Between Agent Conversations and Agent Control Components Exploring agent conversation in the context of fine-grained agent coordination research has raised several intellectual questions. The major issues pertain to interactions between different agent conversations, the representations chosen for different classes of conversations, the explicit modeling of interactions between the conversations, and how to address these interactions. This paper is not so ambitious as to attempt to address these questions, only frame them in the context of quantified, scheduling-centric multi-agent coordination. research. 1 Introduction Based on a long history of work in agents and agent control components for building distributed AI and multi-agent systems, we are attempting to frame and address a set of intellectual questions pertaining to agent conversation. Interaction lies at the heart of the matter; the issue is interaction between different agent conversations, that possibly occur at different levels of abstraction, but also interaction between the m...\n\n2-Hop Neighbour:\nThe Cooperative Problem-Solving Process We present a model of cooperative problem solving that describes the process from its beginning, with some agent recognizing the potential for cooperation with respect to one of its goals, through to team action. Our approach is to characterize the mental states of the agents that lead them to solicit, and take part in, cooperative action. The model is formalized by expressing it as a theory in a quantified multi-modal logic. Keywords: Multi-agent systems, cooperation, modal logic, temporal logic. 1 Introduction Agents --- both human and artificial --- can engage in many and varied types of social interaction, ranging from altruistic cooperation through to open conflict. However, perhaps the paradigm example of social interaction is cooperative problem solving (CPS), in which a group of autonomous agents choose to work together to achieve a common goal. 
For example, we might find a group of people working together to move a heavy object, play a symphony, build a house, or write a jo...

2-Hop Neighbour:
Autonomous Agents For Business Process Management : Traditional approaches to managing business processes are often inadequate for large-scale, organisation-wide, dynamic settings. However since Internet and Intranet technologies have become widespread, an increasing number of business processes exhibit these properties. Therefore a new approach is needed. To this end, we describe the motivation, conceptualisation, design and implementation of a novel agent-based business process management system. The key advance of our system is that responsibility for enacting various components of the business process is delegated to a number of autonomous problem solving agents. To enact their role, these agents typically interact and negotiate with other agents in order to coordinate their actions and to buy in the services they require. This approach leads to a system that is significantly more agile and robust than its traditional counterparts. To help demonstrate these benefits, a companion paper describes the application of our system to a ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
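The preceding entry describes a decision-theoretic technique for deciding when honouring a teamwork commitment (for example, by communicating a change in a joint goal) is worth its long-term cost. A minimal sketch of that kind of trade-off is given below; the numbers and function names are hypothetical and do not come from the cited papers.

```python
def should_communicate(p_team_unaware, cost_miscoordination, cost_message):
    """Toy decision-theoretic test, in the spirit of the abstract above (all values hypothetical):
    inform teammates about a change in a joint commitment only when the expected cost of
    leaving them unaware exceeds the cost of sending the message."""
    expected_loss_if_silent = p_team_unaware * cost_miscoordination
    return expected_loss_if_silent > cost_message

# A persistent team may tolerate short-term silence when messages are expensive.
print(should_communicate(p_team_unaware=0.2, cost_miscoordination=10.0, cost_message=1.0))   # True
print(should_communicate(p_team_unaware=0.05, cost_miscoordination=10.0, cost_message=1.0))  # False
```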
[ { "content": "Classify the node 'Meeting Plan Recognition Requirements for Real-Time Air-Mission Simulations In this paper, the potential synergy between instancebased pattern recognition and means-end (possible world) reasoning is explored, for supporting plan recognition in multi-aeroplane air-mission simulations. A combination of graph matching, induction, probabilistic principles and dynamic programming are applied to traces of aeroplane behaviour during flight manoeuvres. These satisfy the real-time constraints of the simulation. This enables the agents to recognise what other agents are doing and to abstract about their activity, at the instrumentation level. A means-end-reasoning model is then used to deliberate about and invoke standard operating procedures, based on recognised activity. The reasoning model constrains the recognition process by framing queries according to what a pilot would expect during the execution of the current plan(s). Results from experiments involving the dMARS procedural reasoning system and the CLARET pattern matching and induction system are described for ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nTowards Flexible Multi-Agent Decision-Making Under Time Pressure Abstract — Autonomous agents need considerable computational resources to perform rational decision-making. These demands are even more severe when other agents are present in the environment. In these settings, the quality of an agent’s alternative behaviors depends not only on the state of the environment, but also on the actions of other agents, which in turn depend on the others ’ beliefs about the world, their preferences, and further on the other agents’ beliefs about others, and so on. The complexity becomes prohibitive when large number of agents are present and when decisions have to be made under time pressure. In this paper we investigate strategies intended to tame the computational burden by using off-line computation in conjunction with on-line reasoning. We investigate two approaches. First, we use rules compiled off-line to constrain alternative actions considered during on-line reasoning. This method minimizes overhead, but is not sensitive to changes in realtime demands of the situation at hand. Second, we use performance profiles computed off-line and the notion of urgency (i.e., the value of time) computed on-line to choose the amount of information to be included during on-line deliberation. This method can adjust to various levels of real-time demands, but incurs some overhead associated with iterative deepening. We test our framework with experiments in a simulated anti-air defense domain. The experiments show that both procedures are effective in reducing computation time while offering good performance under time pressure.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. 
The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n1-Hop Neighbour:\nPlan Recognition in Military Simulation: Incorporating Machine Learning with Intelligent Agents A view of plan recognition shaped by both operational and computational requirements is presented. Operational requirements governing the level of fidelity and nature of the reasoning process combine with computational requirements including performance speed and software engineering effort to constrain the types of solutions available to the software developer. By adopting machine learning to provide spatio-temporal recognition of environmental events and relationships, an agent can be provided with a mechanism for mental state recognition qualitatively different from previous research. An architecture for integrating machine learning into a BDI agent is suggested and the results from the development of a prototype provide proof-of-concept. 1 Introduction This paper proposes machine-learning as a tool to assist in the construction of agents capable of plan recognition. This paper focuses on the beliefs-desires-intentions (BDI) class of agents. These agents have been ...\n\n2-Hop Neighbour:\nAn approach to using degrees of belief in BDI agents : The past few years have seen a rise in the popularity of the use of mentalistic attitudes such as beliefs, desires and intentions to describe intelligent agents. Many of the models which formalise such attitudes do not admit degrees of belief, desire and intention. We see this as an understandable simplification, but as a simplification which means that the resulting systems cannot take account of much of the useful information which helps to guide human reasoning about the world. This paper starts to develop a more sophisticated system based upon an existing formal model of these mental attributes. 1 Introduction In the past few years there has been a lot of attention given to building formal models of autonomous software agents; pieces of software which operate to some extent independently of human intervention and which therefore may be considered to have their own goals and the ability to determine how to achieve those goals. Many of these formal models are based on the use of ...\n\n2-Hop Neighbour:\nUpdating Mental States from Communication . In order to perform effective communication agents must be able to foresee the effects of their utterances on the addressee's mental state. In this paper we investigate on the update of the mental state of an hearer agent as a consequence of the utterance performed by a speaker agent. Given an agent communication language with a STRIPSlike semantics, we propose a set of criteria that allow to bind the speaker's mental state to its uttering of a certain sentence. On the basis of these criteria, we give an abductive procedure that the hearer can adopt to partially recognize the speaker's mental state that led to a specific utterance. 
This procedure can be adopted by the hearer to update its own mental state and its image of the speaker's mental state. 1 Introduction In multi-agent systems, communication is necessary for the agents to cooperate and coordinate their activities or simply to avoid interfering with one another. If agents are not designed with embedded pre-compiled...

2-Hop Neighbour:
A Hybrid Model For Sharing Information Between Fuzzy, Uncertain And Default Reasoning Models In Multi-Agent Systems This paper develops a hybrid model which provides a unified framework for the following four kinds of reasoning: 1) Zadeh's fuzzy approximate reasoning; 2) truth-qualification uncertain reasoning with respect to fuzzy propositions; 3) fuzzy default reasoning (proposed, in this paper, as an extension of Reiter's default reasoning); and 4) truth-qualification uncertain default reasoning associated with fuzzy statements (developed in this paper to enrich fuzzy default reasoning with uncertain information). Our hybrid model has the following characteristics: 1) basic uncertainty is estimated in terms of words or phrases in natural language and basic propositions are fuzzy; 2) uncertainty, linguistically expressed, can be handled in default reasoning; and 3) the four kinds of reasoning models mentioned above and their combination models will be the special cases of our hybrid model. Moreover, our model allows the reasoning to be performed in the case in which the information is fuzzy, uncertain and partial. More importantly, the problems of sharing the information among heterogeneous fuzzy, uncertain and default reasoning models can be solved efficiently by using our model. Given this, our framework can be used as a basis for information sharing and exchange in knowledge-based multi-agent systems for practical applications such as automated group negotiations. Actually, to build such a foundation is the motivation of this paper

2-Hop Neighbour:
Towards Active Logic Programming In this paper we present the new logic programming language DALI, aimed at defining agents and agent systems. A main design objective for DALI has been that of introducing in a declarative fashion all the essential features, while keeping the language as close as possible to the syntax and semantics of the plain Horn--clause language. Special atoms and rules have been introduced, for representing: external events, to which the agent is able to respond (reactivity); actions (reactivity and proactivity); internal events (previous conclusions which can trigger further activity); past and present events (to be aware of what has happened). An extended resolution is provided, so that a DALI agent is able to answer queries like in the plain Horn--clause language, but is also able to cope with the different kinds of events, and exhibit a (rational) reactive and proactive behaviour. 1 Introduction In this paper we address the issue of defining a logic programming language for reac...

2-Hop Neighbour:
A Knowledge-Based Approach for Designing Intelligent Team Training Systems This paper presents a knowledge approach to designing team training systems using intelligent agents. We envision a computer-based training system in which teams are trained by putting them through scenarios, which allow them to practice their team skills. There are two important roles that intelligent agents can play; these are virtual team members, and tutors.
To carry out these functions, these agents must be equipped with an understanding of the task domain, the team structure, the selected decision-making process and their beliefs about other team members' mental states. Even though existing agent teamwork models incorporate many of the elements listed above, they have not focused on analyzing information needs of team members to support proactive agent interactions. To encode the team knowledge, we have developed a representation language, based on the BDI model, called MALLET. A Petri Net model of an individual agent's plans and information needs can be derived from the role des...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
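The entry above combines instance-based recognition of manoeuvre traces with means-end reasoning over standard operating procedures. The toy sketch below illustrates only the trace-matching half: candidate plans are those whose expected event sequence still agrees with the observed prefix. It is not CLARET or dMARS, and the manoeuvre vocabulary and plan names are invented for illustration.

```python
def candidate_plans(observed, plan_library):
    """Toy plan recogniser (not CLARET or dMARS): a plan remains a candidate while the
    observed event sequence is a prefix of its expected event sequence."""
    return [name for name, steps in plan_library.items()
            if steps[:len(observed)] == observed]

# Hypothetical manoeuvre vocabulary and plan templates, for illustration only.
library = {
    "pincer":   ["turn_left", "accelerate", "turn_right", "fire"],
    "drag":     ["turn_left", "decelerate", "turn_left"],
    "straight": ["accelerate", "accelerate", "fire"],
}
print(candidate_plans(["turn_left", "accelerate"], library))  # ['pincer']
```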
[ { "content": "Classify the node 'The Shape of the Web and Its Implications for Searching the Web With the rapid growth of the number of web pages, designing a search engine that can retrieve high quality information in response to a user query is a challenging task. Automated search engines that rely on keyword matching usually return too many low quality matches and they take a long time to run. It is argued in the literature that link-following search methods can substantially increase the search quality, provided that these methods use an accurate assumption about useful patterns in the hyperlink topology of the web. Recent work in the field has focused on detecting identi able patterns in the web graph and exploiting this information to improve the performance of search algorithms. We survey relevant work in this area and comment on the implications of these patterns for other areas such as advertisement and marketing.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Web Graph: an Overview this paper, a study is made on a 200 millions vertices graph obtained from a crawl of the Web, and it appears that is is composed of four parts of equivalent sizes. See Figure 3. The first part is the largest strongly connected component of the graph (the second largest is much smaller), which composes the core of the well connected pages. The second part, called IN, is composed of those pages from which the core is reachable, but which are not reachable from the core. Conversly, the third part, called OUT, is the set of pages reachable from the core but from which the core is unreachable. Finally, the dendrites are the pages reachable from one of the three first parts, or from which one of the three first parts is reachable, but which belong to none of the previous parts. Only ten percent of the whole graph do not belong to one of these four parts which compose the bow-tie\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. 
Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nMethods for Sampling Pages Uniformly from the World Wide Web We present two new algorithms for generating uniformly random samples of pages from the World Wide Web, building upon recent work by Henzinger et al. (Henzinger et al. 2000) and Bar-Yossef et al. (Bar-Yossef et al. 2000). Both algorithms are based on a weighted random-walk methodology. The first algorithm (DIRECTED-SAMPLE) operates on arbitrary directed graphs, and so is naturally applicable to the web. We show that, in the limit, this algorithm generates samples that are uniformly random. The second algorithm (UNDIRECTED-SAMPLE) operates on undirected graphs, thus requiring a mechanism for obtaining inbound links to web pages (e.g., access to a search engine). With this additional knowledge of inbound links, the algorithm can arrive at a uniform distribution faster than DIRECTEDSAMPLE, and we derive explicit bounds on the time to convergence. In addition, we evaluate the two algorithms on simulated web data, showing that both yield reliably uniform samples of pages. We also compare our results with those of previous algorithms, and discuss the theoretical relationships among the various proposed methods.\n\n2-Hop Neighbour:\nInCommonSense - Rethinking Web Search Results The World Wide Web is a rich annotation system which allows people to relate to documents and sites from different perspectives. People describe, comment, relate or mock other Web pages in the context of their document. This richness is currently not reflected in snippets presented by Web search engines, where a search result is represented by the text found in the Web document alone. This paper proposes a new method for representing documents in Web search engines' results. This method is based on recent trends in search engine technology and provides descriptions of the retrieved documents, assembled from people's commentary and annotations on the Web. This paper suggests a new way for automatically retrieving and reusing people's annotations on the Web, incorporating these annotations into a search engine for creating a hybrid directory-search-engine, allowing for both automatic retrieval and on-the-fly human authored summaries. I. 
INTRODUCTION It is common knowledge that many use...\n\n2-Hop Neighbour:\nTechniques for Specialized Search Engines It is emerging that it is very difficult for the major search engines to provide a comprehensive and up-to-date search service of the Web. Even the largest search engines index only a small proportion of static Web pages and do not search the Web' s backend databases that are estimated to be 500 times larger than the static Web. The scale of such searching introduces both technical and economic problems. What is more, in many cases users are not able to retrieve the information they desire because of the simple and generic search interface provided by the major search engines. A necessary response to these search problems is the creation of specialized search engines. These search engines search just for information in a particular topic or category on the Web. Such search engines will have smaller and more manageable indexes and have a powerful domainspecific search interface. This paper discusses the issues in this area and gives an overview of the techniques for building specialized search engines. Keywords: specialized search engine, information retrieval, focused crawling, taxonomy, Web search. 1.\n\n2-Hop Neighbour:\nYarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...\n\n2-Hop Neighbour:\nTowards a Highly-Scalable Metasearch Engine The World Wide Web has been expanding in a very fast rate. The coverage of the Web by each of the major search engines has been steadily decreasing despite their effort to index more web pages. Worse yet, as these search engines get larger, higher percentages of their indexed information are becoming obsolete. More and more people are having doubt about the scalability of centralized search engine technology. A more scalable alternative to search the Web is the metasearch engine approach. A metasearch engine can be considered as an interface on top of multiple local search engines to provide uniform access to many local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reflect the content of each search engine need to be co...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
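The entry above and its neighbours survey methods that rank pages by exploiting the hyperlink topology of the web. One widely used link-based score of this kind is PageRank; the sketch below is a minimal power-iteration version over a toy graph, intended only as an illustration, not as any search engine's production implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank power iteration over an adjacency dict {page: [outlinks]}.
    A toy sketch of link-based ranking, not any engine's actual algorithm."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outlinks in links.items():
            if not outlinks:                      # dangling page: spread its rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / len(pages)
            else:
                for q in outlinks:
                    new_rank[q] += damping * rank[p] / len(outlinks)
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```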
[ { "content": "Classify the node 'Yarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nUsing Reinforcement Learning to Spider the Web Efficiently Consider the task of exploring the Web in order to find pages of a particular kind or on a particular topic. This task arises in the construction of search engines and Web knowledge bases. This paper argues that the creation of efficient web spiders is best framed and solved by reinforcement learning, a branch of machine learning that concerns itself with optimal sequential decision making. One strength of reinforcement learning is that it provides a formalism for measuring the utility of actions that give benefit only in the future. We present an algorithm for learning a value function that maps hyperlinks to future discounted reward by using naive Bayes text classifiers. Experiments on two real-world spidering tasks show a three-fold improvement in spidering efficiency over traditional breadth-first search, and up to a two-fold improvement over reinforcement learning with immediate reward only. Keywords: reinforcement learning, text classification, World Wide Web, spidering, crawlin...\n\n1-Hop Neighbour:\nCiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both \"citing\" and \"cited\" publications can be followed. 
In addition to simple browsing and keyword searches, the agent ...\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nBuilding Domain-Specific Search Engines with Machine Learning Techniques Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with the general, Web-wide search engines. For example, www.campsearch.com allows complex queries by agegroup, size, location and cost over summer camps. Unfortunately, these domain-specific search engines are difficult and time consuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that automates efficient spidering, populating topic hierarchies, and identifying informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers. It already contains over 33,000 papers and is publicly available at www.cora.jprc.com. 1 Introduction As the amount of information on the World ...\n\n2-Hop Neighbour:\nTarget Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. 
Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.

2-Hop Neighbour:
Who do you want to be today? Web Personae for personalised information access Personalised context sensitivity is the Holy Grail of web information retrieval. As a first step towards this goal, we present the Web Personae personalised search and browsing system. We use well-known information retrieval techniques to develop and track user models. Web Personae differ from previous approaches in that we model users with multiple profiles, each corresponding to a distinct topic or domain. Such functionality is essential in heterogeneous environments such as the Web. We introduce Web Personae, describe an algorithm for learning such models from browsing data, and discuss applications and evaluation methods.

2-Hop Neighbour:
Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an effective tool for building digital libraries.

2-Hop Neighbour:
Text and Image Metasearch on the Web As the Web continues to increase in size, the relative coverage of Web search engines is decreasing, and search tools that combine the results of multiple search engines are becoming more valuable. This paper provides details of the text and image metasearch functions of the Inquirus search engine developed at the NEC Research Institute. For text metasearch, we describe features including the use of link information in metasearch, and provide statistics on the usage and performance of Inquirus and the Web search engines. For image metasearch, Inquirus queries multiple image search engines on the Web, downloads the actual images, and creates image thumbnails for display to the user. Inquirus handles image search engines that return direct links to images, and engines that return links to HTML pages. For the engines that return HTML pages, Inquirus analyzes the text on the pages in order to predict which images are most likely to correspond to the query. The individual image search engin...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
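The Yarrow entry above relies on an online learner that reranks retrieved documents from as little relevance feedback as possible. The sketch below shows one simple way such client-side reranking can work, multiplicative term-weight updates from clicks and skips; it is an assumption-laden toy, not Yarrow's actual learning algorithm.

```python
from collections import defaultdict

class FeedbackReranker:
    """Toy client-side reranker (not Yarrow's actual learner): documents are bags of terms,
    and term weights are updated multiplicatively from relevance feedback."""

    def __init__(self, lr=0.5):
        self.weights = defaultdict(lambda: 1.0)
        self.lr = lr

    def score(self, doc_terms):
        return sum(self.weights[t] for t in set(doc_terms))

    def rerank(self, docs):
        # docs: {doc_id: [terms]}; highest score first.
        return sorted(docs, key=lambda d: -self.score(docs[d]))

    def feedback(self, doc_terms, relevant):
        factor = (1.0 + self.lr) if relevant else (1.0 - self.lr)
        for t in set(doc_terms):
            self.weights[t] *= factor

docs = {"d1": ["java", "agents"], "d2": ["java", "coffee"], "d3": ["agents", "mobile"]}
r = FeedbackReranker()
r.feedback(docs["d3"], relevant=True)   # user marks d3 relevant
print(r.rerank(docs))                   # d3 (and d1, which shares 'agents') move up
```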
[ { "content": "Classify the node 'Experiences with Selecting Search Engines Using Metasearch Search engines are among the most useful and high profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve and how to use them. This paper describes and evaluates SavvySearch, a meta-search engine designed to intelligently select and interface with multiple remote search engines. The primary meta-search issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired meta-index approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the meta-index approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. 1 Introduction Search engines are powerful tools for assisting the otherwise unmanageable task of navigating the rapidly ex...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAn Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.\n\n1-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n1-Hop Neighbour:\nConcept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. 
Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.

2-Hop Neighbour:
Using Reinforcement Learning to Spider the Web Efficiently Consider the task of exploring the Web in order to find pages of a particular kind or on a particular topic. This task arises in the construction of search engines and Web knowledge bases. This paper argues that the creation of efficient web spiders is best framed and solved by reinforcement learning, a branch of machine learning that concerns itself with optimal sequential decision making. One strength of reinforcement learning is that it provides a formalism for measuring the utility of actions that give benefit only in the future. We present an algorithm for learning a value function that maps hyperlinks to future discounted reward by using naive Bayes text classifiers. Experiments on two real-world spidering tasks show a three-fold improvement in spidering efficiency over traditional breadth-first search, and up to a two-fold improvement over reinforcement learning with immediate reward only. Keywords: reinforcement learning, text classification, World Wide Web, spidering, crawlin...

2-Hop Neighbour:
Searching the Web: General and Scientific Information Access The World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information has long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search. The amount of scientific information and the number of electronic journals on the Internet continues to increase. Researchers are increasingly making their work available online.
This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices\n\n2-Hop Neighbour:\nWeb Search -- Your Way We describe a metasearch engine architecture, in use at NEC Research Institute, that allows users to provide preferences in the form of an information need category. This extra information is used to direct the search process, providing more valuable results than by considering only the query. Using our architecture, identical keyword queries may be sent to different search engines, and results may be scored differently for different users.\n\n2-Hop Neighbour:\nDigital Libraries and Autonomous Citation Indexing The World Wide Web is revolutionizing the way that researchers access scientific information. Articles are increasingly being made available on the homepages of authors or institutions, at journal Web sites, or in online archives. However, scientific information on the Web is largely disorganized. This article introduces the creation of digital libraries incorporating Autonomous Citation Indexing (ACI). ACI autonomously creates citation indices similar to the Science Citation Index R . An ACI system autonomously locates articles, extracts citations, identifies identical citations that occur in different formats, and identifies the context of citations in the body of articles. ACI can organize the literature and provide most of the advantages of traditional citation indices, such as literature search using citation links, and the evaluation of articles based on citation statistics. Furthermore, ACI can provide significant advantages over traditional citation indices. No manual effort is required for indexing, which should result in a reduction in cost and an increase in the availability of citation indices. An ACI system can also provide more comprehensive and up-to-date indices of the literature by indexing articles on the Web, technical reports, conference papers, etc. Furthermore, ACI makes it easy to browse the context of citations to given articles, allowing researchers to quickly and easily see what subsequent researchers have said about a given article. digital libraries incorporating ACI may significantly improve scientific dissemination and feedback.\n\n2-Hop Neighbour:\nImproving Category Specific Web Search by Learning Query Modifications A user searching for documents within a specific category using a general purpose search engine might have a difficult time finding valuable documents. To improve category specific search, we show that a trained classifier can recognize pages of a specified category with high precision by using textual content, text location, and HTML structure. We show that query modifications to web search engines increase the probability that the documents returned are of the specific category. We evaluate the effectiveness of several query modifications on real search engines, showing that the approach is highly effective for locating personal homepages and calls for papers. 1: Introduction Typical web search engines index millions of pages across a variety of categories, and return results ranked by expected topical relevance. Only a small percentage of these pages may be of a specific category, for example, personal homepages, or calls for papers. A user may examine large numbers of pages abou...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
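The SavvySearch entry above selects remote search engines through an incrementally acquired meta-index. A simplified sketch of that idea follows: per-(term, engine) scores accumulate from experience and are summed to rank engines for a new query. The scoring rule and engine names are hypothetical, not SavvySearch's actual scheme.

```python
from collections import defaultdict

class MetaIndex:
    """Simplified meta-index in the spirit of the engine-selection idea above
    (not SavvySearch's actual formula)."""

    def __init__(self):
        self.score = defaultdict(float)   # (term, engine) -> learned usefulness

    def record(self, query_terms, engine, reward):
        # reward could be, e.g., +1 for a followed result link, -1 for zero results.
        for t in query_terms:
            self.score[(t, engine)] += reward

    def rank_engines(self, query_terms, engines, top_k=2):
        totals = {e: sum(self.score[(t, e)] for t in query_terms) for e in engines}
        return sorted(totals, key=lambda e: -totals[e])[:top_k]

engines = ["engineA", "engineB", "engineC"]      # hypothetical remote engines
idx = MetaIndex()
idx.record(["jaguar", "car"], "engineA", +1)
idx.record(["jaguar", "cat"], "engineB", +1)
idx.record(["jaguar"], "engineC", -1)
print(idx.rank_engines(["jaguar", "car"], engines))   # ['engineA', 'engineB']
```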
[ { "content": "Classify the node 'Background Readings for Collection Synthesis' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1\n\n1-Hop Neighbour:\nBreadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.\n\n1-Hop Neighbour:\nWeb Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1\n\n2-Hop Neighbour:\nWorld Wide Web Information Retrieval Using Web Connectivity Information Gathering, processing and distributing information from the World Wide Web will be a vital technology for the next century. Web search techniques have played a critical role in the development of information systems. Due to the diverse nature of web documents, traditional search techniques must be improved. Hyperlink structure based methods have proved to be powerful ways of exploring the relationships between web documents. In this project, a prototype web search engine was developed to exploit the link structure of web documents, based on the use of the Companion algorithm. The prototype consists of a web spider, local database, and search software. The system was written using the Java programming language. 
Our spider crawls and downloads web pages using Lynx, then saves the hyperlinks into an Oracle database. JDBC is used to implement the database processing. Search software makes a vicinity graph for the query URL and returns the most related pages after calculating the hub and authority weights. Finally, HTML web pages provide user interfaces and communicate with CGI using the Perl language. ...

2-Hop Neighbour:
Topical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. 1 Introduction Most web pages are linked to others with related content...

2-Hop Neighbour:
Information Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.

2-Hop Neighbour:
Web Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained.
The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nFrom Resource Discovery to Knowledge Discovery on the Internet More than 50 years ago, at a time when modern computers didn't exist yet, Vannevar Bush wrote about a multimedia digital library containing human collective knowledge and filled with \"trails\" linking materials of the same topic. At the end of World War II, Vannevar urged scientists to build such a knowledge store and make it useful, continuously extendable and more importantly, accessible for consultation. Today, the closest to the materialization of Vannevar's dream is the World-Wide Web hypertext and multimedia document collection. However, the ease of use and accessibility of the knowledge described by Vannevar is yet to be realized. Since the 60s, extensive research has been accomplished in the information retrieval field, and free-text search was finally adopted by many text repository systems in the late 80s. The advent of the World-Wide Web in the 90s helped text search become routine as millions of users use search engines daily to pinpoint resources on the Internet. However, r...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
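Several neighbours in the entry above rank pages in a query-specific vicinity graph by hub and authority weights. The sketch below is a minimal HITS-style iteration over a toy graph; it illustrates the general computation only, not the exact algorithm of any system cited here.

```python
def hits(links, iterations=50):
    """Minimal HITS iteration over an adjacency dict {page: [outlinks]}; returns
    (hub, authority) score dicts. An illustrative sketch of the hub/authority
    computation mentioned above, not any cited system's implementation."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: sum of hub scores of pages linking to it.
        auth = {p: sum(hub[q] for q in links if p in links.get(q, [])) for p in pages}
        # Hub score: sum of authority scores of pages it links to.
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # Normalise to keep the values bounded.
        a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {p: v / a_norm for p, v in auth.items()}
        hub = {p: v / h_norm for p, v in hub.items()}
    return hub, auth

toy_graph = {"h1": ["a1", "a2"], "h2": ["a1"], "a1": [], "a2": []}
hub, auth = hits(toy_graph)
print(max(auth, key=auth.get))   # 'a1' is the strongest authority
```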
[ { "content": "Classify the node 'MARS: a Programmable Coordination Architecture for Mobile Agents Mobile agents represent a promising technology for the development of Internet applications. However, mobile computational entities introduce peculiar problems w.r.t. the coordination of the application components. The paper outlines the advantages of Linda-like coordination models, and shows how a programmable coordination model based on reactive tuple spaces can provide further desirable features for Internet applications based on mobile agents. Accordingly, the paper presents the design and the implementation of the MARS coordination architecture for Java-based mobile agents. MARS defines Linda-like tuple spaces, which can be programmed to react with specific actions to the accesses made by mobile agents.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAbstractions and Infrastructures for the Design and Development of Mobile Agent Organizations Internet applications can take advantage of a paradigm based on autonomous and mobile agents. However, suitable abstractions and infrastructures are required for the effective engineering of such applications. In this paper, we argue that a conceptual framework for context-dependent coordination, supported by an infrastructure based on programmable media, can promote a modular and easy to manage approach to the design and development of mobile agent applications in terms of computational organizations. The MARS coordination infrastructure is presented as an implementation of a coordination infrastructure promoting context-dependent coordination. A case study in the area of workflow management is introduced to clarify the concepts presented. Keywords: Mobile Agents, Agent Organizations, Coordination Infrastructures, Agent-oriented Software Engineering. 1\n\n1-Hop Neighbour:\nMobile Agent Organizations Mobile agents are a useful paradigm -- other than a useful technology -- for the development of complex Internet applications. However, the effective development of mobile agent applications requires suitable models and infrastructures. This paper proposes an organizational approach to the high-level design of mobile agent applications. The idea is to models the Internet as a multiplicity of local and active organizational contexts, intended as the places where coordination activities of application agents occur and are ruled. The paper discusses the advantages and the generality of such an approach, also with the help of a case study in the area of tourist assistance.\n\n1-Hop Neighbour:\nTowards a Layered Approach for Agent Infrastructure: The Right Tools for the Right Job It is clear by now that the take-up of agent technologies and the wide use of such technologies in open environments depends on the provision of appropriate infrastructure to support the rapid development of applications. In this paper, we argue that the elements required for the development of infrastructure span three different fields, which, nevertheless, have a great degree of overlap. 
Middleware technologies, mobile agent and intelligent agent research all have significant contributions to make towards a holistic approach to infrastructure development, but it is necessary to make clear distinctions between the requirements at each level and explain how they can be integrated so as to provide a clearer focus and allow the use of existing technologies. Our view of the requirements for infrastructure to support agent-based systems has been formed through experience with developing an agent implementation environment based on a formal agent framework. We argue that in order to provide support to developers, this infrastructure must address both conceptual concerns relating the different types of entities, and relationships between agent and non-agent entities in the environment, as well as more technical concerns. This paper describes the general requirements for infrastructure, the specific contributions from different areas, and our own efforts in progressing towards them. 1.\n\n2-Hop Neighbour:\nParadigma: Agent Implementation through Jini One of the key problems of recent years has been the divide between theoretical work in agent-based systems and its practical complement which have, to a large extent, developed along different paths. The Paradigma implementation framework has been designed with the aim of narrowing this gap. It relies on an extensive formal agent framework implemented using recent advances in Java technology. Specifically, Paradigma uses Jini connectivity technology to enable the creation of on-line communities in support of the development of agent-based systems. 1 Introduction In a networked environment that is highly interconnected, interdependent and heterogeneous, we are faced with an explosion of information and available services that are increasingly hard to manage. Agent-based systems can provide solutions to these problems as a consequence of their dynamics of social interaction; communication and cooperation can be used to effectively model problem domains through the interaction of agent...\n\n2-Hop Neighbour:\nAutomated Derivation of Complex Agent Architectures from Analysis Specifications . Multiagent systems have been touted as a way to meet the need for distributed software systems that must operate in dynamic and complex environments. However, in order for multiagent systems to be effective, they must be reliable and robust. Engineering multiagent systems is a non-trivial task, providing ample opportunity for even experts to make mistakes. Formal transformation systems can provide automated support for synthesizing multiagent systems, which can greatly improve their correctness and reliability. This paper describes a semi-automated transformation system that generates an agent's internal architecture from an analysis specification in the MaSE methodology. 1\n\n2-Hop Neighbour:\nAgent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. 
Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.\n\n2-Hop Neighbour:\nAgent-Oriented Software Engineering ion: The process of defining a simplified model of the system that emphasises some of the details or properties, while suppressing others. . Organisation 1 : The process of identifying and managing interrelationships between various problem solving components. Next, the characteristics of complex systems need to be enumerated [8]: . Complexity frequently takes the form of a hierarchy. That is, a system that is composed of inter-related sub-systems, each of which is in turn hierarchic in structure, until the lowest level of elementary sub-system is reached. The precise nature of these organisational relationships varies between sub-systems, however some generic forms (such as client-server, peer, team, etc.) can be identified. These relationships are not static: they often vary over time. . The choice of which components in the system are primitive is relatively arbitrary and is defined by the observer's aims and objectives. . Hierarchic systems evolve more quickly than non-hiera...\n\n2-Hop Neighbour:\nLIME: Linda Meets Mobility Lime is a system designed to assist in the rapid development of dependable mobile applications over both wired and ad hoc networks. Mobile agents reside on mobile hosts and all communication takes place via transiently shared tuple spaces distributed across the mobile hosts. The decoupled style of computing characterizing the Linda model is extended to the mobile environment. At the application level, both agents and hosts perceive movement as a sudden change of context. The set of tuples accessible by a particular agent residing on a given host is altered transparently in response to changes in the connectivity pattern among the mobile hosts. In this paper we present the key design concepts behind the Lime system. 1 INTRODUCTION Today's users demand ubiquitous network access independent of their physical location. This style of computation, often referred to as mobile computing, is enabled by rapid advances in the wireless communication technology. The networking scenarios enabled ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Overview of Datalog Extensions with Tuples and Sets Datalog (with negation) is the most powerful query language for relational database with a well-defined declarative semantics based on the work in logic programming. However, Datalog only allows inexpressive flat structures and cannot directly support complex values such as nested tuples and sets common in novel database applications. For these reasons, Datalog has been extended in the past several years to incorporate tuple and set constructors. In this paper, we examine four different Datalog extensions: LDL, COL, Hilog and Relationlog. 1 Introduction Databases and logic programming are two independently developed areas in computer science. Database technology has evolved in order to effectively and efficiently organize, manage and maintain large volumes of ever increasingly complex data reliably in various memory devices. The underlying structure of databases has been the primary focus of research which leads to the development of data models. The most well-known and widely used da...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nRelationlog: A Typed Extension to Datalog with Sets and Tuples This paper presents a novel logic programming based language for nested relational and complex value models called Relationlog. It stands in the same relationship to the nested relational and complex value models as Datalog stands to the relational model. The main novelty of the language is the introduction of powerful mechanisms, namely, partial and complete set terms, for representing and manipulating both partial and complete information on nested sets, tuples and relations. They generalize the set grouping and set enumeration mechanisms of LDL and allow the user to directly encode the open and closed world assumptions on nested sets, tuples, and relations. They allow direct inference and access to deeply embedded values in a complex value relation as if the relation is normalized, which greatly increases the ease of use of the language. As a result, the extended relational algebra operations can be represented in Relationlog directly, and more importantly, recursively in a way similar to Datalog. Like Datalog, Relationlog has a well-defined Herbrand model-theoretic semantics, which captures the intended semantics of nested sets, tuples and relations, and also a well-defined proof-theoretic semantics which coincides with its model-theoretic semantics.\n\n1-Hop Neighbour:\nIntroduction to the Relationlog System Advanced applications require construction, efficient access and management of large databases with rich data structures and inference mechanisms. However, such capabilities are not directly supported by the existing database systems. In this paper, we describe Relationlog, a persistent deductive database system that is able to directly support the storage, efficient access and inference of data with complex structures. 1 Introduction Advanced applications require construction, efficient access and management of large databases with rich data structures and inference mechanisms. However, such capabilities are not directly supported by the existing database systems. Deductive databases have the potential to meet the demands of advanced applications. 
They grew out of the integration of logic programming and relational database technologies. They are intended to combine the best of the two approaches, such as representational and operational uniformity, inference capabilities, recursion,...\n\n2-Hop Neighbour:\nPartial and Complete Tuples and Sets in Deductive Databases . In a nested relational or complex object database, nested tuples and sets are used to represent real world objects. For various reasons, such tuples and sets can be partial or complete. In this paper, we discuss how to support them in deductive databases. In particular, we present a deductive database language RLOG II that supports partial and complete tuples and sets based on Relationlog. This work provides a firm logical foundation for nested relational and complex object databases that have both partial and complete tuples and sets. 1 Introduction In a nested relational database or complex object database, nested tuples and sets are used to represent real world objects. A relation is just a set of tuples and a database is a tuple of relations. For various reasons, the information about real world objects in a database may be incomplete. Hence, both tuples and sets can be partial or complete. In the past several years, a sub-problem, that is, a database which contains partial an...\n\n2-Hop Neighbour:\nLogical Semantics and Language for Databases with Partial and Complete Tuples and Sets (Extended Abstract) ) Mengchi Liu Department of Computer Science University of Regina, Regina, Saskatchewan, Canada S4S 0A2 Email: [email protected] Abstract We discuss the semantics of complex object databases with both partial and complete tuples and sets. We redefine the notion of database to reflect the existence of partial and complete tuples and sets and study how to integrate partial information about tuples and sets spread in the database and check consistency in the meantime. We also present a deductive language RLOG II for complex objects with null/unknown and inconsistent values based on Relationlog. The main novel feature of the language is that it is the only one that supports the null extended nested relational algebra operations directly and more importantly recursively. This work provides a firm logical foundation for nested relational and complex object databases that have both partial and complete tuples and sets and solves an open problem of supporting recursion with generic null/...\n\n2-Hop Neighbour:\nOLOG: A Deductive Object Database Language (Extended Abstract) ) Mengchi Liu Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 [email protected] http://www.cs.uregina.ca/mliu Abstract. Deductive object-oriented databases are intended to combine the best of the deductive and object-oriented approaches. However, some important object-oriented features are not properly supported in the existing proposals. This paper proposes a novel deductive language that supports important structurally object-oriented features such as object identity, complex objects, typing, classes, class hierarchies, multiple property inheritance with overriding, conict-handling, and blocking, and schema denitions in a uniform framework. The language eectively integrates useful features in deductive and object-oriented database languages. The main novel feature is the logical semantics that cleanly accounts for those structurally object-oriented features that are missing in object-oriented database languages. 
Therefor...\n\n2-Hop Neighbour:\nDesign and Implementation of the ROL System ROL is a deductive object-oriented database system developed at the University of Regina. It effectively integrates important features of deductive databases and object-oriented databases in a uniform framework and provides a uniform rule-based declarative language for defining, manipulating and querying a database. This paper describes the latest implementation of ROL. 1 Introduction In the past decade a lot of interests arose in integrating deductive and object-oriented databases to gain the best of the two approaches such as recursion, declarative querying, and firm logical foundations from deductive approaches, and object identity, complex objects, classes, class hierarchy, property inheritance with overriding and schema from object-oriented approach. A number of deductive object-oriented database languages have been proposed, such as O-logic [17], revised O-logic [11], C-logic [8], IQL [2], IQL2[1], Flogic [10], LOGRES [7], LLO [16], LOL [6], CORAL++[19], Datalog method [3], DLT ...\n\n2-Hop Neighbour:\nQuery Processing in Relationlog . Relationlog is a persistent deductive database system that supports effective storage, efficient access and inference of large amounts of data with complex structures. In this paper, we describe query processing in the Relationlog system. In particular, we illustrate the extended semi-naive and magic-set techniques used in Relationlog. 1 Introduction During the past decades, the nested relational and complex object models [1, 5, 11, 12, 15, 18, 21, 22] were developed to extend the applicability of the traditional relational model to more complex, non-business applications such as CAD, image processing and text retrieval [2]. Another important direction of intense research has been in using a logic programming based language Datalog [8, 23] as a database query language. Such a language provides a simple and natural way to express queries on a relational database and is more expressive than the traditional relational languages. In the past several years, there have been some efforts...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Combining Labeled and Unlabeled Data with Co-Training We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. As part of our analysis, we provide new re-' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n1-Hop Neighbour:\nUsing Unlabeled Data to Improve Text Classification One key difficulty with text classification learning algorithms is that they require many hand-labeled examples to learn accurately. This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers. 
By assuming that documents are created by a parametric generative model, Expectation-Maximization (EM) finds local maximum a posteriori models and classifiers from all the data -- labeled and unlabeled. These generative models do not capture all the intricacies of text; however on some domains this technique substantially improves classification accuracy, especially when labeled data are sparse. Two problems arise from this basic approach. First, unlabeled data can hurt performance in domains where the generative modeling assumptions are too strongly violated. In this case the assumptions can be made more representative in two ways: by modeling sub-topic class structure, and by modeling super-topic hierarchical class relationships. By doing so, model probability and classification accuracy come into correspondence, allowing unlabeled data to improve classification performance. The second problem is that even with a representative model, the improvements given by unlabeled data do not sufficiently compensate for a paucity of labeled data. Here, limited labeled data provide EM initializations that lead to low-probability models. Performance can be significantly improved by using active learning to select high-quality initializations, and by using alternatives to EM that avoid low-probability local maxima.\n\n1-Hop Neighbour:\nUsing Labeled and Unlabeled Data to Learn Drifting Concepts For many learning tasks, where data is collected over an extended period of time, one has to cope two problems. The distribution underlying the data is likely to change and only little labeled training data is available at each point in time. A typical example is information filtering, i. e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and the document content change over time. A filtering system should be able to adapt to such concept changes. Since users often give little feedback, a filtering system should also be able to achieve a good performance, even if only few labeled training examples are provided. This paper proposes a method to recognize and handle concept changes with support vector machines and to use unlabeled data to reduce the need for labeled data. The method maintains windows on the training data, whose size is automatically adjusted so that the estimated generalization error is minimized. The approach is both theoretically well-founded as well as effective and efficient in practice. Since it does not require complicated parameterization, it is simpler to use and more robust than comparable heuristics. Experiments with simulated concept drift scenarios based on real-world text data compare the new method with other window management approaches and show that it can effectively select an appropriate window size in a robust way. In order to achieve an acceptable performance with fewer labeled training examples, the proposed method exploits unlabeled examples in a transductive way. 1\n\n2-Hop Neighbour:\nContent-Based Book Recommending Using Learning for Text Categorization Recommender systems improve access to relevant products and information by making personalized suggestions based on previous examples of a user's likes and dislikes. Most existing recommender systems use collaborative filtering methods that base recommendations on other users' preferences. By contrast, content-based methods use information about an item itself to make suggestions. 
This approach has the advantage of being able to recommend previously unrated items to users with unique interests and to provide explanations for its recommendations. We describe a content-based book recommending system that utilizes information extraction and a machine-learning algorithm for text categorization. Initial experimental results demonstrate that this approach can produce accurate recommendations. KEYWORDS: Recommender systems, information filtering, machine learning, text categorization INTRODUCTION There is a growing interest in recommender systems that suggest music, films, books, and othe...\n\n2-Hop Neighbour:\nAnalyzing the Effectiveness and Applicability Of Co-Training Recently there has been significant interest in supervised learning algorithms that combine labeled and unlabeled data for text learning tasks. The co-training setting [1] applies to datasets that have a natural separation of their features into two disjoint sets. We demonstrate that when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not. When a natural split does not exist, co-training algorithms that manufacture a feature split may out-perform algorithms not using a split. These results help explain why co-training algorithms are both discriminative in nature and robust to the assumptions of their embedded classifiers. Categories and Subject Descriptors I.2.6 [Artificial Intelligence]: Learning; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval--- Information Filtering Keywords co-training, expectation-maximization, learning with labeled and unlabeled...\n\n2-Hop Neighbour:\nSupport Vector Machine Active Learning with Applications to Text Classification . Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings. Keywords: Active Learning, Selective Sampling, Support Vector Machines, Classification, Relevance Feedback Abbreviations: SVM -- Support Vector Machine; TSVM -- Transductive Support Vector Machine 1.\n\n2-Hop Neighbour:\nAn Integrated Ontology for the WWW . Knowledge-intensive processing of WWW information should be founded on clear and uniform conceptualisation. An integrated ontology covering different aspects of the WWW (documents, sites, network addressing, HTML code) has been laid down, upon which a knowledge base of the WWW domain is being built. This knowledge base should support \"intelligent\" metasearch of the Web, in particular, postprocessing of hit-lists returned by external search engines. 
1 Introduction During the last few years, the World-Wide Web has become one of the most widespread technologies of information presentation. It is thus not surprising that many Knowledge Engineering (KE) projects focus on it: some use HTML as a cheap, ready-made user-interface, other thrive to mine valuable information hidden inside existing WWW pages. A necessary prerequisite of mutual comprehensibility and knowledge reuse among different KE communities and projects dealing with the Web is a clear and unified conceptualisation, wh...\n\n2-Hop Neighbour:\nOZONE: A Zoomable Interface for Navigating Ontology Information We present OZONE (Zoomable Ontology Navigator), for searching and browsing ontological information. OZONE visualizes query conditions and provides interactive, guided browsing for DAML (DARPA Agent Markup Language) ontologies. To visually represent objects in DAML, we define a visual model for its classes, properties and relationships between them. Properties can be expanded into classes for query refinement. The visual query can be formulated incrementally as users explore class and property structures interactively. Zoomable interface techniques are employed for effective navigation and usability. Keywords: Ontology, DAML, Browsing, Zoomable User Interface (ZUI), Jazz, WWW.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Adaptive Agent Architecture: Achieving FaultTolerance Using Persistent Broker Teams Brokers are used in many multi-agent systems for locating agents, for routing and sharing information, for managing the system, and for legal purposes, as independent third parties. However, these multi-agent systems can be incapacitated and rendered non-functional when the brokers become inaccessible due to failures such as machine crashes, network breakdowns, and process failures that can occur in any distributed software system. We propose that the theory of teamwork can be used to create robust brokered architectures that can recover from broker failures, and we present the Adaptive Agent Architecture (AAA) to show the feasibility of this approach. The AAA brokers form a team with a joint commitment to serve any agent that registers with the broker team as long as the agent remains registered with the team. This commitment enables the brokers to substitute for each other when needed. A multiagent system based on the AAA can continue to work despite broker failures as long...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMultimodal System Processing in Mobile Environments One major goal of multimodal system design is to support more robust performance than can be achieved with a unimodal recognition technology, such as a spoken language system. In recent years, the multimodal literatures on speech and pen input and speech and lip movements have begun developing relevant performance criteria and demonstrating a reliability advantage for multimodal architectures. In the present studies, over 2,600 utterances processed by a multimodal pen/voice system were collected during both mobile and stationary use. A new data collection infrastructure was developed, including instrumentation worn by the user while roaming, a researcher field station, and a multimodal data logger and analysis tool tailored for mobile research. Although speech recognition as a stand-alone failed more often during mobile system use, the results confirmed that a more stable multimodal architecture decreased this error rate by 19-35%. Furthermore, these findings were replicated across different types of microphone technology. In large part this performance gain was due to significant levels of mutual disambiguation in the multimodal architecture, with higher levels occurring in the noisy mobile environment. Implications of these findings are discussed for expanding computing to support more challenging usage contexts in a robust manner.\n\n1-Hop Neighbour:\nMulti-Agent Architectures as Organizational Structures A Multi-Agent System (MAS) is an organization of coordinated autonomous agents that interact in order to achieve particular, possible common goals. Considering real world organizations as an analogy, this paper proposes architectural styles for MAS which adopt concepts from organizational theories. The styles are modeled using the i* framework which o#ers the notions of actor, goal and actor dependency and specified in Formal Tropos. They are evaluated with respect to a set of software quality attributes, such as predictability or adaptability. 
In addition, we conduct a comparative study of organizational and conventional software architectures using the mobile robot control example from the Software Engineering literature. The research is conducted in the context of Tropos, a comprehensive software system development methodology.\n\n1-Hop Neighbour:\nControlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...\n\n2-Hop Neighbour:\nArchitectural Design Patterns for Multiagent Coordination This paper presents our first step towards agent-oriented software engineering, focusing on the area of coordinated multi-agent systems. In multi-agent systems, the interactions between the agents are crucial in determining the effectiveness of the system. Hence the adoption of an appropriate coordination mechanism is pivotal in the design of multi-agent system architectures. This paper does not focus on agent theory, rather on the development of an agent-oriented software engineering methodology, collaboration architectures and design patterns for collaboration. A catalog of coordination patterns inherent in multi-agent architectures is presented. Such patterns may be utilized in the architectural design for multiagent systems, allowing researchers and practitioners to improve the integrability and reusability properties of their systems.\n\n2-Hop Neighbour:\nTeam Formation by Self-Interested Mobile Agents . A process of team formation by autonomous agents in a distributed environment is presented. Since the environment is distributed, there are serious problems with communication and consistent decision making inside a team. To deal with these problems, the standard technique of token passing in a computer network is applied. The passing cycle of the token serves as the communication route. It assures consistent decision making inside the team maintaining its organizational integrity. On the other hand it constitutes a component of the plan of the cooperative work performed by a complete team. Two algorithms for team formation are given. The first one is based on simple self-interested agents that still can be viewed as reactive agents (see [14]) although augmented with knowledge, goal, and cooperation mechanisms. The second one is based on sophisticated self-interested agents. 
Moreover, the algorithm based on fully cooperative agents, which is an adaptation of the static ...\n\n2-Hop Neighbour:\nTen Myths of Multimodal Interaction\n\n2-Hop Neighbour:\nInvestigating Interactions Between Agent Conversations and Agent Control Components Exploring agent conversation in the context of fine-grained agent coordination research has raised several intellectual questions. The major issues pertain to interactions between different agent conversations, the representations chosen for different classes of conversations, the explicit modeling of interactions between the conversations, and how to address these interactions. This paper is not so ambitious as to attempt to address these questions, only frame them in the context of quantified, scheduling-centric multi-agent coordination. research. 1 Introduction Based on a long history of work in agents and agent control components for building distributed AI and multi-agent systems, we are attempting to frame and address a set of intellectual questions pertaining to agent conversation. Interaction lies at the heart of the matter; the issue is interaction between different agent conversations, that possibly occur at different levels of abstraction, but also interaction between the m...\n\n2-Hop Neighbour:\nSocial Mental Shaping: Modelling the Impact of Sociality on the Mental States of Autonomous Agents This paper presents a framework that captures how the social nature of agents that are situated in a multi-agent environment impacts upon their individual mental states. Roles and social relationships provide an abstraction upon which we develop the notion of social mental shaping. This allows us to extend the standard Belief-DesireIntention model to account for how common social phenomena (e.g. cooperation, collaborative problem-solving and negotiation) can be integrated into a unified theoretical perspective that reflects a fully explicated model of the autonomous agent's mental state. Keywords: Multi-agent systems, agent interactions, BDI models, social influence. 3 1.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Potter's Wheel: An Interactive Data Cleaning System Cleaning data of errors in structure and content is important for data warehousing and integration. Current solutions for data cleaning involve many iterations of data \"auditing\" to find errors, and long-running transformations to fix them. Users need to endure long waits, and often write complex transformation scripts. We present Potter's Wheel, an interactive data cleaning system that tightly integrates transformation and discrepancy detection. Users gradually build transformations to clean the data by adding or undoing transforms on a spreadsheet-like interface; the effect of a transform is shown at once on records visible on screen. These transforms are specified either through simple graphical operations, or by showing the desired effects on example data values. In the background, Potter's Wheel automatically infers structures for data values in terms of user-defined domains, and accordingly checks for constraint violations. Thus users can gradually build a transformation as discrepancies are found, and clean the data without writing complex programs or enduring long delays. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nReal-world Data is Dirty: Data Cleansing and The Merge/Purge Problem The problem of merging multiple databases of information about common entities is frequently encountered in KDD and decision support applications in large commercial and government organizations. The problem we study is often called the Merge/Purge problem and is difficult to solve both in scale and accuracy. Large repositories of data typically have numerous duplicate information entries about the same entities that are difficult to cull together without an intelligent \"equational theory\" that identifies equivalent items by a complex, domain-dependent matching process. We have developed a system for accomplishing this Data Cleansing task and demonstrate its use for cleansing lists of names of potential customers in a direct marketing-type application. Our results for statistically generated data are shown to be accurate and effective when processing the data multiple times using different keys for sorting on each successive pass. Combing results of individual passes using transitive c...\n\n1-Hop Neighbour:\nWrapper Induction: Efficiency and Expressiveness The Internet presents numerous sources of useful information---telephone directories, product catalogs, stock quotes, event listings, etc. Recently, many systems have been built that automatically gather and manipulate such information on a user's behalf. However, these resources are usually formatted for use by people (e.g., the relevant content is embedded in HTML pages), so extracting their content is difficult. Most systems use customized wrapper procedures to perform this extraction task. Unfortunately, writing wrappers is tedious and error-prone. As an alternative, we advocate wrapper induction, a technique for automatically constructing wrappers. In this article, we describe six wrapper classes, and use a combination of empirical and analytical techniques to evaluate the computational tradeoffs among them. We first consider expressiveness: how well the classes can handle actual Internet resources, and the extent to which wrappers in one class can mimic those in another. 
We then...\n\n1-Hop Neighbour:\nWrapper Induction: Efficiency and Expressiveness (Extended Abstract) Recently, many systems have been built that automatically interact with Internet information resources. However, these resources are usually formatted for use by people; e.g., the relevant content is embedded in HTML pages. Wrappers are often used to extract a resource's content, but hand-coding wrappers is tedious and error-prone. We advocate wrapper induction, a technique for automatically constructing wrappers. We have identified several wrapper classes that can be learned quickly (most sites require only a handful of examples, consuming a few CPU seconds of processing), yet which are useful for handling numerous Internet resources (70% of surveyed sites can be handled by our techniques). Introduction The Internet presents a stunning variety of on-line information resources: telephone directories, retail product catalogs, weather forecasts, and many more. Recently, there has been much interest in systems (such as software agents (Etzioni & Weld 1994; Kwok & Weld 1996) or informati...\n\n2-Hop Neighbour:\nAdaptive information extraction: Core technologies for information agents Introduction For the purposes of this chapter, an information agent can be described as a distributed system that receives a goal through its user interface, gathers information relevant to this goal from a variety of sources, processes this content as appropriate, and delivers the results to the users. We focus on the second stage in this generic architecture. We survey a variety of information extraction techniques that enable information agents to automatically gather information from heterogeneous sources. For example, consider an agent that mediates package-delivery requests. To satisfy such requests, the agent might need to retrieve address information from geographic services, ask an advertising service for freight forwarders that serve the destination, request quotes from the relevant freight forwarders, retrieve duties and legal constraints from government sites, get weather information to estimate transportation delays, etc. Information extraction (IE) is a form of sh\n\n2-Hop Neighbour:\nFinite-state approaches to Web information extraction Introduction An information agent is a distributed system that receives a goal through its user interface, gathers information relevant to this goal from a variety of sources, processes this content as appropriate, and delivers the results to the users. We focus on the second stage in this generic architecture. We survey a variety of information extraction techniques that enable information agents to automatically gather information from heterogeneous sources. For example, consider an agent that mediates package-delivery requests. To satisfy such requests, the agent might need to retrieve address information from geographic services, ask an advertising service for freight forwarders that serve the destination, request quotes from the relevant freight forwarders, retrieve duties and legal constraints from government sites, get weather information to estimate transportation delays, etc. Information extraction (IE) is a form of shallow document processing that involves populating\n\n2-Hop Neighbour:\nMultistrategy Learning for Information Extraction Information extraction (IE) is the problem of filling out pre-defined structured summaries from text documents. 
We are interested in performing IE in non-traditional domains, where much of the text is often ungrammatical, such as electronic bulletin board posts and Web pages. We suggest that the best approach is one that takes into account many different kinds of information, and argue for the suitability of a multistrategy approach. We describe learners for IE drawn from three separate machine learning paradigms: rote memorization, term-space text classification, and relational rule induction. By building regression models mapping from learner confidence to probability of correctness and combining probabilities appropriately, it is possible to improve extraction accuracy over that achieved by any individual learner. We describe three different multistrategy approaches. Experiments on two IE domains, a collection of electronic seminar announcements from a university computer science de...\n\n2-Hop Neighbour:\nOntobroker: The Very High Idea The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference service of the WWW. The technique is based on the use of ontologies. Ontologies are applied to annotate web documents and to provide query access and inference service that deal with the semantics of the presented information. In consequence, intelligent brokering services for web documents can be achieved without requiring to change the semiformal nature of web documents. Introduction The World Wide Web (WWW) contains huge amounts of knowledge about almost all subjects you can think of. HTML documents enriched by multi-media applications provide knowledge in different representations (i.e., text, graphics, animated pictures, video, sound, virtual reality, etc.). Hypertext links between web documents represent r...\n\n2-Hop Neighbour:\nA Data Preparation Framework based on a Multidatabase Language Integration and analysis of data from different sources have to deal with several problems resulting from potential heterogeneities. The activities addressing these problems are called data preparation and are supported by various available tools. However, these tools process mostly in a batch-like manner not supporting the iterative and explorative nature of the integration and analysis process. In this work we present a framework for important data preparation tasks based on a multidatabase language. This language offers features for solving common integration and cleaning problems as part of query processing. Combining data preparation mechanisms and multidatabase query facilities permits applying and evaluating different integration and cleaning strategies without explicit loading and materialization of data. The paper introduces the language concepts and discusses their application for individual tasks of data preparation.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Benchmarking XML Management Systems: The XOO7 Way The effectiveness of existing XML query languages has been studied by many who focused on the comparison of linguistic features, implicitly reflecting the fact that most XML tools exist only on paper. In this paper, with a focus on efficiency and concreteness, we propose a pragmatic first step toward the systematic benchmarking of XML query processing platforms. We begin by identifying the necessary functionalities an XML data management system should support. We review existing approaches for managing XML data and the query processing capabilities of these approaches. We then compare three XML query benchmarks XMach-1, XMark and XOO7 and discuss the applicability, strengths and limitations of these benchmarks. We highlight the bias of these benchmarks towards the data centric view of XML and motivate our selection of XOO7 to extend with document centric queries. We complete XOO7 to capture the information retrieval capabilities of XML management systems. Finally we summarize our contributions and discuss future directions.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Performance Evaluation of Alternative Mapping Schemes for Storing XML Data in a Relational Database XML is emerging as one of the dominant data formats for data processing on the Internet. To query XML data, query languages likeXQL, Lorel, XML-QL, or XML-GL have been proposed. In this paper, we study how XML data can be stored and queried using a standard relational database system. For this purpose, we present alternative mapping schemes to store XML data in a relational database and discuss how XML-QL queries can be translated into SQL queries for every mapping scheme. We present the results of comprehensive performance experiments that analyze the tradeo#s of the alternative mapping schemes in terms of database size, query performance and update performance. While our discussion is focussed on XML and XML-QL, the results of this paper are relevant for most semi-structured data models and most query languages for semi-structured data. 1 Introduction It has become clear that not all applications are met by the relational, object-relational, or object-oriented data models. ...\n\n1-Hop Neighbour:\nQuilt: An XML Query Language for Heterogeneous Data Sources The World Wide Web promises to transform human society by making virtually all types of information instantly available everywhere. Two prerequisites for this promise to be realized are a universal markup language and a universal query language. The power and flexibility of XML make it the leading candidate for a universal markup language. XML provides a way to label information from diverse data sources including structured and semi-structured documents, relational databases, and object repositories. Several XML-based query languages have been proposed, each oriented toward a specific category of information. Quilt is a new proposal that attempts to unify concepts from several of these query languages, resulting in a new language that exploits the full versatility of XML. 
The name Quilt suggests both the way in which features from several languages were assembled to make a new query language, and the way in which Quilt queries can combine information from diverse data sources into a query result with a new structure of its own.\n\n1-Hop Neighbour:\nA Web Odyssey: from Codd to XML INTRODUCTION The Web presents the database area with vast opportunities and commensurate challenges. Databases and the Web are organically connected at many levels. Web sites are increasingly powered by databases. Collections of linked Web pages distributed across the Internet are themselves tempting targets for a database. The emergence of XML as the lingua franca of the Web brings some much needed order and will greatly facilitate the use of database techniques to manage Web information. This paper will discuss some of the developments related to the Web from the viewpoint of database theory. As we shall see, the Web scenario requires revisiting some of the basic assumptions of the area. To be sure, database theory remains as valid as ever in the classical setting, and the database industry will continue to represent a multi-billion dollar target of applicability for the foreseeable future. But the Web represents an opportunity of an entirely different scale. We are th...\n\n2-Hop Neighbour:\nXML with Data Values: Typechecking Revisited We investigate the typechecking problem for XML queries: statically verifying that every answer to a query conforms to a given output DTD, for inputs satisfying a given input DTD. This problem had been studied by a subset of the authors in a simplified framework that captured the structure of XML documents but ignored data values. We revisit here the typechecking problem in the more realistic case when data values are present in documents and tested by queries. In this extended framework, typechecking quickly becomes undecidable. However, it remains decidable for large classes of queries and DTDs of practical interest. The main contribution of the present paper is to trace a fairly tight boundary of decidability for typechecking with data values. The complexity of typechecking in the decidable cases is also considered. 1.\n\n2-Hop Neighbour:\nEfficient Complex Query Support for Multiversion XML Documents Managing multiple versions of XML documents represents a critical requirement for many applications. Also, there has been much recent interest in supporting complex queries on XML data (e.g., regular path expressions, structural projections, DIFF queries). In this paper, we examine the problem of supporting efficiently complex queries on multiversioned XML documents. Our approach relies on a scheme based on durable node numbers (DNNs) that preserve the order among the XML tree nodes and are invariant with respect to updates. Using the document's DNNs various complex queries are reduced to combinations of partial version retrieval queries. We examine three indexing schemes to efficiently evaluate partial version retrieval queries in this environment. 
A thorough performance analysis is then presented to reveal the advantages of each scheme.\n\n2-Hop Neighbour:\nPushing Reactive Services to XML Repositories using Active Rules Push technology, i.e., the ability of sending relevant information to clients in reaction to new events, is a fundamental aspect of modern information systems; XML is rapidly emerging as the widely adopted standard for information exchange and representation and hence, several XML-based protocols have been defined and are the object of investigation at W3C and throughout commercial organizations. In this paper, we propose the new concept of active XML rules for \"pushing\" reactive services to XML-enabled repositories. Rules operate on XML documents and deliver information to interested remote users in reaction to update events occurring at the repository site. The proposed mechanism assumes the availability of XML repositories supporting a standard XML query language, such as XQuery that is being developed by the W3C; for the implementation of the reactive components, it capitalizes on the use of standard DOM events and of the SOAP interchange standard to enable the remote installation of active rules. A simple protocol is proposed for subscribing and unsubscribing remote rules.\n\n2-Hop Neighbour:\nStoring and Querying Multiversion XML Documents using Durable Node Numbers Managing multiple versions of XML documents represents an important problem for many traditional applications, such as software configuration control, as well as new ones, such as link permanence of web documents. Research on managing multiversion XML documents seeks to provide efficient and robust techniques for storing, retrieving and querying such documents. In this paper, we present a novel approach to version management that achieves these objectives by a scheme based on Durable Node Numbers and timestamps for the elements of XML documents. We first present efficient storage and retrieval techniques for multiversion documents. Then, we explore the indexing and clustering strategies needed to assure efficient support for complex queries on content and on document evolution.\n\n2-Hop Neighbour:\nAdaptive Query Processing for Internet Applications As the area of data management for the Internet has gained in popularity, recent work has focused on effectively dealing with unpredictable, dynamic data volumes and transfer rates using adaptive query processing techniques. Important requirements of the Internet domain include: (1) the ability to process XML data as it streams in from the network, in addition to working on locally stored data; (2) dynamic scheduling of operators to adjust to I/O delays and flow rates; (3) sharing and re-use of data across multiple queries, where possible; (4) the ability to output results and later update them. An equally important consideration is the high degree of variability in performance needs for different query processing domains: perhaps an ad-hoc query application should optimize for display of incomplete and partial incremental results, whereas a corporate data integration application may need the best time-to-completion and may have very strict data \"freshness\" guarantees. The goal of...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Abstractions and Infrastructures for the Design and Development of Mobile Agent Organizations Internet applications can take advantage of a paradigm based on autonomous and mobile agents. However, suitable abstractions and infrastructures are required for the effective engineering of such applications. In this paper, we argue that a conceptual framework for context-dependent coordination, supported by an infrastructure based on programmable media, can promote a modular and easy to manage approach to the design and development of mobile agent applications in terms of computational organizations. The MARS coordination infrastructure is presented as an implementation of a coordination infrastructure promoting context-dependent coordination. A case study in the area of workflow management is introduced to clarify the concepts presented. Keywords: Mobile Agents, Agent Organizations, Coordination Infrastructures, Agent-oriented Software Engineering. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLIME: Linda Meets Mobility Lime is a system designed to assist in the rapid development of dependable mobile applications over both wired and ad hoc networks. Mobile agents reside on mobile hosts and all communication takes place via transiently shared tuple spaces distributed across the mobile hosts. The decoupled style of computing characterizing the Linda model is extended to the mobile environment. At the application level, both agents and hosts perceive movement as a sudden change of context. The set of tuples accessible by a particular agent residing on a given host is altered transparently in response to changes in the connectivity pattern among the mobile hosts. In this paper we present the key design concepts behind the Lime system. 1 INTRODUCTION Today's users demand ubiquitous network access independent of their physical location. This style of computation, often referred to as mobile computing, is enabled by rapid advances in the wireless communication technology. The networking scenarios enabled ...\n\n1-Hop Neighbour:\nAgent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.\n\n1-Hop Neighbour:\nAgent-Oriented Software Engineering ion: The process of defining a simplified model of the system that emphasises some of the details or properties, while suppressing others. . Organisation 1 : The process of identifying and managing interrelationships between various problem solving components. 
Next, the characteristics of complex systems need to be enumerated [8]: . Complexity frequently takes the form of a hierarchy. That is, a system that is composed of inter-related sub-systems, each of which is in turn hierarchic in structure, until the lowest level of elementary sub-system is reached. The precise nature of these organisational relationships varies between sub-systems, however some generic forms (such as client-server, peer, team, etc.) can be identified. These relationships are not static: they often vary over time. . The choice of which components in the system are primitive is relatively arbitrary and is defined by the observer's aims and objectives. . Hierarchic systems evolve more quickly than non-hiera...\n\n2-Hop Neighbour:\nAn Application of Agent UML to Supply Chain Management Agent UML is certainly the most well-known graphical modeling language for describing multiagent systems but until now, it is not applied to real-world applications. The aim of our project is to apply Agent UML to the Supply Chain Management. This project has several objectives: (1) it allows to prove that Agent UML can be applied to real-world applications, (2) it allows to discover what is missing in Agent UML and what is wrong and finally, (3) it allows to define a methodology based on Agent UML and several tools. The aim of this paper is to sum up our first results on appying Agent UML to the Supply Chain Management and especially, the paper sketches what diagrams are interesting and what could be done after in our project.\n\n2-Hop Neighbour:\nModel Checking Agent UML Protocol Diagrams Agents in multiagent systems use protocols in order to exchange messages and to coordinate together. Since agents and objects are not exactly the same, designers do not use directly communication protocols used in distributed systems but a new type called interaction protocols encompassing agent features such as richer messages and the ability to cooperate and to coordinate. Obviously, designers consider formal description techniques used for communication protocols. New graphical modeling languages based on UML appeared several years ago. Agent UML is certainly the best known. Until now, no validation is given for Agent UML. The aim of this paper is to present how to model check Agent UML protocol diagrams.\n\n2-Hop Neighbour:\nCoordination Infrastructure for Virtual Enterprises Virtual Enterprises (VE) and Workflow Management Systems (WFMS) require deployable and flexible infrastructures, promoting the integration of heterogenous resources and services, as well as the development of new VE's business processes in terms of workflow (WF) rules coordinating the activities of VE's component enterprises. In this paper, we argue that a suitable general-purpose coordination infrastructure may well fit the needs of VE management in a highly dynamic and unpredictable environment like the Internet, by providing engineers with the abstractions and run-time support to address heterogeneity of different sorts, and to represent WF rules as coordination laws. We discuss the requirements for VE infrastructures, and suggest why VE management and WFMS may be seen as coordination problems. Then, we introduce the TuCSoN coordination model and technology, and show, both in principle and in a simple case study, how such a coordination infrastructure can support the design and deve...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Managing Data Quality in Cooperative Information Systems (Extended Abstract) Massimo Mecella 1, Monica Scannapieco 1'2, Antonino Virgillito 1, Roberto Baldoni I , Tiziana Catarci 1, and Carlo Batini 3 i Universirk di Roma \"La Sapienza\" Dipartimento di Informatica e Sistemistica {mecella, monscan, virgi, baldoni, catarci}dis. uniromal. it 2 Consiglio Nazionale delle Ricerche Istituto di Analisi dei Sistemi ed Informatica (IASI-CNR) 3 Universirk di Milano \"Bicocca\" Dipartimento di Informatica, Sistemistica e Comunicazione batinidisco. unimib. it Abstract. Current approaches to the development of cooperative information systems are based on services to be offered by cooperating organizations, and on the opportunity of building coordinators and brokers on top of such services. The quality of data exchanged and provided by different services hampers such approaches, as data of low quality can spread all over the cooperative system. At the same time, improvement can be based on comparing data, correcting them and disseminating high quality data. In this paper, a service-based framework for managing data quality in cooperative information systems is presented. An XML-based model for data and quality data is proposed, and the design of a broker for data, which selects the best available data from different services, is presented. Such a broker also supports the improvement of data based on feedbacks to source services.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBusiness Process Coordination: State of the Art, Trends, and Open Issues Over the past decade, there has been a lot of work in developing middleware for integrating and automating enterprise business processes. Today, with the growth in e-commerce and the blurring of enterprise boundaries, there is renewed interest in business process coordination, especially for inter-organizational processes. This paper provides a historical perspective on technologies for intra- and interenterprise business processes, reviews the state of the art, and exposes some open research issues. We include a discussion of process-based coordination and event/rule-based coordination, and corresponding products and standards activities. We provide an overview of the rather extensive work that has been done on advanced transaction models for business processes, and of the fledgling area of business process intelligence. 1.\n\n1-Hop Neighbour:\nAn Extensible Framework for Data Cleaning Data integration solutions dealing with large amounts of data have been strongly required in the last few years. Besides the traditional data integration problems (e.g. schema integration, local to global schema mappings), three additional data problems have to be dealt with: (1) the absence of universal keys across dierent databases that is known as the object identity problem, (2) the existence of keyboard errors in the data, and (3) the presence of inconsistencies in data coming from multiple sources. Dealing with these problems is globally called the data cleaning process. In this work, we propose a framework which oers the fundamental services required by this process: data transformation, duplicate elimination and multi-table matching. These services are implemented using a set of purposely designed macro-operators. 
Moreover, we propose an SQL extension for specifying each of the macro-operators. One important feature of the framework is the ability of explicitly includ...\n\n2-Hop Neighbour:\nA Framework for Ontology Integration One of the basic problems in the development of techniques for the semantic web is the integration of ontologies. Indeed, the web is constituted by a variety of information sources, each expressed over a certain ontology, and in order to extract information from such sources, their semantic integration and reconciliation in terms of a global ontology is required. In this paper, we address the fundamental problem of how to specify the mapping between the global ontology and the local ontologies. We argue that for capturing such mapping in an appropriate way, the notion of query is a crucial one, since it is very likely that a concept in one ontology corresponds to a view (i.e., a query) over the other ontologies. As a result query processing in ontology integration systems is strongly related to view-based query answering in data integration. 1\n\n2-Hop Neighbour:\nAccessing Data Integration Systems through Conceptual Schemas Data integration systems provide access to a set of heterogeneous, autonomous data sources through a so-called global, or mediated view. There is a general consensus that the best way to describe the global view is through a conceptual data model, and that there are basically two approaches for designing a data integration system. In the global-as-view approach, one defines the concepts in the global schema as views over the sources, whereas in the local-as-view approach, one characterizes the sources as views over the global schema. It is well known that processing queries in the latter approach is similar to query answering with incomplete information, and, therefore, is a complex task. On the other hand, it is a common opinion that query processing is much easier in the former approach. In this paper we show the surprising result that, when the global schema is expressed in terms of a conceptual data model, even a very simple one, query processing becomes difficult in the global-as-view approach also. We demonstrate that the problem of incomplete information arises in this case too, and we illustrate some basic techniques for effectively answering queries posed to the global schema of the data integration system. 1\n\n2-Hop Neighbour:\nAssessment Methods for Information Quality Criteria Information quality (IQ) is one of the most important aspects of information integration on the Internet. Many projects realize and address this fact by gathering and classifying IQ criteria. Hardly ever do the projects address the immense difficulty of assessing scores for the criteria. This task must precede any usage of criteria for qualifying and integrating information. After reviewing previous attempts to classify IQ criteria, in this paper we also classify criteria, but in a new, assessment-oriented way. We identify three sources for IQ scores and thus, three IQ criterion classes, each with different general assessment possibilities. Additionally, for each criterion we give detailed assessment methods. Finally, we consider confidence measures for these methods. Confidence expresses the accuracy, lastingness, and credibility of the individual assessment methods. 
1 Introduction Low information quality is one of the most pressing problems for consume rs of information that is di...\n\n2-Hop Neighbour:\nAssessment Methods for Information Quality Criteria Information quality (IQ) is one of the most important aspects of information integration on the Internet. Many projects realize and address this fact by gathering and classifying IQ criteria. Hardly ever do the projects address the immense difficulty of assessing scores for the criteria. This task must precede any usage of criteria for qualifying and integrating information. After reviewing previous attempts to classify IQ criteria, in this paper we also classify criteria, but in a new, assessment-oriented way. We identify three sources for IQ scores and thus, three IQ criterion classes, each with different general assessment possibilities. Additionally, for each criterion we give detailed assessment methods. Finally, we consider confidence measures for these methods. Confidence expresses the accuracy, lastingness, and credibility of the individual assessment methods. 1 Introduction Low information quality is one of the most pressing problems for consume rs of information that is di...\n\n2-Hop Neighbour:\nData Quality in e-Business Applications In e-Business scenarios, an evaluation of the quality of exchanged data is essential for developing service-based applications and correctly performing cooperative activities. Data of low quality can spread all over the cooperative system, but at the same time, improvement can be based on comparing data, correcting them and disseminating high quality data. In this paper, an XML-based broker service for managing data quality in cooperative systems is presented, which selects the best available data from different services. Such a broker also supports data quality improvements based on feedbacks to source services.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'FEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 1 Introduction As the world wide web rapidly evo...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nYarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. 
Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nWebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. The architecture and performance of WebSail are also discussed.\n\n2-Hop Neighbour:\nAnalysis and extraction of useful information across networks of Web databases Contents 1 Introduction 2 2 Problem Statement 2 3 Literature Review 3 3.1 Retrieving Text . . . . . . . . . . . . . . . . . . . . . . . . . . 3 3.2 Understanding Music . . . . . . . . . . . . . . . . . . . . . . . 7 3.3 Identifying Images . . . . . . . . . . . . . . . . . . . . . . . . 9 3.4 Extracting Video . . . . . . . . . . . . . . . . . . . . . . . . . 11 4 Work Completed and in Progress 12 5 Research Plan and Time-line 14 A List of Published Work 15 1 1 INTRODUCTION 2 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main resources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible, no single person owns it and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the\n\n2-Hop Neighbour:\nC4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.\n\n2-Hop Neighbour:\nWebBase : A repository of web pages In this paper, we study the problem of constructing and maintaining a large shared repository of web pages. We discuss the unique characteristics of such a repository, propose an architecture, and identify its functional modules. 
We focus on the storage manager module, and illustrate how traditional techniques for storage and indexing can be tailored to meet the requirements of a web repository. To evaluate design alternatives, we also present experimental results from a prototype repository called WebBase, that is currently being developed at Stanford University. Keywords : Repository, WebBase, Architecture, Storage management 1 Introduction A number of important applications require local access to substantial portions of the web. Examples include traditional text search engines [Google] [Avista], related page services [Google] [Alexa], and topic-based search and categorization services [Yahoo]. Such applications typically access, mine or index a local cache or repository of web...\n\n2-Hop Neighbour:\nDocument Classification as an Internet service: Choosing the best classifier This project investigates some of the issues involved in a new proposal for expanding the scope of the field of Data Mining by providing mining models as services on the Internet. This idea can widely increase the reach and accessibility of Data Mining to common people because one of the primary stumbling blocks in the adoption of mining is the extremely high level of expertise and data resources needed in building a robust mining model. We feel this task should be left to the specialists with access to data and resources, who can provide their most up to date model as a service on the Internet for public use.\n\n2-Hop Neighbour:\nDiscovering Seeds of New Interest Spread from Premature Pages Cited by Multiple Communities The World Wide Web is a great source of new topics significant for trend birth and creation. In this paper, we propose a method for discovering topics, which stimulate communities of people into earnest communications on the topics' meaning, and grow into a trend of popular interest. Here, the obtained are web pages which absorb attentions of people from multiple interest-communities. It is shown by a experiments to a small group of people, that topics in such pages can trigger the growth of peoples' interests, beyond the bounds of existing communities.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Who do you want to be today? Web Personae for personalised information access Personalised context sensitivity is the Holy Grail of web information retrieval. As a first step towards this goal, we present the Web Personae personalised search and browsing system. We use well-known information retrieval techniques to develop and track user models. Web Personae differ from previous approaches in that we model users with multiple profiles, each corresponding to a distinct topic or domain. Such functionality is essential in heterogeneous environments such as the Web. We introduce Web Personae, describe an algorithm for learning such models from browsing data, and discuss applications and evaluation methods.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nWebMate: A Personal Agent for Browsing and Searching The World-Wide Web is developing very fast. Currently, #nding useful information on the Web is a time consuming process. In this paper, we presentWebMate, an agent that helps users to e#ectively browse and search the Web. WebMate extends the state of the art in Web-based information retrieval in manyways. First, it uses multiple TF-IDF vectors to keep track of user interests in di#erent domains. These domains are automatically learned byWebMate. Second, WebMate uses the Trigger Pair Model to automatically extract keywords for re#ning document search. Third, during search, the user can provide multiple pages as similarity#relevance guidance for the search. The system extracts and combines relevantkeywords from these relevant pages and uses them for keyword re#nement. 
Using these techniques, WebMate provides effective browsing and searching help and also compiles and sends to users personal newspaper by automatically spidering news sources. We have experimentally evaluated the performance of the system.\n\n2-Hop Neighbour:\nBackground Readings for Collection Synthesis\n\n2-Hop Neighbour:\nPicASHOW: Pictorial Authority Search by Hyperlinks on the Web We describe PicASHOW, a fully automated WWW image retrieval system that is based on several link-structure analyzing algorithms. Our basic premise is that a page p displays (or links to) an image when the author of p considers the image to be of value to the viewers of the page. We thus extend some well known link-based WWW #### ######### schemes to the context of image retrieval. PicASHOW's analysis of the link structure enables it to retrieve relevant images even when those are stored in files with meaningless names. The same analysis also allows it to identify image containers and image hubs. We define these as Web pages that are rich in relevant images, or from which many images are readily accessible. PicASHOW requires no image analysis whatsoever and no creation of taxonomies for pre-classification of the Web's images. It can be implemented by standard WWW search engines with reasonable overhead, in terms of both computations and storage, and with no change to user query formats. It can thus be used to easily add image retrieving capabilities to standard search engines. Our results demonstrate that PicASHOW, while relying almost exclusively on link analysis, compares well with dedicated WWW image retrieval systems. We conclude that link analysis, a bona-fide effective technique for Web page search, can improve the performance of Web image retrieval, as well as extend its definition to include the retrieval of image hubs and containers. Keywords Image Retrieval; Link Structure Analysis; Hubs and Authorities; Image Hubs. 1.\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. 
Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nWebBase : A repository of web pages In this paper, we study the problem of constructing and maintaining a large shared repository of web pages. We discuss the unique characteristics of such a repository, propose an architecture, and identify its functional modules. We focus on the storage manager module, and illustrate how traditional techniques for storage and indexing can be tailored to meet the requirements of a web repository. To evaluate design alternatives, we also present experimental results from a prototype repository called WebBase, that is currently being developed at Stanford University. Keywords : Repository, WebBase, Architecture, Storage management 1 Introduction A number of important applications require local access to substantial portions of the web. Examples include traditional text search engines [Google] [Avista], related page services [Google] [Alexa], and topic-based search and categorization services [Yahoo]. Such applications typically access, mine or index a local cache or repository of web...\n\n2-Hop Neighbour:\nCollection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'An Autonomous Page Ranking Method for Metasearch Engines this paper, the topics are derived from the user's query; the reputation of each result page on the query topic is computed, and the value used to rank the result pages across all participating search engines, without biasing the ranking towards any of the sources' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nWeb Search -- Your Way We describe a metasearch engine architecture, in use at NEC Research Institute, that allows users to provide preferences in the form of an information need category. This extra information is used to direct the search process, providing more valuable results than by considering only the query. Using our architecture, identical keyword queries may be sent to different search engines, and results may be scored differently for different users.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. 
Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nTopic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.\n\n2-Hop Neighbour:\nTechniques for Specialized Search Engines It is emerging that it is very difficult for the major search engines to provide a comprehensive and up-to-date search service of the Web. Even the largest search engines index only a small proportion of static Web pages and do not search the Web's backend databases that are estimated to be 500 times larger than the static Web. The scale of such searching introduces both technical and economic problems. What is more, in many cases users are not able to retrieve the information they desire because of the simple and generic search interface provided by the major search engines. A necessary response to these search problems is the creation of specialized search engines. These search engines search just for information in a particular topic or category on the Web. Such search engines will have smaller and more manageable indexes and have a powerful domain-specific search interface. This paper discusses the issues in this area and gives an overview of the techniques for building specialized search engines. Keywords: specialized search engine, information retrieval, focused crawling, taxonomy, Web search. 1.\n\n2-Hop Neighbour:\nEngineering a multi-purpose test collection for Web retrieval experiments Past research into text retrieval methods for the Web has been restricted by the lack of a test collection capable of supporting experiments which are both realistic and reproducible. The 1.69 million document WT10g collection is proposed as a multi-purpose testbed for experiments with these attributes, in distributed IR, hyperlink algorithms and conventional ad hoc retrieval. WT10g was constructed by selecting from a superset of documents in such a way that desirable corpus properties were preserved or optimised. These properties include: a high degree of inter-server connectivity, integrity of server holdings, inclusion of documents related to a very wide spread of likely queries, and a realistic distribution of server holding sizes. We confirm that WT10g contains exploitable link information using a site (homepage) finding experiment. Our results show that, on this task, Okapi BM25 works better on propagated link anchor text than on full text. Keywords: Web retrieval; Link-based ranking; Distributed information retrieval; Test collections 1\n\n2-Hop Neighbour:\nBreadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.\n\n2-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. 
In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ###### retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite dierentfrom the algorithms developed at TREC. This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: nding the web page/site of an entity, #### companies, universities, organizations, individuals, etc. This form of querying is quite prevalentonthe web. The results from the TREC algorithms are compared to four commercial web search engines. Results show that for nding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, linkbased ranking 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nResults and Challenges in Web Search Evaluation A frozen 18.5 million page snapshot of part of the Web has been created to enable and encourage meaningful and reproducible evaluation of Web search systems and techniques. 
This collection is being used in an evaluation framework within the Text Retrieval Conference (TREC) and will hopefully provide convincing answers to questions such as, \"Can link information result in better rankings?\", \"Do longer queries result in better answers?\", and, \"Do TREC systems work well on Web data?\" The snapshot and associated evaluation methods are described and an invitation is extended to participate. Preliminary results are presented for an effectivess comparison of six TREC systems working on the snapshot collection against five well-known Web search systems working over the current Web. These suggest that the standard of document rankings produced by public Web search engines is by no means state-of-the-art. 1999 Published by Elsevier Science B.V. All rights reserved. Keywords: Evaluation; Search...\n\n2-Hop Neighbour:\nCollection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.\n\n2-Hop Neighbour:\nA Meta-search Method Reinforced by Cluster Descriptors A meta-search engine acts as an agent for the participant search engines. It receives queries from users and redirects them to one or more of the participant search engines for processing. A meta-search engine incorporating many participant search engines is better than a single global search engine in terms of the number of pages indexed and the freshness of the indexes. The meta-search engine stores descriptive data (i.e., descriptors) about the index maintained by each participant search engine so that it can estimate the relevance of each search engine when a query is received. The ability for the meta-search engine to select the most relevant search engines determines the quality of the final result. To facilitate the selection process, the document space covered by each search engine must be described not only concisely but also precisely. Existing methods tend to focus on the conciseness of the descriptors by keeping a descriptor for a search engine 's entire index. This paper proposes to cluster a search engine's document space into clusters and keep a descriptor for each cluster. We show that cluster descriptors can provide a finer and more accurate representation of the document space, and hence enable the meta-search engine to improve the selection of relevant search engines. Two cluster-based search engine selection scenarios (i.e., independent and high-correlation) are discussed in this paper. Experiments verify that the cluster-based search engine selection can effectively identify the most relevant search engines and improve the quality of the search results consequently. 1\n\n2-Hop Neighbour:\nLearning Search Engine Specific Query Transformations for Question Answering We introduce a method for learning query transformations that improves the ability to retrieve answers to questions from an information retrieval system. 
During the training stage the method involves automatically learning phrase features for classifying questions into different types, automatically generating candidate query transformations from a training set of question/answer pairs, and automatically evaluating the candidate transforms on target information retrieval systems such as real-world general purpose search engines. At run time, questions are transformed into a set of queries, and re-ranking is performed on the documents retrieved. We present a prototype search engine, Tritus, that applies the method to web search engines. Blind evaluation on a set of real queries from a web search engine log shows that the method significantly outperforms the underlying web search engines as well as a commercial search engine specializing in question answering. Keywords Web search, quer...\n\n2-Hop Neighbour:\nChart of Darkness: Mapping a Large Intranet We introduce and define the concept of dark matter on the Web. Dark matter for a person or Web crawler consists of pages that they cannot reach and view, but which another observer can. Dark matter is important to our understanding of the Web in that the portion of the Web any of us can see depends on our viewpoint. Different observers see different overlapping sections of the Web. However, no one can see all of the Web, even if they want to. We categorise the various types of dark matter that exist and how they may be discovered. Formal definitions of what constitutes lightness and darkness on the Web are formulated in terms of reachability. Our case study of dark matter within the Australian National University's intranet is reported. We estimate that 87% of the ANU intranet's information is dark to our local search service, and 37% is potentially loadable Web data unreachable to almost every Web user. Finally, we discuss some of the implications of dark matter for estimating the size of the Web and for general Web searching.\n\n2-Hop Neighbour:\nInferring Web Communities Through Relaxed Cocitation and Dense Bipartite Graphs Community forming is one of the important activities in the Web. The Web harbors a large number of communities. A community is a group of content creators that manifests itself as a set of interlinked pages. Given a large collection of pages our aim is to find potential communities in the Web. In the literature, Ravi Kumar et al. [18] proposed a trawling method to find potential communities by abstracting a core of the community as a group of pages that form a complete bipartite graph (CBG) (web-page as a node and link as an edge between two nodes). The trawling approach extracts a small group of pages that form a CBG, which is a signature of a potential community.
[ { "content": "Classify the node 'Workshop on Intelligent Information Integration (III'99)' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Hierarchical Approach to Wrapper Induction With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of easier extraction tasks. We introduce an inductive algorithm, stalker, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that stalker does significantly better then other approaches; on one hand, stalker requires up to two orders of magnitude fewer examples than other algorithms, while on the other hand...\n\n1-Hop Neighbour:\nWeb Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.\n\n2-Hop Neighbour:\nWeb Usage Mining - Languages and Algorithms We propose two new XML applications, XGMML and LOGML. XGMML is a graph description language and LOGML is a web-log report description language. We generate a web graph in XGMML format for a web site using the web robot of the WWWPal system (developed for web visualization and organization). We generate web-log reports in LOGML format for a web site from web log files and the web graph. In this paper, we further illustrate the usefulness of these two XML applications with a web data mining example. Moreover, we show the simplicity with which this mining algorithm can be specified and implemented efficiently using our two XML applications. We provide sample results, namely frequent patterns of users in a web site, with our web data mining algorithm.\n\n2-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. 
Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n2-Hop Neighbour:\nA Machine Learning Approach to Building Domain-Specific Search Engines Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with general, Web-wide search engines. Unfortunately, they are also difficult and time-consuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that enables efficient spidering, populates topic hierarchies, and identifies informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers available at www.cora.justresearch.com. 1 Introduction As the amount of information on the World Wide Web grows, it becomes increasingly difficult to find just what we want. While general-purpose search engines such as AltaVista and HotBot offer high coverage, they often provi...\n\n2-Hop Neighbour:\nA Web-based Information System that Reasons with Structured Collections of Text The degree to which information sources are pre-processed by Web-based information systems varies greatly. In search engines like Altavista, little pre-processing is done, while in \"knowledge integration\" systems, complex site-specific \"wrappers\" are used to integrate different information sources into a common database representation. In this paper we describe an intermediate between these two models. In our system, information sources are converted into a highly structured collection of small fragments of text. Database-like queries to this structured collection of text fragments are approximated using a novel logic called WHIRL, which combines inference in the style of deductive databases with ranked retrieval methods from information retrieval. WHIRL allows queries that integrate information from multiple Web sites, without requiring the extraction and normalization of object identifiers that can be used as keys; instead, operations that in conventional databases require equality tests...
The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Accelerated Focused Crawling through Online Relevance Feedback The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.\n\n1-Hop Neighbour:\nIntegrating the Document Object Model with Hyperlinks for Enhanced Topic Distillation and Information Extraction Topic distillation is the process of finding authoritative Web pages a comprehensive \"hubs\" which reciprocally endorse each other and are relevant to a given query. 
Hyperlink-based topic distillation has been traditionally applied to a macroscopic Web model where documents are nodes in a directed graph and hyperlinks are edges. Macroscopic models miss valuable clues such as banners, navigation panels, and template-based inclusions, which are embedded in HTML pages using markup tags. Consequently, results of macroscopic distillation algorithms have been deteriorating in quality as Web pages are becoming more complex. We propose a uniform fine-grained model for the Web in which pages are represented by their tag trees (also called their Document Object Models or DOMs) and these DOM trees are interconnected by ordinary hyperlinks. Surprisingly, macroscopic distillation algorithms do not work in the fine-grained scenario. We present a new algorithm suitable for the fine-grained model. It can dis-aggregate hubs into coherent regions by segmenting their DOM trees. Mutual endorsement between hubs and authorities involve these regions, rather than single nodes representing complete hubs. Anecdotes and measurements using a 28-query, 366000-document benchmark suite, used in earlier topic distillation research, reveal two benefits from the new algorithm: distillation quality improves, and a by-product of distillation is the ability to extract relevant snippets from hubs which are only partially relevant to the query.\n\n2-Hop Neighbour:\nInformation Retrieval on the Web In this paper we review studies on the growth of the Internet and technologies which are useful for information search and retrieval on the Web. We present data on the Internet from several different sources, e.g., current as well as projected number of users, hosts and Web sites. Although numerical figures vary, overall trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. As such, it is not surprising that about 85% of Internet users surveyed claim to be using search engines and search services to find specific information of interest. The same surveys show, however, that users are not satisfied with the performance of the current generation of search engines; the slow speed of retrieval, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. We discuss the development of new techniques which are targeted to resolve some of the problems associated with Web-...\n\n2-Hop Neighbour:\nThe Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms.
Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human -rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines.\n\n2-Hop Neighbour:\nUsing Common Hypertext Links to Identify the Best Phrasal Description of Target Web Documents This paper describes previous work which studied and compared the distribution of words in web documents with the distribution of words in \"normal\" flat texts. Based on the findings from this study it is suggested that the traditional IR techniques cannot be used for web search purposes the same way they are used for \"normal\" text collections, e.g. news articles. Then, based on these same findings, I will describe a new document description model which exploits valuable anchor text information provided on the web that is ignored by the traditional techniques. The problem Amitay (1997) has found, through a corpus analysis of a 1000 web pages that the lexical distribution in documents which were written especially for the web (home pages), is significantly different than the lexical distribution observed in a corpus of \"normal\" English language (the British National Corpus - 100,000,000 words). For example, in the web documents collection there were some HTML files which contained no v...\n\n2-Hop Neighbour:\nC4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.\n\n2-Hop Neighbour:\nWebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. The architecture and performance of WebSail are also discussed.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an e#ective tool for building digital libraries.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nHigh-Performance Web Crawling SRC’s charter is to advance the state of the art in computer systems by doing basic and applied research in support of our company’s business objectives. 
Our interests and projects span scalable systems (including hardware, networking, distributed systems, and programming-language technology), the Internet (including the Web, e-commerce, and information retrieval), and human/computer interaction (including user-interface technology, computer-based appliances, and mobile computing). SRC was established in 1984 by Digital Equipment Corporation. We test the value of our ideas by building hardware and software prototypes and assessing their utility in realistic settings. Interesting systems are too complex to be evaluated solely in the abstract; practical use enables us to investigate their properties in depth. This experience is useful in the short term in refining our designs and invaluable in the long term in advancing our knowledge. Most of the major advances in information systems have come through this approach, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical character. Some of\n\n1-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nUsing Web Structure for Classifying and Describing Web Pages The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages.\n\n2-Hop Neighbour:\nCategorisation by Context Assistance in retrieving of documents on the World Wide Web is provided either by search engines, through keyword based queries, or by catalogues, which organise documents into hierarchical collections. 
Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material, and therefore it will be necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting information for indexing a document from the document itself. The paper describes the technique of categorisation by context, which exploits the context perceivable from the structure of HTML documents to extract useful information for classifying the documents they refer to. We present the results of experiments with a preliminary implementation of the technique. 1. INTRODUCTION Most Web search engines (e.g. Altavista^TM [Altavista], HotBot^TM [HotBot], Excite^TM [Excite]) perform search based on the content of docume...\n\n2-Hop Neighbour:\nExploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains. This sort of search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is being used to extract concepts which impose a directorylike structure on the documents similar to those found in classified directories. Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.\n\n2-Hop Neighbour:\nDetection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.\n\n2-Hop Neighbour:\nWeb Genre Visualization Web users vary widely in terms of their expertise on the topics for which they search, the amount of detail they seek, etc. Unfortunately, today's one-size-fits-all Web search services do not cater to such individual preferences. For example, it is difficult to query for documents that give extensive detail but assume modest prior expertise. We describe how shallow text classification techniques can be used to sort the documents returned by Web search services according to genre dimensions such as level of expertise and amount of detail, and propose a simple visualization interface that helps users rapidly find appropriate documents. (Keywords: document genre; information retrieval, visualisation, text classification, shallow linguistic processing) Motivation Consider two users seeking information about Pearson' correlation coefficient. Alice is writing a data-analysis program and needs a web page to remind her of the equations. 
Bob, a teacher, wants to point his pupils to an overview that isn't bogged down in equations.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Towards Web-Scale Web Archeology Web-scale Web research is difficult. Information on the Web is vast in quantity, unorganized and uncatalogued, and available only over a network with varying reliability. Thus, Web data is difficult to collect, to store, and to manipulate efficiently. Despite these difficulties, we believe performing Web research at Web-scale is important. We have built a suite of tools that allow us to experiment on collections that are an order of magnitude or more larger than are typically cited in the literature. Two key components of our current tool suite are a fast, extensible Web crawler and a highly tuned, in-memory database of connectivity information. A Web page repository that supports easy access to and storage for billions of documents would allow us to study larger data sets and to study how the Web evolves over time.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach toweb searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. 
We have implemented both algorithms and measured their runtime performance. To evaluate the effectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \"What's Related\" service [12]. Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity analysis.\n\n2-Hop Neighbour:\nClustering Categorical Data: An Approach Based on Dynamical Systems We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical data. By \"categorical data,\" we mean tables with fields that cannot be naturally ordered by a metric --- e.g., the names of producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure arising from the cooccurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems. We discuss experiments on a variety of tables of synthetic and real data; we find that our iterative methods converge quickly to prominently correlated values of various categorical fields. 1 Introduction Much of the data in databases is categorical: fields in tables whose attributes cannot naturally be ordered as numerical values can. The pro...\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nSynchronizing a database to Improve Freshness In this paper we study how to refresh a local copy of an autonomous data source to maintain the copy up-to-date. As the size of the data grows, it becomes more difficult to maintain the copy \"fresh,\" making it crucial to synchronize the copy effectively. We define two freshness metrics, change models of the underlying data, and synchronization policies. We analytically study how effective the various policies are.
We also experimentally verify our analysis, based on data collected from 270 web sites for more than 4 months, and we show that our new policy improves the \"freshness\" very significantly compared to current policies in use.\n\n2-Hop Neighbour:\nAnalysis and extraction of useful information across networks of Web databases Contents: Introduction; Problem Statement; Literature Review (Retrieving Text; Understanding Music; Identifying Images; Extracting Video); Work Completed and in Progress; Research Plan and Time-line; List of Published Work. 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main resources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible, no single person owns it and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the\n\n2-Hop Neighbour:\nWeb Crawling Agents for Retrieving Biomedical Information Autonomous agents for topic driven retrieval of information from the Web are currently a very active area of research. The ability to conduct real time searches for information is important for many users including biomedical scientists, health care professionals and the general public. We present preliminary research on different retrieval agents tested on their ability to retrieve biomedical information, whose relevance is assessed using both genetic and ontological expertise. In particular, the agents are judged on their performance in fetching information about diseases when given information about genes. We discuss several key insights into the particular challenges of agent based retrieval learned from our initial experience in the biomedical domain.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nClustering Categorical Data: An Approach Based on Dynamical Systems We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical data. By \"categorical data,\" we mean tables with fields that cannot be naturally ordered by a metric --- e.g., the names of producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure arising from the cooccurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems. We discuss experiments on a variety of tables of synthetic and real data; we find that our iterative methods converge quickly to prominently correlated values of various categorical fields. 1 Introduction Much of the data in databases is categorical: fields in tables whose attributes cannot naturally be ordered as numerical values can. The pro...\n\n1-Hop Neighbour:\nInCommonSense - Rethinking Web Search Results The World Wide Web is a rich annotation system which allows people to relate to documents and sites from different perspectives. People describe, comment, relate or mock other Web pages in the context of their document. This richness is currently not reflected in snippets presented by Web search engines, where a search result is represented by the text found in the Web document alone. This paper proposes a new method for representing documents in Web search engines' results. 
This method is based on recent trends in search engine technology and provides descriptions of the retrieved documents, assembled from people's commentary and annotations on the Web. This paper suggests a new way for automatically retrieving and reusing people's annotations on the Web, incorporating these annotations into a search engine for creating a hybrid directory-search-engine, allowing for both automatic retrieval and on-the-fly human authored summaries. I. INTRODUCTION It is common knowledge that many use...\n\n1-Hop Neighbour:\nThe Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics is mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.\n\n2-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n2-Hop Neighbour:\nLearning Probabilistic Relational Models A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with \"flat\" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets.
We present experimental results on both real and synthetic relational databases.\n\n2-Hop Neighbour:\nProbabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.\n\n2-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity analysis.\n\n2-Hop Neighbour:\nProbabilistic Models for Unified Collaborative and Content-Based Recommendation in Sparse-Data Environments Recommender systems leverage product and community information to target products to consumers. Researchers have developed collaborative recommenders, content-based recommenders, and a few hybrid systems. We propose a unified probabilistic framework for merging collaborative and content-based recommendations. We extend Hofmann's (1999) aspect model to incorporate three-way co-occurrence data among users, items, and item content. The relative influence of collaboration data versus content data is not imposed as an exogenous parameter, but rather emerges naturally from the given data sources. However, global probabilistic models coupled with standard EM learning algorithms tend to drastically overfit in the sparse-data situations typical of recommendation applications. We show that secondary content information can often be used to overcome sparsity. Experiments on data from the ResearchIndex library of Computer Science publications show that appropriate mixture models incorporating secondary data produce significantly better quality recommenders than k-nearest neighbors (k-NN). Global probabilistic models also allow more general inferences than local methods like k-NN.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Yarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA System For Automatic Personalized Tracking of Scientific Literature on the Web We introduce a system as part of the CiteSeer digital library project for automatic tracking of scientific literature that is relevant to a user’s research interests. Unlike previous systems that use simple keyword matching, CiteSeer is able to track and recommend topically relevant papers even when keyword based query profiles fail. This is made possible through the use of a heterogenous profile to represent user interests. These profiles include several representations, including content based relatedness measures. The CiteSeer tracking system is well integrated into the search and browsing facilities of CiteSeer, and provides the user with great flexibility in tuning a profile to better match his or her interests. The software for this system is available, and a sample database is online as a public service.\n\n2-Hop Neighbour:\nAmalthaea: Information Discovery and Filtering using a Multiagent Evolving Ecosystem Agents are semi-intelligent programs that assist the user in performing repetitive and time-consuming tasks. Information discovery and information filtering are a suitable domain for applying agent technology. Ideas drawn from the field of autonomous agents and artificial life are combined in the creation of an evolving ecosystem composed of competing and cooperating agents. A co-evolution model of information filtering agents that adapt to the various user's interests and information discovery agents that monitor and adapt to the various on-line information sources is analyzed. Results from a number of experiments are presented and discussed. Keywords: Agents, Information Filtering, Evolution, World-Wide-Web 1 Introduction The exponential increase of computer systems that are interconnected in on-line networks has resulted in a corresponding exponential increase in the amount of information available on-line. This information is distributed among heterogeneous sources and is...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video Hidden Markov models (HMM's) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that demonstrate a realtime HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon. 1 Introduction While there are many different types of gestures, the most structured sets belong to the sign languages. In sign language, each gesture already has assigned meaning, and strong rules of context and grammar may be applied to make recognition tractable. To date, most work on sign language recognition has employed expensi...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nVisual Contextual Awareness in Wearable Computing Small, body-mounted video cameras enable a different style of wearable computing interface. As processing power increases, a wearable computer can spend more time observing its user to provide serendipitous information, manage interruptions and tasks, and predict future needs without being directly commanded by the user. This paper introduces an assistant for playing the real-space game Patrol. This assistant tracks the wearer's location and current task through computer vision techniques and without off-body infrastructure. In addition, this paper continues augmented reality research, started in 1995, for binding virtual data to physical locations. 1. Introduction For most computer systems, even virtual reality systems, sensing techniques are a means of getting input directly from the user. However, wearable computers offer a unique opportunity to re-direct sensing technology towards recovering more general user context. Wearable computers have the potential to \"see\" as the user sees...\n\n1-Hop Neighbour:\n3D Hand Pose Reconstruction Using Specialized Mappings A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. The viewpoint is encoded by two real values, therefore 24 real values represent a hand pose. We test our system both on synthetic sequences and on sequences taken with a color camera. 
The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles and viewpoint from those features. Results are encouraging given the complexity of the task.\n\n1-Hop Neighbour:\nAdaptive Fuzzy Expert System for Sign Recognition The Hand Motion Understanding (HMU) system is a vision-based Australian sign language recognition system that recognises static and dynamic hand signs. It uses a visual hand tracker to extract 3D hand configuration data from a visual motion sequence, and a classifier that recognises the changes of these 3D kinematic data as a sign. This paper presents the HMU classifier that uses an adaptive fuzzy inference engine for sign recognition. Fuzzy set theory allows the system to express the sign knowledge in natural and imprecise descriptions. The HMU classifier has an adaptive engine that trains the system to be adaptive to the errors caused by the tracker or the motion variations exhibited amongst the signers. The HMU system is evaluated with 22 static and dynamic Auslan signs, and recognised 20 signs before training, and 21 signs after training of the HMU classifier. Keywords: Sign Language, Sign Recognition, Fuzzy Logic, Adaptive Fuzzy System, Expert System. 1. Introduction ...\n\n2-Hop Neighbour:\nWearable Computing Meets Ubiquitous Computing: Reaping the best of both worlds This paper describes what we see as fundamental difficulties in both the pure ubiquitous computing and pure wearable computing paradigms when applied to context-aware applications. In particular, ubiquitous computing and smart room systems tend to have difficulties with privacy and personalization, while wearable systems have trouble with localized information, localized resource control, and resource management between multiple people. These difficulties are discussed, and a peer-to-peer network of wearable and ubiquitous computing components is proposed as a solution. This solution is demonstrated through several implemented applications. 1 Introduction Ubiquitous computing and wearable computing have been posed as polar opposites even though they are often applied in very similar applications. Here we first outline the advantages and disadvantages of each and propose that the two perspectives have complementary problems. We then attempt to demonstrate that the failing of both ubiquitous...\n\n2-Hop Neighbour:\nMulti-Sensor Context Aware Clothing Inspired by perception in biological systems, distribution of a massive amount of simple sensing devices is gaining more support in detection applications. A focus on fusion of sensor signals instead of strong analysis algorithms, and a scheme to distribute sensors, results in new issues. Especially in wearable computing, where sensor data continuously changes, and clothing provides an ideal supporting structure for simple sensors, this approach may prove to be favourable. Experiments with a body-distributed sensor system investigate the influence of two factors that affect classification of what has been sensed: an increase in sensors enhances recognition, while adding new classes or contexts depreciates the results. Finally, a wearable computing related scenario is discussed that exploits the presence of many sensors.\n\n2-Hop Neighbour:\nAdding Some Smartness to Devices and Everyday Things In mobile computing, context-awareness indicates the ability of a system to obtain and use information on aspects of the system environment. 
To implement context-awareness, mobile system components have to be augmented with the ability to capture aspects of their environment. Recent work has mostly considered location-awareness, and hence augmentation of mobile artifacts with locality. In this paper we discuss augmentation of mobile artifacts with diverse sets of sensors and perception techniques for awareness of context beyond location. We report experience from two projects, one on augmentation of mobile phones with awareness technologies, and the other on embedding of awareness technology in everyday non-digital artifacts.\n\n2-Hop Neighbour:\nAdvanced Interaction in Context . Mobile information appliances are increasingly used in numerous different situations and locations, setting new requirements to their interaction methods. When the user's situation, place or activity changes, the functionality of the device should adapt to these changes. In this work we propose a layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors. Two kinds of sensors are distinguished: physical and logical sensors, which give cues from environment parameters and host information. A prototype board that consists of eight sensors was built for experimentation. The contexts are derived from cues using real-time recognition software, which was constructed after experiments with Kohonen's Self-Organizing Maps and its variants. A personal digital assistant (PDA) and a mobile phone were used with the prototype to demonstrate situational awareness. On the PDA font size and backlight were changed depending...\n\n2-Hop Neighbour:\nView-independent Recognition of Hand Postures Since human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set o...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Evaluating Guidelines for Reducing User Disorientation When Navigating in Virtual Environments Navigation in virtual environments can be difficult. One contributing factor is user disorientation.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nUsing the Resources Model in Virtual Environment Design this paper we take a step back from the formal specification of VEs to investigate where requirements and design information are located within these environments and how it can be structured and analysed. More specifically, we are interested in considering VEs in terms of distributed cognition (DC) [5, 7, 14, 20].\n\n1-Hop Neighbour:\nCo-operative Evaluation of a Desktop Virtual Reality System A summative usability evaluation of a desktop virtual reality (VR) system was developed and a preliminary study then conducted. The purpose of the study was twofold. Firstly, to test whether the traditional evaluation technique, co-operative evaluation, is effective in the evaluation of desktop VR systems. Co-operative evaluation is a variation on a `think-aloud' verbal protocol, whereby, in addition to concurrently 'thinking-aloud', users are encouraged to ask any questions about an evaluation, relating to the computer-based system, the application, or the tasks that they are required to perform during the evaluation. As well as this, the evaluator may ask questions of the user at any time during the evaluation. Results from the study indicate that this additional probing technique enables an evaluator to elicit further usability problems that may not have otherwise been exteriorized by the user. Additionally, a method is developed which attempts to turn round the qualitative 'think-aloud' type data into quantitative data. This provides a way of evaluating empirical 'think-aloud' evaluation methods and will be useful for comparing their effectiveness to evaluate 3D virtual reality systems.\n\n1-Hop Neighbour:\nUsing Cinematography Conventions to Inform Guidelines For the Design and Evaluation of Virtual Off-Screen Space Many usability problems are associated with navigation and exploration of virtual space. In an attempt to find methods that support navigation within virtual space, this paper describes an investigation of cinematography conventions. In particular, this will focus on conventions that suggest to spectators the existence of additional space other than that contained within the confines or borders of the projection screen. Referred to as off-screen space, this paper builds upon these conventions and proposes guidelines to inform the design of visual cues to suggest virtual off-screen space. Visual cues will appear natural and transparent, they will help to guide participants through the smooth and continuously animated VE, and thus, maintain the illusion of interacting within a larger 3D virtual space than that contained within the restricted Field-Of-View (FOV) of the display screen. Introduction The 3 rd dimension of a Virtual Environment (VE) creates a space. Within ...\n\n2-Hop Neighbour:\nGuiding User Navigation in Virtual Environments Using Awareness of Virtual Off-Screen Space Navigation in virtual environments can be difficult. One contributing factor is the problem of user disorientation. 
Two major causes of this are the lack of navigation cues in the environment and problems with navigating too close to or through virtual world objects. Previous work has developed guidelines, informed by cinematography conventions, for the construction of virtual environments to aid user comprehension of virtual \"space\" to reduce user disorientation. This paper describes the validation of these guidelines via a user study involving a navigation task in a virtual \"maze\". Results suggest that the use of the guidelines can help reduce the incidences of user disorientation. However, the guidelines seemed to have little impact on users' abilities to construct 'cognitive maps' of the environment.\n\n2-Hop Neighbour:\nMaintaining the Illusion of Interacting Within a 3D Virtual Space It is widely thought to more or less a degree, that a sense of presence may be induced in users of new and emerging media technologies, such as, the Internet, digital television and cinema (supporting interaction), teleconferencing and 3D virtual reality systems. In this paper, it is argued that presence presupposes that participants are absorbed in the illusion of interacting within the visual spaces created by these media. That is, prior to the possibility of any inducement of presence, participants need to be absorbed in the illusion conveyed by the media. Without this, participants' attention is broken and the illusion is lost. Hence, the potential to induce presence in participants ceases. To encourage participants to lose sight of the means of representation and be drawn into the illusion conveyed by these media, this paper proposes the development of design principles to increase participants' experience. In an attempt to inform design principles, this paper focuses on another artificial although highly successful visual medium - film. By way of example, this paper concentrates on one medium, virtual reality, and proposes design principles that attempt to maintain the illusion of interacting within 3D virtual space. This attempts to provide a platform through the resourceful blend of hardware and software Virtual Reality (VR) enabling technologies on which to support a well designed virtual environment and hence, from which the inducement of presence in participants may develop.\n\n2-Hop Neighbour:\nTowards Hybrid Interface Specification for Virtual Environments . Many new multi-modal interaction techniques have been proposed for interaction in a virtual world. Often these techniques are of a hybrid nature combining continuous interaction, such as gestures and moving video, with discrete interaction, such as pushing buttons to select items. Unfortunately the description of the behavioural aspects of these interaction techniques found in the literature is informal and incomplete. This can make it hard to compare and evaluate their usability. This paper investigates the use of HyNet to give concise and precise specifications of hybrid interaction techniques. HyNet is an extension of high-level Petri Nets developed for specification and verification of hybrid systems, i.e. mathematical models including both continuous and discrete elements. 1 Introduction New technologies for virtual environments (VEs) have been eagerly embraced by VE users and developers. The process of diffusing this technology into a wider range of products has, i...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Path Constraints on Deterministic Graphs We study path constraints for deterministic graph model [9], a variation of semistructured data model in which data is represented as a rooted edge-labeled directed graph with deterministic edge relations. The path constraint languages considered include the class of word constraints introduced in [4], the language P c investigated in [8], and an extension of P c defined in terms of regular expressions. Complexity results on the implication and finite implication problems for these constraint languages are established. 1 Introduction Semistructured data is characterized as having no type constraints, irregular structure and rapidly evolving or missing schema [1, 6]. Examples of such data can be found on the WorldWide -Web, in biological databases and after data integration. In particular, documents of XML (eXtensible Markup Language [5]) can also be viewed as semistructured data [10]. The unifying idea in modeling semistructured data is the representation of data as an edge-labeled, r...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nPath Constraints on Semistructured and Structured Data We present a class of path constraints of interest in connection with both structured and semi-structured databases, and investigate their associated implication problems. These path constraints are capable of expressing natural integrity constraints that are not only a fundamental part of the semantics of the data, but are also important in query optimization. We show that in semistructured databases, despite the simple syntax of the constraints, their associated implication problem is r.e. complete and finite implication problem is co-r.e. complete. However, we establish the decidability of the implication problems for several fragments of the path constraint language, and demonstrate that these fragments suffice to express important semantic information such as inverse relationships and local database constraints commonly found in object-oriented databases. We also show that in the presence of types, the analysis of path constraint implication becomes more delicate. We demonstrate so...\n\n1-Hop Neighbour:\nQuery Optimization for Semistructured Data using Path Constraints in a Deterministic Data Model . Path constraints have been studied for semistructured data modeled as a rooted edge-labeled directed graph [4, 11--13]. In this model, the implication problems associated with many natural path constraints are undecidable [11, 13]. A variant of the graph model, called the deterministic data model , was recently proposed in [10]. In this model, data is represented as a graph with deterministic edge relations, i.e., the edges emanating from any node in the graph have distinct labels. This model is more appropriate for representing, e.g., ACeDB [27] databases and Web sites. This paper investigates path constraints for the deterministic data model. It demonstrates the application of path constraints to, among others, query optimization. Three classes of path constraints are considered: the language Pc introduced in [11], an extension of Pc , denoted by P w c , by including wildcards in path expressions, and a generalization of P w c , denoted by P c , by representing pa...\n\n2-Hop Neighbour:\nOn Bounding-Schemas for LDAP Directories . 
As our world gets more networked, ever increasing amounts of information are being stored in LDAP directories. While LDAP directories have considerable flexibility in the modeling and retrieval of information for network applications, the notion of schema they provide for enabling consistent and coherent representation of directory information is rather weak. In this paper, we propose an expressive notion of bounding-schemas for LDAP directories, and illustrate their practical utility. Bounding-schemas are based on lower bound and upper bound specifications for the content and structure of an LDAP directory. Given a bounding-schema specification, we present algorithms to efficiently determine: (i) if an LDAP directory is legal w.r.t. the bounding-schema, and (ii) if directory insertions and deletions preserve legality. Finally, we show that the notion of bounding-schemas has wider applicability, beyond the specific context of LDAP directories. 1 Introduction X.500 styl...\n\n2-Hop Neighbour:\nWhat can Knowledge Representation do for Semi-Structured Data? The problem of modeling semi-structured data is important in many application areas such as multimedia data management, biological databases, digital libraries, and data integration. Graph schemas (Buneman et al. 1997) have been proposed recently as a simple and elegant formalism for representing semistructured data. In this model, schemas are represented as graphs whose edges are labeled with unary formulae of a theory, and the notions of conformance of a database to a schema and of subsumption between two schemas are defined in terms of a simulation relation. Several authors have stressed the need of extending graph schemas with various types of constraints, such as edge existence and constraints on the number of outgoing edges. In this paper we analyze the appropriateness of various knowledge representation formalisms for representing and reasoning about graph schemas extended with constraints. We argue that neither First Order Logic, nor Logic Programming nor Frame-based languages ...\n\n2-Hop Neighbour:\nInteraction between Path and Type Constraints XML [7], which is emerging as an important standard for data exchange on the World-Wide Web, highlights the importance of semistructured data. Although the XML standard itself does not require any schema or type system, a number of proposals [6, 17, 19] have been developed that roughly correspond to data definition languages. These allow one to constrain the structure of XML data by imposing a schema on it. These and other proposals also advocate the need for integrity constraints, another form of constraints that should, for example, be capable of expressing inclusion constraints and inverse relationships. The latter have recently been studied as path constraints in the context of semistructured data [4, 9]. It is likely that future XML proposals will involve both forms of constraints, and it is therefore appropriate to understand the interaction between them. This paper investigates that interaction. In particular it studies constraint implication problems, which are important both i...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Reasoning over Conceptual Schemas and Queries in Temporal Databases This paper introduces a new logical formalism, intended for temporal conceptual modelling, as a natural combination of the wellknown description logic DLR and pointbased linear temporal logic with Since and Until. The expressive power of the resulting DLRUS logic is illustrated by providing a systematic formalisation of the most important temporal entity-relationship data models appeared in the literature. We define a query language (where queries are nonrecursive Datalog programs and atoms are complex DLRUS expressions) and investigate the problem of checking query containment under the constraints defined by DLRUS conceptual schemas, as well as the problems of schema satisfiability and logical implication. Although it is shown that reasoning in full DLRUS is undecidable, we identify the decidable (in a sense, maximal) fragment DLR US by allowing applications of temporal operators to formulas and entities only (but not to relation expressions). We obtain the following hierarchy of complexity results: (a) reasoning in DLR US with atomic formulas is EXPTIME-complete, (b) satisfiability and logical implication of arbitrary DLR US formulas is EXPSPACE-complete, and (c) the problem of checking query containment of non-recursive Datalog queries under DLR US constraints is decidable in 2EXPTIME.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nConcept Based Design of Data Warehouses: The DWQ Demonstrators The ESPRIT Project DWQ (Foundations of Data Warehouse Quality) aimed at improving the quality of DW design and operation through systematic enrichment of the semantic foundations of data warehousing. Logic-based knowledge representation and reasoning techniques were developed to control accuracy, consistency, and completeness via advanced conceptual modeling techniques for source integration, data reconciliation, and multi-dimensional aggregation. This is complemented by quantitative optimization techniques for view materialization, optimizing timeliness and responsiveness without losing the semantic advantages from the conceptual approach. At the operational level, query rewriting and materialization refreshment algorithms exploit the knowledge developed at design time. The demonstration shows the interplay of these tools under a shared metadata repository, based on an example extracted from an application at Telecom Italia. 1 Overview of the Demonstration The demonstration follows ...\n\n1-Hop Neighbour:\nA Tableau Calculus for Temporal Description Logic: The Expanding Domain Case . In this paper we present a tableau calculus for a temporal extension of the description logic ALC, called T LALC . This logic is based on the temporal language with `Until' interpreted over the natural numbers with expanding ALC-domains. The tableau calculus forms an elaborated combination of Wolper's tableau calculus for propositional linear temporal logic, the standard tableau-algorithm for ALC, and the method of quasimodels as it has been introduced by Wolter and Zakharyaschev. Based on those three ingredients the paper provides a new method of how tableau-based decision procedures can be constructed for many-dimensional logics which lack the finite model property. The method can be applied to deal with other temporalized formalisms as well. 
1 Introduction In many application domains of logic in Computer Science and Artificial Intelligence it is no longer enough to describe the static aspect of the world. In particular, there is a need to formalize its temporal evolution...\n\n2-Hop Neighbour:\nA Temporal Description Logic for Reasoning about Actions and Plans A class of interval-based temporal languages for uniformly representing and reasoning about actions and plans is presented. Actions are represented by describing what is true while the action itself is occurring, and plans are constructed by temporally relating actions and world states. The temporal languages are members of the family of Description Logics, which are characterized by high expressivity combined with good computational properties. The subsumption problem for a class of temporal Description Logics is investigated and sound and complete decision procedures are given. The basic language TL-F is considered first: it is the composition of a temporal logic TL -- able to express interval temporal networks -- together with the non-temporal logic F -- a Feature Description Logic. It is proven that subsumption in this language is an NP-complete problem. Then it is shown how to reason with the more expressive languages TLU-FU and TL-ALCF. The former adds disjunction both at the temporal and non-temporal sides of the language, the latter extends the non-temporal side with set-valued features (i.e., roles) and a propositionally complete language. 1.\n\n2-Hop Neighbour:\nReasoning over Conceptual Schemas and Queries in Temporal Databases This paper introduces a new logical formalism, intended for temporal conceptual modelling, as a natural combination of the wellknown description logic DLR and pointbased linear temporal logic with Since and Until. The expressive power of the resulting DLRUS logic is illustrated by providing a systematic formalisation of the most important temporal entity-relationship data models appeared in the literature. We define a query language (where queries are nonrecursive Datalog programs and atoms are complex DLRUS expressions) and investigate the problem of checking query containment under the constraints defined by DLRUS conceptual schemas, as well as the problems of schema satisfiability and logical implication. Although it is shown that reasoning in full DLRUS is undecidable, we identify the decidable (in a sense, maximal) fragment DLR US by allowing applications of temporal operators to formulas and entities only (but not to relation expressions). We obtain the following hierarchy of complexity results: (a) reasoning in DLR US with atomic formulas is EXPTIME-complete, (b) satisfiability and logical implication of arbitrary DLR US formulas is EXPSPACE-complete, and (c) the problem of checking query containment of non-recursive Datalog queries under DLR US constraints is decidable in 2EXPTIME.\n\n2-Hop Neighbour:\nThe i.com Tool for Intelligent Conceptual Modelling In this paper we present i.com, a tool for intelligent conceptual modelling. i.com allows for the specification of multiple EER diagrams and inter- and intra-schema constraints. Complete logical reasoning is employed by the tool to verify the specification, infer implicit facts, and manifest any inconsistencies. 1 Introduction i.com is a tool supporting the conceptual design phase of an information system, and in particular of an integration information system -- such as a data warehouse. 
The tool is an evolution of part of the conceptual modelling demonstrators suite [Jarke et al., 2000] developed within the European ESPRIT Long Term Research Data Warehouse Quality (DWQ) project [Jarke et al., 1999] . i.com adopts an extended Entity-Relationship (EER) conceptual data model, enriched with multidimensional aggregations and interschema constraints. i.com is fully integrated with a very powerful description logics reasoning server which acts as a background inference engine. The co...\n\n2-Hop Neighbour:\nA Tableau-Based Proof Method for Temporal Logics of Knowledge and Belief . In this paper we define two logics, KLn and BLn , and present tableau-based decision procedures for both. KLn is a temporal logic of knowledge. Thus, in addition to the usual connectives of linear discrete temporal logic, it contains a set of unary modal connectives for representing the knowledge possessed by agents. The logic BLn is somewhat similar; it is a temporal logic that contains connectives for representing the beliefs of agents. In addition to a complete formal definition of the two logics and their decision procedures, the paper includes a brief review of their applications in AI and mainstream computer science, correctness proofs for the decision procedures, a number of worked examples illustrating the decision procedures, and some pointers to further work. KEYWORDS:Temporal logics of knowledge and belief, theorem proving, tableau. 1 Introduction This paper presents two logics, called KLn and BLn respectively, and gives tableau-based decision procedures for both. The l...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Integrity Constraints and Constraint Logic Programming It is shown that constraint logic is useful for evaluation of integrity constraints in deductive databases. Integrity constraints are represented as calls to a metainterpreter for negation-as-failure implemented as a constraint solver. This procedure, called lazy negationas -failure, yields an incremental evaluation: It starts checking the existing database and each time an update request occurs, simplified constraints are produced for checking the particular update and new constraints corresponding to specialized integrity constraints are generated for the updated database. 1 Introduction There is a relationship between integrity constraints in databases and the constraints of constraint logic programming going beyond the partial overlap of the names applied for these phenomena. Both concern conditions that should be ensured for systems of interdependent entities: the different tuples in a database, and the set of variables in a program execution state. Both relate to problems that e...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAn Experimental CLP Platform for Integrity Constraints and Abduction Integrity constraint and abduction are important in query-answering systems for enhanced query processing and for expressing knowledge in databases. A straightforward characterization of the two is given in a subset of the language CHR _ , originally intended for writing constraint solvers to be applied for CLP languages. This subset has a strikingly simple computational model that can be executed using existing, Prolog-based technology. Together with earlier results, this confirms CHR _ as a multiparadigm platform for experimenting with combinations of top-down and bottom-up evaluation, disjunctive databases and, as shown here, integrity constraint and abduction 1 Introduction Constraint logic programming (CLP) [10] is established as an extension to logic programming that adds higher expressibility and in some cases more efficient query evaluation. CLP has also given rise to a field of constraint databases [14]. In the present paper, we suggest CLP techniques applied for defini...\n\n2-Hop Neighbour:\nExecuting Suspended Logic Programs . We present an extension of Logic Programming (LP) which, in addition to ordinary LP clauses, also includes integrity constraints, explicit representation of disjunction in the bodies of clauses and in goals, and suspension of atoms as in concurrent logic languages. The resulting framework aims to unify Constraint Logic Programming (CLP), Abductive Logic Programming (ALP) and Semantic Query Optimisation (SQO) in deductive databases. We present a proof procedure for the new framework, simplifying and generalising previously proposed proof procedures for ALP. We discuss applications of the framework, formulating traditional problems from LP, ALP, CLP and SQO. Keywords: Logic Programming (LP), Constraint Logic Programming (CLP), Abductive Logic Programming (ALP), Semantic Query Optimisation (SQO) in Deductive Databases. The second author is supported by the EPSRC project \"Logic-based multi-agent systems\". The third author is supported by ONR grant N00014-96-1-1057. The authors are grat...\n\n2-Hop Neighbour:\nSemantic Query Optimization through Abduction and Constraint Handling . 
The use of integrity constraints to perform Semantic Query Optimization (SQO) in deductive databases can be formalized in a way similar to the use of integrity constraints in Abductive Logic Programming (ALP) and the use of Constraint Handling Rules in Constraint Logic Programming (CLP). Based on this observation and on the similar role played by, respectively, extensional, abducible and constraint predicates in SQO, ALP and CLP, we present a unified framework from which (variants of) SQO, ALP and CLP can be obtained as special instances. The framework relies on a proof procedure which combines backward reasoning with logic programming clauses and forward reasoning with integrity constraints. 1 Introduction Semantic Query Optimization (SQO) in deductive databases uses implicit knowledge coded in Integrity Constraints (ICs) to transform queries into new queries that are easier to evaluate and ideally contain only atoms of extensional predicates. SQO sometimes allows for unsatisfiable...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Information Retrieval on the Web In this paper we review studies on the growth of the Internet and technologies which are useful for information search and retrieval on the Web. We present data on the Internet from several dierent sources, e.g., current as well as projected number of users, hosts and Web sites. Although numerical gures vary, overall trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. As such, it is not surprising that about 85% of Internet users surveyed claim to be using search engines and search services to nd speci c information of interest. The same surveys show, however, that users are not satis ed with the performance of the current generation of search engines; the slow speed of retrieval, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. We discuss the development of new techniques which are targeted to resolve some of the problems associated with Web-...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nInformation Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.\n\n1-Hop Neighbour:\nContext and Page Analysis for Improved Web Search NEC Research Institute has developed a metasearch engine that improves the efficiency of Web searches by downloading and analyzing each document and then displaying results that show the query terms in context. Several popular and useful search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light attempt to maintain full-text indexes of the World Wide Web. However, relying on a single standard search engine has limitations. The standard search engines have limited coverage, 1,2 outdated databases, and are sometimes unavailable due to problems with the network or the engine itself. The precision of standard engine results can also vary because they generally focus on handling queries quickly and use relatively simple ranking schemes. 3 Rankings can be further muddled by keyword spamming to increase a page's rank order. Often, the relevance of a particular page is obvious only after loading it and finding the query terms. Metasearch engines, such as MetaCrawler and SavvySearch, attempt to contend with the problem of limited coverage by submitting queries to several standard search engines at once. 
4,5 The primary advantages of metasearch engines are that they combine the results of several search engines and present a consistent user interface. 5 However, most metasearch engines rely on the documents and summaries returned by standard search engines and so inherit their limited precision and vulnerability to keyword spamming. We developed the NEC Research Institute (NECI) metasearch engine [now called Inquirus] to improve the efficiency and precision of Web search by downloading and analyzing each document and then displaying results that show the query terms in\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nSynchronizing a database to Improve Freshness In this paper we study how to refresh a local copy of an autonomous data source to maintain the copy up-to-date. As the size of the data grows, it becomes more difficult to maintain the copy \"fresh,\" making it crucial to synchronize the copy effectively. We define two freshness metrics, change models of the underlying data, and synchronization policies. We analytically study how effective the various policies are. We also experimentally verify our analysis, based on data collected from 270 web sites for more than 4 months, and we show that our new policy improves the \"freshness\" very significantly compared to current policies in use.\n\n2-Hop Neighbour:\nRank Aggregation Methods for the Web We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. Wedevelop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can effectively combat \"spam,\" a serious problem in Web searches. 
Experiments show that our methods are simple, efficient, and effective. Keywords: rank aggregation, ranking functions, metasearch, multi-word queries, spam 1.\n\n2-Hop Neighbour:\nA Methodology to Retrieve Text Documents from Multiple Databases In this paper, we present a methodology for finding the n most similar documents across multiple text databases for any given query and for any positive integer n. This methodology consists of two steps. First, the contents of databases are indicated approximately by database representatives. Databases are ranked using their representatives in a certain order with respect to the given query. We provide a necessary and sufficient condition to rank the databases optimally. In order to satisfy this necessary and sufficient condition, we provide three estimation methods. One estimation method is intended for short queries; the other two are for all queries. Second, we provide an algorithm, OptDocRetrv, to retrieve documents from the databases according to their rank and in a particular way. We show that if the databases containing the n most similar documents for a given query are ranked ahead of other databases, our methodology will guarantee the retrieval of the n most similar d...\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Privacy-preserving Distributed Mining of Association Rules on Horizontally Partitioned Data Abstract—Data mining can extract important knowledge from large data collections—but sometimes these collections are split among various parties. Privacy concerns may prevent the parties from directly sharing the data and some types of information about the data. This paper addresses secure mining of association rules over horizontally partitioned data. The methods incorporate cryptographic techniques to minimize the information shared, while adding little overhead to the mining task. Index Terms—Data mining, security, privacy. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nPrivacy Preserving Association Rule Mining in Vertically Partitioned Data Privacy considerations often constrain data mining projects. This paper addresses the problem of association rule mining where transactions are distributed across sources. Each site holds some attributes of each transaction, and the sites wish to collaborate to identify globally valid association rules. However, the sites must not reveal individual transaction data. We present a two-party algorithm for efficiently discovering frequent itemsets with minimum support levels, without either site revealing individual transaction values.\n\n2-Hop Neighbour:\nMeta-Learning in Distributed Data Mining Systems: Issues and Approaches Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study, describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity co...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Salticus: Guided Crawling for Personal Digital Libraries In this paper, we describe Salticus, a web crawler that learns from users' web browsing activity. Salticus enables users to build a personal digital library by collecting documents and generalizing over the user's choices. Keywords personal digital library, business intelligence, web crawling, document acquisition 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBackground Readings for Collection Synthesis\n\n2-Hop Neighbour:\nMySpiders : Evolve your own intelligent Web crawlers Abstract. The dynamic nature of the World Wide Web makes it a challenge to find information that is bothrelevant and recent. Intelligent agents can complement the power of searchengines to meet this challenge. We present a Web tool called MySpiders, which implements an evolutionary algorithm managing a population of adaptive crawlers who browse the Web autonomously. Each agent acts as an intelligent client on behalf of the user, driven by a user query and by textual and linkage clues in the crawled pages. Agents autonomously decide which links to follow, which clues to internalize, when to spawn offspring to focus the search near a relevant source, and when to starve. The tool is available to the public as a threaded Java applet. We discuss the development and deployment of such a system. Keywords: web informational retrieval, topic-driver crawlers, online search, InfoSpiders, MySpiders, applet\n\n2-Hop Neighbour:\nEvaluating Topic-Driven Web Crawlers Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies. We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.\n\n2-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. 
The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n2-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nBreadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Is Machine Colour Constancy Good Enough? . This paper presents a negative result: current machine colour constancy algorithms are not good enough for colour-based object recognition. This result has surprised us since we have previously used the better of these algorithms successfully to correct the colour balance of images for display. Colour balancing has been the typical application of colour constancy, rarely has it been actually put to use in a computer vision system, so our goal was to show how well the various methods would do on an obvious machine colour vision task, namely, object recognition. Although all the colour constancy methods we tested proved insufficient for the task, we consider this an important finding in itself. In addition we present results showing the correlation between colour constancy performance and object recognition performance, and as one might expect, the better the colour constancy the better the recognition rate. 1 Introduction We set out to show that machine colour constancy had matured to...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Exploiting Geographical Location Information of Web Pages Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, the geographical scope of web resources is largely ignored by web search engines. We make the case for identifying and exploiting the geographical location information of web sites so that web search engines can rank resources in a geographically sensitive fashion, in addition to using more traditional information-retrieval strategies. In this paper, we first consider how to compute the geographical location of web pages. Subsequently, we consider how to exploit such information in one specific \"proof-of-concept\" appl...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nComputing Geographical Scopes of Web Resources Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, most current web search engines largely ignore the geographical scope of web resources. In this paper, we introduce techniques for automatically computing the geographical scope of web resources, based on the textual content of the resources, as well as on the geographical distribution of hyperlinks to them. We report an extensive experimental evaluation of our strategies using real web data. Finally, we describe a geographically-aware search engine that we have built using our techniques for determining the geographical scope of web resources. 1\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. 
Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis. 1\n\n2-Hop Neighbour:\nDesign and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 
1 1\n\n2-Hop Neighbour:\nWorld Wide Web Information Retrieval Using Web Connectivity Information Gathering, processing and distributing information from the World Wide Web will be a vital technology for the next century. Web search techniques have played a critical role in the development of information systems. Due to the diverse nature of web documents, traditional search techniques must be improved. Hyperlink structure based methods have proved to be powerful ways of exploring the relationships between web documents. In this project, a prototype web search engine was developed to exploit the link structure of web documents, based on the use of the Companion algorithm. The prototype consists of a web spider, local database, and search software. The system was written using the Java programming language. Our spider crawls and downloads web pages using Lynx, then saves the hyperlinks into an Oracle database. JDBC is used to implement the database processing. Search software makes a vicinity graph for the query URL and returns the most related pages after calculating the hub and authority weights. Finally, HTML web pages provide user interfaces and communicate with CGI using the Perl language. iii ACKNOWLEDGMENTS The author would like to express thanks to all of the members of his M.S. committee for their useful comments on the thesis, assistance in scheduling the defense date and kind help during the final defense period. The author would like to express his deepest appreciation to Dr. Wen-Chen Hu, his thesis mentor, for the depth of the training and the appropriate guidance he has provided. The author would also like to acknowledge the Department of Computer Science and Software Engineering of Auburn University for financial support. Finally, thanks especially go to the author's wife Qifang, his son, Alex, and his father and mother for their support and love. ...\n\n2-Hop Neighbour:\nInverted files and dynamic signature files for optimisation of Web Directories Web directories are taxonomies for the classification of Web documents. This kind of IR systems present a specific type of search where the document collection is restricted to one area of the category graph. This paper introduces a specific data architecture for Web directories which improves the performance of restricted searches. That architecture is based on a hybrid data structure composed of an inverted file with multiple embedded signature files. Two variants based on the proposed model are presented: hybrid architecture with total information and hybrid architecture with partial information. The validity of this architecture has been analysed by means of developing both variants to be compared with a basic model. The performance of the restricted queries was clearly improved, specially the hybrid model with partial information, which yielded a positive response under any load of the search system.\n\n2-Hop Neighbour:\nWho do you want to be today? Web Personae for personalised information access Personalised context sensitivity is the Holy Grail of web information retrieval. As a first step towards this goal, we present the Web Personae personalised search and browsing system. We use well-known information retrieval techniques to develop and track user models. Web Personae differ from previous approaches in that we model users with multiple profiles, each corresponding to a distinct topic or domain. Such functionality is essential in heterogeneous environments such as the Web. 
We introduce Web Personae, describe an algorithm for learning such models from browsing data, and discuss applications and evaluation methods.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Using Web Structure for Classifying and Describing Web Pages The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nDigital Libraries and Autonomous Citation Indexing The World Wide Web is revolutionizing the way that researchers access scientific information. Articles are increasingly being made available on the homepages of authors or institutions, at journal Web sites, or in online archives. However, scientific information on the Web is largely disorganized. This article introduces the creation of digital libraries incorporating Autonomous Citation Indexing (ACI). ACI autonomously creates citation indices similar to the Science Citation Index R . An ACI system autonomously locates articles, extracts citations, identifies identical citations that occur in different formats, and identifies the context of citations in the body of articles. ACI can organize the literature and provide most of the advantages of traditional citation indices, such as literature search using citation links, and the evaluation of articles based on citation statistics. Furthermore, ACI can provide significant advantages over traditional citation indices. No manual effort is required for indexing, which should result in a reduction in cost and an increase in the availability of citation indices. An ACI system can also provide more comprehensive and up-to-date indices of the literature by indexing articles on the Web, technical reports, conference papers, etc. Furthermore, ACI makes it easy to browse the context of citations to given articles, allowing researchers to quickly and easily see what subsequent researchers have said about a given article. digital libraries incorporating ACI may significantly improve scientific dissemination and feedback.\n\n1-Hop Neighbour:\nCombining Labeled and Unlabeled Data with Co-Training We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. 
We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. As part of our analysis, we provide new re-\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nTarget Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. 
Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.\n\n2-Hop Neighbour:\nFEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 1 Introduction As the world wide web rapidly evo...\n\n2-Hop Neighbour:\nFocused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an e#ective tool for building digital libraries.\n\n2-Hop Neighbour:\nUser Modeling for Information Access Based on Implicit Feedback User modeling can be used in information filtering and retrieval systems to improve the representation of a user's information needs. User models can be constructed by hand, or learned automatically based on feedback provided by the user about the relevance of documents that they have examined. By observing user behavior, it is possible to infer implicit feedback without requiring explicit relevance judgments. Previous studies based on Internet discussion groups (USENET news) have shown reading time to be a useful source of implicit feedback for predicting a user's preferences. 
The study reported in this paper extends that work by providing a framework for considering alternative sources of implicit feedback, examining whether reading time is useful for predicting a user's preferences for academic and professional journal articles, and exploring whether retention behavior can usefully augment the information that reading time provides. Two user studies were conducted in which undergradua...

2-Hop Neighbour:
Exploiting Geographical Location Information of Web Pages Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, the geographical scope of web resources is largely ignored by web search engines. We make the case for identifying and exploiting the geographical location information of web sites so that web search engines can rank resources in a geographically sensitive fashion, in addition to using more traditional information-retrieval strategies. In this paper, we first consider how to compute the geographical location of web pages. Subsequently, we consider how to exploit such information in one specific \"proof-of-concept\" appl...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Web Genre Visualization Web users vary widely in terms of their expertise on the topics for which they search, the amount of detail they seek, etc. Unfortunately, today's one-size-fits-all Web search services do not cater to such individual preferences. For example, it is difficult to query for documents that give extensive detail but assume modest prior expertise. We describe how shallow text classification techniques can be used to sort the documents returned by Web search services according to genre dimensions such as level of expertise and amount of detail, and propose a simple visualization interface that helps users rapidly find appropriate documents. (Keywords: document genre; information retrieval, visualisation, text classification, shallow linguistic processing) Motivation Consider two users seeking information about Pearson' correlation coefficient. Alice is writing a data-analysis program and needs a web page to remind her of the equations. Bob, a teacher, wants to point his pupils to an overview that isn't bogged down in equations.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nFact or fiction: Content classification for digital libraries The World-Wide Web (WWW) is a vast repository of information, much of which is valuable but very often hidden to the user. The anarchic nature of the WWW presents unique challenges when it comes to information extraction and categorization. We view the WWW as a valuable resource for the gathering of information for Digital Libraries. In this paper we will describe the process of extracting and classifying information from the WWW for the purpose of integrating it into digital libraries. Our eorts focus on ways to automatically classify news articles according to whether they present opinions or reported facts. We describe and evaluate a system in development that automatically classies and recommends Web news articles from sports and politics domains. 1\n\n1-Hop Neighbour:\nGleaning Answers From the Web Introduction This position paper summarizes my recent and ongoing research on Web information extraction and retrieval. I describe wrapper induction and verification techniques for extracting data from structured sources; boosted wrapper induction, an extension of these techniques to handle natural text; ELIXIR, our e#cient and expressive language for XML information retrieval ; techniques and applications for text genre classification; and stochastic models for XML schema alignment. The unifying theme of these various research projects is to develop enabling technologies that facilitate the rapid development of large Web services for data access and integration. 2 Wrapper induction and verification A wide variety of valuable textual information resides on the Web, but very little is in a machineunderstandable form such as XML. Instead, the content\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. 
The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nAn Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.\n\n2-Hop Neighbour:\nSearching the Web: General and Scientific Information Access he World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. 
The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information has long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web ” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search. The amount of scientific information and the number of electronic journals on the Internet continues to increase. Researchers are increasingly making their work available online. This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices\n\n2-Hop Neighbour:\nPicASHOW: Pictorial Authority Search by Hyperlinks on the Web We describe PicASHOW, a fully automated WWW image retrieval system that is based on several link-structure analyzing algorithms. Our basic premise is that a page # displays (or links to) an image when the author of # considers the image to be of value to the viewers of the page. Wethus extend some well known link-based WWW #### ######### schemes to the context of image retrieval. PicASHOW's analysis of the link structure enables it to retrieve relevant images even when those are stored in les with meaningless names. The same analysis also allows it to identify ##### ########## and ##### ####. We dene these as Web pages that are rich in relevant images, or from which many images are readily accessible. PicASHOW requires no image analysis whatsoever and no creation of taxonomies for pre-classication of the Web's images. It can be implemented by standard WWW search engines with reasonable overhead, in terms of both computations and storage, and with no change to user query formats. It can thus be used to easily add image retrieving capabilities to standard search engines. Our results demonstrate that PicASHOW, while relying almost exclusively on link analysis, compares well with dedicated WWW image retrieval systems. We conclude that link analysis, a bona-de eective technique for Web page search, can improve the performance of Web image retrieval, as well as extend its denition to include the retrieval of image hubs and containers. Keywords Image Retrieval; Link Structure Analysis; Hubs and Authorities; Image Hubs. 
1.

2-Hop Neighbour:
Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an effective tool for building digital libraries.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an e#ective tool for building digital libraries.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nEvaluating Topic-Driven Web Crawlers Due to limited bandwidth, storage, and computational resources, and to the dynamic nature of the Web, search engines cannot index every Web page, and even the covered portion of the Web cannot be monitored continuously for changes. Therefore it is essential to develop effective crawling strategies to prioritize the pages to be indexed. The issue is even more important for topic-specific search engines, where crawlers must make additional decisions based on the relevance of visited pages. However, it is difficult to evaluate alternative crawling strategies because relevant sets are unknown and the search space is changing. We propose three different methods to evaluate crawling strategies. We apply the proposed metrics to compare three topic-driven crawling algorithms based on similarity ranking, link analysis, and adaptive agents.\n\n1-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. 
These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.\n\n1-Hop Neighbour:\nUsing Reinforcement Learning to Spider the Web Efficiently Consider the task of exploring the Web in order to find pages of a particular kind or on a particular topic. This task arises in the construction of search engines and Web knowledge bases. This paper argues that the creation of efficient web spiders is best framed and solved by reinforcement learning, a branch of machine learning that concerns itself with optimal sequential decision making. One strength of reinforcement learning is that it provides a formalism for measuring the utility of actions that give benefit only in the future. We present an algorithm for learning a value function that maps hyperlinks to future discounted reward by using naive Bayes text classifiers. Experiments on two real-world spidering tasks show a three-fold improvement in spidering efficiency over traditional breadth-first search, and up to a two-fold improvement over reinforcement learning with immediate reward only. Keywords: reinforcement learning, text classification, World Wide Web, spidering, crawlin...\n\n2-Hop Neighbour:\nBuilding Domain-Specific Search Engines with Machine Learning Techniques Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with the general, Web-wide search engines. For example, www.campsearch.com allows complex queries by agegroup, size, location and cost over summer camps. Unfortunately, these domain-specific search engines are difficult and time consuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that automates efficient spidering, populating topic hierarchies, and identifying informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers. It already contains over 33,000 papers and is publicly available at www.cora.jprc.com. 1 Introduction As the amount of information on the World ...\n\n2-Hop Neighbour:\nBackground Readings for Collection Synthesis\n\n2-Hop Neighbour:\nGrouper: A Dynamic Clustering Interface to Web Search Results Users of Web search engines are often forced to sift through the long ordered list of document \"snippets\" returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on most major search engines. The NorthernLight search engine organizes its output into \"custom folders\" based on pre-computed document labels, but does not reveal how the folders are generated or how well they correspond to users' interests. In this paper, we introduce Grouper -- an interface to the results of the HuskySearch meta-search engine, which dynamically groups the search results into clusters labeled by phrases extracted from the snippets. In addition, we report on the first empirical comparison of user Web search behavior on a standard ranked-list presentation versus a clustered presentation. 
By analyzing HuskySearch logs, we are able to demonstrate substantial differences in the number of documents f...\n\n2-Hop Neighbour:\nDesign and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 1 1\n\n2-Hop Neighbour:\nSavvySearch: A Meta-Search Engine that Learns which Search Engines to Query Search engines are among the most successful applications on the Web today. So many search engines have been created that it is difficult for users to know where they are, how to use them and what topics they best address. Meta-search engines reduce the user burden by dispatching queries to multiple search engines in parallel. The SavvySearch meta-search engine is designed to efficiently query other search engines by carefully selecting those search engines likely to return useful results and by responding to fluctuating load demands on the Web. SavvySearch learns to identify which search engines are most appropriate for particular queries, reasons about resource demands and represents an iterative parallel search strategy as a simple plan. 1 The Application: Meta-Search on the Web Companies, institutions and individuals must have a presence on the Web; each are vying for the attention of millions of people. Not too surprisingly then, the most successful applications on the Web to dat...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nDiscovering Informative Content Blocks from Web Documents In this paper, we propose a new approach to discover informative contents from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to HTML tag <TABLE> in a Web page. Based on the occurrence of the features (terms) in the set of pages, it calculates entropy value of each feature. According to the entropy value of each feature in a content block, the entropy value of the block is defined. By analyzing the information measure, we propose a method to dynamically select the entropy-threshold that partitions blocks into either informative or redundant. Informative content blocks are distinguished parts of the page, whereas redundant content blocks are common parts. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. That is, using the approach, informative blocks (news articles) of these sites can be automatically separated from semantically redundant contents such as advertisements, banners, navigation panels, news categories, etc. By adopting InfoDiscoverer as the preprocessor of information retrieval and extraction applications, the retrieval and extracting precision will be increased, and the indexing size and extracting complexity will also be reduced.\n\n1-Hop Neighbour:\nA Meta-search Method Reinforced by Cluster Descriptors A meta-search engine acts as an agent for the participant search engines. 
It receives queries from users and redirects them to one or more of the participant search engines for processing. A meta-search engine incorporating many participant search engines is better than a single global search engine in terms of the number of pages indexed and the freshness of the indexes. The meta-search engine stores descriptive data (i.e., descriptors) about the index maintained by each participant search engine so that it can estimate the relevance of each search engine when a query is received. The ability for the meta-search engine to select the most relevant search engines determines the quality of the final result. To facilitate the selection process, the document space covered by each search engine must be described not only concisely but also precisely. Existing methods tend to focus on the conciseness of the descriptors by keeping a descriptor for a search engine 's entire index. This paper proposes to cluster a search engine's document space into clusters and keep a descriptor for each cluster. We show that cluster descriptors can provide a finer and more accurate representation of the document space, and hence enable the meta-search engine to improve the selection of relevant search engines. Two cluster-based search engine selection scenarios (i.e., independent and high-correlation) are discussed in this paper. Experiments verify that the cluster-based search engine selection can effectively identify the most relevant search engines and improve the quality of the search results consequently. 1\n\n1-Hop Neighbour:\nWeb Genre Visualization Web users vary widely in terms of their expertise on the topics for which they search, the amount of detail they seek, etc. Unfortunately, today's one-size-fits-all Web search services do not cater to such individual preferences. For example, it is difficult to query for documents that give extensive detail but assume modest prior expertise. We describe how shallow text classification techniques can be used to sort the documents returned by Web search services according to genre dimensions such as level of expertise and amount of detail, and propose a simple visualization interface that helps users rapidly find appropriate documents. (Keywords: document genre; information retrieval, visualisation, text classification, shallow linguistic processing) Motivation Consider two users seeking information about Pearson' correlation coefficient. Alice is writing a data-analysis program and needs a web page to remind her of the equations. Bob, a teacher, wants to point his pupils to an overview that isn't bogged down in equations.\n\n2-Hop Neighbour:\nTowards a Highly-Scalable and Effective Metasearch Engine A metasearch engine is a system that supports unified access to multiple local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reect the contents of each search engine need to be collected and used. In this paper, we propose a highly scalable and accurate database selection method. This method has several novel features. First, the metadata for representing the contents of all search engines are organized into a single integrated representative. Such a representative yields both computation efficiency and storage efficiency. 
Second, our selection method is based on a theory for ranking search engines optimally. Experimental results indicate that this new method is very effective. An operational prototype system has been built based on the proposed approach.\n\n2-Hop Neighbour:\nTowards a Highly-Scalable and Effective Metasearch Engine A metasearch engine is a system that supports unified access to multiple local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reect the contents of each search engine need to be collected and used. In this paper, we propose a highly scalable and accurate database selection method. This method has several novel features. First, the metadata for representing the contents of all search engines are organized into a single integrated representative. Such a representative yields both computation efficiency and storage efficiency. Second, our selection method is based on a theory for ranking search engines optimally. Experimental results indicate that this new method is very effective. An operational prototype system has been built based on the proposed approach.\n\n2-Hop Neighbour:\nDiscovering Structural Association of Semistructured Data Many semistructured objects are similarly, though not identically, structured. We study the problem of discovering \"typical\" substructures of a collection of semistructured objects. The discovered structures can serve the following purposes: (a) the \"table-of-contents\" for gaining general information of a source, (b) a road map for browsing and querying information sources, (c) a basis for clustering documents, (d) partial schemas for providing standard database access methods, (e) user/customer's interests and browsing patterns. The discovery task is impacted by structural features of semistructured data in a non-trivial way and traditional data mining frameworks are inapplicable. We define this discovery problem and propose a solution. 1 Introduction 1.1 Motivation Many on-line documents, such as HTML, Latex, BibTex, SGML files and those found in digital libraries, are semistructured. Semistructured data arises when the source does not impose a rigid structure (such as the ...\n\n2-Hop Neighbour:\nThe Impact of Database Selection on Distributed Searching Abstract The proliferation of online information resources increases the importance of effective and efficient distributed searching. Distributed searching is cast in three parts – database selection, query processing, and results merging. In this paper we examine the effect of database selection on retrieval performance. We look at retrieval performance in three different distributed retrieval testbeds and distill some general results. First we find that good database selection can result in better retrieval effectiveness than can be achieved in a centralized database. Second we find that good performance can be achieved when only a few sites are selected and that the performance generally increases as more sites are selected. Finally we find that when database selection is employed, it is not necessary to maintain collection wide information (CWI), e.g. global idf. Local information can be used to achieve superior performance. This means that distributed systems can be engineered with more autonomy and less cooperation. 
This work suggests that improvements in database selection can lead to broader improvements in retrieval performance, even in centralized (i.e. single database) systems. Given a centralized database and a good selection mechanism, retrieval performance can be improved by decomposing that database conceptually and employing a selection step. 1", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Tropos Software Development Methodology: Processes, Models And Diagrams Abstract. Tropos is a novel agent-oriented software development methodology founded on two key features: (i) the notions of agent, goal, plan and various other knowledge level concepts are fundamental primitives used uniformly throughout the software development process; and (ii) a crucial role is assigned to requirements analysis and specification when the system-to-be is analyzed with respect to its intended environment. This paper provides a (first) detailed account of the Tropos methodology. In particular, we describe the basic concepts on which Tropos is founded and the types of models one builds out of them. We also specify the analysis process through which design flows from external to system actors through a goal analysis and delegation. In addition, we provide an abstract syntax for Tropos diagrams and other linguistic constructs. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Methodology and Modelling Technique for Systems of BDI Agents The construction of large-scale embedded software systems demands the use of design methodologies and modelling techniques that support abstraction, inheritance, modularity, and other mechanisms for reducing complexity and preventing error. If multi-agent systems are to become widely accepted as a basis for large-scale applications, adequate agentoriented methodologies and modelling techniques will be essential. This is not just to ensure that systems are reliable, maintainable, and conformant, but to allow their design, implementation, and maintenance to be carried out by software analysts and engineers rather than researchers. In this paper we describe an agent-oriented methodology and modelling technique for systems of agents based upon the Belief-Desire-Intention (BDI) paradigm. Our models extend existing Object-Oriented (OO) models. By building upon and adapting existing, well-understood techniques, we take advantage of their maturity to produce an approach that can be easily lear...\n\n1-Hop Neighbour:\nAnalysis and Design using MaSE and agentTool This paper provides an overview of the work being done at the Air Force Institute of Technology on the Multiagent Systems Engineering methodology and the associated agentTool environment. Our research is focused on discovering methods and techniques for engineering practical multiagent systems. It uses the abstraction provided by multiagent systems for developing intelligent, distributed software systems.\n\n1-Hop Neighbour:\nJACK Intelligent Agents - Components for Intelligent Agents in Java This paper is organised as follows. Section 2 introduces JACK Intelligent Agents, presenting the approach taken by AOS to its design and outlining its major engineering characteristics. The BDI model is discussed briefly in Section 3. Section 4 gives an outline of how to build an application with JACK Intelligent Agents. Finally, in Section 5 we discuss how the use of this framework can be beneficial to both engineers and researchers. 
For brevity, we will refer to JACK Intelligent Agents simply as \"JACK\".\n\n2-Hop Neighbour:\nA Conceptual Framework for Agent Definition and Development The use of agents of many different kinds in a variety of fields of computer science and artificial intelligence is increasing rapidly and is due, in part, to their wide applicability. The richness of the agent metaphor that leads to many different uses of the term is, however, both a strength and a weakness: its strength lies in the fact that it can be applied in very many different ways in many situations for different purposes; the weakness is that the term agent is now used so frequently that there is no commonly accepted notion of what it is that constitutes an agent. This paper addresses this issue by applying formal methods to provide a defining framework for agent systems. The Z specification language is used to provide an accessible and unified formal account of agent systems, allowing us to escape from the terminological chaos that surrounds agents. In particular, the framework precisely and unambiguously provides meanings for common concepts and terms, enables alternative models of particular classes of system to be described within it, and provides a foundation for subsequent development of increasingly more refined concepts.\n\n2-Hop Neighbour:\nDeclarative Procedural Goals in Intelligent Agent Systems An important concept for intelligent agent systems is goals. Goals have two aspects: declarative (a description of the state sought), and procedural (a set of plans for achieving the goal). A declarative view of goals is necessary in order to reason about important properties of goals, while a procedural view of goals is necessary to ensure that goals can be achieved efficiently in dynamic environments. In this paper we propose a framework for goals which integrates both views. We discuss the requisite properties of goals and the link between the declarative and procedural aspects, then derive a formal semantics which has these properties. We present a high-level plan notation with goals and give its formal semantics. We then show how the use of declarative information permits reasoning (such as the detection and resolution of conflicts) to be performed on goals. 1\n\n2-Hop Neighbour:\nA Methodology for Agent-Oriented Analysis and Design . This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societal) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system). 1. Introduction Progress in software engineering over the past two decades has been made through the development of increasingly powerful and natural high-level abstractions with which to model and develop complex systems. Procedural abstraction, abstract data types, and, most recently, objects and components are all examples of such abstractions. It is our belief that agents represent a similar advance in abstraction: they may be used by software developers to more n...\n\n2-Hop Neighbour:\nSpecifying Agent Behavior as Concurrent Tasks Approved for public release; distribution unlimited Software agents are currently the subject of much research in many interrelated fields. 
While much of the agent community has concentrated on building exemplar agent systems, defining theories of agent behavior and inter-agent communications, there has been less emphasis on defining the techniques required to build practical agent systems. While many agent researchers refer to tasks performed by roles within a multiagent system, few really define what they mean by tasks. We believe that the definition of tasks is critical in order to completely define what an agent within a multiagent system. Tasks not only define the types of internal processing an agent must do, but also how interactions with other agents relate to those internal processes. In this report, we define concurrent tasks, which specify a single thread of control that defines a task that the agent can perform and integrates inter-agent as well as intra-agent interactions. We typically think of concurrent tasks as defining how a role decides what actions to take, not necessarily what the agent does. This is an important distinction when talking about agents since hard-coding specific behavior may not be the ideal case. Often agents incorporate the concept of plans and planning to

2-Hop Neighbour:
Modelling and Design of Multi-Agent Systems Abstract. Agent technologies are now being applied to the development of large-scale commercial and industrial software systems. Such systems are complex, involving hundreds, perhaps thousands of agents, and there is a pressing need for system modelling techniques that permit their complexity to be effectively managed, and principled methodologies to guide the process of system design. Without adequate techniques to support the design process, such systems will not be sufficiently reliable, maintainable or extensible, will be difficult to comprehend, and their elements will not be re-usable. In this paper, we present techniques for modelling agents and multi-agent systems which adapt and extend existing Object-Oriented representation techniques, and a methodology which provides a clear conceptual framework to guide system design and specification. We have developed these techniques for systems of agents based upon a particular Belief-Desire-Intention architecture, but have sought to provide a framework for the description of agent systems that is sufficiently general to be applicable to other agent architectures, and which may be extended in various ways. 1", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nTowards Flexible Teamwork in Persistent Teams: Extended Report Teamwork is a critical capability in multi-agent environments. Many such environments mandate that the agents and agent-teams must be persistent i.e., exist over long periods of time. Agents in such persistent teams are bound together by their long-term common interests and goals. This paper focuses on flexible teamwork in such persistent teams. Unfortunately, while previous work has investigated flexible teamwork, persistent teams remain unexplored. For flexible tamwork, one promising approach that has emerged is model-based, i.e., providing agents with general models of teamwork that explicitly specify their commitments in teamwork. Such models enable agents to autonomously reason about coordination. Unfortunately, for persistent teams, such models may lead to coordination and communication actions that while locally optimal, are highly problematic for the team's long-term goals. We present a decisiontheoretic technique to enable persistent teams to overcome such limitations of the m...\n\n1-Hop Neighbour:\nEngineering Executable Agents Using Multi-Context Systems In the area of agent-based computing there are many proposals for specific system architectures, and a number of proposals for general approaches to building agents. As yet, however, there are comparatively few attempts to relate these together, and even fewer attempts to provide methodologies which relate designs to architectures and then to executable agents. This paper provides a first attempt to address this shortcoming. We propose a general method of specifying logic-based agents, which is based on the use of multi-context systems, and give examples of its use. The resulting specifications can be directly executed, and we discuss an implementation which makes this direct execution possible.\n\n1-Hop Neighbour:\nFeasible Formation of Coalitions Among Autonomous Agents in Non-Super-Additive Environments Cooperating and sharing resources by creating coalitions of agents are an important way for autonomous agents to execute tasks and to maximize payoff. 
Such coalitions will form only if each member of a coalition gains more if it joins the coalition than it could gain otherwise. There are several ways of creating such coalitions and dividing the joint payoff among the members. In this paper we present algorithms for coalition formation and payoff distribution in non-super-additive environments. We focus on a low-complexity kernel-oriented coalition formation algorithm. The properties of this algorithm were examined via simulations. These have shown that the model increases the benefits of the agents within a reasonable time period, and more coalition formations provide more benefits to the agents. Key Words Distributed AI, Coalition Formation, Multi-Agent Systems. This material is based upon work supported in part by the NSF under grant No. IRI-9423967, ARPA/Rome Labs contract F30602...\n\n2-Hop Neighbour:\nCoalition Formation for Large-Scale Electronic Markets Coalition formation is a desirable behavior in a multiagent system, when a group of agents can perform a task more efficiently than any single agent can. Computational and communications complexity of traditional approaches to coalition formation, e.g., through negotiation, make them impractical for large systems. We propose an alternative, physics-motivated mechanism for coalition formation that treats agents as randomly moving, locally interacting entities. A new coalition may form when two agents encounter one another, and it may grow when a single agent encounters it. Such agent-level behavior leads to a macroscopic model that describes how the number and distribution of coalitions change with time. We increase the generality and complexity of the model by letting the agents leave coalitions with some probability. The model is expressed mathematically as a series of differential equations. These equations have steady state solutions that describe the equilibrium distribution of coa...\n\n2-Hop Neighbour:\nMulti-Agent Architectures as Organizational Structures A Multi-Agent System (MAS) is an organization of coordinated autonomous agents that interact in order to achieve particular, possible common goals. Considering real world organizations as an analogy, this paper proposes architectural styles for MAS which adopt concepts from organizational theories. The styles are modeled using the i* framework which o#ers the notions of actor, goal and actor dependency and specified in Formal Tropos. They are evaluated with respect to a set of software quality attributes, such as predictability or adaptability. In addition, we conduct a comparative study of organizational and conventional software architectures using the mobile robot control example from the Software Engineering literature. The research is conducted in the context of Tropos, a comprehensive software system development methodology.\n\n2-Hop Neighbour:\nAn approach to using degrees of belief in BDI agents : The past few years have seen a rise in the popularity of the use of mentalistic attitudes such as beliefs, desires and intentions to describe intelligent agents. Many of the models which formalise such attitudes do not admit degrees of belief, desire and intention. We see this as an understandable simplification, but as a simplification which means that the resulting systems cannot take account of much of the useful information which helps to guide human reasoning about the world. This paper starts to develop a more sophisticated system based upon an existing formal model of these mental attributes. 
1 Introduction In the past few years there has been a lot of attention given to building formal models of autonomous software agents; pieces of software which operate to some extent independently of human intervention and which therefore may be considered to have their own goals and the ability to determine how to achieve those goals. Many of these formal models are based on the use of ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Background Readings for Collection Synthesis' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach toweb searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We haveimplemented both algorithms and measured their runtime performance. To evaluate the e ectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \\What's Related \" service [12]. Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. 
Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroid-based scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...

2-Hop Neighbour:
Text-Based Content Search and Retrieval in ad hoc P2P Communities We consider the problem of content search and retrieval in peer-to-peer (P2P) communities. P2P computing is a potentially powerful model for information sharing between ad hoc groups of users because of its low cost of entry and natural model for resource scaling with community size. As P2P communities grow in size, however, locating information distributed across the large number of peers becomes problematic. We present a distributed text-based content search and retrieval algorithm to address this problem. Our algorithm is based on a state-of-the-art text-based document ranking algorithm: the vector-space model instantiated with the TFxIDF ranking rule. A naive application of TFxIDF would require each peer in a community to collect an inverted index of the entire community. This is costly both in terms of bandwidth and storage. Instead, we show how TFxIDF can be approximated given compact summaries of peers' local inverted indexes. We make three contributions: (a) we show how the TFxIDF rule can be adapted to use the index summaries, (b) we provide a heuristic for adaptively determining the set of peers that should be contacted for a query, and (c) we show that our algorithm tracks TFxIDF's performance very closely, regardless of how documents are distributed throughout the community. Furthermore, our algorithm preserves the main flavor of TFxIDF by retrieving close to the same set of documents for any given query.

2-Hop Neighbour:
Yarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. 
It calls for innovative methods and strategies f...\n\n2-Hop Neighbour:\nDocument Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. y Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...\n\n2-Hop Neighbour:\nInferring Web Communities Through Relaxed Cocitation and Dense Bipartite Graphs Community forming is one of the important activity in the Web. The Web harbors a large number of communities. A community is a group of content creators that manifests itself as a set of interlinked pages. Given a large collection of pages our aim is to find potential communities in the Web. In the literature, Ravi Kumar et al. [18] proposed a trawling method to find potential communities by abstracting a core of the community as a group of pages that form a complete bipartite graph (CBG) (web-page as a node and link as an edge between two nodes). The trawling approach extracts a small group of pages that form a CBG, which is a signature of a potential community.\n\n2-Hop Neighbour:\nClustering Categorical Data: An Approach Based on Dynamical Systems We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical data. By \"categorical data,\" we mean tables with fields that cannot be naturally ordered by a metric --- e.g., the names of producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure arising from the cooccurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types of non-linear dynamical systems. We discuss experiments on a variety of tables of synthetic and real data; we find that our iterative methods converge quickly to prominently correlated values of various categorical fields. 1 Introduction Much of the data in databases is categorical: fields in tables whose attributes cannot naturally be ordered as numerical values can. The pro...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Randomized Approach to Planning Biped Locomotion with Prescribed Motions In this paper, we present a new scheme for planning a natural-looking locomotion of a human-like biped figure. Given start and goal positions in a virtual environment, our scheme finds a sequence of motions to move from the start position to the goal using a set of prescribed, live-captured motion clips. Our scheme consists of three parts: roadmap construction, roadmap search, and motion generation. We randomly sample a set of valid configurations of the biped figure for the environment to construct a directed graph, called a roadmap, that guides the locomotion of the figure. Every edge of the roadmap is attached with a live-captured motion clip. Traversing the roadmap, we obtain the sequence of footprints and that of motion clips. We finally adapt the motion sequence to the constraints specified by the footprint sequence to obtain the locomotion.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSpace Discretization for Efficient Human Navigation There is a large body of research on motion control of legs in human models. However, they require specification of global paths in which to move. A method for automatically computing a global motion path for a human in 3D environment of obstacles is presented. Object space is discretized into a 3D grid of uniform cells and an optimal path is generated between two points as a discrete cell path. The grid is treated as graph with orthogonal links of uniform cost. A* search method is applied for path finding. By considering only the cells on the upper surface of objects on which human walks, a large portion of the grid is discarded from the search space, thus boosting efficiency. This is expected to be a higher level mechanism for various local foot placement methods in human animation. Keywords: global navigation, dynamic programming, A* graph search, articulated body models 1. Introduction Human walking is a complex and well studied component of articulated body animation research. T...\n\n2-Hop Neighbour:\nA Probabilistic Approach to Planning Biped Locomotion with Prescribed Motions Typical high-level directives for locomotion of human-like characters are encountered frequently in animation scripts or interactive systems. In this paper, we present a new scheme for planning natural-looking locomotion of a biped figure to facilitate rapid motion prototyping and task-level motion generation. Given start and goal positions in a virtual environment, our scheme gives a sequence of motions to move from the start to the goal using a set of live-captured motion clips.", "role": "user" }, { "content": "<answer>Artificial Intelligence</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Application of ART2 Networks and Self-Organizing Maps to Collaborative Filtering Since the World Wide Web has become widespread, more and more applications exist that are suitable for the application of social information filtering techniques. In collaborative filtering, preferences of a user are estimated through mining data available about the whole user population, implicitly exploiting analogies between users that show similar characteristics.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nConstruction of Adaptive Web-Applications from Reusable Components . The Web has become a ubiquitous environment for application\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nGenerating a Topically Focused VirtualReality Internet Surveys highlight that Internet users are frequently frustrated by failing to locate useful information, and by difficulty in browsing anarchically linked web-structures. We present a new Internet browsing application (called VR-net) that addresses these problems. It first identifies semantic domains consisting of tightly interconnected web-page groupings. The second part populates a 3D virtual world with these information sources, representing all relevant pages plus appropriate structural relations. Users can then easily browse through around a semantically focused virtual library. 1 Introduction The Internet is a probably the most significant global information resource ever created, allowing access to an almost unlimited amount of information. In this paper we describe two inter-related difficulties suffered by Internet users, and their combined influence on web use. We then introduce an integrated \"search and browse\" solution tool that directly tackles both issues. 
We also examin...\n\n2-Hop Neighbour:\nMining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...\n\n2-Hop Neighbour:\nComparing Statistical and Content-Based Techniques for Answer Validation on the Web Answer Validation is an emerging topic in Question Answering, where open domain systems are often required to rank huge amounts of candidate answers. We present a novel approach to answer validation based on the intuition that the amount of implicit knowledge which connects an answer to a question can be estimated by exploiting the redundancy of Web information. Two techniques are considered in this paper: a statistical approach, which uses the Web to obtain a large amount of pages, and a content-based approach, which analyses text snippets retrieved by the search engine. Both the approaches do not require to download the documents. Experiments carried out on the TREC-2001 judged-answer collection show that a combination of the two approaches achieves a high level of performance (i.e. about 88% success rate).\n\n2-Hop Neighbour:\nThe XML Benchmark Project With standardization efforts of a query language for XML documents drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications pose to data management: validation, performance evaluation and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's abilities to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users to compare XML databases independent of their own, specific application scenario. To this end, the benchmark offers a set of queries each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload we propose consists of a scalable document database and a concise, yet comprehensive set of queries, which covers the major aspects of query processing. The queries' challenges range from stressing the textual character of the document to data analysis queries, but include also typical ad-hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. 
They are intended to give a first baseline, illustrating the state of the art.\n\n2-Hop Neighbour:\nA Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ad-hoc retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite different from the algorithms developed at TREC. This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: finding the web page/site of an entity, e.g., companies, universities, organizations, individuals, etc. This form of querying is quite prevalent on the web. The results from the TREC algorithms are compared to four commercial web search engines. Results show that for finding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, link-based ranking 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Finding Text Regions Using Localised Measures We present a method based on statistical properties of local image neighbourhoods for the location of text in real-scene images. This has applications in robot vision, and desktop and wearable computing. The statistical measures we describe extract properties of the image which characterise text, invariant to a large degree to the orientation, scale or colour of the text in the scene. The measures are employed by a neural network to classify regions of an image as text or non-text. We thus avoid the use of different thresholds for the various situations we expect, including when text is too small to read, or when the text plane is not fronto-parallel to the camera. We briefly discuss applications and the possibility of recovery of the text for optical character recognition. 1 Introduction Automatic location and digitisation of text in arbitrary scenes, where the text may or may not be fronto-parallel to the viewing plane, is an area of computer vision which has not yet been ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLocation and Recovery of Text on Oriented Surfaces We present a method for extracting text from images where the text plane is not necessarily fronto-parallel to the camera. Initially, we locate local image features such as borders and page edges. We then use perceptual grouping on these features to find rectangular regions in the scene. These regions are hypothesised to be pages or planes that may contain text. Edge distributions are then used for the assessment of these potential regions, providing a measure of confidence. It will be shown that the text may then be transformed to a fronto-parallel view suitable, for example, for an OCR system or other higher level recognition. The proposed method is scale independent (of the size of the text). We illustrate the algorithm using various examples. Keywords: Oriented Text, Perspective Recovery of Text, Edge Angle Distribution 1. INTRODUCTION Location and recovery of text in a scene would be useful in the context of wearable computing, desk computing, or unguided robotic motion. Such a...\n\n1-Hop Neighbour:\nEstimating the Orientation and Recovery of Text Planes in a Single Image A method for the fronto-parallel recovery of paragraphs of text under full perspective transformation is presented. The horizontal vanishing point of the text plane is found using an extension of 2D projection profiles. This allows the accurate segmentation of the lines of text. Analysis of the lines will then reveal the style of justification of the paragraph, and provide an estimate of the vertical vanishing point of the plane. The text is finally recovered to a fronto-parallel view suitable for OCR or other higher-level recognition.\n\n1-Hop Neighbour:\nAutomatic Text Detection and Tracking in Digital Video Text which appears in a scene or is graphically added to video can provide an important supplemental source of index information as well as clues for decoding the video's structure and for classification. In this paper we present algorithms for detecting and tracking text in digital video. Our system implements a scalespace feature extractor that feeds an artificial neural processor to detect text blocks. 
Our text tracking scheme consists of two modules: an SSD (Sum of Squared Difference)-based module to find the initial position and a contour-based module to refine the position. Experiments conducted with a variety of video sources show that our scheme can detect and track text robustly. Keywords Text Detection, Text Tracking, Video Indexing, Digital Libraries, Neural Network I. Introduction The continued proliferation of large amounts of digital video has increased demand for true content based indexing and retrieval systems. Traditionally, content has been indexed primaril...\n\n2-Hop Neighbour:\nCombining Statistical Measures to Find Image Text Regions We present a method based on statistical properties of local image pixels for focussing attention on regions of text in arbitrary scenes where the text plane is not necessarily fronto-parallel to the camera. This is particularly useful for Desktop or Wearable Computing applications. The statistical measures are chosen to reveal characteristic properties of text. We combine a number of localised measures using a neural network to classify each pixel as text or non-text. We demonstrate our results on typical images. 1. Introduction To automatically enter the contents of a text document into a computer, one can place it on a flatbed scanner and use state of the art Optical Character Recognition (OCR) software to retrieve the characters. However, automatic segmentation and recognition of text in arbitrary scenes, where the text may or may not be fronto-parallel to the viewing plane, is an area of computer vision which has not been extensively researched previously. The problems involved a...\n\n2-Hop Neighbour:\nExtracting Low Resolution Text with an Active Camera for OCR Reading text in any scene is useful in the context of wearable computing, robotic vision or as an aid for visually handicapped people. Here, we present a novel automatic text reading system using an active camera focused on text regions already located in the scene (using our recent work). A region of text found is analysed to determine the optimal zoom that would foveate onto it. Then a number of images are captured over the text region to reconstruct a high-resolution mosaic of the whole region. This magnified image of the text is good enough for reading by humans or for recognition by OCR. Even with a low resolution camera we obtained very good results.\n\n2-Hop Neighbour:\nA State-of-the-art Review on Multimodal Video Indexing Efficient and effective handling of video documents depends on the availability of indexes. Manual indexing is unfeasible for large video collections. Effective indexing requires a multimodal approach in which either the most appropriate modality is selected or the different modalities are used in collaborative fashion. In this paper we focus on the similarities and differences between the modalities, and survey several methods aiming at automating the time and resource consuming process of video indexing. Furthermore, we put forward a unifying and multimodal framework, which views a video document from the perspective of its author. This framework forms the guiding principle for identifying index types, for which automatic methods are found in literature. It furthermore forms the basis for categorizing these different methods.\n\n2-Hop Neighbour:\nLocalizing and Segmenting Text in Images and Videos Many images---especially those used for page design on web pages---as well as videos contain visible text. 
If these text occurrences could be detected, segmented, and recognized automatically, they would be a valuable source of high-level semantics for indexing and retrieval. In this paper, we propose a novel method for localizing and segmenting text in complex images and videos. Text lines are identified by using a complex-valued multilayer feed-forward network trained to detect text at a fixed scale and position. The network's output at all scales and positions is integrated into a single text-saliency map, serving as a starting point for candidate text lines. In the case of video, these candidate text lines are refined by exploiting the temporal redundancy of text in video. Localized text lines are then scaled to a fixed height of 100 pixels and segmented into a binary image with black characters on white background. For videos, temporal redundancy is exploited to improve segmentation performance. Input images and videos can be of any size due to a true multiresolution approach. Moreover, the system is not only able to locate and segment text occurrences into large binary images, but is also able to track each text line with sub-pixel accuracy over the entire occurrence in a video, so that one text bitmap is created for all instances of that text line. Therefore, our text segmentation results can also be used for object-based video encoding such as that enabled by MPEG-4.", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Estimating the Usefulness of Search Engines In this paper, we present a statistical method to estimate the usefulness of a search engine for any given query. The estimates can be used by a metasearch engine to choose local search engines to invoke. For a given query, the usefulness of a search engine in this paper is defined to be a combination of the number of documents in the search engine that are sufficiently similar to the query and the average similarity of these documents. Experimental results indicate that the proposed estimation method is quite accurate. 1 Introduction Many search engines have been created on the Internet to help ordinary users find desired data. Each search engine has a corresponding database that defines the set of documents that can be searched by the search engine. Usually, an index for all documents in the database is created and stored in the search engine to speed up query processing. The amount of data in the Internet is huge (it is believed that by the end of 1997, there were more than 300 mil...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBuilding Efficient and Effective Metasearch Engines Frequently a user's information needs are stored in the databases of multiple search engines. It is inconvenient and inefficient for an ordinary user to invoke multiple search engines and identify useful documents from the returned results. To support unified access to multiple search engines, a metasearch engine can be constructed. When a metasearch engine receives a query from a user, it invokes the underlying search engines to retrieve useful information for the user. Metasearch engines have other benefits as a search tool such as increasing the search coverage of the Web and improving the scalability of the search. In this article, we survey techniques that have been proposed to tackle several underlying challenges for building a good metasearch engine. Among the main challenges, the database selection problem is to identify search engines that are likely to return useful documents to a given query. The document selection problem is to determine what documents to retrieve from each identified search engine. The result merging problem is to combine the documents returned from multiple search engines. We will also point out some problems that need to be further researched.\n\n1-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. 
In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nGlobal Integration of Visual Databases Different visual databases have been designed in various locations. The global integration of such databases can enable users to access data across the world in a transparent manner. In this paper, we investigate an approach to the design and creation of an integrated information system which supports global visual query access to various visual databases over the Internet. Specifically, a metaserver including a hierarchical metadatabase, a metasearch agent, and a query manager is designed to support such an integration. The metadatabase houses abstracted data about individual remote visual databases. To support visual contentbased queries, the abstracted data in the metadatabase reflect the semantics of each visual database. The query manager extracts the feature contents from the queries. The metasearch agent processes the queries by matching their feature contents with the metadata. A list of relevant database sites is derived for efficient retrieval of the query in the selected dat...\n\n2-Hop Neighbour:\nEvaluating Database Selection Techniques: A Testbed and Experiment We describe a testbed for database selection techniques and an experiment conducted using this testbed. The testbed is a decomposition of the TREC/TIPSTER data that allows analysis of the data along multiple dimensions, including collection-based and temporal-based analysis. We characterize the subcollections in this testbed in terms of number of documents, queries against which the documents have been evaluated for relevance, and distribution of relevant documents. We then present initial results from a study conducted using this testbed that examines the effectiveness of the gGlOSS approach to database selection. The databases from our testbed were ranked using the gGlOSS techniques and compared to the gGlOSS Ideal(l) baseline and a baseline derived from TREC relevance judgements. We have examined the degree to which several gGlOSS estimate functions approximate these baselines. Our initial results confirm that the gGlOSS estimators are excellent predictors of the Ideal(l) ranks but...\n\n2-Hop Neighbour:\nAutomatic Discovery of Language Models for Text Databases The proliferation of text databases within large organizations and on the Internet makes it difficult for a person to know which databases to search. Given language models that describe the contents of each database, a database selection algorithm such as GlOSS can provide assistance by automatically selecting appropriate databases for an information need. Current practice is that each database provides its language model upon request, but this cooperative approach has important limitations. This paper demonstrates that cooperation is not required. Instead, the database selection service can construct its own language models by sampling database contents via the normal process of running queries and retrieving documents. 
Although random sampling is not possible, it can be approximated with carefully selected queries. This sampling approach avoids the limitations that characterize the cooperative approach, and also enables additional capabilities. Experimental results demonstrate th...\n\n2-Hop Neighbour:\nError-Driven Pruning of Treebank Grammars for Base Noun Phrase Identification Finding simple, non-recursive, base noun phrases is an important subtask for many natural language processing applications. While previous empirical methods for base NP identification have been rather complex, this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task. In particular, we present a corpus-based approach for finding base NPs by matching part-of-speech tag sequences. The training phase of the algorithm is based on two successful techniques: first the base NP grammar is read from a \"treebank\" corpus; then the grammar is improved by selecting rules with high \"benefit\" scores. Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on the Penn Treebank Wall Street Journal. 1 Introduction Finding base noun phrases is a sensible first step for many natural language processing (NLP) tasks: Accurate identification of base noun phrases is arguably the most critical comp...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'An Automatic Closed-Loop Methodology for Generating Character Groundtruth for Scanned Documents Abstract—Character groundtruth for real, scanned document images is crucial for evaluating the performance of OCR systems, training OCR algorithms, and validating document degradation models. Unfortunately, manual collection of accurate groundtruth for characters in a real (scanned) document image is not practical because (i) accuracy in delineating groundtruth character bounding boxes is not high enough, (ii) it is extremely laborious and time consuming, and (iii) the manual labor required for this task is prohibitively expensive. In this paper we describe a closed-loop methodology for collecting very accurate groundtruth for scanned documents. We first create ideal documents using a typesetting language. Next we create the groundtruth for the ideal document. The ideal document is then printed, photocopied and then scanned. A registration algorithm estimates the global geometric transformation and then performs a robust local bitmap match to register the ideal document image to the scanned document image. Finally, groundtruth associated with the ideal document image is transformed using the estimated geometric transformation to create the groundtruth for the scanned document image. This methodology is very general and can be used for creating groundtruth for documents in typeset in any language, layout, font, and style. We have demonstrated the method by generating groundtruth for English, Hindi, and FAX document images. The cost of creating groundtruth using our methodology is minimal. If character, word or zone groundtruth is available for any real document, the registration algorithm can be used to generate the corresponding groundtruth for a rescanned version of the document. Index Terms—Automatic real groundtruth, document image analysis, OCR, performance evaluation, image registration, geometric transformations, image warping. ——————— — F ———————— 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nOmniPage vs. Sakhr: Paired Model Evaluation of Two Arabic OCR Products Characterizing the performance of Optical Character Recognition (OCR) systems is crucial for monitoring technical progress, predicting OCR performance, providing scientific explanations for the system behavior and identifying open problems. While research has been done in the past to compare performances of two or more OCR systems, all assume that the accuracies achieved on individual documents in a dataset are independent when, in fact, they are not. In this paper we show that accuracies reported on any dataset are correlated and invoke the appropriate statistical technique --- the paired model --- to compare the accuracies of two recognition systems. Theoretically we show that this method provides tighter confidence intervals than methods used in OCR and computer vision literature. We also propose a new visualization method, which we call the accuracy scatter plot, for providing a visual summary of performance results. This method summarizes the accuracy comparisons on the entire cor...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Accurate Estimation of the Cost of Spatial Selections Optimizing queries that involve operations on spatial data requires estimating the selectivity and cost of these operations. In this paper, we focus on estimating the cost of spatial selections, or window queries, where the query windows and data objects are general polygons. Cost estimation techniques previously proposed in the literature only handle rectangular query windows over rectangular data objects, thus ignoring the very significant cost of exact geometry comparison (the refinement step in a “filter and refine” query processing strategy). The cost of the exact geometry comparison depends on the selectivity of the filtering step and the average number of vertices in the candidate objects identified by this step. In this paper, we introduce a new type of histogram for spatial data that captures the complexity and size of the spatial objects as well as their location. Capturing these attributes makes this type of histogram useful for accurate estimation, as we experimentally demonstrate. We also investigate sampling-based estimation approaches. Sampling can yield better selectivity estimates than histograms for polygon data, but at the high cost of performing exact geometry comparisons for all the sampled objects. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSelectivity Estimation in Spatial Databases Selectivity estimation of queries is an important and wellstudied problem in relational database systems. In this paper, we examine selectivity estimation in the context of Geographic Information Systems, which manage spatial data such as points, lines, poly-lines and polygons. In particular, we focus on point and range queries over two-dimensional rectangular data. We propose several techniques based on using spatial indices, histograms, binary space partitionings (BSPs), and the novel notion of spatial skew. Our techniques carefully partition the input rectangles into subsets and approximate each partition accurately. We present a detailed experimental study comparing the proposed techniques and the best known sampling and parametric techniques. We evaluate them using synthetic as well as real-life TIGER datasets. Based on our experiments, we identify a BSP based partitioning that we call Min-Skew which consistently provides the most accurate selectivity estimates for spatial queries...\n\n1-Hop Neighbour:\nHow to Avoid Building DataBlades That Know the Value of Everything and the Cost of Nothing The object-relational database management system (ORDBMS) offers many potential benefits for scientific, multimedia and financial applications. However, work remains in the integration of domain-specific class libraries (data cartridges, extenders, DataBlades ® ) into ORDBMS query processing. A major problem is that the standard mechanisms for query selectivity estimation, taken from relational database systems, rely on properties specific to the standard data types; creation of new mechanisms remains extremely difficult because the software interfaces provided by vendors are relatively low-level. In this paper, we discuss extensions of the generalized search tree, or GiST, to support a higher-level but less type-specific approach. 
Specifically, we discuss the computation of selectivity estimates with confidence intervals using a variety of index-based approaches and present results from an experimental comparison of these methods with several estimators from the literature. 1. Intro...\n\n2-Hop Neighbour:\nImproving Min/Max Aggregation over Spatial Objects We examine the problem of computing MIN/MAX aggregate queries over a collection of spatial objects. Each spatial object is associated with a weight (value), for example, the average temperature or rainfall over the area covered by the object. Given a query rectangle, the MIN/MAX problem computes the minimum/maximum weight among all objects intersecting the query rectangle. Traditionally such queries have been performed as range search queries. Assuming that the objects are indexed by a spatial access method, the MIN/MAX is computed as objects are retrieved. This requires effort proportional to the number of objects intersecting the query interval, which may be large. A better approach is to maintain aggregate information among the index nodes of the spatial access method; then various index paths can be eliminated during the range search. In this paper we propose four optimizations that further improve the performance of MIN/MAX queries. Our experiments show that the proposed optimizations offer drastic performance improvement over previous approaches. Moreover, as a by-product of this work we present an optimized version of the MSB-tree, an index that has been proposed for the MIN/MAX computation over 1-dimensional interval objects.\n\n2-Hop Neighbour:\nEddies: Continuously Adaptive Query Processing In large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. Assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. As a result, traditional static query optimization and execution techniques are ineffective in these environments. In this paper we introduce a query processing mechanism called an eddy, which continuously reorders operators in a query plan as it runs. We characterize the moments of symmetry during which pipelined joins can be easily reordered, and the synchronization barriers that require inputs from different sources to be coordinated. By combining eddies with appropriate join algorithms, we merge the optimization and execution phases of query processing, allowing each tuple to have a flexible ordering of the query operators. This flexibility is controlled by a combination of fluid dynamics and a simple learning algorithm. Our initial implementation demonstrates prom...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Finite-state approaches to Web information extraction Introduction An information agent is a distributed system that receives a goal through its user interface, gathers information relevant to this goal from a variety of sources, processes this content as appropriate, and delivers the results to the users. We focus on the second stage in this generic architecture. We survey a variety of information extraction techniques that enable information agents to automatically gather information from heterogeneous sources. For example, consider an agent that mediates package-delivery requests. To satisfy such requests, the agent might need to retrieve address information from geographic services, ask an advertising service for freight forwarders that serve the destination, request quotes from the relevant freight forwarders, retrieve duties and legal constraints from government sites, get weather information to estimate transportation delays, etc. Information extraction (IE) is a form of shallow document processing that involves populating' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSelective Sampling With Redundant Views Selective sampling, a form of active learning, reduces the cost of labeling training data by asking only for the labels of the most informative unlabeled examples. We introduce a novel approach to selective sampling which we call co-testing. Cotesting can be applied to problems with redundant views (i.e., problems with multiple disjoint sets of attributes that can be used for learning). We analyze the most general algorithm in the co-testing family, naive co-testing, which can be used with virtually any type of learner. Naive co-testing simply selects at random an example on which the existing views disagree. We applied our algorithm to a variety of domains, including three real-world problems: wrapper induction, Web page classification, and discourse trees parsing. The empirical results show that besides reducing the number of labeled examples, naive co-testing may also boost the classification accuracy. Introduction In order to learn a classifier, supervised learn...\n\n1-Hop Neighbour:\nWrapper Induction: Efficiency and Expressiveness The Internet presents numerous sources of useful information---telephone directories, product catalogs, stock quotes, event listings, etc. Recently, many systems have been built that automatically gather and manipulate such information on a user's behalf. However, these resources are usually formatted for use by people (e.g., the relevant content is embedded in HTML pages), so extracting their content is difficult. Most systems use customized wrapper procedures to perform this extraction task. Unfortunately, writing wrappers is tedious and error-prone. As an alternative, we advocate wrapper induction, a technique for automatically constructing wrappers. In this article, we describe six wrapper classes, and use a combination of empirical and analytical techniques to evaluate the computational tradeoffs among them. We first consider expressiveness: how well the classes can handle actual Internet resources, and the extent to which wrappers in one class can mimic those in another. 
We then...\n\n1-Hop Neighbour:\nA Hierarchical Approach to Wrapper Induction With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of easier extraction tasks. We introduce an inductive algorithm, stalker, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that stalker does significantly better then other approaches; on one hand, stalker requires up to two orders of magnitude fewer examples than other algorithms, while on the other hand...\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nKnowledge-based Wrapper Generation by Using XML Information extraction is the process of recognizing the particular fragments of a document that constitute its core semantic content. However, most previous information extraction systems were not effective for real-world information sources due to difficulties in acquiring and representing useful domain knowledge and in dealing with structural heterogeneity inherent in different sources. In order to resolve these problems, this paper proposes a scheme of knowledge-based wrapper generation for semi-structured and labeled documents. The implementation of an agent-oriented information extraction system, XTROS, is described. In contrast with previous wrapper learning agents, XTROS represents both the domain knowledge and the wrappers by XML documents to increase modularity, flexibility, and interoperability among multiple parties. XTROS also facilitates simpler implementation of the wrapper generator by exploiting XML parsers and interpreters. XTROS shows good performance on several Web sites in the domain of real estates, and it is expected to be easily adaptable to different domains by plugging in appropriate XML-based domain knowledge. 1\n\n2-Hop Neighbour:\nMultistrategy Learning for Information Extraction Information extraction (IE) is the problem of filling out pre-defined structured summaries from text documents. We are interested in performing IE in non-traditional domains, where much of the text is often ungrammatical, such as electronic bulletin board posts and Web pages. We suggest that the best approach is one that takes into account many different kinds of information, and argue for the suitability of a multistrategy approach. We describe learners for IE drawn from three separate machine learning paradigms: rote memorization, term-space text classification, and relational rule induction. 
By building regression models mapping from learner confidence to probability of correctness and combining probabilities appropriately, it is possible to improve extraction accuracy over that achieved by any individual learner. We describe three different multistrategy approaches. Experiments on two IE domains, a collection of electronic seminar announcements from a university computer science de...\n\n2-Hop Neighbour:\nAuthoring and Annotation of Web Pages in CREAM Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, that allows for creation of metadata. While the annotation mode of CREAM allows to create metadata for existing web pages, the authoring mode lets authors create metadata --- almost for free --- while putting together the content of a page. As a particularity of our framework, CREAM allows to create relational metadata, i.e. metadata that instantiate interrelated definitions of classes in a domain ontology rather than a comparatively rigid template-like schema as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g. the integration of a metadata crawler, inference services, document management and a meta-ontology, and describe its implementation, viz. Ont-O-Mat a component-based, ontology-driven Web page authoring and annotation tool.\n\n2-Hop Neighbour:\nCooperative Information Agents for Digital Cities This paper presents an architecture for digital cities and shows the roles of agent", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Mining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nInformation Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.\n\n1-Hop Neighbour:\nWeb Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.\n\n2-Hop Neighbour:\nWeb Usage Mining - Languages and Algorithms We propose two new XML applications, XGMML and LOGML. 
XGMML is a graph description language and LOGML is a web-log report description language. We generate a web graph in XGMML format for a web site using the web robot of the WWWPal system (developed for web visualization and organization). We generate web-log reports in LOGML format for a web site from web log files and the web graph. In this paper, we further illustrate the usefulness of these two XML applications with a web data mining example. Moreover, we show the simplicity with which this mining algorithm can be specified and implemented efficiently using our two XML applications. We provide sample results, namely frequent patterns of users in a web site, with our web data mining algorithm.\n\n2-Hop Neighbour:\nIntelligent Crawling on the World Wide Web with Arbitrary Predicates The enormous growth of the world wide web in recent years has made it important to perform resource discovery efficiently. Consequently, several new ideas have been proposed in recent years; among them a key technique is focused crawling which is able to crawl particular topical portions of the world wide web quickly without having to explore all web pages. In this paper, we propose the novel concept of intelligent crawling which actually learns characteristics of the linkage structure of the world wide web while performing the crawling. Specifically, the intelligent crawler uses the inlinking web page content, candidate URL structure, or other behaviors of the inlinking web pages or siblings in order to estimate the probability that a candidate is useful for a given crawl. This is a much more general framework than the focused crawling technique which is based on a pre-defined understanding of the topical structure of the web. The techniques discussed in this paper are applicable for crawling web pages which satisfy arbitrary user-defined predicates such as topical queries, keyword queries or any combinations of the above. Unlike focused crawling, it is not necessary to provide representative topical examples, since the crawler can learn its way into the appropriate topic. We refer to this technique as intelligent crawling because of its adaptive nature in adjusting to the web page linkage structure. The learning crawler is capable of reusing the knowledge gained in a given crawl in order to provide more efficient crawling for closely related predicates.\n\n2-Hop Neighbour:\nReport on the CONALD Workshop on Learning from Text and the Web Moo], organization and presentation of documents in information retrieval systems [GS, Hof], collaborative filtering [dVN], lexicon learning [GBGH], query reformulation [KK], text generation [Rad] and analysis of the statistical properties of text [MA]. In short, the state of the art in learning from text and the web is that a broad range of methods are currently being applied to many important and interesting tasks. There remain numerous open research questions, however. Broadly, the goals of the work presented at the workshop fall into two overlapping categories: (i) making textual information available in a structured format so that it can be used for complex queries and problem solving, and (ii) assisting users in finding, organizing and managing information represented in text sources. 
As an example of research aimed at the former goal, Muslea, Minton and Knoblock [MMK] have developed an approach to learning wrappers for semi-structured Web sources, such as restau\n\n2-Hop Neighbour:\nCiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both \"citing\" and \"cited\" publications can be followed. In addition to simple browsing and keyword searches, the agent ...\n\n2-Hop Neighbour:\nInformation Retrieval on the Web: Selected Topics In this paper we review studies on the growth of the Internet and technologies which are useful for information search and retrieval on the Web. In the first section, we present data on the Internet from several different sources, e.g., current as well as projected number of users, hosts and Web sites. Although the numerical figures vary, the overall trends cited by the sources are consistent and point to exponential growth during the coming decade. And Internet users are increasingly using search engines and search services to find specific information of interest. However, users are not satisfied with the performance of the current generation of search engines; the slow speed of retrieval, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. The main body of our paper focuses on linear algebraic models and techniques for solving these problems. keywords: clustering, indexing, information retrieval, Internet, late...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]